Google DeepMind announced several new developer-centric features and capabilities for Gemini 1.5 and the Gemini API. Developers now have access to the two-million-token context window for Gemini 1.5 Pro, as well as code execution capabilities when using the Gemini API. These features were announced alongside Gemma 2.
The added features are a significant boon for developers, expanding the capabilities of both Gemini 1.5 Pro and the Gemini API.
Code Execution for Gemini API
In March, Google announced additional API features, such as video frame extraction and parallel function calling. With the new code execution feature, developers can now have the model generate and run Python code. The feature is available in both Google AI Studio and the Gemini API.
However, the execution environment is not connected to the internet, and billing is based on the number of output tokens the model produces.
“Once turned on, the code-execution feature can be dynamically leveraged by the model to generate and run Python code and learn iteratively from the results until it gets to a desired final output,” the company stated.
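Based on the Gemini API's REST conventions, turning the feature on amounts to adding a `code_execution` tool to the request. A minimal sketch of the request body (the prompt is illustrative):

```python
import json

# Sketch of a generateContent request body with the code-execution tool
# enabled. An empty "code_execution" object switches the feature on; the
# model then decides when to write and run Python and can iterate on the
# results until it reaches a final answer.
request_body = {
    "contents": [{
        "role": "user",
        "parts": [{"text": "Sum the first 50 primes; write and run code to check."}],
    }],
    "tools": [{"code_execution": {}}],
}

print(json.dumps(request_body, indent=2))
```

Because the sandbox has no internet access, prompts that require the generated code to fetch external resources will not work; the tool is suited to computation over the model's own context.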
Access to Gemini 1.5 Pro’s Two Million Context Window
At I/O in May this year, Google announced an expansion of Gemini 1.5 Pro’s context window from one million to two million tokens. However, access was gated behind a waitlist in private preview. With the latest announcement, Google has opened up the two-million-token context window to all developers.
Alongside the expanded access, Google has also announced the launch of context caching in the Gemini API for both Gemini 1.5 Pro and Flash. “Using the Gemini API context caching feature, you can pass some content to the model once, cache the input tokens, and then refer to the cached tokens for subsequent requests,” the company stated.
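Following the Gemini API's REST conventions, caching involves two request shapes: one call that creates the cache, and subsequent generation calls that reference it. A sketch of the two request bodies (the model name, cache name, and document text are illustrative):

```python
# 1) Create a cache: pass the large shared content to the API once, with a
#    time-to-live controlling how long the input tokens stay cached.
create_cache_body = {
    "model": "models/gemini-1.5-flash-001",  # illustrative model version
    "contents": [{
        "role": "user",
        "parts": [{"text": "<large document to reuse across requests>"}],
    }],
    "ttl": "3600s",  # keep the cached tokens for one hour
}

# 2) The create call returns a resource name (e.g. "cachedContents/abc123");
#    later generateContent requests reference that name instead of resending
#    the cached input tokens.
generate_body = {
    "cachedContent": "cachedContents/abc123",  # illustrative resource name
    "contents": [{
        "role": "user",
        "parts": [{"text": "Summarize section 2 of the cached document."}],
    }],
}
```

The benefit shows up on repeated queries against the same large input: only the new prompt tokens are sent each time, rather than the full cached content.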
Red-Teaming Ongoing for Gemini 1.5 Flash Tuning
Additionally, Google stated that more features will soon be announced for Gemini 1.5 Flash. In particular, the company is working on giving developers access to tuning for the model. As of June 27, the company is rolling out access to developers to red-team the feature, with a full release expected by mid-July.