- Google says Gemini 2.0 Flash is twice as fast as Gemini 1.5 Pro
- The new model is much closer to agentic AI than 1.5 ever was
- CEO Sundar Pichai says 2.0 is about making information much more useful
Google has announced its latest family of AI models – Gemini 2.0 – with bold claims such as the ability to outperform previous models while running twice as fast.
Launching in the same month as OpenAI’s 12 Days of OpenAI announcements and the arrival of new Apple Intelligence features in non-US markets, the Gemini 2.0 Flash experimental model is now available to developers through the Gemini API in Google AI Studio and Vertex AI.
Gemini 2.0 Flash can now generate text, images and audio in a single API call, streamlining AI content generation and boosting productivity.
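As a rough illustration of what that looks like in practice, here is a minimal sketch of requesting text and an image in one call. It assumes the google-genai Python SDK, an API key from Google AI Studio, and the "gemini-2.0-flash-exp" model identifier used for the experimental release; audio output is not shown, and exact parameters may differ on Vertex AI.

```python
# Sketch: one generate_content call that returns both text and an image.
# Assumes the google-genai SDK and the experimental "gemini-2.0-flash-exp" model.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # key from Google AI Studio

response = client.models.generate_content(
    model="gemini-2.0-flash-exp",
    contents="Explain the water cycle in two sentences and illustrate it with an image.",
    config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
)

# The response interleaves text parts and inline image data.
for part in response.candidates[0].content.parts:
    if part.text:
        print(part.text)
    elif part.inline_data:
        with open("water_cycle.png", "wb") as f:
            f.write(part.inline_data.data)
```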
Gemini 2.0 Flash now available to developers
Gemini 2.0 can also execute searches, run code and interact with third-party applications, expanding its usefulness as a single tool that completes the whole job rather than forcing users to jump between the AI chatbot and other applications.
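For developers, the tool-use side looks roughly like the sketch below: the model can decide to call a function you define and fold the result into its answer. The track_package function here is a hypothetical stand-in for a third-party application, not a built-in Gemini tool, and the example assumes the google-genai SDK's automatic function calling for Python callables; Google Search grounding and code execution are enabled through separate tool options in the same config.

```python
# Sketch: letting the model call a local function on the user's behalf.
# track_package is a hypothetical example tool, not part of the Gemini API.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

def track_package(tracking_number: str) -> str:
    """Hypothetical third-party lookup the model can invoke when needed."""
    return f"Package {tracking_number} is out for delivery."

# Passing a Python callable as a tool lets the SDK run it automatically
# when the model requests it, then return the final text answer.
response = client.models.generate_content(
    model="gemini-2.0-flash-exp",
    contents="Where is package 12345 right now?",
    config=types.GenerateContentConfig(tools=[track_package]),
)
print(response.text)
```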
“Over the last year, we have been investing in developing more agentic models, meaning they can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision,” Google CEO Sundar Pichai explained.
Apart from the direct improvements we can see and benefit from now, Gemini 2.0 also marks a step closer to agentic AI – models capable of planning, reasoning and taking actions with user guidance (and that bit’s important).
For example, Google has used Gemini 2.0 to power its experimental code agent, Jules, which integrates directly into GitHub workflows despite that platform’s Microsoft ownership (and therefore its OpenAI, GPT affiliation).
Google DeepMind CEO Demis Hassabis and CTO Koray Kavukcuoglu explained that Jules can “tackle an issue, develop a plan and execute it.”
They added that the company is building numerous safeguards to protect users from potential harm, including evaluating and training the model further and introducing mitigations against users unintentionally sharing sensitive information.
Google’s universal AI assistant, Project Astra, is also able to remember more of its previous conversations and work with more tools, like Google Search, Lens and Maps. Latency has improved too, allowing Project Astra to understand human language “at about the latency of human conversation.”
Pichai added: “If Gemini 1.0 was about organizing and understanding information, Gemini 2.0 is about making it much more useful.”