Google launches Gemini 2.0 AI with advanced features
Google has announced the release of Gemini 2.0, described as its most capable AI model to date.
The release includes several features aimed at helping developers build more interactive applications. Gemini 2.0 Flash Experimental is now available to developers via Google AI Studio and Vertex AI, and a chat-optimised version is available to desktop and mobile web users.
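As a rough illustration of how a developer might call the experimental model, the sketch below builds a request in the style of Google's public Generative Language REST API. The model identifier `gemini-2.0-flash-exp` and the endpoint path are assumptions based on that API's conventions, not details confirmed in this article.

```python
import json

# Endpoint base follows the public Generative Language API convention
# (an assumption for this sketch, not confirmed by the announcement).
API_BASE = "https://generativelanguage.googleapis.com/v1beta"

def build_generate_request(model: str, prompt: str) -> tuple[str, str]:
    """Return the endpoint URL and JSON body for a generateContent call."""
    url = f"{API_BASE}/models/{model}:generateContent"
    body = json.dumps({"contents": [{"parts": [{"text": prompt}]}]})
    return url, body

# Hypothetical usage: the request would be sent as an HTTP POST,
# authenticated with an API key obtained from Google AI Studio.
url, body = build_generate_request(
    "gemini-2.0-flash-exp",  # assumed experimental-model identifier
    "Summarise Gemini 2.0 in one line.",
)
```

In practice developers would more likely use Google's official SDKs, which wrap this request construction; the point here is only the shape of the call.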
A new feature called "Deep Research" has been launched alongside Gemini 2.0. It uses advanced reasoning to let the AI act as a research assistant, and is currently available to Gemini Advanced subscribers.
For developers building more interactive applications, Google has introduced a Multimodal Live API. It supports real-time audio and video streaming and can combine multiple tools, such as Google Search and Maps, to handle complex use cases.
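A bidirectional streaming API of this kind is typically driven over a WebSocket, opened with a session-setup message. The sketch below shows what such a setup message might look like; the field names (`setup`, `response_modalities`) and the `models/` prefix are assumptions modelled on Google's streaming API conventions, not a verified schema from this article.

```python
import json

def build_live_setup(model: str, modalities: list[str]) -> str:
    """Hypothetical first message sent over the WebSocket to configure
    a live session (field names are assumptions for this sketch)."""
    return json.dumps({
        "setup": {
            "model": f"models/{model}",
            "generation_config": {"response_modalities": modalities},
        }
    })

# After this message, audio/video chunks and tool responses would be
# streamed in both directions over the same connection.
setup_msg = build_live_setup("gemini-2.0-flash-exp", ["AUDIO"])
```

The design point is that configuration happens once up front, so subsequent frames can carry only media and tool data with minimal per-message overhead.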
Gemini 2.0's reasoning capabilities will also enhance AI Overviews, enabling them to tackle more complex topics and queries, including advanced mathematical equations, multimodal questions, and coding. Testing began this week, with a broad rollout expected next year.
The application of Gemini 2.0 is being explored across Google's research prototypes, including Project Astra, which delves into future capabilities of a universal AI assistant; Project Mariner, which looks into future human-agent interactions starting from browsers; and Jules, an AI-powered coding agent for developers. In addition, there are explorations into the application of these agents within video games and robotics.
Google is taking an "exploratory and gradual" approach to the development of these technologies as it extends Gemini 2.0's applications across different domains.
Trillium, Google's sixth-generation TPU, powers Gemini 2.0 and these advancements, and is available to Google Cloud customers starting today.