Google has announced the release of the Gemini 2.0 Pro and 2.0 Flash Thinking Experimental models in the Gemini app as well as on the web. Last week, Google announced the availability of the stable Gemini 2.0 Flash for all Gemini users on mobile and the web. Here's everything to know about the new announcements.
Gemini 2.0 Pro Experimental: Everything to Know
Google’s Gemini 2.0 Pro Experimental is being touted as the brand’s “best model yet for coding performance and complex prompts.” Google says it delivers the strongest coding performance and handles complex prompts better than any model it has released so far, with improved understanding of and reasoning over world knowledge.
It comes with the largest context window Google has ever offered in any of its AI models, at 2 million tokens, which enables it to comprehensively analyze and understand vast amounts of information in a single prompt. It can also call tools such as Google Search and code execution. At launch it supports multimodal input with text output, with more modalities planned for general availability in the coming months.
Gemini 2.0 Pro is available now as an experimental model to developers in Google AI Studio and Vertex AI and to Gemini Advanced users in the model drop-down on desktop and mobile.
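For developers, the Gemini API exposes models through a `generateContent` endpoint. The sketch below builds (but does not send) such a request using only the Python standard library; the model identifier `gemini-2.0-pro-exp` is an assumption here, so check Google AI Studio for the exact experimental model string before use.

```python
import json
import os
import urllib.request

API_ROOT = "https://generativelanguage.googleapis.com/v1beta/models"

def build_request(prompt: str, model: str = "gemini-2.0-pro-exp") -> urllib.request.Request:
    """Build (but do not send) a generateContent request for the Gemini API.

    The API key is read from the GEMINI_API_KEY environment variable.
    """
    url = f"{API_ROOT}/{model}:generateContent?key={os.environ.get('GEMINI_API_KEY', '')}"
    body = json.dumps({"contents": [{"parts": [{"text": prompt}]}]}).encode()
    return urllib.request.Request(url, data=body,
                                  headers={"Content-Type": "application/json"})

req = build_request("Summarize the CAP theorem in one paragraph.")
print(req.full_url.split("?")[0])  # endpoint, with the key query string stripped
```

Sending the request with `urllib.request.urlopen(req)` returns a JSON response whose generated text sits under the `candidates` field; Google’s official SDKs wrap this same endpoint.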
Gemini 2.0 Flash Thinking Experimental: All Details
This model, which Google says is currently ranked as the “world’s best model,” is available at no cost. “Built on the speed and performance of 2.0 Flash, this model is trained to break down prompts into a series of steps to strengthen its reasoning capabilities and deliver better responses. 2.0 Flash Thinking Experimental shows its thought process so you can see why it responded in a certain way, what its assumptions were, and trace the model’s line of reasoning,” said Google.
Google is also rolling out a version of 2.0 Flash Thinking that can interact with apps like YouTube, Search and Google Maps. These connected apps already make the Gemini app a uniquely helpful AI-powered assistant, and the company is exploring how new reasoning capabilities can combine with your apps to help you do even more. It further supports multimodal input with text output on release, with more modalities ready for general availability in the coming months.
This model is also available in the Gemini web and mobile apps. Google says it plans to bring both the 2.0 Pro and 2.0 Flash Thinking models to Google Workspace Business and Enterprise customers soon.
Gemini 2.0 Flash-Lite: Google’s most cost-efficient model yet
Building on the success of 1.5 Flash, Google says it wanted to keep improving quality while maintaining the same cost and speed. As a result, it is introducing 2.0 Flash-Lite, a new model that offers better quality than 1.5 Flash at the same speed and cost. According to the company, it outperforms 1.5 Flash on the majority of benchmarks.
Like 2.0 Flash, it has a 1 million token context window and multimodal input. For example, it can generate a relevant one-line caption for around 40,000 unique photos for less than a dollar in Google AI Studio’s paid tier. Like the other two models announced, it supports multimodal input with text output.
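A back-of-the-envelope calculation shows how a claim like “40,000 captions for under a dollar” can be sanity-checked. Every figure below (per-token prices, tokens billed per image, caption length) is an illustrative assumption, not official pricing; consult the Google AI Studio pricing page for current rates.

```python
# All constants are assumptions for illustration, not official pricing.
PRICE_PER_M_INPUT = 0.075   # assumed $ per 1M input tokens
PRICE_PER_M_OUTPUT = 0.30   # assumed $ per 1M output tokens
TOKENS_PER_IMAGE = 258      # assumed tokens billed per image input
TOKENS_PER_CAPTION = 10     # assumed tokens in a one-line caption

photos = 40_000
input_cost = photos * TOKENS_PER_IMAGE / 1_000_000 * PRICE_PER_M_INPUT
output_cost = photos * TOKENS_PER_CAPTION / 1_000_000 * PRICE_PER_M_OUTPUT
total = input_cost + output_cost
print(f"Estimated cost for {photos:,} captions: ${total:.2f}")
```

Under these assumptions the estimate lands just under a dollar, dominated by the image input tokens rather than the short caption outputs.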
Gemini 2.0 Flash-Lite is available in Google AI Studio and Vertex AI in public preview.