Google has begun the rollout of Gemini 2.5 Pro, the first model in the Gemini 2.5 lineup. The initial 2.5 release is an experimental version of 2.5 Pro, which the company claims is “state-of-the-art” across a wide range of benchmarks and debuts in the number one position on LMArena by a significant margin.
Google says Gemini 2.5 models are thinking models, meaning they reason through a problem before responding, which results in enhanced performance and improved accuracy. According to Google, 2.5 Pro also shows strong reasoning and coding capabilities, leading on common coding, math and science benchmarks.
Gemini 2.5 Pro is available now in Google AI Studio and in the Gemini app for Gemini Advanced users, and will be coming to Vertex AI soon. Google will also introduce pricing in the coming weeks, enabling people to use 2.5 Pro with higher rate limits for scaled production use.
Gemini 2.5 Pro excels in advanced reasoning across multiple benchmarks. Without relying on costly test-time techniques like majority voting, it leads in math and science evaluations, including GPQA and AIME 2025. It also achieves a state-of-the-art 18.8% on Humanity’s Last Exam—an expert-designed dataset that tests the limits of human knowledge and reasoning—without using external tools.
Google says that 2.5 Pro excels at creating visually compelling web apps and agentic code applications, along with code transformation and editing. On SWE-Bench Verified, the industry standard for agentic code evals, Gemini 2.5 Pro scores 63.8% with a custom agent setup.
2.5 Pro ships today with a 1 million token context window (2 million coming soon), and its long-context performance improves on previous generations. It can comprehend vast datasets and handle complex problems drawing on different information sources, including text, audio, images, video and even entire code repositories.
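For a rough sense of how to work within that window, here is a minimal sketch using the google-generativeai Python SDK to count the tokens in a large prompt before sending it. The model identifier and file name are assumptions for illustration; check Google AI Studio for the current experimental model ID.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key from Google AI Studio

# Assumed experimental model identifier; may differ from the current listing.
model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")

# Hypothetical dump of a large code repository as plain text.
with open("repo_dump.txt") as f:
    repo_text = f.read()

# Check the prompt size against the roughly 1M-token context window.
token_info = model.count_tokens(repo_text)
print(f"Prompt size: {token_info.total_tokens} tokens")
```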
Developers and enterprises can start experimenting with Gemini 2.5 Pro in Google AI Studio now, and Gemini Advanced users can select it in the model dropdown on desktop and mobile. It will be available on Vertex AI in the coming weeks.
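For developers who would rather call the model from code than through the Studio UI, a minimal sketch with the same SDK might look like the following; again, the model identifier is an assumption and should be verified against the model list in Google AI Studio.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Assumed experimental model identifier for Gemini 2.5 Pro.
model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")

# Simple text generation request; the thinking happens server-side.
response = model.generate_content(
    "Explain the tradeoffs between merge sort and quicksort."
)
print(response.text)
```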