Google unveils Gemini 2.5 Flash Image with advanced AI-powered editing and generation
Google has launched Gemini 2.5 Flash Image, its most advanced AI image model, offering character consistency, natural language-based edits, and multi-image fusion. Available via the Gemini API and Google AI Studio, the model includes invisible watermarks and developer-focused tools.
Published: 28 August 2025, 04:50 PM
Hyderabad: Google has introduced Gemini 2.5 Flash Image (internally codenamed “nano-banana”), its most advanced image generation and editing model to date. The update brings powerful new features including multi-image fusion, character consistency, and natural language-based photo editing.
Building on Gemini 2.0 Flash, which launched earlier this year, the new model responds to developer requests for higher-quality images and more creative control. Gemini 2.5 Flash Image is now available in preview via the Gemini API, Google AI Studio, and Vertex AI for enterprise. Pricing is set at $30 per million output tokens; each image averages about 1,290 tokens, which works out to roughly $0.039 per image.
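The quoted per-image price follows directly from the token pricing. A quick check, using only the figures stated above:

```python
# Estimate the per-image cost from the published token pricing:
# $30 per 1M output tokens, ~1,290 output tokens per generated image.
PRICE_PER_MILLION_TOKENS = 30.00
TOKENS_PER_IMAGE = 1290

cost_per_image = TOKENS_PER_IMAGE * PRICE_PER_MILLION_TOKENS / 1_000_000
print(f"${cost_per_image:.3f}")  # prints "$0.039"
```

At that rate, a batch of 1,000 generated images would cost about $38.70.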
Key capabilities
Character consistency: Preserve the same character or product across multiple prompts and edits, enabling use cases like branded assets, real estate cards, or product catalogs.
Prompt-based editing: Apply precise edits using natural language, such as blurring a background, changing colors, or removing objects.
Native world knowledge: Use semantic understanding to interpret diagrams, assist with educational prompts, or answer real-world questions in visual contexts.
Multi-image fusion: Merge multiple input images into a single scene, restyle rooms, or create composite visuals.
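As a rough illustration of how a prompt-based edit might be expressed against the Gemini API's REST `generateContent` endpoint, the sketch below builds a request payload that pairs an input image with a natural-language instruction. The model identifier and the `build_edit_request` helper are assumptions for illustration, not confirmed by the article; consult Google AI Studio for the current model name.

```python
import json

# Assumed preview model id; verify the current name in Google AI Studio.
MODEL = "gemini-2.5-flash-image-preview"
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent"
)

def build_edit_request(prompt: str, image_b64: str,
                       mime_type: str = "image/png") -> dict:
    """Build a generateContent payload that pairs a base64-encoded input
    image with a natural-language edit instruction (hypothetical helper)."""
    return {
        "contents": [{
            "parts": [
                {"inline_data": {"mime_type": mime_type, "data": image_b64}},
                {"text": prompt},
            ]
        }]
    }

# Example: a background-blur edit expressed purely in natural language.
payload = build_edit_request(
    "Blur the background and keep the subject in sharp focus",
    "<base64-encoded-image>",
)
print(json.dumps(payload, indent=2))
```

The same payload shape extends to multi-image fusion by appending additional `inline_data` parts before the text instruction.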
Google AI Studio has also been updated with new “build mode” tools, allowing developers to quickly prototype apps, remix templates, and deploy projects. OpenRouter.ai and fal.ai are partnering with Google to expand access to over three million developers.
To ensure transparency, all AI-generated or edited images include an invisible SynthID watermark. Google said the model will be stabilized in the coming weeks and highlighted ongoing work to improve long-form text rendering, factual accuracy, and character reliability.