Integrate Nano Banana's state-of-the-art AI image editing into your apps. This guide covers authentication, request formats, code examples (Node.js & Python), rate limits, error handling, and production best practices using Google AI Studio and Vertex AI.
Overview & When to Use
API integration unlocks automated, repeatable, and scalable Nano Banana workflows for web apps, creative tools, and production pipelines. Use it when you need consistent edits at scale, user-facing features, or server-side control.
Choose Your Integration Path
Google AI Studio
- Quick setup, UI playground, generous free tier
- API keys via AI Studio; simple client SDKs
- Ideal for prompt development and testing
Vertex AI
- IAM, VPC-SC, CMEK, audit logs
- Quotas, monitoring, regional control
- Suitable for regulated environments
Authentication & Setup
Google AI Studio
- Create an API key in AI Studio
- Store it in an environment variable (never commit keys)
- Use the client SDKs to call the Gemini native image model
Vertex AI
- Create a service account with minimal roles
- Authenticate via Application Default Credentials (ADC) or a service-account JSON key (server-side only)
- Call the Vertex AI Images API with regional endpoints
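Whether you call Vertex AI directly or send OAuth-authenticated REST requests (as in the cURL example below), you need a short-lived access token. A minimal sketch resolving one through Application Default Credentials with the google-auth-library package (the Cloud Platform scope shown is the standard broad scope; narrow it where possible):

import { GoogleAuth } from 'google-auth-library'

// Resolve credentials via ADC (gcloud login, an attached service account,
// or GOOGLE_APPLICATION_CREDENTIALS) and mint a short-lived access token.
export async function getAccessToken(): Promise<string> {
  const auth = new GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/cloud-platform'],
  })
  const client = await auth.getClient()
  const { token } = await client.getAccessToken()
  if (!token) throw new Error('Failed to obtain access token')
  return token
}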
Security Tip
Use per-environment keys, rotate regularly, and proxy calls from your server to avoid exposing credentials in the browser.
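One way to keep credentials out of the browser is a thin server-side proxy. A minimal sketch using Express; the route name, size limit, and the ./edit-image module (wrapping the quickstart helper below) are illustrative assumptions, not a prescribed layout:

import express from 'express'
import { editImage } from './edit-image' // hypothetical module wrapping the quickstart helper

const app = express()
app.use(express.json({ limit: '10mb' })) // cap request size before it reaches the model

// The browser sends prompt + base64 image; the API key never leaves the server.
app.post('/api/edit-image', async (req, res) => {
  const { prompt, imageBase64 } = req.body
  if (typeof prompt !== 'string' || typeof imageBase64 !== 'string') {
    return res.status(400).json({ error: 'prompt and imageBase64 are required' })
  }
  try {
    const response = await editImage(imageBase64, prompt)
    res.json({ ok: true, response })
  } catch (err) {
    res.status(502).json({ error: 'Upstream image edit failed' })
  }
})

app.listen(3000)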
Request Format & Media Inputs
Send your instruction and media as parts of a single request, and keep prompts explicit about what to change and what to preserve. In the REST API the model (e.g. gemini-2.5-flash-image) is named in the request URL; the body carries the text and inline image parts:
{
  "contents": [{
    "parts": [
      { "text": "Change the background to a sunset beach while keeping the person identical" },
      { "inline_data": { "mime_type": "image/jpeg", "data": "<base64 image>" } }
    ]
  }]
}
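The inline_data payload is just a base64 string. A quick way to produce it in Node (the file path is illustrative):

import { readFileSync } from 'node:fs'

// Read a local JPEG and encode it for the inline_data / inlineData field.
const imageBase64 = readFileSync('./photo.jpg').toString('base64')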
Quickstart Code (Node & Python)
Node.js (Google AI Studio)
import { GoogleGenerativeAI } from '@google/generative-ai'

// The client reads the API key from the environment; never hard-code it.
const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY!)
const model = genAI.getGenerativeModel({ model: 'gemini-2.5-flash-image' })

// Send the edit instruction plus the source image as inline data.
export async function editImage(imageBase64: string, prompt: string) {
  const result = await model.generateContent([
    prompt,
    { inlineData: { data: imageBase64, mimeType: 'image/jpeg' } },
  ])
  return result.response
}
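The edited image comes back as an inline-data part on the first candidate. A sketch of pulling it out of the response returned above, following the @google/generative-ai response types:

import { writeFileSync } from 'node:fs'
import type { EnhancedGenerateContentResponse } from '@google/generative-ai'

// Find the first image part in the response and write it to disk.
export function saveEditedImage(response: EnhancedGenerateContentResponse, path: string) {
  const parts = response.candidates?.[0]?.content.parts ?? []
  const imagePart = parts.find((p) => p.inlineData)
  if (!imagePart?.inlineData) throw new Error('No image in response')
  writeFileSync(path, Buffer.from(imagePart.inlineData.data, 'base64'))
}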
Python (Vertex AI)
import vertexai
from vertexai.preview.vision_models import Image, ImageGenerationModel

def edit_image(image_bytes: bytes, prompt: str):
    # Initialize the Vertex AI SDK for your project and region.
    vertexai.init(project='YOUR_PROJECT', location='us-central1')
    model = ImageGenerationModel.from_pretrained('imagegeneration@002')
    # edit_image expects a vision_models.Image, not raw bytes.
    result = model.edit_image(
        prompt=prompt,
        base_image=Image(image_bytes=image_bytes),
    )
    return result

REST (cURL)
curl -X POST \
-H "Authorization: Bearer $ACCESS_TOKEN" \
-H "Content-Type: application/json" \
https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-image:generateContent \
-d '{
"contents": [{
"parts": [
{"text": "Change background to a sunset beach; keep the person identical"},
{"inline_data": {"mime_type": "image/jpeg", "data": "<base64>"}}
]
}]
}'
Errors, Retries & Timeouts
401 Unauthorized: invalid or missing credentials
429 Too Many Requests: back off and retry with jitter
413 Payload Too Large: compress or stream uploads
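For 429s (and transient 5xx errors), exponential backoff with jitter keeps retries from stampeding the API. A generic sketch; the attempt count and base delay are illustrative, not API-mandated values:

// Retry a request with exponential backoff plus random jitter.
// In production, inspect the error and only retry 429/5xx responses.
export async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn()
    } catch (err) {
      if (attempt >= maxAttempts) throw err
      // Base delay doubles each attempt; jitter spreads concurrent retries apart.
      const base = 500 * 2 ** (attempt - 1)
      const jitter = Math.random() * base
      await new Promise((resolve) => setTimeout(resolve, base + jitter))
    }
  }
}

Usage: const response = await withRetry(() => editImage(imageBase64, prompt))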
Security & Production Checklist
- Never expose API keys in the browser
- Rotate credentials and scope least privilege
- Validate file types and enforce size limits (see the sketch after this list)
- Log request IDs, not raw media
- Use regional endpoints close to users
- Cache non-personal results where allowed
- Monitor quotas and error rates
- Add fallbacks for partial outages
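A minimal upload-validation sketch using magic-byte checks for JPEG and PNG; the 8 MB cap is an illustrative limit, not an API constant:

// Reject uploads that are too large or not actually JPEG/PNG.
const MAX_BYTES = 8 * 1024 * 1024 // illustrative cap, not an API limit

export function validateUpload(buf: Buffer): string {
  if (buf.length > MAX_BYTES) throw new Error('File too large')
  // JPEG files start with FF D8 FF; PNG files with 89 50 4E 47.
  if (buf[0] === 0xff && buf[1] === 0xd8 && buf[2] === 0xff) return 'image/jpeg'
  if (buf[0] === 0x89 && buf[1] === 0x50 && buf[2] === 0x4e && buf[3] === 0x47) return 'image/png'
  throw new Error('Unsupported file type')
}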
Ready for Production
Confirm SLOs, alerts, and data handling policies before launch.