Your Google Maps API Key Can Now Drain Your Bank Account
Google silently changed API key permissions so that keys meant for Maps can now call Gemini AI. Here's how to audit your GCP projects and lock down exposed keys before someone else finds them.
Laurent Goudet · March 4, 2026 · 7 min read
In February 2026, a developer posted on Reddit that their Google Cloud API key had been compromised, racking up $82,314 in Gemini AI charges in 48 hours against a normal monthly spend of $180. They were facing bankruptcy.
The post went viral. But the real story isn’t about one stolen key — it’s about a silent change Google made to how API keys work, one that turned millions of previously harmless keys into potential financial liabilities overnight.
🚨 Google told devs: API keys aren’t secrets. Gemini changed that.
😱 We found ~3,000 public keys silently authenticating to Gemini — exposing private files, cached data & charging for LLM usage.
💥 Even Google’s own keys were vulnerable.
— Truffle Security (@trufflesec) February 25, 2026
What Changed
For over a decade, Google told developers that API keys are not secrets. Their documentation explicitly said so. The reasoning was sound: API keys for services like Google Maps were designed to be embedded in client-side JavaScript, shipped in mobile apps, and checked into public repositories. They identified your project for quota and billing purposes, but they couldn’t access sensitive data or perform privileged operations.
Then Google launched Gemini and changed the rules.
Google Cloud uses a single API key format — strings starting with AIza — for all services. The same key that authenticates a Maps JavaScript request can authenticate a Gemini API call. The only thing that controls which APIs a key can access is its API target restrictions — and many keys, created back when restrictions weren’t necessary, have none.
The Gemini API (generativelanguage.googleapis.com) can be enabled on any GCP project. Once enabled, every unrestricted API key in that project can call it. If your Maps key is unrestricted and Gemini is enabled on the same project, anyone with that key can run Gemini queries and bill them to your account.
Truffle Security scanned the internet and found nearly 3,000 Google API keys exposed in client-side code that now have Gemini access. Separately, Quokka found over 35,000 unique Google API keys embedded in Android apps. These keys were safe to expose for years — they’re not safe anymore.
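Scanners like these look for the distinctive key shape in public code. A minimal sketch of that matching, assuming the common 39-character AIza format (real scanners also validate candidates against live APIs, which this sketch does not):

```shell
# Spot AIza-style Google API key candidates in files.
# The pattern matches the common key shape: "AIza" plus 35 URL-safe characters.
# Treat matches as leads to investigate, not confirmed live keys.
find_google_keys() {
  grep -hoE 'AIza[0-9A-Za-z_-]{35}' "$@" | sort -u
}
```

Running this over a built JavaScript bundle or a decompiled APK’s resources is usually enough to surface embedded keys.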
Why This Is Dangerous
The Gemini API is expensive at scale. A compromised key can generate thousands of dollars in charges per hour. Unlike Maps API abuse, which is rate-limited and relatively cheap per request, LLM inference is compute-intensive and priced accordingly.
The $82,000 Reddit incident isn’t an outlier — it’s the predictable consequence of making a billable AI service accessible through keys that were designed to be public. The attacker doesn’t need to steal credentials from a server. They just need to find one of the thousands of API keys already sitting in public GitHub repos, client-side JavaScript bundles, or decompiled mobile apps.
Google has since responded by defaulting new AI Studio keys to Gemini-only access and blocking leaked keys discovered being used with the Gemini API. But the fundamental issue remains: existing keys created before this change have no restrictions, and Google can’t retroactively restrict them without breaking legitimate integrations.
How to Check If You’re Exposed
The risk depends on two factors: whether the Gemini API is enabled on your GCP project, and whether your API keys have API target restrictions. You can check both with the gcloud CLI.
Manual Check for a Single Project
List all API keys and their restrictions:
gcloud services api-keys list --project=YOUR_PROJECT_ID

Get details for a specific key — look for restrictions.apiTargets:

gcloud services api-keys describe KEY_ID --project=YOUR_PROJECT_ID

Check if Gemini is enabled:

gcloud services list --enabled --project=YOUR_PROJECT_ID \
| grep -i generativelanguage

If the key has no apiTargets and Gemini is enabled, that key can call Gemini.
Automated Audit Across All Projects
If you manage more than a handful of GCP projects, checking them one by one isn’t practical. I wrote a script that scans every project in your account, checks both API target restrictions and application restrictions (IP, referrer, Android/iOS), and classifies each unrestricted key by risk level:
CRITICAL — Key can call Gemini AND is publicly accessible (no restrictions, or client-side restrictions like referrer/Android/iOS which are easily bypassed)
HIGH — Key can call Gemini but is IP-restricted (server key — lower risk since only authorized IPs can use it)
MEDIUM — Key has no API restrictions but Gemini isn’t enabled on the project yet (one click away from risk)
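The classification boils down to three questions per key. A minimal sketch of that decision logic (the function and its yes/no inputs are my own framing — the gist derives the equivalent facts from gcloud output):

```shell
# Classify one key's risk, per the levels above.
#   $1: does the key have apiTargets restrictions?  (yes/no)
#   $2: is the Gemini API enabled on the project?   (yes/no)
#   $3: application restriction type (none|ip|referrer|android|ios)
classify_key() {
  local has_api_targets="$1" gemini_enabled="$2" app_restriction="$3"
  if [ "$has_api_targets" = yes ]; then
    echo "OK"        # scoped to specific APIs; Gemini not reachable
  elif [ "$gemini_enabled" = no ]; then
    echo "MEDIUM"    # unrestricted, but Gemini not enabled yet
  elif [ "$app_restriction" = ip ]; then
    echo "HIGH"      # Gemini-capable, but only authorized IPs can use it
  else
    echo "CRITICAL"  # Gemini-capable; referrer/Android/iOS checks are bypassable
  fi
}
```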
The script opens with a clear verdict — YOU ARE AT RISK, NOT AT RISK TODAY, or YOU ARE NOT AT RISK — so you know immediately whether you need to act. It checks projects in parallel (20 at a time by default) and includes the gcloud command to fix each flagged key.
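The parallel fan-out itself is a generic pattern. A sketch with xargs, where check_project is a stand-in for the real per-project gcloud calls (not the gist’s actual code):

```shell
# Run a per-project check across many projects, N at a time.
check_project() {
  # stand-in for the real checks: enabled services + key restrictions
  echo "checked $1"
}
export -f check_project

MAX_PARALLEL="${MAX_PARALLEL:-20}"
printf '%s\n' proj-a proj-b proj-c |
  xargs -P "$MAX_PARALLEL" -I{} bash -c 'check_project "$1"' _ {}
```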
Grab it from the gist and run it:
curl -sL https://gist.githubusercontent.com/laurentgoudet/8cd3956be4c594bd9f88c9a16dc2f46b/raw/check_gemini_exposure.sh -o check_gemini_exposure.sh
chmod +x check_gemini_exposure.sh
# Scan all projects
./check_gemini_exposure.sh
# Scan only specific projects
./check_gemini_exposure.sh "freelancer|escrow"
# Increase parallelism
MAX_PARALLEL=40 ./check_gemini_exposure.sh

You need the gcloud CLI authenticated and python3 available. The script automatically detects whether stdout is a terminal and disables color codes when piped to a file.
The full script is available as a GitHub gist.
The script makes two API calls per project (one to check enabled services, one to list API keys). Auto-generated sys-* projects (created by Google Apps Script) are filtered out automatically.
For organizations with hundreds of remaining projects, the default parallelism of 20 keeps you well within Google’s rate limit of 240 API key reads per minute. You can increase it with MAX_PARALLEL=40 if needed, or use the filter argument to narrow the scan to specific projects.
How to Fix It
Once you’ve identified exposed keys, there are two complementary fixes.
Restrict API Keys to Specific Services
Every API key should be restricted to only the APIs it actually needs. A Maps key should only be able to call Maps APIs:
gcloud services api-keys update KEY_ID \
--project=YOUR_PROJECT_ID \
--api-target=service=maps-backend.googleapis.com \
--api-target=service=geocoding-backend.googleapis.com \
--api-target=service=places-backend.googleapis.com

This is the correct long-term fix. Even if Gemini isn’t enabled today, restricting keys protects you against any future API that Google adds to the platform.
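After updating, it is worth confirming the restriction actually took. A sketch of that check, with a sample describe output inlined where you would normally pipe in gcloud services api-keys describe KEY_ID --format=json (the helper name and crude string match are mine; jq is more robust if available):

```shell
# Check that a key's JSON description lists a given service in apiTargets.
has_target() {
  local key_json="$1" service="$2"
  printf '%s' "$key_json" | grep -q "\"service\": *\"$service\""
}

sample='{"restrictions":{"apiTargets":[{"service":"maps-backend.googleapis.com"}]}}'
if has_target "$sample" "maps-backend.googleapis.com" &&
   ! has_target "$sample" "generativelanguage.googleapis.com"; then
  echo "key is Maps-only"
fi
```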
Disable the Gemini API
If no one in your organization is using Gemini through a particular project, disable it:
gcloud services disable generativelanguage.googleapis.com \
--project=YOUR_PROJECT_ID

This removes the immediate risk but doesn’t address the underlying problem of unrestricted keys. Do both.
Set Billing Alerts and Budget Caps
Billing alerts notify you when spending exceeds a threshold, but they don’t stop charges — and a budget by itself only notifies too. To actually limit the damage, lower the per-service quotas (the Gemini API’s request quotas can be capped in the console) or wire budget notifications to automation that disables billing on the project. For any project with API keys exposed to the internet — which includes every project with a Maps key in client-side code — configure both the alerts and a hard stop.
The Bigger Problem
Google’s response — defaulting new keys to Gemini-only access and blocking discovered leaked keys — addresses the symptoms. The root cause is a design decision made years ago: using a single credential format for both “public identifier” and “sensitive authentication token” use cases.
When Google Maps API keys were introduced, treating them as non-secret was reasonable. They identified a project for billing and could be scoped by HTTP referrer or IP address. The security model assumed these keys would only ever access low-sensitivity, rate-limited, cheap-per-request APIs.
That assumption broke the moment the same key format gained access to Gemini — a service where a single compromised key can generate tens of thousands of dollars in charges in hours. Google essentially changed the threat model for every existing API key without notifying the key owners.
If you manage GCP projects, run the audit. The five minutes it takes to restrict your keys is considerably cheaper than an $82,000 surprise.
Frequently Asked Questions
Can my Google Maps API key really be used to call Gemini?
Yes, if the key has no API target restrictions and the Gemini API (generativelanguage.googleapis.com) is enabled on the same GCP project. Google uses a single key format (AIza...) for both public-facing APIs like Maps and sensitive APIs like Gemini. An unrestricted key can call any enabled API.
How do I know if my API key is at risk?
Check two things: whether the Gemini API is enabled on your GCP project (gcloud services list --enabled --filter=generativelanguage), and whether your API keys have API target restrictions (gcloud services api-keys describe KEY_ID). If Gemini is enabled and a key has no restrictions, it's exposed.
Did Google auto-enable Gemini on existing projects?
Google hasn't confirmed blanket auto-enablement, but the Gemini API can be enabled through AI Studio, which creates gen-lang-client projects automatically. The core issue is that keys created years ago with no restrictions — which Google's own documentation said was fine — now have access to a billable AI service they were never intended for.
What should I do right now to protect my GCP projects?
Restrict every API key to only the specific APIs it needs (e.g., Maps JavaScript API only). Disable the Gemini API on projects that don't need it. Set up billing alerts and budget caps. The audit script in this article can scan all your projects automatically.
Is disabling the Gemini API enough, or do I also need to restrict keys?
Both. Disabling Gemini removes the immediate risk, but unrestricted keys are a ticking time bomb — any new API enabled on the project becomes accessible through them. Restricting keys to specific APIs is the correct long-term fix.