
Vercel Got Popped Through an AI Tool Nobody Was Tracking as a Vendor

5 min read

First time I saw a Google Workspace OAuth app inventory in a real company, there were 430 approved applications. The security lead told me maybe 40 of those were genuinely reviewed. The rest were approved in a batch, years ago, by a CISO who had since left. That org is not special. That org is normal.

Vercel disclosed on Sunday, April 19, that a threat actor got into their internal systems through exactly this gap. The initial access was not a phishing page, not a stolen laptop, not a password spray. It was an AI agent platform, Context.ai, that one Vercel employee had connected as a Google OAuth application to their corporate Workspace account. Context.ai got compromised. The attacker used the stolen OAuth tokens to take over the employee's Google Workspace account. From Workspace, they walked into Vercel's environments and read a set of environment variables that had not been marked "sensitive" in Vercel's own UI.

Vercel did three things right. The sensitive-flagged values were stored encrypted and, per their disclosure, were not read. CEO Guillermo Rauch confirmed Next.js, Turbopack, and their open-source projects were not affected. And within a day they shipped a new environment-variable overview page and a better UI for flagging values as sensitive. Good response, all things considered.

The Gap That Keeps Not Getting Closed

Here is what is not getting enough airtime. The breach did not start at Vercel. It started at an AI vendor that most Vercel security reviewers almost certainly did not know was installed.

Google Workspace OAuth apps are a shadow-IT paradise. An employee clicks "Sign in with Google" on an AI productivity tool. Google shows them a consent screen. They click Allow. The tool now holds a long-lived OAuth refresh token with scopes like Calendar, Gmail, Drive, sometimes more. There is a Workspace admin setting to require approval for third-party apps. Most orgs have it enabled for "high risk" scopes and not for everything else. Most orgs do not audit the approved list. Most orgs do not have the vendor that the OAuth app points to in their TPRM program. Most orgs could not tell you, in under a week, which AI tools are currently sitting on their Workspace domain.
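A quick way to see how broad those grants are is to bucket each app's scopes by risk. A minimal sketch; the tier assignments below are my own triage heuristic, not a Google classification:

```python
# Rough risk-tiering of Google OAuth scope URLs. The marker lists are a
# triage heuristic for illustration, not an official Google classification.
HIGH_RISK_MARKERS = ("gmail", "drive", "admin")    # mail, files, admin APIs
MEDIUM_RISK_MARKERS = ("calendar", "contacts")     # schedule and people data

def scope_risk(scope: str) -> str:
    """Classify a single OAuth scope URL into a rough risk tier."""
    s = scope.lower()
    if any(m in s for m in HIGH_RISK_MARKERS):
        return "high"
    if any(m in s for m in MEDIUM_RISK_MARKERS):
        return "medium"
    return "low"

def app_risk(scopes: list[str]) -> str:
    """An app inherits the risk of its most sensitive granted scope."""
    tiers = {scope_risk(s) for s in scopes}
    for tier in ("high", "medium", "low"):
        if tier in tiers:
            return tier
    return "low"
```

A meeting assistant holding both calendar and Gmail scopes lands in the "high" bucket, which is exactly where the blast-radius conversation should start.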

I have walked clients through this exercise a dozen times. The answer is always the same. A pile of AI writing tools, meeting note-takers, calendar assistants, CRMs, and "AI productivity" apps none of the security team knew had ever been installed. Two or three are from vendors so small they do not have a privacy policy page that renders. One or two used to exist and have since shut down, but their OAuth tokens are still valid.

That is the attack surface.

Why AI Tools Specifically

Because the category is booming, the tools are shipping fast, the review cycles are short, and the scopes they ask for are broad. An AI meeting assistant wants full calendar read, email drafts, Drive read. An AI writing assistant wants everything you have ever written. An AI "second brain" wants the whole thing. When one of these vendors gets popped, every customer org with the OAuth app installed inherits that blast radius. And none of these vendors got the enterprise procurement conversation that a traditional SaaS vendor would have gone through in 2015.

Context.ai was, by all appearances, a legitimate AI tool. It was not a typosquat or a malicious app. It just got compromised. That is the point. The supply chain does not have to be hostile to be dangerous. It just has to be in the blast path.

What to Do This Week

Four things. They are not hard.

One, pull your Google Workspace OAuth app inventory. In admin.google.com, under Security, then API Controls, then App Access Control. Export the list. Count how many are AI tools. Count how many you recognize. Count how many have enterprise contracts. The gap between those numbers is your exposure.
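The counting step is easy to script once you have the export. A sketch, assuming you have loaded it into a list of dicts; the field names (`name`, `recognized`, `has_contract`) and the keyword match for "AI tool" are my assumptions, and the keyword match is crude but a usable first pass:

```python
# Compute the "exposure gap" from an exported OAuth app inventory.
# Field names and the AI-keyword heuristic are assumptions for illustration;
# adapt them to whatever your admin console export actually contains.
AI_KEYWORDS = ("ai", "gpt", "assistant", "copilot", "notetaker")

def is_ai_tool(app_name: str) -> bool:
    # Substring match will throw false positives ("email" contains "ai");
    # eyeball the results before acting on them.
    name = app_name.lower()
    return any(kw in name for kw in AI_KEYWORDS)

def exposure_report(apps: list[dict]) -> dict:
    ai_apps = [a for a in apps if is_ai_tool(a["name"])]
    recognized = [a for a in ai_apps if a.get("recognized")]
    contracted = [a for a in ai_apps if a.get("has_contract")]
    return {
        "ai_total": len(ai_apps),
        "recognized": len(recognized),
        "contracted": len(contracted),
        # AI apps with no enterprise contract behind them: your exposure.
        "exposure_gap": len(ai_apps) - len(contracted),
    }
```

If the exposure gap is a double-digit fraction of the AI total, that number is your opening slide for the budget conversation.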

Two, kill dormant OAuth tokens. Anything with zero logins in the last 90 days comes off. Anything from a vendor you cannot identify comes off. Yes, you will break a handful of people's workflows. They will survive.
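The same export, or the Directory API's `tokens.list`, gives you enough to build the kill list. A sketch with a deterministic "today" and hypothetical record fields; actual revocation happens in the Admin console or via the Directory API's `tokens.delete`:

```python
from datetime import date

DORMANCY_DAYS = 90

def kill_list(tokens: list[dict], today: date) -> list[str]:
    """Return app names whose tokens should be revoked: dormant or unidentifiable.

    Each record is assumed to look like:
      {"app": str, "last_login": date | None, "vendor_known": bool}
    A last_login of None means no recorded use at all, which counts as dormant.
    """
    to_revoke = []
    for t in tokens:
        dormant = (t["last_login"] is None
                   or (today - t["last_login"]).days > DORMANCY_DAYS)
        if dormant or not t["vendor_known"]:
            to_revoke.append(t["app"])
    return to_revoke
```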

Three, put AI tools in your TPRM workflow even when they are free. Especially when they are free. The review does not have to be heavy. It just has to confirm the vendor exists, has a security page, has an incident disclosure policy, and someone at your org can reach them in an incident. If the vendor flunks any of those, the OAuth app should not be approved.
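That lightweight review fits in a few lines of code, which is part of the argument that it is not heavy. A sketch, with the four checks above as hypothetical boolean fields on a vendor record:

```python
def tprm_verdict(vendor: dict) -> tuple[bool, list[str]]:
    """Approve only if every lightweight check passes; otherwise list the failures.

    The checks mirror the four above: the vendor exists, publishes a security
    page, has an incident disclosure policy, and someone at your org can reach
    them in an incident. Missing fields count as failures.
    """
    checks = {
        "vendor_exists": vendor.get("exists", False),
        "security_page": vendor.get("security_page", False),
        "disclosure_policy": vendor.get("disclosure_policy", False),
        "reachable_contact": vendor.get("reachable_contact", False),
    }
    failures = [name for name, passed in checks.items() if not passed]
    return (len(failures) == 0, failures)
```

Defaulting missing fields to a failure is deliberate: a vendor nobody bothered to fill in a record for has, by definition, not been reviewed.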

Four, treat OAuth tokens like credentials in your detection pipeline. Session takeover via stolen OAuth token does not look like a password compromise. It comes from the vendor's IP. It uses the vendor's app user-agent. If you are not logging OAuth app activity at the Workspace tier, you cannot see this attack when it runs.
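Detection then reduces to one question: does this OAuth activity come from where the vendor normally comes from? A minimal sketch over token-audit events; the field names and the per-vendor IP allowlist are my assumptions, and in a real pipeline the events would come from the Workspace audit logs rather than a list literal:

```python
import ipaddress

# Hypothetical per-app allowlist of the vendor's published egress ranges.
# 203.0.113.0/24 is a documentation range (TEST-NET-3), used as a stand-in.
VENDOR_RANGES = {
    "context-ai": [ipaddress.ip_network("203.0.113.0/24")],
}

def suspicious_events(events: list[dict]) -> list[dict]:
    """Flag OAuth token activity originating outside the vendor's known ranges.

    Each event is assumed to look like {"app": str, "ip": str}. An app with
    no allowlist at all is flagged too: you cannot baseline what you do not
    track.
    """
    flagged = []
    for ev in events:
        ranges = VENDOR_RANGES.get(ev["app"])
        addr = ipaddress.ip_address(ev["ip"])
        if ranges is None or not any(addr in net for net in ranges):
            flagged.append(ev)
    return flagged
```

A stolen token replayed by an attacker from their own infrastructure fails this check immediately, which is precisely the signal a password-centric detection stack never sees.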

The breach that starts with "an employee installed an AI tool nobody reviewed" is now the breach that ends with "someone walked into our production environment". That is not a hypothetical. That is Vercel's Sunday.

Gigia Tsiklauri is a Security Architect and founder of Infosec.ge. Get in touch if you'd rather know about your weak spots from a friendly face.