Co-Author: David Stone
In our last blog, we observed that fraud losses are escalating rapidly, threatening organizations’ financial health, customers’ trust, and brand reputation. These activities are a major revenue stream for organized crime, driving a complex ecosystem where illicit activities intersect and fuel each other. We noted the overlap between fraudulent activity and cybersecurity, and in this blog we will delve into the intersection of fraud, cybersecurity, and genAI, which is ushering in a new era of deception. A recent report by the World Economic Forum confirms this trend, observing that “cyber-enabled fraud is CEOs’ top concern.”
We’re facing a fundamental shift in this threat landscape. AI is no longer simply a tool for innovation and productivity; it is being weaponized to make fraud more scalable, sophisticated, and utterly convincing. This is a game-changer that demands a re-evaluation of defensive strategies.
Traditional scams often relied on mass-market approaches. Recall the classic phishing emails with glaring grammatical errors, awkward phrasing, strange attachments, and generic requests for personal information. These red flags made it relatively easy for even moderately vigilant users to spot and discard the threat.
A New Era of Deception
The rapid advances in genAI capabilities, coupled with its widespread adoption, have fundamentally changed the risk landscape. Fraud campaigns are now elaborate, scalable, grammatically flawless, deeply personalized, and highly realistic, and they are draining billions of dollars from victims. For instance, GASA’s State of Scams in the USA 2025 Report indicated criminals had stolen $64.8 billion in the past year. Likewise, in 2025 the FTC released data showing that consumers reported losing more than $12.5 billion to fraud in 2024, a 25% increase over the prior year. Criminals did so largely by infiltrating the channels people use most: texts and email.
Phishing emails today convincingly mimic human tone and style using data scraped from social media and other sources. Likewise, AI-generated deepfakes (both video and audio) are used to create realistic impersonations of real people, mimicking their voice, appearance, and mannerisms with shocking accuracy. Deepfakes impersonating CEOs asking an employee to transfer funds, or a cloned voice of a loved one calling for ransom, are chilling examples that prey on human psychology, instilling a false sense of urgency and imminent harm. These scams bypass traditional skepticism, preying on emotion and eroding trust.
GenAI is exploited not just to improve the quality of individual attacks, but also to transform the scale, velocity, and sophistication of the operation, giving rise to Fraud-as-a-Service providers that equip malicious campaigns with deceptive ads, fraudulent websites, and targeted multi-channel outreach (such as email, SMS, and robocalls). We are already taking successful action against these bad actors, including through litigation such as our lawsuit against Lighthouse in November, which resulted in the operation going dark. Unfortunately, these occurrences are becoming commonplace and are coming up in industry discussions, such as at this fall’s FS-ISAC conference. This is an area to watch, and the right time to consider how defensive AI capabilities can be leveraged to thwart these attacks.
The AI Counter-Offensive
To effectively combat the wave of AI-enhanced fraud and deepfakes, defenders will need to adopt a multi-layered, adaptive strategy that leverages the very technology being weaponized by bad actors. The focus has shifted from simply blocking known signals to detecting and acting against malicious intent and anomalous behavior in real time.
- Detecting Spam and Scams
Google has embedded scam-fighting technology across its suite of products. With the sophistication of online scams on the rise, our safeguards keep the overwhelming majority of scams out of Search, blocking billions of potentially scammy results every day. Our classifiers use machine learning to identify patterns, anomalies, and linguistic cues indicative of fraudulent activity. Scammers’ tactics, however, are constantly shifting and evolving; staying one step ahead requires that we understand emerging threats and proactively develop countermeasures.
Over the last few years, we’ve launched new AI-powered versions of our anti-scam systems to identify and defend against scammy search results. These advancements enable us to analyze vast quantities of text and identify subtle linguistic patterns and thematic connections that might indicate coordinated scam campaigns or emerging fraudulent narratives. For example, our systems are able to identify interconnected networks of deceptive websites that might appear legitimate in isolation. This deeper understanding of the nuances and trends within the scam ecosystem allows for the development of more targeted and effective detection mechanisms, providing a crucial edge in this ongoing battle.
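To make the idea concrete, here is a minimal, hypothetical sketch of such a text classifier using scikit-learn’s TfidfVectorizer and LogisticRegression. The training examples are invented, and this illustrates the general technique of learning linguistic scam cues, not Google’s production systems.

```python
# Minimal sketch of a scam-text classifier; illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples (1 = scam, 0 = legitimate).
texts = [
    "URGENT: your account is locked, verify your SSN now",
    "Claim your guaranteed prize before midnight tonight",
    "Agenda attached for Thursday's quarterly planning meeting",
    "Your library book is due back next week",
]
labels = [1, 1, 0, 0]

# TF-IDF captures word and phrase frequencies; the linear model then
# learns which n-grams (e.g., "verify your", "guaranteed prize")
# signal fraudulent intent.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(texts, labels)

# Probability that a new message is a scam.
print(model.predict_proba(["Act now to verify your account details"])[:, 1])
```

A production system would train on vastly more data and richer features, but the pipeline shape (text vectorization feeding a learned classifier) is the same basic pattern.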
In addition, we use AI-enhanced spam filtering to detect and block spam emails in Gmail. Many malware and phishing attacks start with an email, and Gmail blocks more than 99.9% of spam, phishing attempts, and malware from reaching the recipient. Like Gmail, Drive also automatically classifies suspicious content into a spam view, protecting users from dangerous or unwanted files.
Google also enables proactive alerting: Gmail warns users before they download an attachment that could put their security at risk, and it protects accounts against suspicious logins and unauthorized activity by monitoring multiple security signals. For accounts most at risk of targeted attacks, we also offer the Advanced Protection Program.
Google’s Priority Flagger Program for financial services was a significant step in further enhancing our scam detection and prevention efforts. Launched in partnership with FS-ISAC in the spring of 2025, the program tackles the most common challenges reported by financial services organizations (scam ads, phishing emails, and executive impersonation) by streamlining how fraud threats involving potentially harmful content on Google platforms are identified, reported, and mitigated. The signals submitted by program participants not only inform tactical actions; they also improve future detection, creating a flywheel effect that supports proactive efforts to weed out abuse.
In addition to blocking scams in Search and email, Google’s Android platform uses the best of Google AI and multi-layered defenses to protect users around the world from over 10 billion suspected malicious calls and messages every month. Google Messages, the default messaging app on Pixel and most Android devices, uses real-time spam detection to automatically identify and filter out harmful content. Additionally, Scam Detection in Google Messages uses on-device AI to identify conversational patterns associated with common fraud schemes and alert users in real time, all while keeping their conversations private. Phone by Google can similarly screen incoming calls automatically, blocking known spam before the phone even rings. And our latest innovation added anti-scam capabilities to Circle to Search and Lens, allowing users to simply circle a suspicious text or image to receive an AI-powered breakdown of whether, and why, it may be a scam.
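For a flavor of what conversational pattern flagging means, here is a toy, purely hypothetical rule-based version. Real systems such as Scam Detection use on-device machine-learned models rather than hand-written rules, so treat these regexes only as examples of the kinds of cues (urgency, unusual payment methods, secrecy) such models learn to recognize.

```python
# Toy heuristic for flagging scam-like conversational cues; real
# scam-detection systems use learned on-device models, not rules.
import re

URGENCY = re.compile(r"\b(act now|immediately|within 24 hours|final notice)\b", re.I)
PAYMENT = re.compile(r"\b(gift card|wire transfer|crypto|bitcoin)\b", re.I)
SECRECY = re.compile(r"\b(don't tell|keep this between us|confidential)\b", re.I)

def scam_signals(message: str) -> list[str]:
    """Return the names of scam-cue categories matched in a message."""
    categories = [("urgency", URGENCY), ("payment", PAYMENT), ("secrecy", SECRECY)]
    return [name for name, pattern in categories if pattern.search(message)]

msg = "Final notice: pay immediately with gift cards and don't tell anyone."
print(scam_signals(msg))  # ['urgency', 'payment', 'secrecy']
```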
- Enabling Brand Protection
Google Safe Browsing helps protect users by displaying warnings when they attempt to navigate to dangerous sites or download potentially malicious files. It also notifies webmasters when their websites are compromised by malicious actors and helps them diagnose and resolve the problem so that their visitors stay safe. Safe Browsing protections are enabled by default across Google products and power safer browsing experiences across the Internet. Refer to our Site Status diagnostic tool to see whether a site currently contains content that Safe Browsing has determined to be dangerous.
Web Risk extends Safe Browsing’s capabilities, using AI to proactively protect users from unsafe sites. It enables client applications to check URLs against Google’s constantly updated lists of unsafe web resources, actively identifying threats such as phishing sites, deceptive websites, and pages hosting malware or unwanted software that target a customer’s brand. When a threat is detected, Web Risk can warn users accessing these unsafe resources, leveraging its reach across billions of devices. Customers also gain access to a dashboard to monitor the solution’s impact and gain insights into the specific threats targeting their brand.
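As a rough sketch of how a client application might call the Web Risk Lookup API (based on Google Cloud’s public v1 `uris:search` endpoint; the environment-variable key handling and chosen threat types here are illustrative assumptions):

```python
# Sketch of a Web Risk v1 Lookup API query; assumes a valid API key
# is available in the WEBRISK_KEY environment variable.
import os
import requests

def check_uri(uri: str) -> dict:
    resp = requests.get(
        "https://webrisk.googleapis.com/v1/uris:search",
        params={
            "uri": uri,
            # A list value becomes repeated threatTypes= query params.
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
            "key": os.environ["WEBRISK_KEY"],
        },
        timeout=10,
    )
    resp.raise_for_status()
    # An empty JSON object means no match; otherwise the response
    # carries a "threat" entry listing the matched threat types.
    return resp.json()

result = check_uri("http://testsafebrowsing.appspot.com/s/malware.html")
print(result.get("threat", "no threat found"))
```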
- Combating Deepfakes
We’ve also made significant strides in combating deepfakes. Google is a steering committee member of the Coalition for Content Provenance and Authenticity (C2PA), an industry-backed project focused on developing open, global technical standards for establishing content provenance and authenticity. The C2PA seeks to set specifications for content credentials, which convey a rich set of information about how media such as images, videos, or audio files were made, protected by the same digital signature technology that has secured online transactions and mobile apps for decades.
Content credentials empower users to identify AI-generated (or altered) content, helping to foster transparency and trust in generative AI. They can be complemented by technologies such as SynthID, which embeds imperceptible digital watermarks directly into AI-generated images, audio, text, or video, making it easier to detect when such content has been used for malicious purposes. These credentials can serve as a standalone detection mechanism or be combined with other approaches for better coverage across content types and platforms.
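To see why content credentials are tamper-evident, here is a conceptual sketch of the underlying sign-and-verify primitive, using Python’s cryptography library. This illustrates only the digital signature concept that such credentials build on, not the actual C2PA manifest format or SynthID watermarking.

```python
# Conceptual sketch: any alteration to signed media bytes breaks the
# signature, which is what makes provenance credentials tamper-evident.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

signing_key = ec.generate_private_key(ec.SECP256R1())  # e.g., a device key
media_bytes = b"...raw image data..."

# The capture device signs the media (in C2PA, a manifest describing
# how the media was made is signed as well).
signature = signing_key.sign(media_bytes, ec.ECDSA(hashes.SHA256()))

# Anyone holding the public key can verify the bytes are unmodified.
public_key = signing_key.public_key()
public_key.verify(signature, media_bytes, ec.ECDSA(hashes.SHA256()))
print("provenance verified")

# Tampering with even one byte invalidates the signature.
try:
    public_key.verify(signature, media_bytes + b"!", ec.ECDSA(hashes.SHA256()))
except InvalidSignature:
    print("tampered copy rejected")
```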
As noted in a recent Google Security blog, “the traditional approach to classifying digital image content has focused on categorizing content as ‘AI’ vs. ‘not AI.’ Research shows that if only synthetic content is labeled as ‘AI’, then users falsely believe unlabeled content is ‘not AI.’ This is why Google is taking a different approach to applying C2PA Content Credentials.” Google Pixel 10 phones now support C2PA in the Pixel Camera app and Google Photos, a further step toward greater digital media transparency. As the same blog notes, “Instead of categorizing digital content into a simplistic ‘AI’ vs. ‘not AI’, Pixel 10 takes the first steps toward implementing our vision of categorizing digital content as either i) media that comes with verifiable proof of how it was made or ii) media that doesn’t.” Additionally, since November 2025, everyone can verify whether an image was generated or edited by Google AI right in the Gemini app, using SynthID’s imperceptible digital watermarks.
The Human Firewall
Resilient organizations combine AI-enabled defenses and operational processes with a well-trained workforce, for instance by implementing mandatory call-back verification procedures for high-risk actions. Such procedures are part of a non-negotiable "pause and verify" protocol: any urgent request to transfer funds, change vendor banking details, or share sensitive data received via email, text, or even voice call must be confirmed through a secondary, trusted channel (e.g., a call-back to a pre-verified phone number or a separate secure chat channel).
In addition, dual-control protocols provide an extra layer of authorization for financial transactions above a certain threshold. Employees in fraud-susceptible roles are also trained to recognize and challenge a manufactured sense of urgency in out-of-band requests, for example by instituting "safe word" protocols that must be used during any unexpected or urgent verbal request for a funds transfer or other sensitive action.
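As a purely hypothetical illustration, a dual-control release check might be modeled like this; the threshold, field names, and roles are invented for the sketch, and a real system would live inside a payments or workflow platform.

```python
# Hypothetical sketch of "pause and verify" plus dual control: funds
# move only after out-of-band verification and, above a threshold,
# approval by two distinct people other than the requester.
from dataclasses import dataclass, field

DUAL_CONTROL_THRESHOLD = 10_000  # illustrative limit, set per policy

@dataclass
class TransferRequest:
    requester: str
    amount: float
    callback_verified: bool = False   # confirmed via pre-verified phone number
    approvers: set[str] = field(default_factory=set)

def approve(req: TransferRequest, approver: str) -> None:
    if approver == req.requester:
        raise ValueError("requester cannot approve their own transfer")
    req.approvers.add(approver)

def may_release(req: TransferRequest) -> bool:
    if not req.callback_verified:        # the "pause and verify" gate
        return False
    if req.amount >= DUAL_CONTROL_THRESHOLD:
        return len(req.approvers) >= 2   # dual control for large amounts
    return len(req.approvers) >= 1

req = TransferRequest(requester="alice", amount=50_000)
req.callback_verified = True
approve(req, "bob")
approve(req, "carol")
print(may_release(req))  # True only with call-back plus two approvers
```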
By integrating these technological and procedural controls into processes and workflows, organizations can build a defense robust enough to identify and neutralize the sophisticated, multi-channel attacks that are powered by AI and coming at greater scale and velocity.
For further context on the steps Google is taking in the fraud space, refer to:
- Google's paper on Tackling scams and fraud
- Google's Threat Intelligence: Advances in Threat Actor Usage of AI Tools
- Scams Advisory #1
- Scams Advisory #2
- Scams Advisory #3