AI Deepfakes and Digital Cloaking Power New Investment Scams That Drain Victims' Bank Accounts
A sophisticated new wave of online fraud is blending artificial intelligence, deepfake media, and commercial cloaking infrastructure to push highly convincing investment scams at scale, according to new research from Infoblox Threat Intel and Confiant. The campaign is built around an old scam formula with a modern upgrade: hide the malicious content from scanners, personalize the lure for real users, and pressure victims into handing over contact details or moving money before they realize they are being manipulated.
At the center of the operation is Keitaro, a commercial advertising tracker that is increasingly being abused as a traffic distribution system, or TDS. Instead of merely tracking ad performance, attackers use it to decide who sees what. A security crawler, sandbox, or automated scanner may be shown a benign page or harmless redirect. A real user arriving from a search result, social media ad, spam message, or compromised site may instead be routed to an investment scam, a fake browser update, or another persuasive fraud page tailored to their location, device, and language.
Infoblox and Confiant said they studied four months of activity beginning October 1, 2025 and identified thousands of malicious Keitaro instances serving cloaked content. Across that period, they found about 15,500 domains actively used by malicious Keitaro instances, roughly 9,000 of which were registered before the malicious activity was observed. The researchers said the traffic was driven from compromised websites, spam, social media, and advertising, underscoring both the scale and the durability of the abuse.
What makes the current campaigns more dangerous is how smoothly generative AI fits into the fraud pipeline. Infoblox and Confiant said threat actors are now creating high-fidelity AI-generated creatives localized to the target, while also using deepfake audio and video to impersonate trusted representatives or media personalities. In practice, that means attackers are no longer constrained by poor design, awkward language, or limited production capacity. They can mass-produce ads, landing pages, headlines, visuals, and fake endorsement material that feel polished, local, and credible.
The researchers argue that this combination of AI-generated marketing and older fraud themes is what gives the scams their unusual persistence. Investment fraud still dominated the malicious Keitaro activity they observed, but many of the lure pages now claim to use advanced AI or AI-driven algorithms that supposedly automate trading and generate outsized returns. Several campaigns also incorporated deepfake imagery or video to further boost trust. In other words, attackers are not just using AI behind the scenes. They are making AI itself part of the sales pitch.
The attack flow is engineered to feel seamless to the victim and opaque to defenders. A user first encounters an ad or search listing for a get-rich-quick investment platform, financial opportunity, or in some cases a fake browser or security update. The Keitaro-based cloaking layer then evaluates the visitor's IP address, device characteristics, browser fingerprint, geography, and language. If the visitor looks like a security tool, the chain serves a clean page, a decoy, or a dead end. If the visitor matches a target profile, the system routes them to the real scam page, often localized and dressed up with AI-generated visuals and persuasive copy. From there, the objective is to increase trust, collect contact information, and escalate the interaction into phone-based pressure or financial transfers.
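The conditional routing at the heart of that flow can be sketched in a few lines. Everything below is illustrative: the scanner signatures, target countries, and page labels are hypothetical stand-ins, and real Keitaro campaigns configure far richer filters, but the shape of the decision is the same: classify the visitor, then choose which reality they see.

```python
# Hypothetical sketch of a TDS-style cloaking decision. All rules and
# labels here are illustrative, not taken from the Infoblox/Confiant report.

KNOWN_SCANNER_SUBSTRINGS = ("bot", "crawler", "spider", "headless")
TARGET_COUNTRIES = {"US", "GB", "DE"}  # hypothetical campaign targeting

def route_visitor(user_agent: str, country: str) -> str:
    """Return which page a visitor is shown, per the flow described above."""
    ua = user_agent.lower()
    # Step 1: visitors that look like scanners or sandboxes get a clean decoy.
    if any(s in ua for s in KNOWN_SCANNER_SUBSTRINGS):
        return "benign-decoy"
    # Step 2: visitors outside the target geography hit a dead end.
    if country not in TARGET_COUNTRIES:
        return "dead-end"
    # Step 3: real users matching the profile reach the localized lure page.
    return f"scam-lure-{country.lower()}"

# A crawler and a targeted user arriving at the same URL see different pages:
print(route_visitor("Googlebot/2.1", "US"))        # benign-decoy
print(route_visitor("Mozilla/5.0 (Windows)", "US"))  # scam-lure-us
```

The point of the sketch is that the malicious page is not a fixed artifact but the output of a decision function, which is why two visitors to the same URL can come away with entirely different impressions of what it hosts.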
Infoblox's deeper analysis suggests the abuse is not random. The researchers found that regardless of visitor location and device type, many of the final lure pages were presented in a relatively limited set of languages, predominantly Russian and English. They also noted that while many campaigns were global, some actors observed by Confiant in the advertising ecosystem specifically targeted the United States. That points to a level of operational optimization where fraudsters are not just spraying generic content at the internet, but carefully tuning traffic flows to the audiences most likely to engage.
The report also highlights why cloaking remains such a stubborn problem. Traditional scanners and automated analysis tools are built to inspect a page and judge whether it is malicious. But cloaking breaks that assumption by making the page conditional. The malicious content may only appear when the right person arrives with the right browser, from the right region, at the right stage of the redirect chain. This makes conventional detection weaker and gives attackers more time to rotate domains, landing pages, ad creatives, and targeting rules before defenders can fully map the infrastructure.
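One defensive response to conditional content, sketched below under the assumption that a defender can fetch the same URL from several vantage profiles (different user agents, regions, referrers), is differential probing: if the body served to a scanner-like profile diverges sharply from the body served to a user-like profile, the page is a cloaking candidate worth deeper review. The similarity metric, threshold, and profile labels here are illustrative choices, not techniques attributed to the report.

```python
# Illustrative differential-probing check for cloaked pages. Assumes the
# caller has already fetched the same URL under two profiles; thresholds
# and labels are hypothetical.
from difflib import SequenceMatcher

def cloaking_score(scanner_body: str, user_body: str) -> float:
    """Return 1.0 minus the similarity of the two bodies (0.0 = identical)."""
    return 1.0 - SequenceMatcher(None, scanner_body, user_body).ratio()

def looks_cloaked(fetches: dict[str, str], threshold: float = 0.3) -> bool:
    """Flag a URL whose scanner-profile and user-profile bodies diverge.

    `fetches` maps a profile label ("scanner" / "user") to the HTML body
    retrieved under that profile.
    """
    return cloaking_score(fetches["scanner"], fetches["user"]) > threshold

# Identical bodies score 0.0; a decoy served to scanners scores much higher
# against the lure served to users.
same = {"scanner": "<html>news</html>", "user": "<html>news</html>"}
diff = {"scanner": "<html>news</html>",
        "user": "<html>AI trading platform! 900% returns!</html>"}
print(looks_cloaked(same))  # False
print(looks_cloaked(diff))  # True
```

A real deployment would also have to vary IP geography and redirect depth, since the cloaking layer keys on far more than the user agent, which is exactly what makes single-vantage scanning insufficient.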
That operational advantage helps explain why researchers described the abuse as a persistent stream rather than an isolated set of campaigns. Keitaro is self-hosted, feature-rich, and easy to deploy on multiple hosting platforms, making it attractive for threat actors who want flexibility and scale. Infoblox said the level and persistence of the malicious use it observed were staggering, and stressed that the problem is underreported despite repeated appearances of Keitaro in previous cybercrime operations, including fake browser update chains and other scam ecosystems.
There is, however, one promising defensive angle. Because many of these scam operations still rely on commercial or semi-commercial infrastructure, coordinated reporting can make a difference. The researchers said coordinated abuse reports have already contributed to infrastructure and account takedowns, giving defenders a viable remediation path even as threat actors continue cycling through new domains and fresh ad creatives. That does not solve the problem permanently, but it does show that pressure applied at the adtech and service-provider layer can disrupt at least part of the fraud supply chain.
The broader lesson is that online fraud is becoming more adaptive, more personalized, and harder to inspect with traditional tools alone. Attackers are now combining cloaking, conditional traffic routing, generative AI, and deepfake media to create scams that can look legitimate to victims while appearing harmless to security systems. For banks, ad platforms, publishers, and security teams, that means the challenge is no longer just blocking fake pages. It is identifying an entire decision engine designed to show different realities to different viewers.