VERSE PRESS

Crypto News, Global First.

AI Is Arming Both Sides of the Smart Contract Security War

Smart contract security experts gathered in a Sui Foundation livestream on April 10 to confront a problem the industry can no longer defer: artificial intelligence is simultaneously making it cheaper to attack blockchain protocols and cheaper to defend them, with the outcome of the contest still unclear.


The panel, hosted by Sam Blackshear (co-founder and CTO of Mysten Labs and creator of the Move programming language), included Ben Samuels (Blockchain Engineering Director, Trail of Bits), Cosmin Radoi (CEO, Asymptotic), Seth Hallem (CEO, Certora), and Robert Chen (Founder, OtterSec).

Their central argument was blunt: smart contracts sit at the front line of an AI security arms race precisely because they are public by default and hold real money. Any advance in AI code analysis benefits attackers and defenders at the same time.

The Numbers Behind the Urgency

The context is not abstract. Crypto losses across all categories, including scams and fraud, reached $17 billion in 2025, the worst year on record. Direct theft totaled $3.4 billion. The single largest incident, a $1.5 billion breach at Bybit, was a supply chain attack rather than a smart contract exploit, which underscores a point the panelists raised directly: the most dangerous vulnerabilities in 2025 were human and operational, not purely technical.

Mitchell Amador, CEO of Immunefi, put it plainly after the Bybit incident: "Despite 2025 being the worst year for hacks on record, those hacks stem from Web2 operational failures, not onchain code. The human factor is now the weak link that onchain security experts must prioritize."

The human-factor problem extends well beyond operational failures. Impersonation scams rose 1,400 percent year-over-year in 2025, a trend that is especially dangerous in high-adoption markets where user education is still developing.

Even so, code-level risk is escalating fast. The EVMbench benchmark, released jointly by OpenAI and Paradigm in February 2026, tests AI agents against 117 real smart contract vulnerabilities drawn from 40 prior audits. GPT-5.3-Codex now scores 71.0 percent in exploit mode (a setting in which the agent attempts to actively exploit a vulnerability rather than merely identify it). Six months earlier, GPT-5 scored 33.3 percent on the same benchmark. AI agents are currently better at exploiting vulnerabilities than at finding or patching them, which is a significant asymmetry for defenders to manage.

What AI Is Actually Doing in Audits

Trail of Bits offered the most concrete operational data. Before integrating AI natively into its workflow, the firm's auditors found roughly 15 bugs per week. AI-augmented auditors now find approximately 200 bugs per week on suitable engagements. About 20 percent of all reported bugs are now first identified by AI systems.

The firm started with internal buy-in from only 5 percent of staff, with the remaining 95 percent ranging from passive skepticism to active resistance toward AI tools. That position has since reversed, with AI now embedded across core audit workflows. Trail of Bits has built 94 plugins, 201 skills, and 84 agents internally to support that integration.

Certora's Seth Hallem reported a finding that sharpened the room's attention: in a comparison of AI-generated vulnerability reports against independent human analysis, AI identified seven critical steal-funds vulnerabilities. Only three appeared in both sets. Four were found by AI alone.

The panel also reached consensus on one question about source code disclosure: smart contract source code should remain public, since bytecode can be reverse-engineered by AI regardless of whether source is disclosed. Formal verification specifications, however, are a different matter. As panelists noted, AI agents reason about specs with particular effectiveness, which makes those specifications sensitive if made public.

The agentic capabilities on display extend well beyond automated code review. During testing, an AI agent independently modified the Sui compiler to add debugging output while solving verification problems. That example illustrates concretely how far these tools have advanced beyond simple autocomplete.

The Cost Gap Hitting Emerging Markets Hardest

The dual-use nature of AI tools carries its sharpest implications for developers in South Asia and Africa. A pre-launch security audit for a mid-complexity DeFi protocol currently costs between $60,000 and $120,000. That figure is prohibitive for bootstrapped projects, which represent a significant portion of the Web3 startup landscape in both regions, where community-funded and self-funded projects are common.

India alone has more than 450,000 blockchain and Web3 professionals, according to the most recently available estimates from 2025. Crypto adoption in sub-Saharan Africa remains among the highest globally, with Nigeria at 84 percent and South Africa at 66 percent. Globally, fewer than 10 percent of projects deploy AI-based detection tools, and under 1 percent use on-chain firewalls. Those rates reflect worldwide averages, but their consequences fall most heavily on high-growth regions where security infrastructure has not kept pace with adoption.

Meanwhile, AI-powered contract scanning costs attackers as little as $1.22 per scan. Projects that skip formal audits because of cost are not invisible; they are inexpensive targets. The threat surface is expanding faster than defensive capacity in the regions where much of the new growth is happening.

The developer pipeline in these regions is also growing, which makes the security gap more urgent to close. Universities in Cape Town, Nairobi, and Lagos now offer blockchain coursework. The Africa Blockchain Institute has launched the continent's first master's degree in blockchain, in partnership with the University of Namibia. These programs are producing the developers who will need accessible, affordable security tools as they build.

Certora's March 2026 decision to open-source its Prover for EVM, Solana, and Stellar chains could change the calculus for underfunded teams. The Prover has already secured over $100 billion in total value locked across protocols including Aave, MakerDAO, Uniswap, Lido, and EigenLayer, establishing it as one of the most battle-tested formal verification tools in production. Access to such tools (which use mathematical proofs to check that code behaves as specified) was previously limited to projects that could pay enterprise licensing fees.
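To make the core idea of formal verification concrete, here is a minimal, hypothetical sketch in Rust. It is not the Certora Prover and does not use its specification language; it only illustrates the underlying concept of a specification (here, "transfers conserve total supply") checked against every state in a bounded model rather than a handful of hand-picked test cases. The `transfer` function and the state bounds are invented for illustration.

```rust
// Toy illustration of the formal-verification idea (NOT the Certora Prover):
// state a specification as an invariant, then check it against every state
// in a bounded model. Real provers establish the property for all states
// via mathematical proof rather than enumeration.

/// Move `amount` from one balance to the other; reject if funds are short.
fn transfer(balances: &mut [u64; 2], from: usize, to: usize, amount: u64) -> bool {
    if balances[from] < amount {
        return false; // insufficient funds: state is left unchanged
    }
    balances[from] -= amount;
    balances[to] += amount;
    true
}

fn main() {
    // Specification: total supply is conserved by every transfer,
    // whether the transfer succeeds or is rejected.
    let mut states_checked = 0u32;
    for a in 0..=10u64 {
        for b in 0..=10u64 {
            for amt in 0..=12u64 {
                let mut balances = [a, b];
                let total_before: u64 = balances.iter().sum();
                transfer(&mut balances, 0, 1, amt);
                let total_after: u64 = balances.iter().sum();
                assert_eq!(total_before, total_after, "supply invariant violated");
                states_checked += 1;
            }
        }
    }
    println!("invariant holds on all {} checked states", states_checked);
}
```

The design point is that the invariant is a statement about the program, not a single test: a bug that only manifests in one corner of the state space (say, an overflow on a specific balance) is caught the same way as an obvious one.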

What Comes Next

The panel's prescription for the industry was structural rather than tactical. Audits should shift focus toward process and long-term resilience rather than point-in-time bug hunting, ensuring codebases can withstand threats that do not yet exist.

James Wickett, CEO of DryRun Security, framed the underlying tension plainly: "AI coding agents can produce working software at incredible speed, but security isn't part of their default thinking." His firm's research gives that observation measurable weight: 87 percent of pull requests from AI coding agents contained at least one security issue, across a sample of 30 pull requests and 143 total issues identified.

Tools are emerging to close the gap. Certora launched its AI Composer in December 2025, combining AI-assisted specification generation with formal verification in the generation loop so that correctness proofs are built alongside the code rather than applied after the fact. For developers in emerging markets working with newer smart contract languages, Move's structural design properties enforce explicit resource ownership and eliminate entire classes of common vulnerabilities, offering protective foundations that reduce the burden of post-deployment auditing.
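Move's resource model can be approximated in Rust, whose ownership rules inspired it. The sketch below is illustrative rather than actual Move code: a `Coin` type with no `Copy` or `Clone` behaves like a Move resource, so an asset value can be moved between functions but never duplicated, and attempting to spend it twice is a compile-time error rather than a runtime exploit. The `Coin` and `merge` names are invented for this example.

```rust
// Hedged sketch of Move-style resource semantics, expressed in Rust.
// Because Coin derives neither Copy nor Clone, it is a linear resource:
// it can be moved exactly once, which rules out duplication bugs
// (a class of double-spend vulnerability) at compile time.

struct Coin {
    value: u64, // the asset amount this resource represents
}

/// Combine two coins into one. Both inputs are consumed by move;
/// the caller can no longer reference the originals.
fn merge(a: Coin, b: Coin) -> Coin {
    Coin { value: a.value + b.value }
}

fn main() {
    let c1 = Coin { value: 40 };
    let c2 = Coin { value: 2 };
    let merged = merge(c1, c2);
    // Calling merge(c1, c2) again would not compile: c1 and c2 were moved.
    assert_eq!(merged.value, 42);
    println!("merged coin value: {}", merged.value);
}
```

This is the sense in which a language's structural properties can eliminate whole vulnerability classes before an auditor, human or AI, ever looks at the code.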

The arms race dynamic the panelists described does not resolve cleanly in either direction. Defenders are gaining powerful tools. So are attackers. For the developer communities in emerging markets that are building the next layer of global crypto infrastructure, the question of which side gains the advantage first is not theoretical.