Bots are buying and smart businesses are cashing in

A burgeoning AI economy now sees agents shopping, booking and executing transactions on our behalf, but many of these bots are being blocked, and businesses are losing valuable revenue in the process.

The traditional assumption – that these processes must be completed by a human – no longer holds true. But facilitating this access poses a major challenge. How do you differentiate between good and bad agents to enable these transactions to take place autonomously?

Less than half of web traffic is now human; the rest is made up of bots. Good bots include search engines, website monitoring and chatbots, while grey bots, which are legal but can prove disruptive or a drain on resources, include scraper bots that do not observe site limits. Bad bot traffic, which is unreservedly malicious in nature, typically includes DDoS bots, credential stuffing and scalper bots.

Sorting the good from the bad

With most bad bots being financially motivated, it's easy to see why businesses have elected to block this traffic from transactional processes. But as AI agents evolve, it's clear that these processes need to change if businesses are to cash in on autonomous purchasing. It's the reason why many of the big credit card companies have already begun to explore how they can enable AI to make purchases online. Yet it's a multi-faceted problem: not only is it necessary to sort the good agents from the bad, but also to create a secure means of access and then to govern those agents to ensure they don't turn rogue.

However, tried-and-tested real-world methods are now emerging. For instance, it's possible to verify an AI agent using an identifier in the form of a token, and then to assign it a programmable wallet, user identity credentials and payment rules. The agent can then access, pay for, and interact with the majority of valuable digital content that sits behind login walls, anti-bot protections, or compliance layers.
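The token-plus-wallet-plus-rules pattern described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual scheme: the credential fields, the `enrol_agent` helper and the spending caps are all assumptions made for the example.

```python
import secrets
from dataclasses import dataclass, field

@dataclass
class AgentCredential:
    """Hypothetical credential binding an AI agent to an identity,
    a programmable wallet, and payment rules (fields are illustrative)."""
    agent_id: str
    token: str                       # opaque bearer token identifying the agent
    owner: str                       # the person or business the agent acts for
    wallet_balance_cents: int        # funds in the programmable wallet
    max_purchase_cents: int          # payment rule: per-transaction cap
    allowed_domains: list = field(default_factory=list)

def enrol_agent(agent_id: str, owner: str, funds_cents: int,
                per_txn_cap_cents: int, domains: list) -> AgentCredential:
    """Issue a random token and attach a wallet plus payment rules."""
    return AgentCredential(
        agent_id=agent_id,
        token=secrets.token_urlsafe(32),
        owner=owner,
        wallet_balance_cents=funds_cents,
        max_purchase_cents=per_txn_cap_cents,
        allowed_domains=domains,
    )

def authorise_purchase(cred: AgentCredential, domain: str,
                       amount_cents: int) -> bool:
    """Enforce the payment rules before releasing funds from the wallet."""
    if domain not in cred.allowed_domains:
        return False
    if amount_cents > cred.max_purchase_cents:
        return False
    if amount_cents > cred.wallet_balance_cents:
        return False
    cred.wallet_balance_cents -= amount_cents
    return True
```

The point of the sketch is that the rules travel with the identity: any seller who can validate the token can also see, and enforce, what the agent is allowed to spend and where.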

Yet what happens if the agent then attempts to scrape, defraud and abuse that process? This is where bot management and Application Programming Interface (API) security come into play. An autonomous AI agent must interface with APIs to complete these transactional processes, so monitoring those calls to see whether the agent is interacting with the API correctly, and not abusing its business logic, is key. By using fingerprinting technology to analyse the context, behaviour, and intent of the agent during the interaction, any unexpected behaviour can be automatically detected and blocked.
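One simple form of the behavioural monitoring described above is a sliding-window check over an agent's API calls. The sketch below is illustrative only (it is not Cequence's product, and the thresholds and call categories are assumptions): it flags an agent that exceeds a rate limit, or one whose call mix looks like scraping, i.e. many reads and no purchases.

```python
import time
from collections import defaultdict, deque

class AgentBehaviourMonitor:
    """Illustrative sliding-window monitor over per-agent API calls.
    Thresholds and categories are assumptions for the example."""

    def __init__(self, window_s=60, max_calls=100, max_reads_without_purchase=50):
        self.window_s = window_s
        self.max_calls = max_calls
        self.max_reads_without_purchase = max_reads_without_purchase
        self.calls = defaultdict(deque)   # agent_id -> deque of (timestamp, kind)

    def record(self, agent_id, kind, now=None):
        """Record one API call ("read" or "purchase") and return a verdict."""
        now = time.time() if now is None else now
        q = self.calls[agent_id]
        q.append((now, kind))
        # Drop calls that have fallen out of the window.
        while q and q[0][0] < now - self.window_s:
            q.popleft()
        return self._check(q)

    def _check(self, q):
        if len(q) > self.max_calls:
            return "block: rate limit exceeded"
        reads = sum(1 for _, k in q if k == "read")
        purchases = sum(1 for _, k in q if k == "purchase")
        if purchases == 0 and reads > self.max_reads_without_purchase:
            return "block: scraping pattern (reads with no purchases)"
        return "allow"
```

Real products combine many more signals (TLS fingerprints, header ordering, timing, session graphs), but the principle is the same: judge the agent by what it does against the API's intended business logic, not just by who it claims to be.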

New revenue streams

By ensuring AI agents can enrol, get verified, and receive digital tokens that represent their identity and purpose, we can create a safe way for them to talk to one another and to use ecommerce channels. It then becomes possible for the seller, be that a publisher, website owner, content aggregator or retailer, to allow that information to be purchased and consumed by agents. But this also opens up other interesting possibilities: these providers can begin to control access to, and the pricing of, services that were previously free and/or were scraped by unauthorised bot traffic.
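A token that "represents identity and purpose" can be as simple as a signed set of claims that the seller verifies before serving paid content. The sketch below uses an HMAC-signed token as an assumption for illustration; the shared secret, claim names and helper functions are all hypothetical, not part of any named scheme.

```python
import base64
import hashlib
import hmac
import json
import time

# Assumed: a secret shared between the agent registry and the seller.
SECRET = b"registry-shared-secret"

def issue_token(agent_id: str, purpose: str, ttl_s: int = 3600) -> str:
    """Mint a signed token stating who the agent is and what it may do."""
    claims = {"agent": agent_id, "purpose": purpose,
              "exp": int(time.time()) + ttl_s}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_token(token: str, required_purpose: str) -> bool:
    """Seller-side check: signature valid, not expired, purpose matches."""
    try:
        body, sig = token.rsplit(".", 1)
    except ValueError:
        return False
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims["exp"] > time.time() and claims["purpose"] == required_purpose
```

Because the purpose is baked into the signed claims, a token issued for reading price data cannot be replayed to complete a checkout, which is what lets a seller price and gate each service separately.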

Retailers might decide to charge price comparison sites for real-time access to product data, for example. Financial services firms could choose to enable verified AI agents to access curated datasets to give market specific information. We could see media platforms decide to monetise high value content. Or perhaps travel aggregators might choose to charge AI agents for access to real-time availability data. Such scenarios conjure up the possibility of a future AI economy where agents act on behalf of consumers, hedge funds, data aggregators, or digital assistants to browse, compare prices, gather product info, or execute transactions all without the need for a human.

Creating a trusted AI ecosystem will allow businesses producing content or offering API-based services to price, publish and make available their data to these agents in a secure, controlled way. This in turn will see new revenue streams flourish, to the point where, in the near future, agent-based transactions could fast outstrip those made by humans. The question is whether businesses can adapt their business models fast enough to roll out and capitalise on these new revenue streams.


About the Author

James Sherlow is Systems Engineering Director, EMEA at Cequence Security. Cequence is a pioneer in API security and bot management, protecting the applications and APIs that organizations depend on from attacks, business logic abuse, and fraud. Our unique Unified API Protection platform unites discovery, compliance, and protection capabilities, providing unmatched real-time security in the face of sophisticated threats. Demonstrating value in minutes rather than days or weeks, Cequence offers a flexible deployment model that requires no app instrumentation or modification. Cequence solutions scale to meet the demands of the largest and most demanding private and public sector organizations, protecting more than 8 billion daily API interactions and 3 billion user accounts. The company is led by industry veterans who previously held leadership positions at Palo Alto Networks and Symantec.
