The gig economy is thriving, but it owes its success to the digital ecosystem it is built upon, and to Application Programming Interfaces (APIs) in particular.
The real-time services we associate with platforms such as Uber, Lyft, and Fiverr all use APIs to enable seamless interactions between users, gig workers, and service providers. But APIs are also highly susceptible to abuse, with attacks such as scraping, account takeover (ATO), and fraud now posing an even greater threat thanks to Generative AI (GenAI).
Ride-sharing and delivery platforms, for example, use APIs to facilitate real-time matching between drivers and customers; using GenAI, attackers can deploy advanced scraping techniques to extract pricing data. A denial-of-service (DoS) attack could also be orchestrated to overwhelm the platform's systems, using AI-powered bots to simulate customer requests.
GenAI attacks
Job posting marketplaces can also be disrupted and their integrity compromised if AI is used to create fake jobs or manipulate proposals. Furthermore, the automated scraping of sensitive freelancer information could enable competitors to undercut prices or steal business from these platforms. Similarly, sites where customers advertise work for tradesmen, which use APIs to manage job postings, worker profiles, and payments, could be subverted by GenAI-enabled bots that create fraudulent service requests, post fake customer reviews, or manipulate the rating system.
Online staffing agencies can also be subjected to automated job application fraud, or even see worker accounts hijacked, leading to fraudulent claims for job completions or the manipulation of availability slots. Tutoring platforms could see the creation of fake tutoring sessions and the manipulation of payment structures or refund systems, all through the APIs that handle transactions, communications, and scheduling. And content creation platforms could see attackers create massive bot networks to siphon ad revenue or manipulate engagement metrics such as views and likes.
Each of these attack types seeks to abuse the way the API functions, which means defending against them requires the business to secure its APIs. Consider content scraping: AI-enabled bots scrape data on pricing, profiles, and job postings at a rapid pace, enabling competitors to undercut the business. It's an issue that can be mitigated by using machine learning to detect abnormal scraping patterns and block them in real time.
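The simplest scraping signature is request volume: a scraper hits a pricing or profile endpoint far faster than any human browsing session. The sketch below shows a minimal sliding-window rate check of the kind a behavioural detector might start from; the window size, threshold, and class names are illustrative assumptions, not a real product's API or recommended values.

```python
from collections import defaultdict, deque
import time

# Illustrative thresholds only: a real detector would learn baselines
# per endpoint and per client population, not hard-code them.
WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 120

class ScrapeDetector:
    """Flag clients whose request rate exceeds a human-plausible ceiling."""

    def __init__(self, window=WINDOW_SECONDS, limit=MAX_REQUESTS_PER_WINDOW):
        self.window = window
        self.limit = limit
        self.requests = defaultdict(deque)  # client_id -> request timestamps

    def record(self, client_id, now=None):
        """Record a request; return True if the client now looks like a scraper."""
        now = time.monotonic() if now is None else now
        q = self.requests[client_id]
        q.append(now)
        # Drop timestamps that have fallen outside the sliding window
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.limit
```

In production this check would be one weak signal among many (fingerprinting, header entropy, navigation patterns), since GenAI-driven bots can throttle themselves to stay under any single fixed rate.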
ATO is also a known attack method, but when used in conjunction with GenAI it can power sophisticated credential-stuffing attacks, which attempt to take over gig worker or user accounts by testing millions of login combinations against these platforms. Again, it's the behaviour on the network that is the giveaway, so entity behaviour analytics can enable the business to recognise suspicious login attempts and stop ATO attempts before they succeed.
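One behavioural giveaway of credential stuffing is a single source attempting logins across many distinct accounts with a very high failure rate, which is unlike any legitimate user. A minimal sketch of that check, with thresholds that are assumptions for illustration only:

```python
# Hypothetical behaviour-analytics check for credential stuffing:
# many distinct usernames plus a high failure rate from one source.
DISTINCT_ACCOUNT_LIMIT = 20
FAILURE_RATE_LIMIT = 0.9

def looks_like_credential_stuffing(events):
    """events: iterable of (source_ip, username, success) tuples.

    Returns the set of source IPs whose login behaviour matches a
    stuffing signature.
    """
    by_source = {}
    for ip, user, ok in events:
        stats = by_source.setdefault(ip, {"users": set(), "fail": 0, "total": 0})
        stats["users"].add(user)
        stats["total"] += 1
        if not ok:
            stats["fail"] += 1

    suspicious = set()
    for ip, s in by_source.items():
        failure_rate = s["fail"] / s["total"]
        if len(s["users"]) > DISTINCT_ACCOUNT_LIMIT and failure_rate > FAILURE_RATE_LIMIT:
            suspicious.add(ip)
    return suspicious
```

Real entity behaviour analytics would also correlate device fingerprints, geography, and timing, since stuffing campaigns routinely rotate through residential proxies to dilute any per-IP signal.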
AI can also be used to enhance payment fraud. For instance, an attacker using GenAI could mimic payment behaviours and manipulate API endpoints to steal funds. But by continuously monitoring payment behaviour for anomalies and using machine learning to detect and block fraudulent activity, it's possible to stop attacks initiated by GenAI bots.
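At its simplest, payment anomaly monitoring compares each new transaction against the account's own history. The sketch below flags amounts that deviate sharply from an account's historical mean; the deviation threshold and minimum-history requirement are illustrative assumptions, and a production system would use richer features than amount alone.

```python
import statistics

K = 3.0           # assumed deviation threshold, in standard deviations
MIN_HISTORY = 5   # assumed minimum number of past transactions to judge

def is_anomalous_payment(history, amount, k=K):
    """Flag a payment far outside this account's historical pattern.

    history: list of past transaction amounts for the account.
    """
    if len(history) < MIN_HISTORY:
        return False  # not enough data to form a baseline
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # Perfectly uniform history: any different amount is anomalous
        return amount != mean
    return abs(amount - mean) > k * stdev
```

A GenAI bot that mimics normal payment behaviour will stay inside such a per-account envelope, which is why anomaly detection is typically combined with endpoint-level signals (velocity, new payee patterns, device changes) rather than used alone.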
Business logic abuse
But perhaps one of the most difficult attacks to prevent is business logic abuse. Strictly speaking, it isn't an attack at all: business logic abuse turns the functionality of the API against itself, so that a task the API is supposed to execute is used to carry out an attack. It might be used to subvert access control, for instance, with attackers manipulating URLs, session tokens, cookies, or hidden fields to gain elevated privileges and access sensitive data or functionality. Bots may attempt to repeatedly sign up, log in, or execute purchases in order to validate credentials, access unauthorised data, or commit fraud. Flaws in session tokens or poor handling of session data may allow the attacker to hijack sessions and escalate privileges. Or the attacker may try to bypass built-in constraints on business logic by reviewing points of entry such as form fields and supplying inputs that the developers never planned for.
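A common root cause of the access-control variant is that the API trusts a client-supplied identifier (a job ID in the URL, a hidden form field) without checking who is asking. A minimal sketch of the server-side fix is an ownership check on every object access; the data and function names here (`JOBS`, `get_job`) are hypothetical, not a real framework's API.

```python
# Hypothetical object-level authorisation check: never serve a resource
# just because the client supplied its ID; verify ownership first.
JOBS = {
    "job-1": {"owner": "alice", "budget": 500},
    "job-2": {"owner": "bob", "budget": 250},
}

class AuthorizationError(Exception):
    """Raised when a user requests an object they do not own."""

def get_job(job_id, requesting_user):
    job = JOBS.get(job_id)
    if job is None:
        raise KeyError(job_id)
    # The ownership check is what defeats URL/field manipulation:
    # changing "job-1" to "job-2" in a request no longer works.
    if job["owner"] != requesting_user:
        raise AuthorizationError(f"{requesting_user} may not access {job_id}")
    return job
```

Checks like this stop individual manipulated requests, but they cannot see abuse that uses the API exactly as designed (bulk sign-ups, purchase loops), which is why behavioural analysis across requests is still needed on top.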
These types of attack are notoriously hard to detect using traditional application security solutions. Legacy app defences rely on embedding JavaScript code into end-user applications and devices, which slows deployment and leaves platforms vulnerable to reverse engineering. Some of this code, such as CAPTCHAs, also introduces customer friction. These defences are also designed for end-user interactions, so they struggle with API-to-API communications, which is a problem for gig economy platforms that often involve API transactions between businesses. And as GenAI technology continues to advance, attackers can bypass these systems using AI-generated scripts that mimic human behaviour.
What is needed is a solution that can see all the traffic to a given API and detect anomalies based on multiple behavioural criteria. By analysing the tools, infrastructure, credentials, and behaviour involved to detect malicious intent, it becomes possible to identify AI-automated business logic abuse attacks and prevent them natively, safeguarding the platform's operations. And it's imperative that businesses within the gig economy take steps now to address the threat posed by GenAI-augmented attacks, because these threats are only just getting started. The NCSC has warned that as threat actors begin to harness data training sets and build out their AI capabilities, we can expect such attacks to become more sophisticated.
For the gig economy, which is a relatively nascent and therefore fragile market, such attacks could prove devastating, damaging customer trust, causing loss of revenue, and ceding market share. Unless these businesses recognise the threat and adopt comprehensive API security strategies, they will inevitably struggle to combat GenAI-powered content scraping, ATO, payment fraud, and business logic abuse.
About the Author
James Sherlow is Systems Engineering Director, EMEA at Cequence Security. Cequence Security is an API security and bot management company that protects customers' applications and APIs against attacks, business logic abuse, and fraud. Our comprehensive Unified API Protection solution offers discovery, compliance, and protection across all internal, external, and third-party APIs without needing any instrumentation, SDK, or JavaScript deployments. Cequence solutions scale to handle the most demanding government, Fortune and Global 500 organizations, securing more than 8 billion daily API interactions and protecting more than 3 billion user accounts across these customers. The company is led by industry veterans who previously held leadership positions at Palo Alto Networks and Symantec.
Featured Image: Adobe Stock