Companies Enlist AI in Battle Against AI Fraud
Digital fraud has been a ubiquitous threat since the dawn of the internet, but its pervasiveness and impact have grown significantly over time. Data from the Federal Trade Commission shows that consumers in the United States reported almost $8.8 billion in fraud losses in 2022, a 30% increase from the prior year. Actual monetary losses, however, are estimated to be much greater.
This dramatic rise in financial losses is attributed to several factors, including the growing prevalence of eCommerce and the heightened sophistication of malicious actors. One of the most troubling developments in recent years has been the use of AI to augment fraud techniques, posing a significant threat to consumers and businesses alike.
- Fraudsters Leverage AI to Create Fake Companies and Clients
- Legacy AP Fraud Prevention Is Inadequate for Countering AI Fraud
- Organizations Find Themselves Alone in Fight Against Deepfakes
- Businesses Must Harden Their Defenses Against AI Fraud
Fraudsters Leverage AI to Create Fake Companies and Clients
Synthetic identity fraud presents a serious threat, as malicious actors use various methods to create new identities for the purpose of scamming companies. AI tools have augmented this technique, allowing fraudsters to create phony companies and clients and thwart fraud prevention strategies.
AI fraud complicates vendor onboarding.
A recent study reveals that the growing prevalence of generative AI is encumbering the onboarding process. Because of this challenge, 61% of businesses find onboarding new vendors moderately or extremely difficult, while 52% express the same challenge regarding new clients. Tools like ChatGPT allow bad actors to create and populate authentic-looking fake websites, significantly complicating businesses’ due diligence checks during onboarding. These new AI techniques account for a growing share of the 5% of business revenues lost to fraud each year.
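One inexpensive due-diligence signal against convincing fake websites is domain age, since fraudulent sites are typically registered shortly before a scam begins. Below is a minimal sketch, assuming the third-party python-whois package and an illustrative 180-day threshold; registrars return record fields in varying shapes, so the handling here is deliberately defensive.

```python
# Minimal sketch: flag newly registered vendor domains for extra scrutiny.
# Assumes the python-whois package (pip install python-whois); the 180-day
# threshold is illustrative, not an industry standard.
from datetime import datetime
import whois

def domain_age_days(domain: str) -> int | None:
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):  # some registrars return a list of dates
        created = created[0]
    if created is None:
        return None
    created = created.replace(tzinfo=None)  # normalize aware datetimes
    return (datetime.now() - created).days

age = domain_age_days("example.com")
if age is not None and age < 180:
    print("Domain registered within the last six months: escalate vendor review")
```

A young domain is not proof of fraud, but it is a cheap signal to combine with manual review before a vendor is approved.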
Deepfakes thwart voice recognition 99% of the time.
Existing prevention measures are proving inadequate for fighting AI fraud, with a recent study finding that deepfakes have the potential to fool voice authentication systems with up to 99% reliability. A deepfake system needs just five minutes of recorded audio before it has enough data to replicate the subject’s voice and generate convincing, albeit fraudulent, content. Anyone with a video presence online is susceptible to this advanced threat. Experts note that companies relying entirely on voice authentication could find themselves critically compromised by deepfake technology and should consider additional authentication measures.
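The "additional authentication measures" experts recommend often take the form of a second factor that a cloned voice cannot supply. The sketch below is a hypothetical illustration using the pyotp library: it layers a time-based one-time passcode on top of a voice match, with the enrollment flow and voice-match flag stubbed out as stand-ins for a real system.

```python
# Minimal sketch: voice authentication alone is insufficient, so require a
# time-based one-time passcode (TOTP) as a second factor. Uses pyotp
# (pip install pyotp); enrollment and voice matching are hypothetical stubs.
import pyotp

def enroll_caller() -> str:
    # Generated once at enrollment and stored server-side; the caller's
    # authenticator app holds the same secret.
    return pyotp.random_base32()

def authenticate(voice_matched: bool, secret: str, submitted_code: str) -> bool:
    # A deepfaked voice can satisfy the first check but not the second:
    # the fraudster would also need the device registered at enrollment.
    return voice_matched and pyotp.TOTP(secret).verify(submitted_code)

secret = enroll_caller()
code = pyotp.TOTP(secret).now()           # produced on the caller's device
print(authenticate(True, secret, code))   # True only when both factors pass
```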
Legacy AP Fraud Prevention Is Inadequate for Countering AI Fraud
Fraudsters are engaged in an unending arms race with those pioneering technologies to stop them, as both sides continually strive to outmaneuver one another’s techniques. Legacy accounts payable (AP) systems have proven ineffective at countering this latest AI threat, and companies will need to upgrade their systems to mitigate the associated risks.
AI fraud can easily bypass existing fraud prevention measures, thanks to its sophistication and ability to deploy at scale.
65% of organizations reported falling victim to payment fraud in 2022.
Of the victims that experienced payment fraud, 71% were impacted by business email compromise, a method by which fraudsters pretend to be a company’s supplier and trick accounting teams into paying them instead of their actual vendors. AI technologies have enabled criminal enterprises to execute this scheme on a much larger scale, orchestrating hundreds of scams simultaneously, with the goal of deceiving just a fraction of their targets. Nearly half of fraud victims said they were unable to recover stolen funds.
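Many BEC schemes hinge on lookalike sender domains that differ from a real supplier's by a character or two. As a minimal, hypothetical sketch of one countermeasure, the following uses Python's standard-library difflib to flag near-miss domains against a vendor allowlist; the domains and the 0.85 similarity threshold are illustrative assumptions.

```python
# Minimal sketch: flag sender domains that closely imitate trusted vendors.
# Standard library only; domains and threshold are illustrative.
import difflib

TRUSTED_VENDOR_DOMAINS = {"acme-supplies.com", "northwind.example"}

def imitated_domain(sender_domain: str, threshold: float = 0.85) -> str | None:
    """Return the trusted domain this sender appears to imitate, if any."""
    for trusted in TRUSTED_VENDOR_DOMAINS:
        if sender_domain == trusted:
            return None  # exact match: a known vendor
        similarity = difflib.SequenceMatcher(None, sender_domain, trusted).ratio()
        if similarity >= threshold:
            return trusted  # near miss: possible BEC spoof, hold payment
    return None

print(imitated_domain("acme-suppliess.com"))  # -> acme-supplies.com
```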
B2B payment fraudsters are experts at exploiting security loopholes.
Legacy fraud prevention systems still largely rely on methods that can be easily duped by AI: For example, 70% of companies still confirm changes to their suppliers’ bank account information by phone. Fraudsters’ increasing adeptness at replicating voices could easily render this verification method useless, resulting in massive losses as criminals redirect funds.
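A more robust alternative to phone confirmation is out-of-band verification that routes the confirmation code to contact details already on file, never to whoever submitted the change. Here is a minimal, hypothetical sketch of that workflow, with the vendor master file and delivery channel stubbed out for illustration.

```python
# Minimal sketch: bank-detail changes take effect only after a confirmation
# code is delivered to the contact verified at onboarding, not the requester.
import secrets

VENDOR_MASTER = {  # hypothetical vendor master file
    "vendor-123": {"contact_on_file": "+1-555-0100", "bank_account": "GB29NWBK0000"},
}
PENDING_CHANGES: dict[str, tuple[str, str]] = {}

def notify_contact_on_file(contact: str, token: str) -> None:
    # Stub: deliver via a channel registered at onboarding (portal, SMS);
    # a fraudster controlling the email thread never sees this code.
    print(f"Confirmation code {token} sent to {contact}")

def request_bank_change(vendor_id: str, new_account: str) -> None:
    token = secrets.token_urlsafe(8)
    PENDING_CHANGES[token] = (vendor_id, new_account)
    notify_contact_on_file(VENDOR_MASTER[vendor_id]["contact_on_file"], token)

def confirm_bank_change(token: str) -> bool:
    if token not in PENDING_CHANGES:
        return False
    vendor_id, new_account = PENDING_CHANGES.pop(token)
    VENDOR_MASTER[vendor_id]["bank_account"] = new_account
    return True
```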
Organizations Find Themselves Alone in Fight Against Deepfakes
Governments are working at a frustratingly slow pace to protect their economies from AI attacks, leaving organizations to fend for themselves against bad actors. More than 40% of businesses or their customers have already encountered AI fraud attacks, with deepfake software duping voice authentication systems 99% of the time. AP departments will need to deploy their own AI software to beat fraudsters at their own game.
A recent UN report details how the deepfake threat could spell danger worldwide.
In June, United Nations Secretary-General António Guterres told press outlets that “alarm bells” concerning generative AI like ChatGPT are “deafening” and ring “loudest from the developers who designed it.” To respond to the crisis, the UN is drafting its Code of Conduct for Information Integrity on Digital Platforms. However, this initiative is not expected to be published until September 2024. Further, the UN has minimal legal authority over the regulations of individual nations, making it unlikely that this initiative will effectively counter AI fraud.
U.K. regulator warns that firms must boost their protection against AI fraud.
The chief executive of the United Kingdom’s Financial Conduct Authority (FCA), Nikhil Rathi, warns that AI fraud techniques such as identity fraud and cyberattacks will increase in scale and sophistication in the coming years, saying that senior managers of financial firms will ultimately be responsible for damages if these attacks are left unchecked. “As AI is further adopted, the investment in fraud prevention and operational and cyber resilience will have to accelerate simultaneously,” Rathi notes.
Businesses Must Harden Their Defenses Against AI Fraud
The surging popularity of AI-assisted fraud poses a massive threat to all businesses, as AI offers fraudsters the ability not only to augment existing types of fraud or develop new ones but also to deploy their tactics on an unprecedented scale. With the push of a button, a fraudster can target thousands of companies all at once, and they need only a handful of successful attacks to harvest large sums of money or terabytes of valuable data that they can then sell on the darknet.
It is imperative for AP departments to harden their defenses against AI fraud. Certain techniques have achieved notable success against these attacks:
- Virtual cards have built-in controls that block unauthorized transactions, prevent overspending and improve cash management.
- Multifactor and knowledge-based authentication can keep fraudsters from impersonating vendors; answering questions generated from public records remains difficult to do convincingly, even with AI.
- AI and machine learning can be deployed by organizations to analyze spending patterns and identify anomalies that could be the result of fraud, as sketched after this list.
- Working with trusted payment solution partners that specialize in safeguarding transactions can offload complex security burdens from your existing team, freeing them to focus on other threat vectors in your business.
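As a minimal sketch of the anomaly-detection idea above, the following uses scikit-learn's IsolationForest on two illustrative features, payment amount and hour of day. The synthetic history, feature choice and contamination rate are all assumptions for demonstration, not a production design.

```python
# Minimal sketch: screen outgoing payments for anomalies with an isolation
# forest. The synthetic training data and features are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical history: invoices around $5,000, paid during business hours.
history = np.column_stack([rng.normal(5_000, 800, 500), rng.integers(9, 18, 500)])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# A $48,000 transfer initiated at 3 a.m. should stand out from the pattern.
new_payments = np.array([[5_200, 11], [48_000, 3]])
print(model.predict(new_payments))  # 1 = looks normal, -1 = flag for review
```

In practice, a model like this would feed a review queue rather than block payments outright, which keeps the vendor experience smooth.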
Organizations looking to make a dent in AI fraud will need to balance strengthening security with facilitating a smooth experience for vendors, however. Money lost to fraud could quickly be surpassed by lost revenue if vendors take their business elsewhere due to tedious verification requirements.
“The rise in digital payments demands a new era of security. With AI, GPT models and advanced technologies, we are proactive, not reactive, in safeguarding B2B AP payments.”
Chief Strategy Officer