2024 is a unique year for risk teams: they face increasing amounts of regulatory scrutiny, novel instances of fraud, and lower budgets and headcount. Many risk teams are asking themselves: how can we support the growth of the company while operating efficiently?
At Coris, we’ve been hyper-focused on empowering risk teams to automate SMB risk and fraud processes while staying lean. Today, we’re excited to introduce Risk AI, an autonomous agent that acts and makes decisions like a risk analyst or underwriter. Our initial use case is tackling false positives, and we have several more use cases launching in the next few weeks.
Read on to learn more, and reach out to get started today.
As companies grow, their risks grow non-linearly. New data sources and novel fraud methods mean that risk analysts face an exponentially increasing number of challenges. This is especially true for manual reviews, which take up a disproportionate amount of risk teams’ time and resources. Manual reviews require significant human judgment and contextual awareness to reach accurate decisions. The conventional solution has been to throw humans (i.e., more risk analysts) at the problem, but in the age of tech austerity, this is no longer a viable option.
In the SMB world, false positives make up the majority of manual reviews. There isn’t enough structured, high-quality data for machine learning (ML) models to process, so models often cannot automatically decision edge cases and instead forward them to manual review. This leads to higher manual review caseloads for risk analysts and larger risk teams, but doesn’t meaningfully prevent bad actors or activity.
“Risk teams are constantly facing new fraud threats, and business’s expectations for our outcomes keep increasing while still being operationally efficient. I want to make sure my team can focus on the most tangible and complex threats facing our business, but this is difficult to do given the persistent volume of false positives we need to analyze. I can see Coris’s new Risk AI being a game-changer for us.”
- Jason Fransua, Chief Compliance Officer at Hearth
Recent advances in LLMs offer a unique opportunity to cut down on false positives. Identifying false positives is a probabilistic problem, and generative AI models excel in these situations because they can make context-aware decisions based on unstructured data.
We’ve already implemented LLM-powered risk tools that automate onboarding and risk management for large enterprises. Customers kept flagging the false positive problem to us, so we wondered: how can we use LLMs to review these alerts efficiently and reserve human risk decisioning for the most complex cases?
Risk AI is an autonomous, conversational agent that automates decisioning for 95% of false positive cases. As a 24/7 conversational agent on Slack, it can translate natural language prompts into targeted risk insights and actions.
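Coris hasn’t published Risk AI’s internals, but a common guardrail pattern for conversational agents like this is to have the LLM propose a structured action that the agent validates against an allow-list before executing. The sketch below is purely illustrative; the action names and JSON shape are our own assumptions, not Coris’s API.

```python
import json

# Hypothetical allow-list of actions the agent may execute.
ALLOWED_ACTIONS = {"lookup_merchant", "close_alert", "escalate_to_analyst"}

def parse_agent_action(llm_output: str) -> dict:
    """Validate the JSON action an LLM proposes before executing it.

    Restricting the agent to a fixed set of actions means a hallucinated
    or malformed response is rejected instead of silently acted upon.
    """
    action = json.loads(llm_output)
    if action.get("name") not in ALLOWED_ACTIONS:
        raise ValueError(f"Unknown action: {action.get('name')!r}")
    return action
```

In this pattern, the natural-language prompt from Slack goes to the LLM, and only the validated, structured output is allowed to touch real risk workflows.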
Risk AI’s first two use cases focus on business status changes and negative merchant reviews:
For example, the business status might have changed to closed on the merchant’s Google listing, but the merchant may simply have moved and still be operating normally, with an updated address on Yelp or its website. This check typically requires manual analysis, so Risk AI cuts down on the false positive workload for human risk analysts.
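The business-status check above boils down to reconciling conflicting signals across sources. As an illustration only (the data model and decision labels here are our assumptions, not Coris’s), a minimal sketch:

```python
from dataclasses import dataclass

@dataclass
class StatusSignal:
    source: str   # e.g. "google", "yelp", "website"
    status: str   # "open" or "closed"
    address: str

def reconcile_status(signals: list[StatusSignal]) -> str:
    """Decide whether a 'business closed' alert is a likely false positive.

    If any source still reports the business as open -- possibly at a new
    address -- the closure alert is treated as a likely false positive
    rather than escalated to a human analyst.
    """
    closed = [s for s in signals if s.status == "closed"]
    still_open = [s for s in signals if s.status == "open"]
    if closed and still_open:
        # Conflicting signals: the merchant may simply have moved.
        return "likely_false_positive"
    if closed:
        return "escalate_to_analyst"
    return "no_action"
```

The real agent reasons over messier, unstructured evidence than this, but the shape of the decision (conflicting signals → probable false positive, unanimous closure → human review) is the same.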
Sample actions / queries include:
Using Risk AI is straightforward:
No conversation about AI agents is complete without discussing how to mitigate hallucination risk. At Coris, we’ve been building with LLMs for a year and a half, and we keep a human in the loop for all generative AI functionality to catch hallucinated responses. For Risk AI, we run human-in-the-loop reviews to verify outcomes. In addition, the agent prints an audit log of its actions, giving us a clear paper trail of what it’s doing and why - it’s not a black box.
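An audit log of agent actions is typically an append-only record of each action, its rationale, and its inputs. The sketch below shows one common way to structure such a log; the field names and export format are illustrative assumptions, not Coris’s implementation.

```python
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only record of every action an agent takes, with its rationale."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, action: str, rationale: str, inputs: dict) -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": action,        # what the agent did
            "rationale": rationale,  # why it did it
            "inputs": inputs,        # evidence it acted on
        }
        self.entries.append(entry)
        return entry

    def export(self) -> str:
        # One JSON object per line, easy for reviewers and tools to scan.
        return "\n".join(json.dumps(e) for e in self.entries)
```

Because every entry pairs an action with its rationale and inputs, a human reviewer can reconstruct exactly why the agent made each decision.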
Risk AI is already receiving positive feedback from our customers. We’re actively building out new use cases and exploring additional integrations to process more unstructured information.
For example, in the near future Risk AI could automatically send an email to a merchant under manual review through Zendesk, and process the merchant’s response in real-time.
Risk AI is a natural extension of our mission to modernize SMB risk infrastructure. We started with better SMB intelligence via MerchantProfiler, and added Fuzio to automate risk decisioning. Risk AI takes SMB risk automation a step further by streamlining judgment and decision-making for the most common risk management workflows.
Reach out if you’d like to learn more, or if you have use cases you’d like us to automate with the agent.