AI-Augmented Fraud Detection
Victor Soussan · Product Design

Agentic interfaces raise a question I find genuinely interesting from a product design perspective: when an AI is part of a decision process, how do you keep the person making the call actively engaged? If the AI does too much, the human disengages. If it does too little to explain itself, trust erodes.
Fraud detection in European banking was a natural context to explore this. Analysts work under time pressure, 80% of their alerts turn out to be false alarms, and the final decision is always theirs. A setting where the balance between AI assistance and human judgment has immediate, measurable consequences.
RiskOS is the prototype I built to test ideas around that balance.
A fraud analyst at a European neobank handles 80 to 150 alerts per shift. Most are harmless. Every minute spent on a false alarm is a minute not available for a real case.
In many teams, the daily tool is still a spreadsheet and a set of fixed rules. No surrounding context, no sorting by relevance, no memory of previous cases.
Same data, two readings.
On the left, the alert feed as most institutions receive it: raw spreadsheet, flat columns. On the right, the same information organized in RiskOS.
The central question was whether the AI could handle the preparation work while the analyst retained full ownership of the decision.
Three principles guided the design.
Where RiskOS sits in the process.
A suspicious transaction trips the bank's automated rules. If flagged, the alert goes into a queue, the AI analyzes it, and the analyst reviews and decides. RiskOS is the workspace for that last step.
The analyst opens their session. Five cases are waiting, sorted by risk level. They can filter by priority and track their progress with a live counter.
Triage view
Frustration: Without sorting, the analyst scrolls through the whole list looking for the urgent ones.
Benefit: Color-coded priorities and a live counter. Triage takes a few seconds instead of a scroll through the list.
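The triage logic behind this view can be sketched in a few lines of TypeScript. The types, thresholds, and names below are illustrative, not the prototype's actual values:

```typescript
// Sketch of the triage view: sort alerts by risk, tag each with a
// priority band for color coding, and expose a live progress counter.
type Priority = "high" | "medium" | "low";

interface Alert {
  id: string;
  score: number; // 0-100 risk score from the AI
  resolved: boolean;
}

// Illustrative thresholds for the color bands.
function priorityOf(score: number): Priority {
  if (score >= 70) return "high";
  if (score >= 40) return "medium";
  return "low";
}

// Highest risk first, so urgent cases sit at the top of the queue.
function triage(alerts: Alert[]): Alert[] {
  return [...alerts].sort((a, b) => b.score - a.score);
}

// The live counter shown in the header.
function progress(alerts: Alert[]): string {
  const done = alerts.filter((a) => a.resolved).length;
  return `${done} of ${alerts.length} resolved`;
}

const queue = triage([
  { id: "a1", score: 45, resolved: false },
  { id: "a2", score: 92, resolved: true },
  { id: "a3", score: 12, resolved: false },
]);
```

The point of centralizing this in one place is that the list order, the colors, and the counter can never disagree with each other.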
The AI writes its analysis in real time, word by word. Relevant details like amounts, locations, and devices are highlighted as they appear. The data sources used light up progressively, and a confidence score indicates the level of certainty.
The action buttons remain hidden until the analysis is complete. The analyst reads the full reasoning before any decision is possible.
AI analysis, streaming
Frustration: Usually the analyst gets a risk number with no explanation. They reconstruct the reasoning themselves.
Benefit: Here the AI writes out what it found, step by step. The analyst reads the reasoning, then decides.
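The gating rule behind this interaction can be sketched as a small reducer, assuming a token-by-token stream. State shape and names here are illustrative:

```typescript
// Sketch of the streaming gate: decision actions stay disabled until the
// streamed analysis is complete. A reducer keeps that invariant in one place.
interface AnalysisState {
  text: string;
  done: boolean;
}

type AnalysisEvent =
  | { kind: "token"; value: string }
  | { kind: "complete" };

function reduce(state: AnalysisState, event: AnalysisEvent): AnalysisState {
  switch (event.kind) {
    case "token":
      return { ...state, text: state.text + event.value };
    case "complete":
      return { ...state, done: true };
  }
}

// The UI derives button visibility from the same state, so the buttons
// cannot appear before the analysis finishes streaming.
function actionsEnabled(state: AnalysisState): boolean {
  return state.done;
}

let state: AnalysisState = { text: "", done: false };
for (const token of ["Card ", "used ", "abroad."]) {
  state = reduce(state, { kind: "token", value: token });
}
const duringStream = actionsEnabled(state);
state = reduce(state, { kind: "complete" });
const afterStream = actionsEnabled(state);
```

Deriving the buttons' visibility from the stream state, rather than toggling them separately, is what makes the "read before you decide" constraint impossible to bypass.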
The analyst picks an action: block the card, pass the case to a senior, or keep it under watch. A confirmation screen recaps what happened. Then two things appear that usually stay invisible: the Slack message to the fraud team, and the SMS to the customer.
For handoffs, the AI pre-writes a note the analyst can edit before sending. The case arrives with context instead of landing cold.
Confirmation and what happened next
Frustration: Usually the analyst acts and never sees the result. Handoffs seem to disappear.
Benefit: Every action has a visible trace: the Slack message, the customer SMS, the ticket. The analyst sees it went through.
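The fan-out from one action to its visible traces can be sketched as a single mapping function. Channel names and message wording are illustrative, not the prototype's actual copy:

```typescript
// Sketch of the "visible trace" idea: each analyst action produces the
// downstream notifications that are usually invisible, so the UI can
// show them back to the analyst on the confirmation screen.
type Action = "block" | "escalate" | "watch";

interface Trace {
  channel: "slack" | "sms" | "ticket";
  message: string;
}

function tracesFor(action: Action, caseId: string): Trace[] {
  const traces: Trace[] = [
    { channel: "ticket", message: `Case ${caseId}: ${action}` },
    { channel: "slack", message: `Fraud team: case ${caseId} marked "${action}".` },
  ];
  // Only a blocked card triggers a customer-facing SMS.
  if (action === "block") {
    traces.push({ channel: "sms", message: "Your card has been temporarily blocked." });
  }
  return traces;
}
```

Because the confirmation screen renders exactly what this function returns, what the analyst sees and what was actually sent cannot drift apart.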
A medium-risk alert arrives: score 45, a 450 euro payment. The AI reviews the transaction history and finds nothing unusual. The analyst confirms with one click. Total time from open to resolved: eight seconds.
False alarm, resolved
Frustration: False alarms take as long as real cases, even though they need no action.
Benefit: The AI catches the harmless ones in seconds. The analyst keeps their attention for the rest.
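The fast path for cases like this one can be sketched as a simple eligibility check. The threshold and field names are illustrative assumptions, not the prototype's actual rules:

```typescript
// Sketch of the false-alarm fast path: when the AI's history check finds
// no anomaly and the risk score stays below the high band, the case is
// eligible for a one-click confirmation instead of a full review.
interface Review {
  score: number;        // 0-100 risk score
  anomalies: string[];  // findings from the transaction-history check
}

function oneClickResolvable(review: Review): boolean {
  return review.anomalies.length === 0 && review.score < 70;
}

// Score 45, nothing unusual in the history: eligible for one-click close.
const routine = oneClickResolvable({ score: 45, anomalies: [] });
// Same score, but one finding: back to the full review path.
const flagged = oneClickResolvable({ score: 45, anomalies: ["new device"] });
```

The design point is that the fast path is opt-in for the analyst: eligibility only surfaces the one-click button, it never auto-closes the case.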
The analyst works through five cases in sequence. After each resolution, the next case loads automatically. A progress bar and running timer track the session. At the end: five cases resolved, 92 seconds total, 18 seconds on average.
Case flow and session recap
Frustration: Switching cases usually means starting over mentally each time.
Benefit: The cases chain without interruption. Running totals keep the pace visible.
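The running totals amount to a small rollup over per-case durations. The individual durations below are illustrative, chosen to sum to the session totals mentioned above:

```typescript
// Sketch of the session recap: per-case durations roll up into a
// running total and a rounded average for the end-of-session screen.
function recap(durationsSec: number[]): { total: number; avg: number } {
  const total = durationsSec.reduce((sum, d) => sum + d, 0);
  return { total, avg: Math.round(total / durationsSec.length) };
}

// Five cases, 92 seconds total, 18 seconds on average (rounded from 18.4).
const session = recap([8, 14, 21, 30, 19]);
```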
Two findings from this project that I believe apply well beyond fraud detection.
Streaming the reasoning builds trust in a way that scores don't.
When the AI writes its analysis word by word, the analyst reads along and forms their own view at the same time. They can agree, push back, or notice something the AI missed. A confidence score after the fact just says "trust me" without showing the work.
Hiding the buttons until the analysis is done changes how people read.
The decision buttons in RiskOS only appear once the AI finishes writing. It adds a few seconds, but those seconds are the difference between scanning and actually reading. Under pressure, people click the first thing available. This small constraint gives the reasoning a chance to land.
I see the same dynamics in other contexts where AI supports decisions under time pressure: compliance, medical triage, content moderation, incident response.
Working prototype built with React 18, Vite, and Tailwind CSS. Dark theme, desktop-first. Deployed on Vercel.