
Executive Summary
Fraud analysts combat fraud every day, yet the task keeps growing larger and more complex.

How can we help our analysts work more efficiently?
PROBLEM
- Evidence is scattered, making investigations cumbersome.
- Current explanations are rigid and rules-based, lacking context.
SOLUTION
- Aggregate key evidence, patterns, and similar past cases.
- Apply Explainable Boosting Model (EBM) + Shapley Additive Explanations (SHAP) to surface clear, defensible risk drivers.
- Deliver concise, data-anchored rationales for every decision.
CONTEXT
- Detecting possible fraud is mature; post-flag review is a bottleneck.
- Analysts gather data manually, a slow, tedious process that leads to alert fatigue.
EXPECTED
- 30–50% faster fraud investigations.
- $1M in labor savings per 100M transactions.
- Auditable, transparent AI outputs for compliance.
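The $1M-per-100M-transactions figure can be sanity-checked with back-of-the-envelope arithmetic. The alert rate, minutes saved per case, and analyst hourly cost below are illustrative assumptions chosen to show how the estimate composes, not measured figures from the project:

```python
# Illustrative ROI sketch: every input here is an assumption, not measured data.
transactions = 100_000_000
alert_rate = 0.001            # assume 0.1% of transactions are flagged for review
minutes_saved_per_case = 12   # assume ~40% of a 30-minute manual review is automated
analyst_cost_per_hour = 50.0  # assumed fully loaded analyst cost

alerts = transactions * alert_rate
hours_saved = alerts * minutes_saved_per_case / 60
labor_savings = hours_saved * analyst_cost_per_hour

print(f"{labor_savings:,.0f}")  # → 1,000,000
```

Under these assumptions the savings land at roughly $1M; the real figure moves linearly with each input, so the sketch doubles as a sensitivity check.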
Fraud Analysts Surveyed
What is Your Biggest Bottleneck in Fraud Alert Processing?
Gathering supporting information
67%
Where Do You Feel AI Could Save You Time?
Summarize Alerts & Provide Explanation
44%
Architecture Diagram

Explainability
Local Case-Level Explanation
What FraudLens displays when an analyst queries a transaction.
Chatbot Explainability Output
• Highlights top contributing features via ranked logit impacts and a SHAP/EBM waterfall plot.
• Shows the model’s decision and EBM-predicted fraud probability.
• Provides a concise, RAG-enhanced narrative to translate model outputs into analyst-ready insight.
• Includes traceable sources to support transparency and auditability.

• Each bar quantifies how a specific feature pushed the fraud score higher or lower for this transaction.
• EBM + SHAP deliver defensible, case-level transparency by revealing the exact factors driving the model’s decision.
Quality Impacts
Faster Investigation Turnaround
Automated retrieval and evidence generation reduce review time by 30–50%.
Improved Consistency of Outputs
Template-driven narrative summaries ensure uniformity across post-flag cases.
Enhanced Analyst Trust
Transparent model reasoning supports explainable, defensible case documentation.
Monetization

Demo
Risks & Mitigations
Data privacy
Implements encryption and role-based access, ensuring no sensitive data exposure.
Model Bias
Initial model explains decisions transparently; formal AIR-based fairness testing will be introduced to evaluate demographic impact.
Overreliance on AI
Empowers analysts with contextual insights instead of automated verdicts.
Auditability
Ensures every decision path is reproducible for compliance and governance.
Next Steps

1. Expand behavioral profiling:
Add customer-level behavioral profiles for deeper contextual accuracy.
2. Add AIR-based fairness testing:
Introduce Adverse Impact Ratio (AIR) to evaluate demographic fairness and ensure responsible model behavior.
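The Adverse Impact Ratio compares flag rates across demographic groups against a reference group. The helper below is a minimal sketch; the group labels and counts are hypothetical, and the 0.8 "four-fifths rule" threshold is a common screening convention rather than a fixed regulatory requirement:

```python
def adverse_impact_ratio(flags_by_group, reference_group):
    """AIR = group flag rate / reference group flag rate.

    flags_by_group maps group name -> (flagged_count, total_count).
    """
    ref_flagged, ref_total = flags_by_group[reference_group]
    ref_rate = ref_flagged / ref_total
    return {
        group: (flagged / total) / ref_rate
        for group, (flagged, total) in flags_by_group.items()
    }

# Hypothetical counts for illustration only.
counts = {"group_a": (40, 1000), "group_b": (60, 1000)}
air = adverse_impact_ratio(counts, reference_group="group_a")
print(f"{air['group_b']:.2f}")  # → 1.50: group_b is flagged 1.5x as often

for group, ratio in air.items():
    # Four-fifths-rule style screen: ratios far from parity warrant review.
    if ratio < 0.8:
        print(f"{group}: AIR {ratio:.2f} is below the 0.8 screening threshold")
```

Since being flagged is an adverse outcome here, ratios well above 1.0 (a group flagged disproportionately often) deserve the same scrutiny as ratios below 0.8 in a hiring-style setting.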
3. Strengthen RAG grounding:
Tighten evidence retrieval to reduce hallucinations.
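One lightweight way to tighten grounding is to reject narrative sentences that share too little vocabulary with any retrieved evidence snippet. The token-overlap check below is an illustrative sketch of that idea, not the production retrieval pipeline; the overlap threshold and length filter are arbitrary choices:

```python
def is_grounded(sentence, evidence, min_overlap=0.5):
    """Treat a sentence as grounded if enough of its content words
    appear in at least one retrieved evidence snippet."""
    words = {w.lower().strip(".,") for w in sentence.split() if len(w) > 3}
    if not words:
        return True
    for snippet in evidence:
        snippet_words = {w.lower().strip(".,") for w in snippet.split()}
        if len(words & snippet_words) / len(words) >= min_overlap:
            return True
    return False

evidence = ["Card was used at three new merchants within one hour."]
print(is_grounded("Three new merchants were charged within one hour.", evidence))   # True
print(is_grounded("The cardholder recently changed their mailing address.", evidence))  # False
```

In practice this kind of check would sit after generation, flagging unsupported sentences for removal or for re-retrieval, rather than blocking the whole narrative.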
4. Enhance behavioral anomaly detection:
Combine sequential features (merchant chains, spending trajectories) with network-level relationships to expose complex, realistic fraud patterns.