Live Production Data

Enrichment Quality, Measured in Production

Real benchmarks from real bank transactions. Not synthetic tests — actual data processed by our API from paying customers across 13 countries.

Updated from production database

Avg Confidence

75.5%

Across all merchants

52% score ≥0.85

Avg Latency

287ms

With cache (71.7% hit rate)

p50 uncached: 5.2s

Merchants Enriched

129

Unique merchants identified

20+ categories

API Calls Served

1,874

Total production requests

13 countries

Confidence Score Distribution

How confident is the AI across all 129 production enrichments? Higher is better.

0.95 – 1.00: 48 (37.2%)
0.85 – 0.94: 19 (14.7%)
0.70 – 0.84: 38 (29.5%)
0.50 – 0.69: 7 (5.4%)
< 0.50: 17 (13.2%)

81.4% of enrichments score ≥ 0.70 confidence

51.9% score ≥ 0.85
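The headline figures follow directly from the bucket counts; a quick arithmetic check in Python:

```python
# Bucket counts from the distribution above.
buckets = {
    "0.95-1.00": 48,
    "0.85-0.94": 19,
    "0.70-0.84": 38,
    "0.50-0.69": 7,
    "<0.50": 17,
}

total = sum(buckets.values())  # 129 production enrichments

# Share of enrichments at or above the 0.70 confidence floor,
# and the share in the top bucket.
high = buckets["0.95-1.00"] + buckets["0.85-0.94"] + buckets["0.70-0.84"]
share_070 = high / total
top_share = buckets["0.95-1.00"] / total

print(f"total: {total}")              # total: 129
print(f">=0.70: {share_070:.1%}")     # >=0.70: 81.4%
print(f"0.95-1.00: {top_share:.1%}")  # 0.95-1.00: 37.2%
```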

Real Input → Output Examples

These are actual bank transaction descriptions enriched by our API in production.

Enrichment Result (98% confidence)

Raw Input

AMZN MKTP US*2K4X9Y1Z0

{
  "merchant_name": "Amazon Marketplace",
  "category": "ecommerce",
  "domain": "amazon.com",
  "confidence": 0.98,
  "mcc_code": "5417",
  "country": "US",
  "is_subscription": false
}
MCC 5417 · US · ecommerce
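A minimal sketch of how a client might consume a response like the one above. The confidence threshold here is a hypothetical placeholder, not a documented recommendation:

```python
import json

# Sample response body, as shown in the example above.
raw_response = """
{
  "merchant_name": "Amazon Marketplace",
  "category": "ecommerce",
  "domain": "amazon.com",
  "confidence": 0.98,
  "mcc_code": "5417",
  "country": "US",
  "is_subscription": false
}
"""
enrichment = json.loads(raw_response)

# A client would typically branch on confidence before trusting the result.
CONFIDENCE_FLOOR = 0.70  # hypothetical threshold; tune per use case
if enrichment["confidence"] >= CONFIDENCE_FLOOR:
    label = f'{enrichment["merchant_name"]} ({enrichment["category"]})'
else:
    label = "Unknown merchant"

print(label)  # Amazon Marketplace (ecommerce)
```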

Category Coverage

20+ merchant categories identified

Restaurant: 19
E-commerce: 14
Grocery: 9
Retail: 8
Financial: 8
Transfer: 7
Streaming: 6
Cafe: 6
Subscription: 4
Other: 48

Geographic Coverage

13 countries in production data

Brazil: 53
United States: 35
Italy: 4
Spain: 3
Argentina: 1
Mexico: 1
Germany: 1
Portugal: 1
Chile: 1
Canada: 1
Greece: 1
China: 1
Indonesia: 1

Latency & Performance

Measured from production API responses. Cached requests return in milliseconds.

Average Response Time

287ms

Across all 1,874 requests (cached + uncached)

Cache Hit Rate

71.7%

1,343 of 1,874 requests served from cache

Uncached AI Enrichment

5.2s

p50 for cold enrichment with AI + web search

Uncached Latency Percentiles

p50: 5.2s
p75: 6.0s
p90: 7.3s
p95: 8.2s
p99: 12.7s

Cold enrichments use Claude AI with web search for maximum accuracy. After the first call, results are cached for 7 days and served in <50ms.

Quality Score Breakdown

Each enrichment is scored 0–1 based on field completeness.

Merchant Name (15%): Clean brand name identified
Domain (15%): Website domain resolved
Confidence (15%): AI self-reported certainty
Category (10%): Business category assigned
MCC Code (10%): ISO 18245 code mapped
Phone (10%): Customer service number
Email (10%): Support email found
Support URL (5%): Help page URL
Categories (5%): Multiple categories assigned
Description (5%): Business description generated

Average quality score across production: 0.733. Entries scoring ≥0.70 are cached for 7 days; lower scores expire in 24 hours and are re-enriched.
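The weighting above can be expressed as a simple completeness check. This is a sketch under the assumption that each field contributes its full weight when present and non-empty, and nothing otherwise; the production scoring logic may be more nuanced:

```python
# Field weights from the breakdown above (sum to 1.0).
WEIGHTS = {
    "merchant_name": 0.15,
    "domain": 0.15,
    "confidence": 0.15,
    "category": 0.10,
    "mcc_code": 0.10,
    "phone": 0.10,
    "email": 0.10,
    "support_url": 0.05,
    "categories": 0.05,
    "description": 0.05,
}

def quality_score(enrichment: dict) -> float:
    """Sum the weights of fields that are present and non-empty."""
    return round(
        sum(w for field, w in WEIGHTS.items() if enrichment.get(field)),
        3,
    )

# A record with only the core identification fields scores lower,
# so it would fall into the shorter 24-hour cache window.
partial = {
    "merchant_name": "Amazon Marketplace",
    "domain": "amazon.com",
    "confidence": 0.98,
    "category": "ecommerce",
    "mcc_code": "5417",
}
print(quality_score(partial))  # 0.65
```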

Methodology

All data on this page comes from our production MongoDB database. These are real transactions from real customers — not synthetic benchmarks or cherry-picked examples.

Enrichment pipeline: Each transaction hits our cache first. On a miss, we call Claude AI (Haiku 4.5) with optional web search to identify the merchant, then cache the result for 7 days. 97% of enrichments use web search for maximum accuracy.
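The cache-first flow described above can be sketched roughly as follows. The function and store names are illustrative, and the model call is stubbed out; this is not the production code:

```python
import time

CACHE_TTL_HIGH = 7 * 24 * 3600  # quality >= 0.70: cached for 7 days
CACHE_TTL_LOW = 24 * 3600       # lower quality: expires in 24 hours

# description -> (result, expiry timestamp)
_cache: dict = {}

def _call_model_with_web_search(description: str) -> dict:
    # Placeholder for the slow (~5s) AI call with web search.
    return {"merchant_name": "Amazon Marketplace", "quality": 0.9}

def enrich(description: str, now=None) -> dict:
    """Cache-first enrichment: fast hit path, slow miss path."""
    now = time.time() if now is None else now
    hit = _cache.get(description)
    if hit is not None and hit[1] > now:
        return hit[0]  # cache hit: served in milliseconds
    # Cache miss: run the model, then cache with a quality-based TTL.
    result = _call_model_with_web_search(description)
    ttl = CACHE_TTL_HIGH if result["quality"] >= 0.70 else CACHE_TTL_LOW
    _cache[description] = (result, now + ttl)
    return result
```

The quality-based TTL means weak results are retried within a day, while strong results stay hot for a week.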

Confidence scores are self-reported by the AI model based on how certain it is about the identification. Scores below 0.50 typically indicate obscure local businesses or heavily abbreviated transaction codes.

Latency: The 287ms average includes cached responses (~12ms) and uncached AI calls (~5.2s). As the cache grows, average latency drops. At 71.7% cache hit rate, most requests are already fast.

Try it with your own transactions

20 free API calls. No credit card required. See the enrichment quality for yourself.