Every Japanese OCR app demos itself the same way: a clean printed page from a textbook, perfect lighting, perfect angle. The result? 99% accuracy. Marketing screenshots, brochures, App Store features.
Real life isn’t a textbook page. Real life is a friend’s scribbled note, an izakaya menu in marker, your host mother’s shopping list, an old letter from your grandmother. So I built a benchmark and ran the same 100 characters through 5 popular OCR apps in April 2026. Only one averaged above 80% accuracy across the four scenarios.
The Test Setup
I assembled four document categories, 25 characters each, photographed in identical lighting on the same iPhone 15:
- Category A — Print: NHK news headlines printed at 12pt.
- Category B — Neat handwriting: A Japanese teacher’s sample for kids.
- Category C — Casual handwriting: My host brother’s class notes (he’s a university student who writes fast).
- Category D — Cursive / elderly: A 78-year-old neighbour’s recipe card written in semi-cursive.
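For clarity, here is how I scored each sample. This is a minimal sketch, assuming accuracy means exact character matches over the 25-character ground truth; the function name and sample strings are illustrative, not from any app:

```python
def category_accuracy(ground_truth: str, ocr_output: str) -> float:
    """Position-by-position exact-match rate over a category's 25-char sample."""
    matches = sum(1 for gt, out in zip(ground_truth, ocr_output) if gt == out)
    return matches / len(ground_truth)

# Hypothetical sample: 24 of 25 characters read correctly
truth = "水" * 25
read = "水" * 24 + "木"
print(f"{category_accuracy(truth, read):.0%}")  # → 96%
```

One judgment call: a misread character counts the same as a dropped one, since `zip` stops at the shorter string and everything past a dropped character shifts out of alignment anyway.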
Apps tested:
- Google Lens
- Apple Live Text
- Manga OCR (free, open-source)
- Yomiwa
- Kanjijo (built-in OCR)
The Raw Numbers
| App | A: Print | B: Neat HW | C: Casual HW | D: Cursive | Average |
|---|---|---|---|---|---|
| Google Lens | 100% | 88% | 56% | 20% | 66% |
| Apple Live Text | 100% | 72% | 44% | 16% | 58% |
| Manga OCR | 96% | 52% | 28% | 8% | 46% |
| Yomiwa | 100% | 92% | 68% | 40% | 75% |
| Kanjijo | 100% | 96% | 84% | 52% | 83% |
Three observations jump out:
- Print is solved. Every modern OCR app handles printed kanji.
- Neat handwriting is borderline. Apple Live Text drops to 72%. Manga OCR collapses to 52% because it was trained on manga panels, not paper.
- Cursive destroys everything. Even the winners come in below 55%.
Why Grandma’s Notebook Breaks OCR
Cursive (草書, sōsho) and semi-cursive (行書, gyōsho) introduce three failure modes:
- Stroke fusion: Multiple strokes blend into one fluid line. The model can’t segment.
- Radical drift: Common radicals get simplified to dots or hooks that don’t exist in printed fonts.
- Personal idiosyncrasies: Older writers omit strokes their generation considered redundant. Modern training data doesn’t cover this.
The apps that survived (Yomiwa, Kanjijo) had two things in common: training data that included real handwritten samples, and candidate suggestion lists — instead of guessing one character, they offer the top 3–5 likely matches with confidence scores. That single UX choice rescued dozens of cursive characters in my test.
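The candidate-list pattern is worth sketching. This is my own hypothetical reconstruction of the UX, not either app's actual API: instead of committing to the single highest-scoring character, the recognizer surfaces the top few with their confidences and lets the human break the tie:

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    char: str
    confidence: float  # 0.0–1.0, from the recognition model


def top_candidates(raw_scores: dict[str, float], k: int = 5) -> list[Candidate]:
    """Return the k most likely characters instead of one hard guess."""
    ranked = sorted(raw_scores.items(), key=lambda kv: kv[1], reverse=True)
    return [Candidate(c, s) for c, s in ranked[:k]]


# A smudged cursive stroke might score like this (made-up numbers):
raw = {"縁": 0.34, "緑": 0.31, "録": 0.18, "禄": 0.09, "彖": 0.04}
for cand in top_candidates(raw, k=3):
    print(f"{cand.char}  {cand.confidence:.0%}")
```

Note how close the top two scores are: a single-guess app would be wrong nearly as often as right here, while a three-item list almost always contains the answer.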
How to Choose an OCR App for Your Use Case
| Your Situation | Best Choice | Why |
|---|---|---|
| Reading menus, signs, websites | Google Lens / Apple Live Text | Free, instant, accurate on print. |
| Reading manga panels | Manga OCR | Trained specifically on speech bubbles and SFX. |
| Studying from textbooks & notes | Kanjijo | OCR + immediate flashcard add + SRS scheduling. |
| Reading handwritten letters / recipes | Yomiwa or Kanjijo | Both expose candidate lists for ambiguous strokes. |
| Real-time conversation translation | Google Translate | Live overlay, fastest pipeline. |
The Two-App Stack I Actually Use Daily
After this test I settled on a hybrid:
- Google Lens for quick “what does this menu say” lookups while walking around.
- Kanjijo the moment I see a kanji I want to actually learn — one tap adds it to my SRS deck with reading, meaning, mnemonic and stroke order.
The difference is intent. Google translates and forgets. Kanjijo translates and remembers for me.
The Bottom Line
Don’t trust marketing-page accuracy claims. Test with your documents — the messy, real, off-axis ones you’ll actually point your camera at. Most apps do 100% on a perfect page and fall apart on anything human-written.
Pick the OCR that survives your hardest input. For me that was a recipe written in 1962 by a woman who learned kanji before some of these apps’ engineers were born.