A React Native Vision Camera frame processor for on-device text recognition (OCR) and translation using ML Kit.
✨ Actively maintained fork of `react-native-vision-camera-text-recognition`, with modern improvements, bug fixes, and support for the latest Vision Camera and React Native versions.
The original packages are no longer actively maintained.
This fork provides:
- ✅ Ongoing maintenance and compatibility with React Native 0.76+ and Vision Camera v4+
- 🌐 Translation support (not just OCR) powered by ML Kit
- 🛠 Improved stability and error handling
- 🚀 Faster processing and frame optimization
- 📦 TypeScript definitions included
- 🧩 Consistent API that works seamlessly with modern React Native projects
- 🧩 Simple drop-in API
- ⚡ Fast, accurate on-device OCR
- 📱 Works on Android and iOS
- 🌐 Built-in translation via ML Kit
- 📸 Recognize text from live camera or static photos
- 💪 Written in Kotlin and Swift
- 🔧 Compatible with `react-native-vision-camera` and `react-native-worklets-core`
- 🔥 Compatible with Firebase
Peer dependencies:

You must have `react-native-vision-camera` and `react-native-worklets-core` installed.

```sh
npm install react-native-vision-camera-ocr-plus
# or
yarn add react-native-vision-camera-ocr-plus
```

If you have Firebase in your project, you will need to set your iOS Deployment Target to at least 16.0.
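For a bare React Native project, one place to raise the deployment target is the `platform` line in `ios/Podfile` (a sketch; your Podfile layout may differ):

```ruby
# ios/Podfile: raise the minimum iOS version, required when Firebase is present
platform :ios, '16.0'
```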
On Apple Silicon Macs, building for the iOS Simulator (arm64) may fail after installing this package.
This is a known limitation of Google ML Kit, which does not currently ship an arm64-simulator slice for some iOS frameworks.
The library works correctly on physical iOS devices and on the iOS Simulator when running under Rosetta.
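If you need an arm64 simulator build anyway, a workaround commonly used in ML Kit projects (an assumption here, not an official instruction from this package) is to exclude the arm64 simulator architecture in the Podfile `post_install` hook, so the simulator slice builds as x86_64 and runs under Rosetta:

```ruby
# ios/Podfile: build the simulator slice as x86_64 only (runs under Rosetta)
post_install do |installer|
  installer.pods_project.targets.each do |target|
    target.build_configurations.each do |config|
      config.build_settings['EXCLUDED_ARCHS[sdk=iphonesimulator*]'] = 'arm64'
    end
  end
end
```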
🔗 Full context and discussion
| Previous Package | Replacement | Notes |
|---|---|---|
| `react-native-vision-camera-text-recognition` | `react-native-vision-camera-ocr-plus` | Drop-in replacement with fixes and updates |
| `vision-camera-ocr` | `react-native-vision-camera-ocr-plus` | Actively maintained alternative |
👉 See the example app for a working demo.
Recognizing text from the live camera:

```jsx
import React, { useState } from 'react';
import { StyleSheet } from 'react-native';
import { useCameraDevice } from 'react-native-vision-camera';
import { Camera } from 'react-native-vision-camera-ocr-plus';

export default function App() {
  const [data, setData] = useState(null);
  const device = useCameraDevice('back');

  return (
    <>
      {!!device && (
        <Camera
          style={StyleSheet.absoluteFill}
          device={device}
          isActive
          mode="recognize"
          options={{ language: 'latin' }}
          callback={(result) => setData(result)}
        />
      )}
    </>
  );
}
```

Translating recognized text:

```jsx
import React, { useState } from 'react';
import { StyleSheet } from 'react-native';
import { useCameraDevice } from 'react-native-vision-camera';
import { Camera } from 'react-native-vision-camera-ocr-plus';

export default function App() {
  const [data, setData] = useState(null);
  const device = useCameraDevice('back');

  return (
    <>
      {!!device && (
        <Camera
          style={StyleSheet.absoluteFill}
          device={device}
          isActive
          mode="translate"
          options={{ from: 'en', to: 'de' }}
          callback={(result) => setData(result)}
        />
      )}
    </>
  );
}
```

Using the `useTextRecognition` hook with your own frame processor:

```jsx
import React from 'react';
import { StyleSheet } from 'react-native';
import { Camera, useCameraDevice, useFrameProcessor } from 'react-native-vision-camera';
import { useTextRecognition } from 'react-native-vision-camera-ocr-plus';

export default function App() {
  const device = useCameraDevice('back');
  const { scanText } = useTextRecognition({ language: 'latin' });

  const frameProcessor = useFrameProcessor((frame) => {
    'worklet';
    const data = scanText(frame);
    console.log('Detected text:', data);
  }, []);

  return (
    <>
      {!!device && (
        <Camera
          style={StyleSheet.absoluteFill}
          device={device}
          isActive
          frameProcessor={frameProcessor}
          mode="recognize"
        />
      )}
    </>
  );
}
```

| Option | Type | Values | Default | Description |
|---|---|---|---|---|
| `language` | `string` | `latin`, `chinese`, `devanagari`, `japanese`, `korean` | `latin` | Text recognition language |
| `mode` | `string` | `recognize`, `translate` | `recognize` | Processing mode |
| `from`, `to` | `string` | See Supported Languages | `en`, `de` | Translation languages |
| `scanRegion` | `object` | `{ left, top, width, height }` | `undefined` | Define a specific region to scan (values are percentage strings from 0 to 100, e.g. `'25%'`) |
| `frameSkipThreshold` | `number` | Any positive integer | `10` | Skip frames for better performance (higher = faster) |
| `useLightweightMode` | `boolean` | `true`, `false` | `false` | (Android only) Use lightweight processing for better performance |
You can specify a region of the camera frame to scan for text. This is useful for improving performance, focusing on specific areas, or reducing false positives from background text.

Important: all `scanRegion` values are percentage strings from 0 to 100 (for example `'25%'`).
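Conceptually, percentage values like `'25%'` have to be resolved against the frame's pixel dimensions before cropping. A rough sketch of that mapping (a hypothetical helper for intuition, not the library's actual implementation):

```typescript
// A scanRegion of percentage strings, as passed to useTextRecognition.
type ScanRegion = { left: string; top: string; width: string; height: string };

// Resolve percentage strings (e.g. '25%') to pixel coordinates for a given frame size.
function resolveScanRegion(
  region: ScanRegion,
  frameWidth: number,
  frameHeight: number
): { left: number; top: number; width: number; height: number } {
  const frac = (v: string) => parseFloat(v) / 100; // '25%' -> 0.25
  return {
    left: Math.round(frac(region.left) * frameWidth),
    top: Math.round(frac(region.top) * frameHeight),
    width: Math.round(frac(region.width) * frameWidth),
    height: Math.round(frac(region.height) * frameHeight),
  };
}

// A 1080x1920 portrait frame with the region from the example below:
const r = resolveScanRegion(
  { left: '5%', top: '25%', width: '80%', height: '40%' },
  1080,
  1920
);
// r = { left: 54, top: 480, width: 864, height: 768 }
```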
Example:

```jsx
import React from 'react';
import { StyleSheet } from 'react-native';
import { Camera, useCameraDevice, useFrameProcessor } from 'react-native-vision-camera';
import { useTextRecognition } from 'react-native-vision-camera-ocr-plus';

export default function App() {
  const device = useCameraDevice('back');
  const { scanText } = useTextRecognition({
    language: 'latin',
    scanRegion: {
      left: '5%',   // Start 5% from the left edge
      top: '25%',   // Start 25% from the top edge
      width: '80%', // Span 80% of frame width
      height: '40%' // Span 40% of frame height
    }
  });

  const frameProcessor = useFrameProcessor((frame) => {
    'worklet';
    const data = scanText(frame);
    console.log('Detected text in region:', data);
  }, []);

  return (
    <>
      {!!device && (
        <Camera
          style={StyleSheet.absoluteFill}
          device={device}
          isActive
          frameProcessor={frameProcessor}
        />
      )}
    </>
  );
}
```

For better performance on Android devices, especially mid-range phones, you can adjust these options:
```jsx
// Higher performance (recommended for real-time scanning)
const { scanText } = useTextRecognition({
  language: 'latin',
  frameSkipThreshold: 10,  // Process every 10th frame
  useLightweightMode: true // Skip detailed corner points and element processing
});

// Balanced performance/accuracy
const { scanText } = useTextRecognition({
  language: 'latin',
  frameSkipThreshold: 3, // Process every 3rd frame
  useLightweightMode: true
});

// Maximum accuracy (slower)
const { scanText } = useTextRecognition({
  language: 'latin',
  frameSkipThreshold: 1,    // Process every frame
  useLightweightMode: false // Full detailed data
});
```

You can also improve performance by using `runAtTargetFps` in your frame processor:
```jsx
import { runAtTargetFps, useFrameProcessor } from 'react-native-vision-camera';

const frameProcessor = useFrameProcessor(
  (frame) => {
    'worklet';
    runAtTargetFps(2, () => {
      'worklet';
      const data = scanText(frame);
    });
  },
  [scanText],
);
```

Performance may also be better in production builds than in development.
- Higher `frameSkipThreshold` = better performance, less CPU usage
- `useLightweightMode: true` = faster processing, reduced memory usage
- These optimizations are especially beneficial on Android devices
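To make the effect of `frameSkipThreshold` concrete, here is a minimal counter sketch (an illustration of the skipping idea, not the package's internal implementation):

```typescript
// Returns a gate that lets only every Nth call through; all other frames are dropped cheaply.
function makeFrameSkipper(threshold: number): () => boolean {
  let counter = 0;
  return () => {
    counter += 1;
    if (counter < threshold) return false; // skip this frame
    counter = 0;
    return true; // process this frame
  };
}

const shouldProcess = makeFrameSkipper(3);
// Over 6 incoming frames, only frames 3 and 6 pass the gate:
const processed = [1, 2, 3, 4, 5, 6].filter(() => shouldProcess());
// processed = [3, 6]
```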
```jsx
import { PhotoRecognizer } from 'react-native-vision-camera-ocr-plus';

const result = await PhotoRecognizer({
  uri: asset.uri,
  orientation: 'portrait',
});

console.log(result);
```

⚠️ Note (iOS only): the `orientation` option is available only on iOS and is recommended when using photos captured via the camera.
| Property | Type | Values | Required | Default | Platform |
|---|---|---|---|---|---|
| `uri` | `string` | — | ✅ Yes | — | Android, iOS |
| `orientation` | `string` | `portrait`, `portraitUpsideDown`, `landscapeLeft`, `landscapeRight` | ❌ No | `portrait` | iOS only |
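If you track device rotation yourself (for example in degrees from a sensor), a small helper can map it to the accepted `orientation` values. This mapper is hypothetical (`orientationFromDegrees` is not part of the library, and the left/right convention depends on your rotation source):

```typescript
type PhotoOrientation =
  | 'portrait'
  | 'portraitUpsideDown'
  | 'landscapeLeft'
  | 'landscapeRight';

// Map a device rotation in degrees to the orientation strings accepted by PhotoRecognizer.
// Normalizes negative and >360 values first.
function orientationFromDegrees(degrees: number): PhotoOrientation {
  switch (((degrees % 360) + 360) % 360) {
    case 90:
      return 'landscapeRight';
    case 180:
      return 'portraitUpsideDown';
    case 270:
      return 'landscapeLeft';
    default:
      return 'portrait';
  }
}
```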
```jsx
import { RemoveLanguageModel } from 'react-native-vision-camera-ocr-plus';

await RemoveLanguageModel('en');
```

| Language | Code | Flag |
|---|---|---|
| Afrikaans | af | 🇿🇦 |
| Arabic | ar | 🇸🇦 |
| Bengali | bn | 🇧🇩 |
| Chinese | zh | 🇨🇳 |
| English | en | 🇺🇸 🇬🇧 |
| French | fr | 🇫🇷 |
| German | de | 🇩🇪 |
| Hindi | hi | 🇮🇳 |
| Japanese | ja | 🇯🇵 |
| Korean | ko | 🇰🇷 |
| Portuguese | pt | 🇵🇹 |
| Russian | ru | 🇷🇺 |
| Spanish | es | 🇪🇸 |
| ...and many more | | |
Contributions, feature requests, and bug reports are always welcome!
Please open an issue or pull request.
If this library helps you build awesome apps, consider supporting future maintenance and development 🙏
Your support helps keep the package updated and open source ❤️

MIT © Jamena McInteer