Integrate on-device AI into React Native apps using React Native ExecuTorch, which provides APIs for LLMs, computer vision, OCR, audio processing, and embeddings, plus a variety of pre-exported models for common use cases. Use when the user asks to build AI features into mobile apps - AI chatbots, image classification, object detection, style transfer, OCR, document parsing, speech processing, or semantic search - all running locally without cloud dependencies. Also use when the user mentions offline support, privacy, latency, or cost concerns in AI-based applications.
React Native ExecuTorch is a library developed by Software Mansion that enables on-device AI model execution in React Native applications. It provides APIs for LLMs, computer vision, OCR, audio processing, and embeddings, running machine learning models directly on mobile devices without cloud infrastructure or internet connectivity (after the initial model download), and it ships a variety of pre-exported models for common use cases. In short, it brings ExecuTorch into the React Native world.
Use this skill when you need to:
Trigger: User asks to build a chat interface, create a conversational AI, or add an AI assistant to their app
Steps:
Result: Functional chat interface with on-device AI responding without cloud dependency
Reference: ./references/reference-llms.md
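The chat use case above needs app-side transcript state around the LLM hook. A minimal sketch of that state handling, independent of the library (the `Message` shape and helper names are illustrative assumptions, not react-native-executorch types):

```typescript
// Illustrative chat-transcript helpers; the Message shape is an
// assumption for this sketch, not a react-native-executorch type.
type Role = "user" | "assistant" | "system";

interface Message {
  role: Role;
  content: string;
}

// Append a message immutably so React state updates re-render the list.
function appendMessage(history: Message[], role: Role, content: string): Message[] {
  return [...history, { role, content }];
}

// Streamed tokens from the model fold into the trailing assistant message.
function appendToken(history: Message[], token: string): Message[] {
  const last = history[history.length - 1];
  if (!last || last.role !== "assistant") {
    return appendMessage(history, "assistant", token);
  }
  return [...history.slice(0, -1), { ...last, content: last.content + token }];
}
```

Keeping updates immutable lets a `useState`-held transcript drive a FlatList of chat bubbles as tokens stream in.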
Trigger: User needs to classify images, detect objects, or recognize content in photos
Steps:
Result: App that understands image content without sending data to servers
Reference: ./references/reference-cv.md
Trigger: User wants to extract text from photos (receipts, documents, business cards)
Steps:
Result: OCR-enabled app that reads text directly from device camera
Reference: ./references/reference-ocr.md
Trigger: User wants to add voice commands, transcription, or voice output to app
Steps:
Result: App with hands-free voice interaction
Reference: ./references/reference-audio.md
Trigger: User needs intelligent search, similarity matching, or content recommendations
Steps:
Result: Smart search that understands meaning, not just keywords
Reference: ./references/reference-nlp.md
Run text generation, chat, function calling, and structured output generation locally on-device.
Supported features:
Reference: See ./references/reference-llms.md
Perform image understanding and manipulation tasks entirely on-device.
Supported tasks:
Reference: See ./references/reference-cv.md and ./references/reference-cv-2.md
Extract and recognize text from images with support for multiple languages and text orientations.
Supported features:
Reference: See ./references/reference-ocr.md
Convert between speech and text, and detect speech activity in audio.
Supported tasks:
Reference: See ./references/reference-audio.md
Convert text to numerical representations for semantic understanding and search.
Supported tasks:
Reference: See ./references/reference-nlp.md
Use useLLM hook or LLMModule with one of the available language models.
What to do:
useLLM hook or LLMModule to load the model
Reference: ./references/reference-llms.md
Model options: ./references/reference-models.md - LLMs section
Use useLLM hook or LLMModule with tool definitions to allow the model to call predefined functions.
What to do:
Reference: ./references/reference-llms.md - Tool Calling section
Use useLLM hook or LLMModule with structured output generation using JSON schema validation.
What to do:
Reference: ./references/reference-llms.md - Structured Output section
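Even with schema-constrained generation, it is prudent to validate the model's JSON before trusting it in the app. A minimal validation sketch in plain TypeScript (the `FieldSpec`/`Schema` shapes are illustrative stand-ins, not the library's schema format; a real app might use a JSON-schema library):

```typescript
// Sketch of validating an LLM's JSON output before use.
// FieldSpec/Schema are illustrative, not a react-native-executorch API.
interface FieldSpec {
  type: "string" | "number" | "boolean";
  required?: boolean;
}

type Schema = Record<string, FieldSpec>;

function validateOutput(raw: string, schema: Schema): Record<string, unknown> | null {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return null; // model produced malformed JSON
  }
  if (typeof parsed !== "object" || parsed === null) return null;
  const obj = parsed as Record<string, unknown>;
  for (const [key, spec] of Object.entries(schema)) {
    const value = obj[key];
    if (value === undefined) {
      if (spec.required) return null; // missing required field
      continue;
    }
    if (typeof value !== spec.type) return null; // wrong type
  }
  return obj;
}
```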
Use useClassification hook or ClassificationModule for simple categorization, or useObjectDetection hook or ObjectDetectionModule for locating specific objects.
What to do:
Reference: ./references/reference-cv.md
Model options: ./references/reference-models.md - Classification and Object Detection sections
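Detection results are typically reported in the model's input coordinate space and must be scaled to the displayed image before drawing overlays. A small helper in plain TypeScript (the `Box` shape is an assumption for illustration, not the library's exact result type):

```typescript
// Map a bounding box from model-input coordinates to on-screen pixels.
// The Box shape is illustrative, not react-native-executorch's result type.
interface Box {
  x1: number;
  y1: number;
  x2: number;
  y2: number;
}

function scaleBox(box: Box, fromW: number, fromH: number, toW: number, toH: number): Box {
  const sx = toW / fromW;
  const sy = toH / fromH;
  return { x1: box.x1 * sx, y1: box.y1 * sy, x2: box.x2 * sx, y2: box.y2 * sy };
}
```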
Use useOCR hook or OCRModule for horizontal text, or useVerticalOCR hook or VerticalOCRModule for vertical text (experimental).
What to do:
Reference: ./references/reference-ocr.md
Model options: ./references/reference-models.md - OCR section
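OCR commonly returns one bounding box per word or fragment; grouping nearby boxes into reading-order lines makes the text usable. A sketch of that grouping in plain TypeScript (the `OcrWord` shape is an illustrative assumption, not the library's detection type):

```typescript
// Group word-level OCR results into text lines by vertical proximity.
// OcrWord is an illustrative shape, not the library's result type.
interface OcrWord {
  text: string;
  centerY: number; // vertical center of the bounding box
  x: number;       // left edge, used for reading order
}

function groupIntoLines(words: OcrWord[], yTolerance: number): string[] {
  const sorted = [...words].sort((a, b) => a.centerY - b.centerY || a.x - b.x);
  const lines: OcrWord[][] = [];
  for (const w of sorted) {
    const current = lines[lines.length - 1];
    if (current && Math.abs(current[0].centerY - w.centerY) <= yTolerance) {
      current.push(w); // same line: vertical centers are close enough
    } else {
      lines.push([w]); // start a new line
    }
  }
  return lines.map((line) =>
    line.sort((a, b) => a.x - b.x).map((w) => w.text).join(" ")
  );
}
```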
Use useSpeechToText hook or SpeechToTextModule for transcription, or useTextToSpeech hook or TextToSpeechModule for voice synthesis.
What to do:
Reference: ./references/reference-audio.md
Model options: ./references/reference-models.md - Speech to Text and Text to Speech sections
Use useImageEmbeddings hook or ImageEmbeddingsModule for images, or useTextEmbeddings hook or TextEmbeddingsModule for text.
What to do:
Reference: ./references/reference-cv-2.md (image embeddings) and ./references/reference-nlp.md (text embeddings)
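Once the hooks produce embedding vectors, comparing and ranking them is plain math, independent of the library. A cosine-similarity search sketch:

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored documents against a query embedding, best match first.
function rank(query: number[], docs: { id: string; embedding: number[] }[]) {
  return [...docs].sort(
    (a, b) => cosineSimilarity(query, b.embedding) - cosineSimilarity(query, a.embedding)
  );
}
```

For small on-device corpora a linear scan like this is usually fast enough; larger collections would need an index.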
Use useStyleTransfer hook or StyleTransferModule to apply predefined artistic styles to images.
What to do:
Reference: ./references/reference-cv-2.md
Model options: ./references/reference-models.md - Style Transfer section
Use useTextToImage hook or TextToImageModule to create images based on text descriptions.
What to do:
Reference: ./references/reference-cv-2.md
Model options: ./references/reference-models.md - Text to Image section
Before using any AI model, you need to load it. Models can be loaded from three sources:
1. Bundled with app (assets folder)
2. Remote URL (downloaded on first use)
3. Local file system
Model selection strategy:
Reference: ./references/reference-models.md - Loading Models section
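A sketch of choosing between the three sources at runtime (the `ModelSource` shape and the preference order are illustrative assumptions; see the reference above for what the library actually accepts):

```typescript
// Illustrative source-selection helper; the ModelSource shape and the
// preference order are assumptions for this sketch, not library behavior.
type ModelSource =
  | { kind: "bundled"; asset: string }
  | { kind: "remote"; url: string }
  | { kind: "local"; path: string };

// Prefer an already-downloaded local copy, fall back to the remote URL,
// and use the bundled asset only when offline with nothing cached.
function pickModelSource(
  hasLocalCopy: boolean,
  isOnline: boolean,
  localPath: string,
  remoteUrl: string,
  bundledAsset: string
): ModelSource {
  if (hasLocalCopy) return { kind: "local", path: localPath };
  if (isOnline) return { kind: "remote", url: remoteUrl };
  return { kind: "bundled", asset: bundledAsset };
}
```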
Not all models work on all devices. Consider these constraints:
Memory limitations:
Processing power:
Storage:
Guidance:
Reference: ./references/reference-models.md
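One way to apply these constraints is a simple variant-selection heuristic. The thresholds and variant names below are assumptions for illustration, not official guidance from the library:

```typescript
// Illustrative heuristic for choosing a model variant by available RAM;
// thresholds and names are assumptions, not react-native-executorch guidance.
function chooseVariant(availableRamMb: number): "full" | "quantized" | "unsupported" {
  if (availableRamMb >= 4096) return "full";       // headroom for a full-precision model
  if (availableRamMb >= 1536) return "quantized";  // smaller quantized weights
  return "unsupported";                            // skip the feature or use a fallback
}
```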
Audio must be at the correct sample rate for processing:
Reference: ./references/reference-audio.md
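If captured audio arrives at a different rate than the model expects (e.g. 44.1 kHz from the microphone vs. 16 kHz for speech-to-text), it must be resampled first. A naive linear resampler for illustration; production code would use a proper DSP resampler with filtering:

```typescript
// Naive linear-interpolation resampler, for illustration only.
// Real apps should use a proper DSP resampler (with anti-alias filtering).
function resample(samples: number[], fromRate: number, toRate: number): number[] {
  const outLength = Math.floor((samples.length * toRate) / fromRate);
  const out = new Array<number>(outLength);
  for (let i = 0; i < outLength; i++) {
    const pos = (i * fromRate) / toRate; // fractional source index
    const lo = Math.floor(pos);
    const hi = Math.min(lo + 1, samples.length - 1);
    const frac = pos - lo;
    out[i] = samples[lo] * (1 - frac) + samples[hi] * frac;
  }
  return out;
}
```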
Images can be provided as:
Image preprocessing (resizing, normalization) is handled automatically by most hooks.
Reference: ./references/reference-cv.md and ./references/reference-cv-2.md
Text embeddings and LLMs have maximum token limits. Text exceeding these limits will be truncated. Use useTokenizer to count tokens before processing.
Reference: ./references/reference-nlp.md
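A sketch of guarding against the token limit before calling a model. The whitespace split below is a crude stand-in for a real tokenizer; in the app, useTokenizer gives the actual token count:

```typescript
// Guard against token limits before inference. The whitespace split is a
// crude stand-in; use the library's useTokenizer for real token counts.
function truncateToTokenLimit(text: string, maxTokens: number): string {
  const tokens = text.split(/\s+/).filter((t) => t.length > 0);
  if (tokens.length <= maxTokens) return text;
  return tokens.slice(0, maxTokens).join(" ");
}
```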
The library provides core utilities for managing models and handling errors:
ResourceFetcher: Manage model downloads with pause/resume capabilities, storage cleanup, and progress tracking.
Error Handling: Use RnExecutorchError and error codes for robust error handling and user feedback.
useExecutorchModule: Low-level API for custom models not covered by dedicated hooks.
Reference: ./references/core-utilities.md
Model not loading: Check model source URL/path validity and sufficient device storage
Out of memory errors: Switch to smaller model or quantized variant
Poor LLM quality: Adjust temperature/top-p parameters or improve system prompt
Audio issues: Verify correct sample rate (16kHz for STT and VAD, 24kHz output for TTS)
Download failures: Implement retry logic and check network connectivity
Reference: ./references/core-utilities.md for error handling details, or specific reference file for your use case
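For the download-failure case, a generic retry wrapper with exponential backoff can surround any async fetch, including a model download (the backoff parameters are illustrative; ResourceFetcher's own behavior is documented in the core-utilities reference):

```typescript
// Generic retry helper with exponential backoff; delays are illustrative.
// It wraps any async operation, e.g. a model download.
async function withRetry<T>(
  op: () => Promise<T>,
  attempts: number,
  baseDelayMs: number
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // back off: baseDelayMs, 2x, 4x, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```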
| Hook | Purpose | Reference |
|---|---|---|
| useLLM | Text generation, chat, function calling | reference-llms.md |
| useClassification | Image categorization | reference-cv.md |
| useObjectDetection | Object localization | reference-cv.md |
| useSemanticSegmentation | Pixel-level classification | reference-cv.md |
| useStyleTransfer | Artistic image filters | reference-cv-2.md |
| useTextToImage | Image generation | reference-cv-2.md |
| useImageEmbeddings | Image similarity/search | reference-cv-2.md |
| useOCR | Text recognition (horizontal) | reference-ocr.md |
| useVerticalOCR | Text recognition (vertical, experimental) | reference-ocr.md |
| useSpeechToText | Audio transcription | reference-audio.md |
| useTextToSpeech | Voice synthesis | reference-audio.md |
| useVAD | Voice activity detection | reference-audio.md |
| useTextEmbeddings | Text similarity/search | reference-nlp.md |
| useTokenizer | Text to tokens conversion | reference-nlp.md |
| useExecutorchModule | Custom model inference (advanced) | core-utilities.md |
Use this when building AI features with ExecuTorch:
Planning Phase
Development Phase
Testing Phase
Deployment Phase
Model not loading or crashing (check the RnExecutorchError code)
Out of memory errors
Poor quality results from LLM
Audio not processing
Slow inference speed
Model Selection
Error Handling
User Experience
Resource Management
Performance Optimization