NomeVoiceKit
Custom wake words
for iOS apps.
Drop-in Swift package for on-device wake word detection. Train any phrase. Runs on Apple Neural Engine. No Porcupine fees. No cloud. No subscriptions. You own the model.
<20ms
Inference latency
840KB
Model size
0.0/hr
False positives
100%
On-device
Capabilities
Everything you need for voice activation
Custom Wake Words
Train any phrase — "Hey MyApp", "OK Jarvis", anything. Our openWakeWord pipeline generates 15,000+ synthetic samples and trains a DNN classifier in under 2 hours.
Apple Neural Engine
CoreML models run on the Neural Engine at Float16 precision. Sub-20ms inference per frame. Battery-efficient enough for continuous listening.
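In plain CoreML terms, targeting the Neural Engine is a one-line configuration. A minimal sketch, assuming a compiled wake word model named "hey_myapp" is bundled with the app (the model name is illustrative, and CoreML still falls back to CPU/GPU for any unsupported ops):

```swift
import CoreML

// Ask CoreML to schedule the model on the Neural Engine.
// (.cpuAndNeuralEngine requires iOS 16+; use .all on earlier targets.)
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine

// "hey_myapp" is a placeholder for your compiled wake word model.
guard let url = Bundle.main.url(forResource: "hey_myapp", withExtension: "mlmodelc") else {
    fatalError("wake word model missing from app bundle")
}
let model = try MLModel(contentsOf: url, configuration: config)
```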
100% On-Device
Zero cloud dependency. No audio leaves the phone. No API calls. No recurring server costs. The model runs entirely in your app's process.
Two-Stage Verification
Optional biometric verifier gates activations to a specific speaker. Eliminates false positives from TV, other people, or similar-sounding phrases.
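The gating logic is easy to reason about: an activation fires only when both stages agree. A hypothetical sketch (type names and thresholds are illustrative, not NomeVoiceKit's actual API):

```swift
// Hypothetical two-stage gate: the wake word classifier must fire AND
// the speaker verifier must match the enrolled voice.
struct TwoStageGate {
    var wakeThreshold: Float = 0.85    // stage 1: phrase detected
    var speakerThreshold: Float = 0.70 // stage 2: enrolled speaker matched

    func shouldActivate(wakeScore: Float, speakerScore: Float) -> Bool {
        // Stage 1 rejects cheaply before the costlier verifier is consulted.
        guard wakeScore >= wakeThreshold else { return false }
        return speakerScore >= speakerThreshold
    }
}

let gate = TwoStageGate()
print(gate.shouldActivate(wakeScore: 0.92, speakerScore: 0.81)) // true
print(gate.shouldActivate(wakeScore: 0.92, speakerScore: 0.40)) // false: TV or another speaker
print(gate.shouldActivate(wakeScore: 0.60, speakerScore: 0.95)) // false: phrase mismatch
```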
Multi-Language Ready
Train wake words in any language. The Piper TTS synthesis pipeline supports 20+ languages with accent and prosody variation.
Full Voice Pipeline
Not just wake words. Includes continuous listening mode, voice command routing, workout timer, and conversational state machine.
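To give a feel for what "conversational state machine" means here, a minimal sketch of the kind of state graph such a pipeline maintains (state and event names are illustrative, not the SDK's actual types):

```swift
// Illustrative conversation states for a voice pipeline.
enum VoiceState { case idle, listening, processing, speaking }
enum VoiceEvent { case wakeWordDetected, utteranceEnded, responseReady, playbackFinished }

func transition(_ state: VoiceState, on event: VoiceEvent) -> VoiceState {
    switch (state, event) {
    case (.idle, .wakeWordDetected):     return .listening
    case (.listening, .utteranceEnded):  return .processing
    case (.processing, .responseReady):  return .speaking
    case (.speaking, .playbackFinished): return .idle   // or .listening in continuous mode
    default:                             return state   // ignore out-of-order events
    }
}

var state = VoiceState.idle
state = transition(state, on: .wakeWordDetected) // now .listening
```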
Integration
Five minutes to
voice activation.
Add the Swift package. Load your model. Set a callback. That's it. NomeVoiceKit handles audio capture, mel spectrograms, feature extraction, and classification automatically.
import NomeVoiceKit
let engine = NomeVoiceEngine(config: .default)
// Load your custom-trained wake word model
try engine.loadModels(classifierModel: "hey_myapp")
// Start listening
engine.onWakeWord = { confidence in
    print("Activated! (\(confidence))")
    startVoiceSession()
}
try engine.start()
How it works
From phrase to production in a day
Train your wake word
Run our training pipeline on a GPU. Piper TTS generates 15,000+ synthetic samples of your phrase with diverse voices, accents, and noise conditions. Training takes ~2 hours.
Convert to CoreML
The pipeline exports an ONNX model and converts it to a CoreML .mlpackage optimized for Apple Neural Engine at Float16 precision. Total model size: ~840KB.
Add to your app
Add NomeVoiceKit via Swift Package Manager. Drop in your .mlpackage. Initialize the engine, set a callback, call start(). Five minutes to wake word detection.
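Assuming the package is distributed via a Git URL (the repository address below is illustrative; use the one from your license), the Package.swift manifest entry would look like:

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyVoiceApp",
    dependencies: [
        // Repository URL is illustrative, not the real distribution URL.
        .package(url: "https://github.com/nome/NomeVoiceKit", from: "1.0.0")
    ],
    targets: [
        .target(name: "MyVoiceApp", dependencies: ["NomeVoiceKit"])
    ]
)
```

In Xcode, the same dependency can be added via File → Add Package Dependencies.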
Ship it
Your app now has a branded voice trigger that works offline, respects privacy, and runs on the Neural Engine. No API keys, no recurring fees, no cloud dependency.
Comparison
NomeVoiceKit vs the alternatives
| Feature | NomeVoiceKit | Porcupine | Vocal Shortcuts |
|---|---|---|---|
| Custom wake words | ✓ | ✓ | -- |
| No recurring fees | ✓ | -- | ✓ |
| On-device only | ✓ | ✓ | ✓ |
| Neural Engine optimized | ✓ | -- | ✓ |
| Open training pipeline | ✓ | -- | -- |
| Biometric verifier | ✓ | -- | -- |
| Full conversation loop | ✓ | -- | -- |
| Voice command routing | ✓ | -- | -- |
| Source code available | ✓ | -- | -- |
| Price (per app/year) | $99 | $6,000+ | Free* |
Pricing
One price. No surprises.
Pay once per year per app. No per-device fees, no usage limits, no cloud costs. You own the model forever.
Starter
For indie devs shipping voice-first apps.
Get Starter
- 1 custom wake word model
- CoreML + ONNX Runtime engines
- Foreground detection
- Basic VAD gate
- Email support
- Swift Package Manager
Pro
For production apps that need the full pipeline.
Get Pro
- Everything in Starter
- 3 custom wake word models
- Two-stage biometric verifier
- Background audio mode support
- Continuous listening + voice commands
- Multi-language wake words
- Priority support + Slack
Enterprise
White-label, source access, dedicated training.
Contact Us
- Everything in Pro
- Unlimited wake word models
- Full source code access
- Custom model training service
- On-prem training pipeline
- SLA + dedicated support
- Volume licensing
Ready to add a wake word
to your app?
Stop paying Porcupine $6,000/year. Own your voice activation layer. Train it on your phrase. Ship it in your app. Keep 100% control.