OiPer desktop / issue 01

A privacy-first voice tool for people who do not want their desktop workflow rearranged around transcription.

Editorial product spread

Benchmark leader

1.5 s

The promise

Your voice stays yours. The speed stays yours too.

Workflow in one sentence

Hold a global hotkey to capture audio, release to transcribe, and inject the result into the app already in front of you.

Privacy-first voice-to-text desktop application

The fastest path from spoken thought to finished text.

OiPer keeps the interaction almost invisible: a hold, a short phrase, a release, and the copy appears where you were already working. Local transcription is the default, not an upsell.

A fast writing rhythm

The core loop is simple enough to become muscle memory: hold a hotkey, speak, release, continue typing.
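The hold-speak-release loop can be sketched as a tiny state machine. This is an illustrative model only; the class and method names (`PushToTalk`, `on_hotkey_down`, `on_hotkey_up`) are placeholders, not OiPer's actual internals.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()       # waiting for the hotkey
    RECORDING = auto()  # hotkey held, audio being captured

class PushToTalk:
    """Sketch of the hold-to-record loop: hold, speak, release, inject."""

    def __init__(self, transcribe, inject):
        self.state = State.IDLE
        self.transcribe = transcribe  # callable: audio bytes -> text
        self.inject = inject          # callable: text -> frontmost app
        self._audio = bytearray()

    def on_hotkey_down(self):
        if self.state is State.IDLE:
            self._audio.clear()
            self.state = State.RECORDING

    def on_audio_chunk(self, chunk: bytes):
        if self.state is State.RECORDING:
            self._audio.extend(chunk)

    def on_hotkey_up(self):
        if self.state is State.RECORDING:
            self.state = State.IDLE
            text = self.transcribe(bytes(self._audio))
            self.inject(text)
```

Releasing the key is the only moment work happens, which is what keeps the interaction out of the way.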

A native performance profile

OiPer is written in native code, keeps latency low, and uses GPU acceleration when the machine supports it.

A clear privacy stance

Speech data stays on-device by default. Online optimization only appears when the user deliberately enables it.

Benchmark spread

Speed without leaving the machine behind

Measured on a 30-second English clip. OiPer finishes first while keeping the default path local.

OiPer Desktop: 1.5 s
Lemonfox API: 3.27 s
Python Faster-Whisper: 3.55 s
OpenAI Whisper 1 API: 6.46 s
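Put in relative terms, the figures above work out to roughly a 2x to 4x gap; a quick calculation using the times from the benchmark:

```python
# Times from the benchmark above (seconds to transcribe a 30-second English clip).
times = {
    "OiPer Desktop": 1.5,
    "Lemonfox API": 3.27,
    "Python Faster-Whisper": 3.55,
    "OpenAI Whisper 1 API": 6.46,
}

baseline = times["OiPer Desktop"]
for name, t in times.items():
    print(f"{name}: {t:.2f} s ({t / baseline:.1f}x the OiPer time)")
```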

Privacy and optional services

Local processing

Transcription, activity logs, and captured audio remain on the device. That is the baseline, not the premium tier.

Online optimization

Optional cleanup uses your API key, your provider choice, and your chosen model. It can be turned off at any time.
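A hypothetical shape for those settings, with the off-by-default stance made explicit. The field names here are assumptions for illustration, not OiPer's real configuration schema.

```python
from dataclasses import dataclass

@dataclass
class OnlineOptimization:
    """Illustrative settings for the optional online cleanup pass."""
    enabled: bool = False   # off by default: the local path is the baseline
    base_url: str = ""      # your provider's endpoint
    api_key: str = ""       # your key, supplied by you
    model: str = ""         # your chosen model

    def maybe_clean(self, text: str) -> str:
        if not self.enabled:
            return text     # disabled: the transcript passes through untouched
        # When enabled, the text would be sent to the configured provider;
        # this sketch only marks that the online path was taken.
        return f"[cleaned via {self.model}] {text}"
```

Because `enabled` defaults to `False`, nothing leaves the machine unless the user deliberately flips it.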

Advanced accuracy

When technical language matters, route through an LLM

OiPer supports transcription through LLMs for stronger handling of proper nouns, engineering terms, medical language, or other specialized vocabulary. Lightweight options such as Gemini 2.5 Flash Lite are useful here.
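One way this can work is to hand the LLM the raw transcript together with a domain glossary, so specialized terms win over phonetic near-misses. The function and prompt wording below are illustrative assumptions, not OiPer's actual prompt.

```python
def build_cleanup_prompt(transcript: str, glossary: list[str]) -> str:
    """Sketch: frame a raw transcript plus domain vocabulary for an LLM pass."""
    terms = ", ".join(glossary)
    return (
        "Correct this transcript, preferring these domain terms "
        f"where they plausibly match what was said: {terms}.\n\n"
        f"Transcript: {transcript}"
    )
```

A lightweight model is a reasonable fit for this step because the task is constrained correction, not open-ended generation.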

Settings and configuration

Speech model downloads and selection by size or purpose
Backend choice for auto, CPU-only, or GPU acceleration
Provider setup with custom base URL, API key, and model name
Text optimization through local processing or online cleanup
LLM transcription mode for technical or specialized language
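The auto / CPU-only / GPU backend choice from the list above might resolve along these lines. This is a sketch under assumed names; how OiPer actually negotiates backends is not documented here.

```python
def pick_backend(preference: str, gpu_available: bool) -> str:
    """Illustrative resolution of the auto / CPU-only / GPU setting."""
    if preference == "cpu":
        return "cpu"                # CPU-only: never touch the GPU
    if preference == "gpu":
        # An explicit GPU request could instead raise an error;
        # this sketch falls back to CPU when no GPU is present.
        return "gpu" if gpu_available else "cpu"
    # "auto": use GPU acceleration when the machine supports it.
    return "gpu" if gpu_available else "cpu"
```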