Jiwei Yuan's Thoughts and Writings

The Architecture Behind Typeless — A Voice-to-Text AI Assistant

Co-authored with Claude

Reverse engineering Typeless.app with lightweight static analysis — no decompiler, no disassembler, just asar extract, grep, strings, and nm.

Typeless Voice Keyboard

Introduction

Typeless is a voice-first productivity tool that transcribes speech and types it directly into any application — not just a browser tab, but Slack, VS Code, Figma, or any macOS text field. It hooks into macOS at the system level through five custom Swift dynamic libraries, uses the Accessibility API to read and write text in other apps, and runs an Opus audio compression pipeline before sending audio to a cloud AI backend.

I spent an evening reverse engineering Typeless v1.0.2 (build 83, Electron 33.4.11) on macOS using only standard CLI tools. The design decisions are driven entirely by the core problem: Typeless is a voice tool that needs audio processing, keystroke interception, and text insertion into arbitrary applications.

The Three-Layer Process Model

 LAYER 1: RENDERER              LAYER 2: MAIN PROCESS          LAYER 3: SWIFT NATIVE
 (4 Chromium Windows)           (Node.js)                      (5 dylibs + 1 binary)
┌─────────────────────┐        ┌──────────────────────┐        ┌─────────────────────┐
│ React 18            │        │ Drizzle ORM + libSQL │        │ libContextHelper    │
│ MUI (Material UI)   │        │ koffi (FFI bridge)   │        │  → AXUIElement      │
│ Recoil              │  IPC   │ electron-store       │  FFI   │  → getFocusedApp    │
│ ECharts             │◄──────►│ electron-updater     │◄──────►│ libInputHelper      │
│ i18next (58 langs)  │        │ node-schedule        │ koffi  │  → insertText       │
│ Framer Motion       │        │ Sentry               │        │  → simulatePaste    │
│ Floating UI         │        │ Opus Worker Pool     │        │ libKeyboardHelper   │
│ diff                │        │ Mixpanel             │        │  → CGEventTap       │
│ Shiki               │        │ undici               │        │ libUtilHelper       │
└─────────────────────┘        └──────────────────────┘        │  → audio devices    │
          ⬇                              ⬇                     │ libopusenc          │
  10.6 MB JS bundle              254 KB main process           │  → WAV→OGG/Opus     │
  57 KB CSS                      3.5 KB Opus worker            └─────────────────────┘
  65 lazy chunks                 SQLite via libSQL               5 universal dylibs
  4 HTML entry points            3 electron-store files          (x86_64 + arm64)

The native layer isn’t a CLI binary — it’s five dynamically loaded Swift libraries called via koffi, a Node.js FFI (Foreign Function Interface) library. This means the main process can call Swift functions synchronously from JavaScript without spawning child processes.
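
Here's roughly what that looks like: a minimal koffi sketch where the dylib path and function name come from the app bundle, but the C-level signature is my assumption:

const koffi = require("koffi");

// Load one of the bundled universal dylibs at runtime
const contextHelper = koffi.load("Contents/Resources/lib/libContextHelper.dylib");

// Bind an exported Swift function by declaring its C-level prototype.
// The JSON-string return type is an assumption based on the JS call sites.
const getFocusedAppInfo = contextHelper.func("const char *getFocusedAppInfo()");

const appInfo = JSON.parse(getFocusedAppInfo());  // synchronous FFI call, no child process

The call is a plain function invocation on the JavaScript side, which is exactly why koffi beats spawning a helper process for latency-sensitive paths like "what app is focused right now?".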

Core Design 1: The Multi-Window Architecture

Most Electron apps have one window. Typeless has four, each serving a distinct interaction model:

| Window | HTML Entry | Purpose | Behavior |
|---|---|---|---|
| Hub | hub.html | Main dashboard, history, settings | Standard app window |
| Floating Bar | floating-bar.html | Always-on-top recording indicator | Transparent, pointer-events: none, click-through |
| Sidebar | sidebar.html | Docked to screen edge, pinned panel | Always-on-top, 600×700, snaps to the left screen edge |
| Onboarding | onboarding.html | First-run setup flow | Allows Microsoft Clarity scripts |

The floating bar is the most interesting one architecturally. It’s a transparent overlay window that floats above all other apps while recording. The CSS explicitly sets pointer-events: none and background: transparent — this means mouse events pass through to the app underneath. When the user hovers over the bar itself, the main process toggles setIgnoreMouseEvents(false) to make it clickable, then re-enables click-through when the mouse leaves:

// Main process IPC handler (channel name illustrative)
ipcMain.on("floating-bar:mouse", (_event, type) => {
  switch (type) {
    case "mouse-enter":
      floatingBarWindow.setIgnoreMouseEvents(false);
      break;
    case "mouse-leave":
      floatingBarWindow.setIgnoreMouseEvents(true, { forward: true });
      break;
  }
});

This is how Typeless stays visible while you work in other apps without interfering with your workflow. The sidebar does something similar — it docks to the leftmost screen edge and clamps its vertical position so it can’t be dragged off-screen.
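
A sketch of that clamping logic using Electron's screen module (the 600×700 size and left-edge docking come from the bundle; the rest is assumed):

const { screen } = require("electron");

function dockSidebar(win) {
  // Find the leftmost display's work area (excludes menu bar and Dock)
  const displays = screen.getAllDisplays();
  const leftmost = displays.reduce((a, b) => (a.workArea.x <= b.workArea.x ? a : b));
  const { x, y, height } = leftmost.workArea;
  const bounds = win.getBounds();
  win.setBounds({
    x,                                                     // snap to the left edge
    y: Math.min(Math.max(bounds.y, y), y + height - 700),  // clamp so it stays on-screen
    width: 600,
    height: 700,
  });
}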

There’s also a media app detection system: when Apple Music, TV, or iTunes are in the foreground, the floating bar and sidebar automatically hide to avoid interfering with media playback. They reappear when you switch back to a regular app.

Core Design 2: The Swift FFI Bridge

The most architecturally significant decision is using koffi — a Node.js FFI library — to call Swift code directly from the main process. This avoids the overhead of spawning child processes or using N-API native addons.

Five Swift dylibs are loaded at runtime:

  1. libContextHelper: focused-app and on-screen text context (Accessibility API)
  2. libInputHelper: text insertion and paste simulation
  3. libKeyboardHelper: global keyboard hooks (CGEventTap)
  4. libUtilHelper: audio devices, mute state, lid detection, device ID
  5. libopusenc: WAV → OGG/Opus encoding

Each dylib is a universal binary (x86_64 + arm64), compiled from Swift source. Here’s what strings and nm reveal about each:

libContextHelper — The App Awareness Layer

This library answers the question: “What app is the user working in, and what text is visible?”

Exported functions (from nm -g):

  getFocusedAppInfo / getFocusedAppInfoAsync
  getFocusedElementInfo / getFocusedElementInfoAsync
  getFocusedVisibleText / getFocusedVisibleTextAsync
  getFocusedElementRelatedContent / getFocusedElementRelatedContentAsync
  setFocusedWindowEnhancedUserInterface

The key detail: all functions have both sync and async variants (e.g., getFocusedAppInfoAsync). The async versions take a C callback pointer registered via koffi.register(). This matters because Accessibility API calls can block — querying a frozen app could hang the main process. The async variants run on a separate thread with a 500ms timeout, falling back to an empty result if the target app doesn’t respond.
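
A sketch of how the async variant is likely consumed from JavaScript. The use of koffi.register() and the 500ms empty-result fallback come from the extracted code; the native signature is an assumption, and I'm modeling the timeout on the JS side for illustration:

const koffi = require("koffi");
const lib = koffi.load("libContextHelper.dylib");

// Declare the callback type, then the exported async function that takes it.
const AppInfoCb = koffi.proto("void AppInfoCb(const char *json)");
const getFocusedAppInfoAsync = lib.func("void getFocusedAppInfoAsync(AppInfoCb *cb)");

function getFocusedAppInfo({ timeout = 500 } = {}) {
  return new Promise((resolve) => {
    const timer = setTimeout(() => resolve({}), timeout);  // frozen app → empty result
    const cb = koffi.register((json) => {
      clearTimeout(timer);
      koffi.unregister(cb);      // release the registered callback slot
      resolve(JSON.parse(json));
    }, koffi.pointer(AppInfoCb));
    getFocusedAppInfoAsync(cb);  // Swift invokes the callback off the main thread
  });
}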

libInputHelper — The Text Insertion Engine

This is where transcribed text gets typed into the target app. Four insertion strategies:

  1. insertText(text) — Direct AXUIElement text insertion via Accessibility API
  2. insertRichText(html, text) — HTML-aware insertion for rich text fields
  3. simulatePasteCommand() — Simulates ⌘V by generating CGEvents for the V key
  4. performTextInsertion(text) / performRichTextInsertion(html, text) — Higher-level wrappers

The paste simulation is a fallback for apps that don’t support direct AX text insertion. The library saves the current clipboard contents (savePasteboard()), replaces them with the transcribed text, simulates ⌘V, then restores the original clipboard (restorePasteboard()). This is why Typeless works with apps that have non-standard text fields — it falls back to “paste” when direct insertion fails.
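
Sketched as a main-process call sequence. The dylib function names are real (from nm -g); the control flow and the use of Electron's clipboard for the swap are my assumptions:

const { clipboard } = require("electron");

function insertWithPasteFallback(inputHelper, text) {
  if (inputHelper.insertText(text)) return;  // direct AX insertion succeeded
  inputHelper.savePasteboard();              // stash the user's clipboard
  clipboard.writeText(text);                 // swap in the transcript
  inputHelper.simulatePasteCommand();        // synthesize ⌘V via CGEvents
  inputHelper.restorePasteboard();           // put the original clipboard back
  // (the real code presumably waits for the paste event to land before restoring)
}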

libKeyboardHelper — Global Keyboard Hooks

Uses CGEventTap to intercept keyboard events system-wide. The strings output reveals error handling for a critical macOS issue:

CGEvent Tap disabled by user input!
CGEvent Tap disabled by timeout! This is the root cause of keyboard monitoring failure!

macOS automatically disables CGEventTaps if the tap callback takes too long to process events, as a safety mechanism to prevent system-wide input lag. The library includes a watcher timer to detect when this happens and restart the tap. The main process polls accessibility permission status every 2 seconds and restarts the keyboard listener if it was re-granted.
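
In JavaScript terms, the recovery loop looks something like this (the 2-second interval comes from the bundle; the binding names are illustrative):

let keyboardMonitorRunning = false;

// Poll accessibility permission every 2s and resurrect the tap if needed.
setInterval(() => {
  const granted = utilHelper.checkAccessibilityPermission();  // Swift FFI call
  if (granted && !keyboardMonitorRunning) {
    keyboardHelper.startMonitor();  // restart the CGEventTap that macOS disabled
    keyboardMonitorRunning = true;
  }
}, 2000);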

libUtilHelper — System Integration

Handles audio device enumeration, system mute/unmute, microphone latency testing, lid open/close detection (for laptop microphone behavior), and device fingerprinting. The testAudioLatency function measures the round-trip time from recording to playback — critical for calibrating the voice input experience.

Core Design 3: The Audio Pipeline

The voice-to-text pipeline is the heart of the app's architecture. Here's how audio flows from microphone to text insertion:

 Mic ─► Web Audio API (renderer, WAV) ─► IPC ─► Opus worker pool (WAV → OGG/Opus)
     ─► POST api.typeless.com/ai/voice_flow ─► refined text ─► libInputHelper (Swift FFI)


Key details of this pipeline:

1. Audio is captured in the renderer using the Web Audio API, not a native module. The renderer records WAV buffers and sends them to the main process via IPC.

2. Opus compression happens in a Worker Pool. The main process maintains a pool of up to 2 Node.js worker threads (opusWorker.js). Each worker loads the libopusenc_unified_macos.dylib via koffi and calls opus_convert_advanced() — a C function that converts WAV to OGG/Opus at 16kbps with 20ms frame size, VBR enabled, voice signal type. This off-main-thread design prevents audio encoding from blocking the UI (see the worker sketch after this list).

3. The compressed audio is sent as a FormData POST to api.typeless.com/ai/voice_flow with extensive context:

audio_file:      compressed OGG/Opus file
audio_id:        unique recording identifier
mode:            "transcript" | "ask_anything" | "translation"
audio_context:   JSON with text_insertion_point, cursor_state
audio_metadata:  audio_duration, format info
parameters:      { selected_text, output_language }
device_name:     microphone hardware label
user_over_time:  usage duration metric

4. The response contains the refined text and optionally delivery instructions, web_metadata, or external_action — indicating the server can direct the app to perform actions beyond text insertion (like opening a URL or executing a command).

5. Text insertion uses Swift FFI — the keyboard:type-transcript IPC handler calls Ae.insertText(transcript) which routes through the koffi-bound libInputHelper.
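
Here's roughly what one of those workers looks like. Only the dylib name, the function name, and the encoder settings come from the extracted app; the C signature and the message protocol are my assumptions:

const { parentPort } = require("node:worker_threads");
const koffi = require("koffi");

const lib = koffi.load("libopusenc_unified_macos.dylib");

// Only the function name comes from the binary; this signature is assumed.
const opusConvert = lib.func(
  "int opus_convert_advanced(const char *wavPath, const char *oggPath, int bitrate, int frameMs, bool vbr, int signalType)"
);

parentPort.on("message", ({ wavPath, oggPath }) => {
  // 16 kbps, 20 ms frames, VBR, voice signal (settings from the extracted config)
  const rc = opusConvert(wavPath, oggPath, 16000, 20, true, 1);
  parentPort.postMessage({ ok: rc === 0, oggPath });
});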

Core Design 4: Context-Aware Transcription

Typeless doesn’t just transcribe audio — it sends the full application context to the server so the AI can produce contextually appropriate text. This context gathering is what makes Accessibility permission essential.

How macOS Accessibility Text Capture Works

The macOS Accessibility API (AXUIElement) is the mechanism Typeless uses to read text from other applications. Here’s the OS-level flow:

  1. The app calls AXUIElementCreateSystemWide() to get a system-wide accessibility handle
  2. It queries kAXFocusedUIElementAttribute to find the currently focused text field in any app
  3. From that element, it reads kAXValueAttribute (full text content), kAXSelectedTextAttribute, and kAXVisibleCharacterRangeAttribute

The critical detail: kAXValueAttribute returns the entire text content of a field — a 100K-word document returns all 100K words. There is no OS-level truncation. macOS’s only protection is a single binary gate: the user either grants Accessibility permission to the app, or doesn’t. There is no per-app or per-field granularity — once granted, Typeless can read text from any application.

The Three Context Data Sources

On every recording, Typeless gathers context from three distinct sources via its native Swift libraries:

 
| Data Source | What It Captures | API Used |
|---|---|---|
| Visible screen content | Up to 10K tokens of text visible on screen | getFocusedVisibleTextAsync(10000, {timeout: 500}) |
| Surrounding context | 1K tokens before + 1K tokens after the input area | getFocusedElementRelatedContentAsync(1000, 1000, {timeout: 500}) |
| Cursor state | Text before/after cursor, selected text | getCurrentInputState() via AXUIElement + AXTextMarkerRange |

The audio_context JSON

Here’s what the assembled context looks like in practice (reconstructed from field names in the extracted source):

{
  "active_application": {
    "app_name": "Slack",
    "app_identifier": "com.tinyspeck.slackmacgap",
    "window_title": "#engineering - Acme Corp",
    "app_type": "native_app",
    "app_metadata": { "process_id": 1234, "app_path": "/Applications/Slack.app" },
    "visible_screen_content": "Alice: Can you look into the CI failure?\nBob: On it.\nYou: Let me check the deployment status for "
  },
  "text_insertion_point": {
    "input_area_type": "plain_text",
    "accessibility_role": "AXTextArea",
    "input_capabilities": { "is_editable": true, "supports_markdown": false },
    "cursor_state": {
      "cursor_position": 42,
      "has_text_selected": false,
      "selected_text": "",
      "text_before_cursor": "Let me check the deployment status for ",
      "text_after_cursor": ""
    },
    "surrounding_context": {
      "text_before_input_area": "Channel: #engineering",
      "text_after_input_area": ""
    }
  },
  "context_metadata": {
    "is_own_application": false,
    "capture_timestamp": "2025-02-16T00:00:00.000Z"
  }
}

This context serves two purposes:

  1. Better transcription — knowing you’re in a code editor vs. Slack helps the AI format the output appropriately
  2. App-specific behavior — the server returns external_action or web_metadata for certain apps, enabling actions like opening URLs

Core Design 5: The Privacy Architecture

Typeless requires macOS Accessibility permission — the most privileged user-space permission the OS grants. With it, the app can read text from any application, intercept keystrokes system-wide, and insert text into arbitrary text fields. Combined with full network access via a Node.js runtime, this creates a unique privacy surface that deserves its own architectural analysis.

What Leaves Your Machine

For non-blacklisted apps, the data sent on every recording includes:
  1. The compressed OGG/Opus audio recording
  2. Up to 10K tokens of visible screen content
  3. Up to 1K tokens before and 1K tokens after the input area
  4. Cursor state: text before/after the cursor and any selected text
  5. App identity and window metadata (name, bundle ID, window title, web URL)

There is no user-facing toggle to disable context capture while keeping voice transcription. It’s all or nothing.

The Blacklist: Configuration and Purpose

The blacklist is Typeless’s mechanism for marking certain apps and URLs as “sensitive.” When an app is blacklisted, Typeless will not read its visible screen content or surrounding text — only minimal cursor-level state is still collected.

app_blacklist: {
  macos: {
    exact: ["com.sublimetext.4", "com.tencent.xinWeChat",
            "com.microsoft.Excel", "com.kingsoft.wpsoffice.mac",
            "dev.zed.Zed"]
  }
}
url_blacklist: {
  prefix: ["https://docs.google.com/document/d",
           "https://docs.qq.com/doc/",
           "https://docs.qq.com/sheet/"]
}
app_whitelist: {
  macos: {
    exact: ["com.todesktop.230313mzl4w4u92", "com.tinyspeck.slackmacgap",
            "com.apple.mail", "com.figma.Desktop", "com.openai.atlas"]
  }
}

This configuration is hardcoded as a default and also fetched from the server (POST /app/get_blacklist_domain, encrypted with CryptoJS AES) and cached for 24 hours. If the server returns a newer version, it overrides the local defaults.
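
A sketch of the fetch-decrypt-cache cycle. The endpoint, CryptoJS AES, and the 24-hour TTL come from the bundle; the key handling and cache shape are assumptions:

const CryptoJS = require("crypto-js");

async function refreshBlacklist(store, aesKey) {
  const cached = store.get("blacklistCache");
  if (cached && Date.now() - cached.fetchedAt < 24 * 60 * 60 * 1000) {
    return cached.config;  // still fresh: reuse for up to 24 hours
  }
  const res = await fetch("https://api.typeless.com/app/get_blacklist_domain", { method: "POST" });
  const { data } = await res.json();
  // Server response is AES-encrypted; decrypt to the JSON config
  const plaintext = CryptoJS.AES.decrypt(data, aesKey).toString(CryptoJS.enc.Utf8);
  const config = JSON.parse(plaintext);
  store.set("blacklistCache", { config, fetchedAt: Date.now() });
  return config;  // newer server version overrides the hardcoded defaults
}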

The Blacklist-Before-Read Design

The most important privacy design decision in the codebase: the blacklist check happens before the AX text read, not after. The three data sources from Core Design 4 have different blacklist behaviors:

| Data Source | Blacklist Gated? |
|---|---|
| Visible screen content | Yes — AX API never called for blacklisted apps |
| Surrounding context | Yes — AX API never called for blacklisted apps |
| Cursor state | No — always collected, no blacklist check |

Here’s the concrete code flow. For visible screen content:

// Step 1: Get lightweight app metadata (bundleId, appName) — no text read yet
const appInfo = getFocusedAppInfo();

// Step 2: Check blacklist BEFORE calling AX API
const appConfig = await getAppConfig(appInfo.bundleId);
const urlConfig = await getUrlConfig(appInfo.webURL);

// Step 3: Only read visible text if NOT blacklisted
if ((appConfig.isWhitelist || !appConfig.isBlacklist) &&
    (urlConfig.isWhitelist || !urlConfig.isBlacklist)) {
  visibleText = await getFocusedVisibleTextAsync(10000, {timeout: 500});
}
// Blacklisted → visibleText stays undefined → AX API never called

The same guard protects surrounding context:

if (element.editable && relatedContentParams && !isSelfApp) {
  const appConfig = await getAppConfig(appInfo.bundleId);
  const urlConfig = await getUrlConfig(appInfo.webURL);
  if ((appConfig.isWhitelist || !appConfig.isBlacklist) &&
      (urlConfig.isWhitelist || !urlConfig.isBlacklist)) {
    relatedContent = await getFocusedElementRelatedContentAsync(1000, 1000, {timeout: 500});
  }
}
// Blacklisted → relatedContent stays empty → AX API never called

But cursor state has no blacklist check:

// getCurrentInputState is called regardless of blacklist
if (inputStateParams && !isSelfApp) {
  cursorState = getCurrentInputState(...inputStateParams);
}
// This collects: text_before_cursor, text_after_cursor, selected_text

Despite its name, getCurrentInputState() is not a “keyboard helper” — it’s a full Accessibility API call. The strings output from libInputHelper.dylib reveals it uses AXFocusedUIElement, AXSelectedTextRange, and AXStringForTextMarkerRange — the same AX API family as the blacklist-protected functions, just without the blacklist gate.

Excel vs Slack: The Blacklist in Practice

To make this concrete, here’s what happens when you record in Microsoft Excel (blacklisted: com.microsoft.Excel) vs Slack (whitelisted: com.tinyspeck.slackmacgap):

Excel (blacklisted) — what gets sent:

{
  "active_application": {
    "app_name": "Microsoft Excel",
    "app_identifier": "com.microsoft.Excel",
    "visible_screen_content": undefined  // ← AX API never called
  },
  "text_insertion_point": {
    "cursor_state": {
      "text_before_cursor": "Q3 Revenue: $",  // ← still collected via AX API (no blacklist check)
      "text_after_cursor": "",
      "selected_text": ""
    },
    "surrounding_context": {
      "text_before_input_area": "",  // ← AX API never called
      "text_after_input_area": ""
    }
  }
}

Slack (whitelisted) — what gets sent:

{
  "active_application": {
    "app_name": "Slack",
    "app_identifier": "com.tinyspeck.slackmacgap",
    "visible_screen_content": "Alice: Can you check the deployment?\nBob: On it.\nYou: Let me ..."  // ← up to 10K tokens
  },
  "text_insertion_point": {
    "cursor_state": {
      "text_before_cursor": "Let me check the deployment status for ",
      "text_after_cursor": "",
      "selected_text": ""
    },
    "surrounding_context": {
      "text_before_input_area": "Channel: #engineering",  // ← up to 1K tokens
      "text_after_input_area": ""
    }
  }
}

The blacklist effectively blocks the two largest data sources (visible_screen_content and surrounding_context), which together can contain up to 12K tokens. But cursor_state — which uses the same AX API under the hood — is always collected regardless of blacklist status. The scope of cursor_state is limited to the focused text field (e.g., the current cell in Excel, the message input in Slack), not the entire screen. But in apps with large text fields (code editors, document editors), the “text before cursor” could contain significant content.

Additional Privacy Guards

Notion hardcoded exception. Even if Notion is not blacklisted, when the AX role is AXWebArea (the entire page rather than a specific text field), all cursor context is cleared:

if (appInfo.isWebBrowser &&
    appInfo.webURL?.startsWith("https://www.notion.so/") &&
    element.role === "AXWebArea") {
  cursorState.startIndex = 0;
  cursorState.endIndex = 0;
  cursorState.beforeText = "";
  cursorState.afterText = "";
}

Core Design 6: The Data Model

Typeless uses Drizzle ORM with libSQL (Turso’s SQLite fork) for local storage — a more modern stack than raw SQLite. The schema is defined in the main process JavaScript:

 
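A hypothetical reconstruction of the history table in Drizzle, assembled from the column and index names found in the bundle (column types and the full column set are assumptions):

const { sqliteTable, text, integer, index } = require("drizzle-orm/sqlite-core");

const history = sqliteTable("history", {
  id: text("id").primaryKey(),
  userId: text("user_id"),
  status: text("status"),
  createdAt: integer("created_at"),
  focusedApp: text("focused_app"),  // full JSON blob
  // Denormalized copies of fields inside focused_app, kept for indexing:
  focusedAppName: text("focused_app_name"),
  focusedAppBundleId: text("focused_app_bundle_id"),
  focusedAppWindowTitle: text("focused_app_window_title"),
  focusedAppWindowWebTitle: text("focused_app_window_web_title"),
  focusedAppWindowWebDomain: text("focused_app_window_web_domain"),
  focusedAppWindowWebUrl: text("focused_app_window_web_url"),
}, (t) => [
  index("idx_history_user_created_at").on(t.userId, t.createdAt),
  index("idx_history_user_app_name_status_created_at")
    .on(t.userId, t.focusedAppName, t.status, t.createdAt),
  // ...four more indexes, listed below
]);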

The schema reveals a fascinating design choice: heavy denormalization. The focused_app_name, focused_app_bundle_id, focused_app_window_title, focused_app_window_web_title, focused_app_window_web_domain, and focused_app_window_web_url columns duplicate data from the focused_app JSON column. This exists to support the six database indexes:

idx_history_user_created_at
idx_history_user_status_created_at
idx_history_user_app_name_status_created_at
idx_history_user_app_bundle_id_status_created_at
idx_history_user_app_name_web_domain_status_created_at
idx_history_user_app_bundle_id_web_domain_status_created_at

These indexes enable fast filtering by app, by website domain, by status — all queries needed for the history view in the Hub. Without denormalization, every query would require JSON extraction on the focused_app blob, which SQLite handles poorly for indexed lookups.
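
Concretely, this is what the denormalized columns buy: a filtered history query can hit an index directly instead of json_extract()-ing the focused_app blob. A sketch in Drizzle syntax (identifiers and status value assumed):

const { and, eq, desc } = require("drizzle-orm");

// Served by idx_history_user_app_bundle_id_status_created_at; no JSON
// extraction over the focused_app blob is needed.
async function historyForApp(db, userId) {
  return db.select().from(history)
    .where(and(
      eq(history.userId, userId),
      eq(history.focusedAppBundleId, "com.tinyspeck.slackmacgap"),
      eq(history.status, "completed"),
    ))
    .orderBy(desc(history.createdAt));
}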

The three electron-store JSON files provide fast key-value access for settings that don’t need SQL queries:

| Store | Keys | Purpose |
|---|---|---|
| app-onboarding | isCompleted, onboardingStep | Onboarding progress |
| app-settings | keyboardShortcut, selectedLanguages, selectedMicrophoneDevice, autoSelectLanguages, enabledOpusCompression, dynamicMicrophoneDegradationEnabled, preferredBuiltInMicId, translationModeTargetLanguageCode, microphoneDevices, preferredLanguage | User preferences |
| app-storage | runtime data | Ephemeral state |

Core Design 7: Module Boundaries Revealed by 77 IPC Methods

Typeless uses a centralized IPC handler registry with 77 methods. What’s interesting is what the namespace prefixes reveal about how the codebase is organized into modules. Each namespace:method prefix groups related functionality behind a clear boundary.

Typeless’s 77 handlers decompose into 13 IPC modules:

| Module | Methods | Core Responsibility |
|---|---|---|
| audio | 11 | The core module — voice capture, Opus compression, AI voice flow, abort control. Owns the entire pipeline from microphone to refined text. |
| db | 15 | Persistence — CRUD for transcription history, audio blob storage, cleanup policies. The history table is the central entity. |
| page | 14 | Presentation — window lifecycle (open/close/minimize), routing, devtools, onboarding flow. No business logic, pure UI orchestration. |
| permission | 5 | Platform integration — accessibility, microphone, screen capture permissions. Bridges the macOS security model to app state. |
| keyboard | 4 | Input handling — start/stop keyboard monitoring, type transcript into target app, watcher interval tuning. Owns the CGEventTap lifecycle. |
| user | 4 | Identity — login, logout, session state. Thin wrapper around token-based auth with api.typeless.com. |
| updater | 4 | Delivery — check, download, install. Manages the idle→checking→downloading→downloaded→installed state machine. |
| store | 1 | Configuration — single generic handler that dispatches to 3 electron-store files. Hides which store holds which key from the renderer. |
| file | 5 | Filesystem — manages the Recordings/ directory on disk (open, read size, save logs, clear). Separate from db because it owns the filesystem, not SQLite. |
| i18n | 3 | Localization — get/set/reset language across 58 locales. |
| device | 1 | Hardware — is-lid-open, laptop lid detection for microphone behavior. |
| microphone | 1 | Audio hardware — delay-test, latency measurement via Swift testAudioLatency. |
| context | 1 | App awareness — get-app-icon, extracts the icon from the target app's Info.plist. |

The namespace convention (audio:mute, db:history-upsert, page:open-hub) serves as a contract surface between the renderer and main process. Each prefix groups a cohesive set of operations that could, in theory, be extracted into a separate module without touching the others. The audio module depends on db (to fetch the WAV blob before compression) and keyboard (to insert the result), but page depends on nothing — it’s pure window management.

One design detail worth noting: store:use is a single handler that accepts { store, action, key, value } and dispatches to one of three electron-store files. This hides the storage topology from the renderer — it doesn’t need to know that settings, onboarding state, and runtime storage live in separate files. It just calls store:use with a store name.
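
A sketch of that dispatcher (the channel name and payload shape come from the bundle; the action names are assumptions):

const { ipcMain } = require("electron");
const Store = require("electron-store");

// One store per JSON file; names match the three files described above.
const stores = {
  "app-settings":   new Store({ name: "app-settings" }),
  "app-onboarding": new Store({ name: "app-onboarding" }),
  "app-storage":    new Store({ name: "app-storage" }),
};

ipcMain.handle("store:use", (_event, { store, action, key, value }) => {
  const target = stores[store];
  switch (action) {
    case "get":    return target.get(key);
    case "set":    return target.set(key, value);
    case "delete": return target.delete(key);
  }
});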

Core Design 8: Three Voice Modes, One Pipeline

Typeless doesn’t have three features — it has one audio pipeline with a polymorphic mode parameter. The same microphone capture, the same Opus compression, the same /ai/voice_flow endpoint, but the mode field changes the server’s behavior entirely:

| Mode | API Value | Trigger | Server Behavior |
|---|---|---|---|
| Voice Transcript | transcript | Hold Fn (push-to-talk) or Fn+Space (hands-free) | Transcribe speech → refine text → insert into focused app |
| Voice Command | ask_anything | Separate shortcut | Transcribe speech → interpret as instruction → return external_action (open URL, execute command) |
| Voice Translation | translation | Fn+Shift | Transcribe speech → translate to output_language → insert translated text |

This is the Strategy pattern at the API boundary. The client doesn’t implement three different pipelines — it sends different mode + parameters payloads and the server returns different response shapes:
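
A minimal sketch of that dispatch. The mode values and field names come from the extracted payloads; the response field names and helper functions are assumptions:

function buildParameters(mode, ctx) {
  switch (mode) {
    case "transcript":   return { output_language: ctx.preferredLanguage };
    case "translation":  return { output_language: ctx.translationTargetLanguage };
    case "ask_anything": return { selected_text: ctx.cursorState.selected_text };
  }
}

function handleResponse(res) {
  // Command mode can return an action instead of (or alongside) text.
  if (res.external_action) return runExternalAction(res.external_action);
  return insertText(res.refined_text);  // transcript and translation both insert text
}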

The command mode is architecturally different from the other two — it reads selected_text from the cursor state (via audio_context.text_insertion_point.cursor_state.selected_text) and uses it as context for the AI instruction. This means “ask anything” operates on what you’ve highlighted, while transcript and translation operate on what you’ve spoken.

The shortcut system itself is configurable and stored in electron-store with full key mapping for every physical key on the keyboard — 100+ entries including numpad, function keys, and international keys like Eisu and Kana for Japanese keyboard layouts. This is necessary because the keyboard hooks operate at the CGEventTap level (raw key codes), not at the character level.

The Complete Tech Stack

Layer 1: Renderer (10.6 MB JS + 57 KB CSS + 65 lazy chunks)

| Library | Evidence | Purpose |
|---|---|---|
| React 18 | 170+ refs across bundles | UI framework |
| MUI (Material UI) | 640+ refs (Select, Drawer, Chip, Tabs, Dialog, etc.) | Component library |
| Recoil | 152 refs | State management |
| i18next | 15 refs in shared bundle | Internationalization (58 locales) |
| ECharts | 29 refs + echarts-for-react | Usage analytics charts |
| Framer Motion | 36 refs | Animations and transitions |
| Floating UI | @floating-ui/react in deps | Tooltip/popover positioning |
| Shiki | 5 refs | Syntax highlighting |
| Sentry | 107 refs | Error tracking (browser SDK) |
| Mixpanel | 5 refs | Product analytics |
| Microsoft Clarity | 16 refs (onboarding CSP allows clarity.ms) | Session replay / heatmaps |
| diff | dependency | Text difference computation |
| Markdown (remark + rehype + marked) | 30+ refs | Markdown rendering |
| Jotai | 3 refs | Lightweight atom state (possibly for sidebar) |
| Immer | 10 refs | Immutable state updates |
| @tanstack/react-virtual | dependency | Virtualized list rendering for history |
| notistack | dependency | Snackbar notifications |
| compare-versions | dependency | Version comparison for updates |

Layer 2: Main Process (254 KB, Node.js, Electron 33.4.11)

| Package | Purpose |
|---|---|
| drizzle-orm v0.44.2 | Type-safe ORM for SQLite schema and queries |
| @libsql/client v0.15.9 | Turso's libSQL driver (SQLite fork with extensions) |
| sqlite3 v5.1.7 | Fallback/legacy SQLite driver |
| koffi v2.11.0 | FFI bridge to call Swift dylibs from JavaScript |
| electron-store v10.0.1 | JSON file persistence for settings (3 stores) |
| electron-updater v6.3.9 | Auto-update via typeless-static.com/desktop-release/ |
| @sentry/electron v7.5.0 | Error tracking and performance monitoring |
| node-schedule v2.1.1 | Scheduled tasks (history/disk cleanup) |
| undici v7.16.0 | HTTP client for API calls |
| crypto-js v4.2.0 | Encryption (audio context fingerprinting) |
| dotenv v16.5.0 | Environment configuration |
| plist v3.1.0 | Parse macOS Info.plist files (for app icons) |
| js-yaml v4.1.0 | YAML parsing (app-update.yml config) |
| diff v8.0.2 | Text diff computation |

Layer 3: Swift Native (5 dylibs + 1 binary, all universal x86_64 + arm64)

| Library | Functions | Technology |
|---|---|---|
| libContextHelper | getFocusedAppInfo, getFocusedElementInfo, getFocusedVisibleText, getFocusedElementRelatedContent, setFocusedWindowEnhancedUserInterface | macOS Accessibility API (AXUIElement) |
| libInputHelper | insertText, insertRichText, deleteBackward, getSelectedText, simulatePasteCommand, savePasteboard, restorePasteboard, findKeyCodeForCharacter, getCurrentInputState | CGEvent, NSPasteboard, AX text insertion |
| libKeyboardHelper | KeyboardMonitor.startMonitor, stopMonitor, processEvents, ShortcutDetector, KeyboardUtils | CGEventTap, key code mapping |
| libUtilHelper | getAudioDevicesJSON, muteAudio, unmuteAudio, isAudioMuted, testAudioLatency, deviceIsLidOpen, checkAccessibilityPermission, getDeviceId, launchApplicationByName | CoreAudio, IOKit, AVAudioInputNode |
| libopusenc | opus_convert_advanced, ope_encoder_* | libopus + libopusenc (C library) |
| macosCheckAccessibility | Single-purpose binary | Checks AX permission from child process |

Appendix: Extraction Methodology

Every finding comes from read-only static analysis:

| Step | Command | What it reveals |
|---|---|---|
| 1 | cat Info.plist | Bundle ID (now.typeless.desktop), version 1.0.2, build 83 |
| 2 | cat app-update.yml | Update feed: typeless-static.com/desktop-release/, arm64 channel |
| 3 | ls Contents/Frameworks/ | Electron + Squirrel + Mantle + ReactiveObjC |
| 4 | npx @electron/asar extract app.asar /tmp/out | Full Node.js source, renderer bundles |
| 5 | cat package.json | 21 direct dependencies, Vite build, Node ≥22 |
| 6 | find /tmp/out/dist -type f | 4 HTML entries, 65 JS chunks, 4 MJS entry points |
| 7 | grep -oE '"[a-z]+:[a-z][-a-zA-Z_]*"' main.js | 77 IPC channel names |
| 8 | find app.asar.unpacked -name "*.node" | 3 native addons: @libsql, koffi, sqlite3 |
| 9 | find Contents/Resources/lib -name "*.dylib" | 5 Swift dylibs |
| 10 | file *.dylib | Universal binaries (x86_64 + arm64) |
| 11 | strings libContextHelper.dylib | AXUIElement, getFocusedAppInfo, Accessibility API |
| 12 | nm -g libInputHelper.dylib | insertText, simulatePasteCommand, savePasteboard |
| 13 | strings libopusenc.dylib \| grep Users/ | Build path reveals monorepo name and developer |
| 14 | grep -oi 'react\|mui\|recoil' *.js | Library identification by string frequency |
| 15 | grep -oE 'T\.handle\("[^"]*"' main.js | Complete IPC handler registry |

The Swift dylibs embed function signatures for panic messages. Minified JavaScript retains library identifiers. Electron apps ship package.json unencrypted. None of this requires a decompiler.

Analysis performed on Typeless v1.0.2, build 1.0.2.83, Electron 33.4.11 (Chrome 130.0.6723.191), macOS 15.6, Apple M4.


I’ve been using Typeless, a Voice keyboard that makes you smarter. Use my link to join and get a $5 credit for Typeless Pro: https://www.typeless.com/refer?code=ZEDSNDM
