Electron in 2026: When It's Still the Right Choice

9 min read
electron, desktop-apps, architecture, falavra, sherpa-onnx, native-modules

I shipped DropVox in native Swift. Three weeks later, I shipped Falavra in Electron.

If you follow the internet's default advice, the second decision was wrong. Electron is bloated. Electron is lazy. Electron ships an entire browser engine for what could be a 5 MB binary. Use Tauri. Use native. Use a web app. The discourse is clear and confident and completely uninterested in nuance.

I agree with that advice in most cases. This was not most cases.

What Falavra Actually Needs

Falavra is a language learning app that works through media. Point it at a video or audio file — or a YouTube URL — and it transcribes the content locally, segments it by sentence, and lets you review each sentence with playback, translation, and vocabulary extraction. Everything runs offline. No cloud. No subscriptions. No data leaving your machine.

The technical requirements that dictated the choice:

  1. sherpa-onnx for local speech recognition via sherpa-onnx-node — a Node.js binding around a C++ inference engine
  2. better-sqlite3 for local database — a native Node.js addon
  3. yt-dlp and ffmpeg via child processes for downloading and converting media
  4. A complex UI — waveform visualization, synchronized sentence highlighting, vocabulary panels with search and spaced repetition stats

Four dependencies. Three of them only exist in the Node.js ecosystem. That is the entire argument.

I Tried the Alternatives

Tauri

Tauri is lighter, uses the system webview, and everyone recommends it. I evaluated it seriously.

The problem: Tauri's backend is Rust. sherpa-onnx has no Rust binding. I would have needed to write FFI bindings to the sherpa-onnx C API myself, manage memory across the FFI boundary, and maintain those bindings as sherpa-onnx evolves. better-sqlite3 has no Rust equivalent at the same quality level — rusqlite is excellent but it is a different library with a different API, which means rewriting the data layer, not porting it.

Each dependency could be replaced individually. Replacing all of them simultaneously while learning Rust would have tripled the development time. Tauri's bundle size advantage is real. A 3x timeline is more real.

Native Swift

I just shipped DropVox in Swift. I know the language. Why not Falavra too?

Because the UI requirements are fundamentally different. DropVox is a menu bar utility — a popover, a settings window, a floating drop zone. SwiftUI handles that elegantly. Falavra has a data-dense interface with waveform scrubbing, synchronized highlighting across panels, real-time search filtering, and responsive layouts that rearrange at different window sizes. Building that in SwiftUI is possible. Building it in React — where I have eleven years of muscle memory for exactly this kind of interface — is faster.

And sherpa-onnx's Swift binding is less mature than the Node.js one. Thinner documentation. Smaller community. Fewer answers when things break. I would be debugging ML integration with fewer resources while simultaneously building a complex UI in a framework I know less well.

The honest calculation: Electron lets me use my fastest UI stack with my required dependencies. Productivity won.

What Electron Actually Gives You

The Full Node.js Runtime

This is the advantage. Not "web technologies on desktop." Not "cross-platform." The full Node.js runtime in the main process, with native addons, child processes, filesystem access, and the entire npm ecosystem.

// Spawn yt-dlp to download a YouTube video
import { spawn } from 'child_process';

function downloadVideo(url: string, outputPath: string): Promise<void> {
    return new Promise((resolve, reject) => {
        const ytdlp = spawn('yt-dlp', [
            '--format', 'bestaudio',
            '--output', outputPath,
            '--no-playlist',
            url
        ]);
        ytdlp.on('error', reject); // e.g. yt-dlp not installed or not on PATH
        ytdlp.on('close', (code) => {
            code === 0 ? resolve() : reject(new Error(`yt-dlp exited with ${code}`));
        });
    });
}

// Transcribe audio locally with sherpa-onnx-node
import { OfflineRecognizer } from 'sherpa-onnx-node';

function transcribe(audioPath: string): string {
    // config holds the model paths and decoding options (defined elsewhere)
    const recognizer = new OfflineRecognizer(config);
    const stream = recognizer.createStream();
    const waveData = readWavFile(audioPath);
    stream.acceptWaveform({
        sampleRate: waveData.sampleRate,
        samples: waveData.samples
    });
    recognizer.decode(stream);
    return stream.result.text;
}

// Query the local database
import Database from 'better-sqlite3';

const db = new Database(dbPath);
const sentences = db.prepare(`
    SELECT * FROM sentences
    WHERE transcription_id = ?
    ORDER BY start_time
`).all(transcriptionId);

Three native integrations — subprocess, native addon, native database — in one process, using best-in-class implementations. No FFI wrappers. No language bridges. No compromises on library quality.
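Chained together, those three integrations form one pipeline. Here is a sketch of that flow with the stages passed in as parameters so the wiring itself has no Electron or binary dependencies; the function and field names are assumptions for illustration, not Falavra's actual code:

```typescript
// Sketch: the download -> transcribe -> store pipeline, with injectable stages.
// In the real app these would be the downloadVideo / transcribe functions
// shown above plus a better-sqlite3 insert.
interface PipelineDeps {
    downloadVideo: (url: string, outputPath: string) => Promise<void>; // child process (yt-dlp)
    transcribe: (audioPath: string) => string;                         // native addon (sherpa-onnx)
    save: (url: string, text: string) => void;                         // native db (better-sqlite3)
}

async function importFromYouTube(url: string, deps: PipelineDeps): Promise<string> {
    const audioPath = `/tmp/falavra-${Date.now()}.m4a`; // hypothetical temp location
    await deps.downloadVideo(url, audioPath);
    const text = deps.transcribe(audioPath);
    deps.save(url, text);
    return text;
}
```

Injecting the stages also makes the flow testable without spawning yt-dlp or loading the ONNX runtime.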

Typed IPC

The renderer talks to the main process through IPC. With TypeScript, you can type the entire channel contract:

export interface IpcChannels {
    'transcribe': {
        args: [audioPath: string, language: string];
        return: TranscriptionResult;
    };
    'download-video': {
        args: [url: string];
        return: { outputPath: string };
    };
}

With thin wrappers that enforce this contract, a main-process handler that returns the wrong shape is caught at compile time rather than at runtime, and IPC feels like calling a regular async function. It is the cleanest part of the Electron developer experience.
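A minimal sketch of what such wrappers can look like. An in-memory Map stands in for Electron's ipcMain.handle / ipcRenderer.invoke so only the typing is the moving part, and the TranscriptionResult shape is hypothetical:

```typescript
// Hypothetical result shape for illustration
interface TranscriptionResult { text: string }

interface IpcChannels {
    'transcribe': { args: [audioPath: string, language: string]; return: TranscriptionResult };
    'download-video': { args: [url: string]; return: { outputPath: string } };
}

// Stand-in for Electron's IPC registry; real wrappers delegate to
// ipcMain.handle on registration and ipcRenderer.invoke on call.
const handlers = new Map<string, (...args: any[]) => Promise<any>>();

function handle<C extends keyof IpcChannels>(
    channel: C,
    fn: (...args: IpcChannels[C]['args']) => Promise<IpcChannels[C]['return']>
): void {
    handlers.set(channel, fn);
}

async function invoke<C extends keyof IpcChannels>(
    channel: C,
    ...args: IpcChannels[C]['args']
): Promise<IpcChannels[C]['return']> {
    const fn = handlers.get(channel);
    if (!fn) throw new Error(`No handler registered for ${channel}`);
    return fn(...args);
}

// Returning { path: ... } instead of { outputPath: ... } here
// would be a compile-time error in the handler body.
handle('download-video', async (url) => ({ outputPath: `/downloads/${url.length}.m4a` }));
```

At the call site, `await invoke('download-video', url)` is typed end to end; the renderer never touches the untyped IPC surface directly.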

electron-builder Does the Dirty Work

I wrote an entire post about the manual macOS distribution pipeline for DropVox — signing order, notarytool invocations, Sparkle integration, GitHub Actions workflows. For Falavra, electron-builder handles all of that with a JSON config:

{
    "build": {
        "appId": "dev.helsky.falavra",
        "mac": {
            "hardenedRuntime": true,
            "notarize": { "teamId": "TEAMID" }
        }
    }
}

That JSON replaces about 200 lines of shell scripts. It is not perfect — I will get to the edge cases — but for the common path, it works.

The Real Costs

I am not going to pretend Electron is free. The costs are genuine.

Memory

Falavra at idle: 200-300 MB. That is Chromium, V8, Node.js, and framework overhead doing nothing. DropVox at idle: 30-50 MB. During active transcription, Falavra peaks at 800 MB to 1.2 GB. A native implementation would probably use half that for the same workload.

200 MB at idle is not catastrophic on a 16 GB machine. But it shows up in Activity Monitor. Users who monitor resource usage will notice and form an opinion.

Bundle Size

The Falavra DMG is 280 MB. DropVox's DMG is 18 MB. Most of that difference is Chromium at roughly 120 MB compressed. You ship an entire browser engine whether your UI needs it or not.

For Falavra, where the UI is genuinely complex, this is a reasonable cost. For a simpler app, it would be absurd.

The V8 Memory Cage

This was the most frustrating bug I have encountered in any framework, ever.

V8 has a security feature called the memory cage that restricts where ArrayBuffers can be allocated. sherpa-onnx-node's readWave() function returns audio data as an external ArrayBuffer allocated by the C++ layer — outside the cage. In Electron 28+, accessing it causes an immediate crash.

The error message: FATAL ERROR: v8::ArrayBuffer::Detach Only ArrayBuffers allocated by V8 can be detached. No stack trace pointing to sherpa-onnx. No indication that a native module was involved. Just death.

I spent two days on this. The fix was to stop using sherpa-onnx's WAV reader entirely and write a pure JavaScript implementation that allocates inside V8's cage:

import { readFileSync } from 'fs';

function readWavFile(audioPath: string): { samples: Float32Array; sampleRate: number } {
    const buffer = readFileSync(audioPath); // matches the transcribe() call above
    const view = new DataView(buffer.buffer, buffer.byteOffset, buffer.byteLength);
    const sampleRate = view.getUint32(24, true);
    const bitsPerSample = view.getUint16(34, true);

    // Find the data chunk, parse samples into a Float32Array
    // that lives inside V8's memory cage
    // ... 40 lines of WAV parsing I never expected to write
    return { samples, sampleRate };
}

This is the kind of bug that only exists in Electron because it is the only framework where C++ code and JavaScript share memory through V8's specific allocation model. It is not documented anywhere obvious. You just discover it at 11 PM when your app crashes and the error message is technically accurate and completely unhelpful.

Native Module Rebuilds

Electron uses its own Node.js version internally. Native modules compiled against your system Node.js will not load. You must run npx electron-rebuild after every install that touches a native module, on the target platform, matching the exact Electron version.
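One way to keep that from being a step you can forget is a postinstall hook in package.json; this is an assumption about project setup rather than something Falavra necessarily does, and the flags shown are electron-rebuild's force and which-module options:

```json
{
    "scripts": {
        "postinstall": "electron-rebuild --force --which-module better-sqlite3,sherpa-onnx-node"
    }
}
```

With this in place, every npm install recompiles just the two native modules against the project's pinned Electron version.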

When it works, it is invisible. When it fails, you are debugging C++ symbol mismatches. Not my favorite evening activity.

When Electron Is Wrong

I have now shipped both native and Electron. Based on that experience:

Simple utilities. If your app is a menu bar icon and a settings window, the 200 MB memory overhead is unjustifiable. That is why DropVox went native.

Battery-critical apps. Chromium is not power-efficient. Menu bar apps that run all day need negligible battery impact. Electron cannot deliver that.

Apps without Node.js needs. If you do not need native addons, child processes, or the npm ecosystem for core functionality, there is no reason to ship Node.js. Use Tauri.

Performance-critical rendering. Games, video editors, audio DAWs — if you need consistent 60fps, Chromium's rendering pipeline adds latency that native avoids.

When Electron Is Right

Heavy native module usage. When your core functionality depends on Node.js addons that do not exist in other languages, Electron is the only framework that gives you those modules without friction.

Complex, data-dense UIs. When the interface has dozens of interactive components, complex state, and responsive layouts, React is genuinely faster to develop than native UI frameworks. Not because native cannot do it — because the React ecosystem for this kind of UI is deeper.

Cross-platform from a single codebase. macOS, Windows, and Linux from one codebase with minimal platform-specific code. Native means maintaining two or three separate codebases.

The Point

DropVox is faster, lighter, and more power-efficient. Falavra has a richer UI, deeper functionality, and access to tools that do not exist outside Node.js. Neither choice was wrong. Both were deliberate.

The technology discourse around Electron is dominated by people who have strong opinions about what other developers should use. The Tauri advocates point at memory usage. The native advocates point at bundle size. The web advocates say everything should be a PWA.

They are all correct in their specific context and wrong as general advice.

The right tool depends on your requirements, your skills, your timeline, and your dependencies. Not the zeitgeist. If you need Node.js native modules, complex web-based UI, and cross-platform support, Electron remains the best option in 2026. Not because it is perfect. Because nothing else does what it does.

If you do not need those things, use something else. I did, and it was the right call.

The nuance is in knowing which situation you are in.