In the previous article I described a path from dynamic languages toward TypeScript, and from TypeScript toward stricter compiled languages. Rust is the end of that path — but it took some convincing to get there, and the thing that convinced me wasn't the systems programming story. It was the frontend.

The TypeScript Ceiling

TypeScript is an extraordinary tool within its constraints. The type system is expressive, the ecosystem is vast, and the developer experience — particularly with modern tooling — is genuinely good. But TypeScript's types are erased before runtime. They are a compile-time overlay on JavaScript; they do not exist when the code executes. The compiler can tell you when your model is inconsistent, but it cannot prevent every class of runtime failure.

More practically: TypeScript runs on a JavaScript engine. The performance ceiling is JavaScript’s ceiling. For most web applications that ceiling is never approached — DOM manipulation, API calls, and rendering are not computationally bound. But for anything CPU-intensive — parsing, search, encoding, numerical processing — you are either calling into WebAssembly or you are fighting the garbage collector.

Rust was not an obvious next step. The learning curve is steep and the reputation preceded it: borrow checker, lifetime annotations, a language that refuses to compile until you have proven your memory model is sound. I understood the value proposition intellectually — memory safety without garbage collection, performance comparable to C — but I did not have a problem that felt like it required those properties.

What Changed: The Full-Stack Type Contract

The argument that eventually landed was not about performance. It was about the contract between frontend and backend.

In a TypeScript frontend with a Rust backend — or any backend in a different language — you have two type systems that know nothing about each other. The API boundary is a gap where type safety disappears. You can generate TypeScript types from an OpenAPI schema, you can write shared validation logic, you can use tRPC to thread types across a Node.js boundary. But these are all mechanisms for synchronising two independent systems. The types on the frontend are manually maintained mirrors of the types on the backend, and drift between them is caught at runtime rather than compile time.

Leptos changes this. Leptos is a Rust framework for building web user interfaces — reactive, component-based, with a JSX-like macro syntax. What makes it structurally different is that the frontend and backend share the same codebase and the same type system. Server functions in Leptos are Rust functions annotated with #[server]. Invoked during server rendering, they run directly; invoked from the client, the macro generates the HTTP request and deserialisation behind the same signature:

#[server]
pub async fn get_articles() -> Result<Vec<Article>, ServerFnError> {
    // runs on the server
    db::fetch_articles().await
}

That function is callable directly from a component:

#[component]
pub fn ArticleList() -> impl IntoView {
    // Fires the server function; the resource is `None` until it resolves.
    let articles = create_resource(|| (), |_| get_articles());

    view! {
        <Suspense fallback=|| view! { <p>"Loading..."</p> }>
            {move || articles.get().map(|result| {
                result.map(|articles| view! {
                    <ul>
                        {articles.into_iter().map(|a| view! {
                            <li>{a.title}</li>
                        }).collect_view()}
                    </ul>
                })
            })}
        </Suspense>
    }
}

The Article type is defined once. The compiler verifies that the component handles the response correctly — including the error case — at compile time. There is no OpenAPI schema to maintain, no TypeScript interface to keep in sync, no runtime surprise when the backend adds a field the frontend didn’t account for. The contract is enforced by the same compiler that builds both sides.
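For concreteness, the shared type might look like the sketch below. The field names are assumed for illustration; in a real Leptos app the struct would also derive serde's Serialize and Deserialize so the server function can send it over the wire.

```rust
// Defined once, in code compiled for both the server binary and the
// client WASM bundle. Field names are illustrative; a real app would
// additionally add #[derive(serde::Serialize, serde::Deserialize)].
#[derive(Clone, Debug, PartialEq)]
pub struct Article {
    pub title: String,
    pub slug: String,
}

fn main() {
    let a = Article {
        title: "Borrow Checking".into(),
        slug: "borrow-checking".into(),
    };
    assert_eq!(a.title, "Borrow Checking");
}
```

Because both sides compile against this one definition, renaming a field is a compile error in every component that touches it, rather than a runtime surprise.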

This is the property TypeScript pointed toward but could not deliver on its own. A single language, a single type system, a verified boundary between client and server.

WASM as a Natural Fit

Leptos compiles to WebAssembly for the client-side bundle. This is not incidental — it is what makes the full-stack story coherent.

WebAssembly is a binary instruction format designed to run in browsers and other environments at near-native speed. Compared to JavaScript, WASM binaries are compact, parse faster, and execute without the JIT warmup overhead that JavaScript engines require. For a Leptos application, the client bundle is a single .wasm file — for non-trivial applications, often competitive in size with an equivalent JavaScript bundle, and significantly faster to parse.

Smaller binaries matter in ways that compound. Faster initial parse means faster time-to-interactive, particularly on lower-powered devices. Standard HTTP compression (gzip, Brotli) shrinks WASM binaries well on the wire. And unlike a bundled JavaScript build, there is no transpilation or source-map indirection between the source and what executes — the binary is the compiler's direct output.

WASM also runs outside the browser. Cloudflare Workers supports WASM, which means the same Rust code that runs in the browser can run at the edge with no changes to the execution model. A search function, a validation routine, a parsing pipeline — write it once in Rust, compile to WASM, deploy it to the browser, to a Worker, to a server binary. The portability is genuine rather than aspirational.
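The portability claim can be made concrete with a sketch. The function below is invented for illustration: pure Rust with no platform dependencies, so the identical source compiles into a server binary, a Cloudflare Worker, or a browser bundle.

```rust
// A sketch of "write once, run everywhere": a pure function with no
// platform dependencies. Function name and behaviour are hypothetical.
pub fn normalize_query(q: &str) -> String {
    // Lowercase each word and collapse runs of whitespace.
    q.split_whitespace()
        .map(|w| w.to_lowercase())
        .collect::<Vec<_>>()
        .join(" ")
}

// For the browser target, the same function would typically be exposed
// through wasm-bindgen, along the lines of:
//
//   #[cfg(target_arch = "wasm32")]
//   #[wasm_bindgen]
//   pub fn normalize_query_js(q: &str) -> String { normalize_query(q) }

fn main() {
    assert_eq!(normalize_query("  Hello   WORLD "), "hello world");
}
```

Nothing about the function knows which target it is built for; the deployment decision lives entirely in the build configuration.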

This is where the connection to the broader stack becomes clear. Tantivy, the Rust search library that underpins Quickwit, compiles to WASM. An NLP preprocessing pipeline written in Rust can be embedded in a Cloudflare Worker for edge-side query processing, in a Leptos component for client-side autocomplete, and in a backend binary for batch indexing — the same logic, the same correctness guarantees, across every deployment target.

The Borrow Checker as a Feature

The aspect of Rust with the worst reputation turns out to be the aspect most directly analogous to what TypeScript’s type system provides.

TypeScript’s type system makes illegal states unrepresentable. The borrow checker makes illegal memory operations unrepresentable. Both work by moving a class of runtime errors to compile time. Both require upfront investment in describing your constraints correctly. Both pay back that investment through confidence that the program, once compiled, cannot fail in the ways they govern.
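The "illegal states" idea carries over to Rust directly, where enums do the work that discriminated unions do in TypeScript. A minimal sketch, with the type and variants invented for illustration:

```rust
// Instead of independent `loading` / `error` / `data` fields, the
// request lifecycle is one enum — so contradictory combinations
// (loaded *and* errored) cannot even be constructed.
enum Fetch<T> {
    Idle,
    Loading,
    Loaded(T),
    Failed(String),
}

fn describe(state: &Fetch<Vec<String>>) -> String {
    // `match` must handle every variant, or the compiler objects.
    match state {
        Fetch::Idle => "idle".to_string(),
        Fetch::Loading => "loading".to_string(),
        Fetch::Loaded(items) => format!("{} items", items.len()),
        Fetch::Failed(e) => format!("error: {e}"),
    }
}

fn main() {
    let state = Fetch::Loaded(vec!["a".into(), "b".into()]);
    assert_eq!(describe(&state), "2 items");
}
```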

The borrow checker enforces two rules: at any given time a value may have either one mutable reference or any number of immutable references, and every reference must remain valid for as long as it is used. Together these eliminate data races, use-after-free, and a broad class of concurrency bugs at compile time. Learning to write programs that satisfy these rules is the steep part of the Rust curve. But it is the same muscle as learning to write programs that satisfy TypeScript’s type constraints — the habit of making your intentions explicit so the compiler can verify them.
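The aliasing rule in miniature: shared borrows may coexist freely, and mutation is permitted only once no shared borrow is still in use. A toy example:

```rust
fn demo() -> usize {
    let mut scores = vec![1, 2, 3];

    // Any number of shared (immutable) borrows may coexist...
    let first = &scores[0];
    let second = &scores[1];
    println!("{first}, {second}");

    // ...and mutation is allowed only after those borrows are no
    // longer used. Swapping this push above the println would be a
    // compile error, not a latent bug.
    scores.push(4);
    scores.len()
}

fn main() {
    assert_eq!(demo(), 4);
}
```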

Once that reframe settled — borrow checker as compiler-enforced correctness, not as arbitrary restriction — the language stopped feeling hostile and started feeling familiar.

Where This Sits in the Stack

Rust is not a replacement for TypeScript. For UI logic, API integration, and the majority of web application code, TypeScript is the right tool — the ecosystem is unmatched and the developer experience is excellent.

The case for Rust is at the boundaries: performance-critical processing, shared logic between client and server, code that needs to run in constrained environments (WASM, embedded, edge functions), and anywhere that memory safety without garbage collection matters.

For me that means: search and retrieval logic in Tantivy, edge-deployed query processing via Cloudflare Workers, full-stack components in Leptos where the client-server contract needs to be verified rather than trusted, and any pipeline work where TypeScript’s runtime would be a bottleneck.

The path from TypeScript to Rust is not a rejection of the TypeScript values — expressiveness, type safety, explicit constraints, a single language across the stack. It is the logical conclusion of them.