One of the things that drew me to Nim is the degree of control it offers over memory management without forcing you to the extreme of a borrow checker. The default experience is a garbage-collected language that feels like Python. The far end of the dial is manual memory management that compiles to idiomatic C with no runtime overhead. Between those two points is a set of options worth understanding in detail — because the right choice depends heavily on what you’re building.
Nim selects its memory strategy at compile time via the --mm flag (still accepted under its pre-2.0 spelling, --gc, which this post uses). The sections below walk through the three main postures: automatic, controlled, and none.
The Default: ORC
As of Nim 2.0, the default GC is ORC — a cycle-collecting reference counter. For most code, you simply don’t think about memory:
```nim
import std/strutils  # for strip

proc processLines(path: string): seq[string] =
  result = newSeq[string]()
  for line in lines(path):
    if line.len > 0:
      result.add(line.strip())
  # seq is freed when result goes out of the caller's scope
  # no explicit free, no defer, no destructor call required
```
ORC is built on ARC (Automatic Reference Counting) with a local cycle detector layered on top. Each ref allocation carries a reference count. When the count drops to zero, the object is freed immediately — no stop-the-world collection phase, no GC pause. The cycle detector handles reference cycles that ARC alone would leak.
The key property worth understanding: destruction under ORC is mostly deterministic. Simple object graphs are freed at scope exit. Cyclic graphs wait for the cycle detector to run. For most application code that distinction doesn’t matter. For systems code where timing matters, it does — which is why ARC exists as a separate option.
```nim
# Compile with ORC (default in Nim 2.0):
#   nim c myapp.nim

type
  Node = ref object
    value: int
    next: Node  # can form a cycle — ORC handles it

proc buildCycle(): Node =
  let a = Node(value: 1)
  let b = Node(value: 2)
  a.next = b
  b.next = a  # cycle — ARC alone would leak this
  result = a
  # ORC detects and collects the cycle when result goes out of scope
```
If you know a type will never be part of a cycle, the {.acyclic.} pragma tells ORC to skip cycle tracking for it entirely — reducing overhead per allocation:
```nim
type
  Leaf = ref object {.acyclic.}
    data: string

# Leaf objects are never part of cycles — no cycle tracking overhead
```
ARC: Fully Deterministic Reference Counting
--gc:arc drops the cycle detector. Destruction happens exactly when the last reference drops — no background passes, no jitter, nothing deferred. The tradeoff is that cyclic ref graphs will leak, so you either design them out or break cycles manually with ptr.
```nim
# nim c --gc:arc myapp.nim

type
  Connection = ref object
    fd: int
    buffer: seq[byte]

proc newConnection(fd: int): Connection =
  Connection(fd: fd, buffer: newSeq[byte](4096))
  # When this ref's count reaches zero, buffer is freed,
  # then Connection is freed. Deterministic. No surprises.

proc handleRequest() =
  let conn = newConnection(accept())
  processData(conn)
  # conn freed here — exactly here, at scope exit
```
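The manual cycle-breaking mentioned above can be sketched concretely: the owning edges stay traced refs, and the back-edge becomes an untraced ptr, so ARC never sees a cycle. NodeObj and addChild below are illustrative names, not part of any real API:

```nim
type
  NodeObj = object
    value: int
    children: seq[ref NodeObj]   # owning edges: traced and counted
    parent: ptr NodeObj          # back-edge: untraced, not counted
  Node = ref NodeObj

proc addChild(parent, child: Node) =
  child.parent = addr parent[]   # no refcount increment, so no cycle
  parent.children.add child

proc demo(): int =
  let root = Node(value: 0)
  let leaf = Node(value: 1)
  addChild(root, leaf)
  result = leaf.parent.value
  # root and leaf are both freed at scope exit under plain --gc:arc
```

The cost is the usual one for untraced back-pointers: the child must never outlive its parent, which the type system does not check for you.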
ARC also enables custom lifecycle hooks. With the older GC, finalizers were called non-deterministically. With ARC, =destroy is called at scope exit — which makes resource management as predictable as RAII in C++:
```nim
type
  Socket = object  # note: object, not ref object
    fd: cint

proc `=destroy`(s: var Socket) =
  if s.fd >= 0:
    discard close(s.fd)
    s.fd = -1

proc `=copy`(dest: var Socket, src: Socket) {.error: "Sockets cannot be copied".}
proc `=dup`(src: Socket): Socket {.error: "Sockets cannot be duplicated".}

proc main() =
  var sock = Socket(fd: openSocket())
  processSocket(sock)
  # `=destroy` called here — fd is closed exactly at scope exit
```
The =copy and =dup hooks disabled above make Socket a move-only type. You can express ownership semantics without a borrow checker — more loosely enforced than Rust, but sufficient for the common patterns.
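To see the ownership transfer in action, here is a minimal sketch with a hypothetical Handle type mirroring Socket above (no real fd, so no destructor is needed): sink parameters take ownership, and an explicit move hands it over without ever invoking the forbidden copy hook.

```nim
type
  Handle = object
    fd: cint

proc `=copy`(dest: var Handle, src: Handle) {.error: "Handles cannot be copied".}

proc consume(h: sink Handle): cint =
  # consume owns h; h is destroyed when this proc returns
  h.fd

proc demo(): cint =
  var h = Handle(fd: 7)
  result = consume(move h)   # ownership transfers; h is left in its default state
  # let copy = h             # would not compile: "Handles cannot be copied"
```

The compiler also inserts implicit moves when it can prove the source is not used afterwards, so the explicit move is often optional.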
Controlled GC: Disabling Collection in Critical Sections
For the older refc GC (still available via --gc:refc), and for code that needs to run without any allocation-related work for a bounded period — game loops, audio callbacks, packet processing at line rate — Nim exposes the GC controls directly:
```nim
# nim c --gc:refc myapp.nim
# GC_disable/GC_enable/GC_fullCollect live in system — no import needed
import std/tables

var
  globalBuffer: seq[byte]
  globalCache: Table[string, seq[float32]]

proc warmup() =
  # Allocate everything the real-time loop will need upfront
  globalBuffer = newSeq[byte](1024 * 1024)
  globalCache = initTable[string, seq[float32]]()
  populateCache(globalCache)
  # Force a full collection to start from a clean heap
  GC_fullCollect()

proc realtimeLoop() =
  # From this point: no allocations, no GC activity
  GC_disable()
  while running:
    let packet = recvPacket()            # reads into a pre-allocated buffer
    processPacket(packet, globalCache)   # no new allocations
    sendResponse()
  # Collection happens here, at a point we control
  GC_enable()
  GC_fullCollect()
```
The contract you’re making: between GC_disable() and GC_enable(), you are responsible for not allocating. Any ref allocation, any string construction, any seq that needs to grow — all of these still succeed, but the garbage they produce simply accumulates until collection resumes. The pattern only works if the loop body is genuinely allocation-free.
A useful companion is GC_setMaxPause, which limits how long a single GC step is allowed to run — a soft real-time bound rather than a hard pause-free guarantee:
```nim
# Limit GC pauses to 2ms — after which the GC yields and resumes later
GC_setMaxPause(2_000)  # the argument is in microseconds
```
This is a compromise. You get bounded pauses without the discipline of a fully allocation-free loop. For soft real-time systems (targeting 60fps, say, where a 2ms pause is acceptable) it’s a practical middle ground. For hard real-time, you need ARC or none.
The Warm-Up Pattern
The full structure for soft-real-time with the older GC:
```nim
proc init() =
  # Phase 1: allocate all persistent state
  assets = loadAssets("data/")
  meshCache = buildMeshCache(assets)
  # Phase 2: trigger GC now so the heap is compact before the loop
  GC_fullCollect()
  # Phase 3: optionally hint the GC to collect more aggressively
  # between frames rather than during them
  GC_setStrategy(gcOptimizeTime)

proc runLoop() =
  GC_disable()
  while not windowShouldClose():
    beginFrame()
    update(dt)
    render()
    endFrame()
    # Controlled collection point: between frames, on our schedule
    if frameCount mod 60 == 0:
      GC_enable()
      GC_fullCollect()
      GC_disable()
```
The 60-frame cadence forces a full collection once per second at a moment you’ve explicitly chosen — between frame end and the next frame begin. Frame time for that frame will be slightly longer; every other frame is clean.
No GC: --gc:none
At the far end: no garbage collector, no reference counting, no cycle detection. Raw allocation and deallocation. The Nim runtime still exists but memory management is entirely manual.
```nim
# nim c --gc:none myapp.nim

proc main() =
  # alloc0: allocate zeroed memory, returns a raw pointer
  let buf = cast[ptr UncheckedArray[byte]](alloc0(4096))
  defer: dealloc(buf)  # defer still works — it's a compile-time rewrite to try/finally
  buf[0] = 0xAB
  buf[1] = 0xCD
  processBuffer(buf, 4096)
```
ptr T is Nim’s untraced pointer — no GC involvement, no reference count. ref T is the traced counterpart. With --gc:none, using ref still compiles but is incorrect — nothing will collect it. The convention under --gc:none is to use ptr throughout.
For larger objects, create is the typed wrapper around alloc0: it allocates a zeroed T and returns ptr T. There is no typed counterpart for freeing; the result goes back through dealloc:
```nim
type
  Packet = object
    header: array[8, byte]
    payload: ptr UncheckedArray[byte]
    length: int

proc allocPacket(len: int): ptr Packet =
  result = create(Packet)
  result.payload = cast[ptr UncheckedArray[byte]](alloc0(len))
  result.length = len

proc freePacket(p: ptr Packet) =
  dealloc(p.payload)
  dealloc(p)

proc main() =
  let pkt = allocPacket(1500)
  defer: freePacket(pkt)
  fillPacket(pkt)
  sendPacket(pkt)
```
--gc:none is the right choice for WASM targets where you want to control the linear memory entirely, for embedded systems with no heap, and for C library integrations where the C side controls allocation and Nim is handling logic. It’s also useful when integrating Nim into an existing application as a library — the host application manages memory; Nim should not be running a parallel GC on top of it.
ptr vs ref — the practical distinction
| | ref T | ptr T |
|---|---|---|
| Tracking | GC-managed | Untraced |
| Nil-safe | Yes | No |
| Dereference | x.field | x[].field or x.field |
| --gc:none safe | ❌ | ✅ |
| Suitable for FFI | Careful | Yes |
With --gc:none, the compiler won’t stop you from using ref — it will compile, run, and leak. The type system distinction doesn’t enforce the right choice here; discipline does.
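A small illustrative contrast (Blob is a hypothetical type): under --gc:none both procs compile cleanly, but only the ptr version ever returns its memory.

```nim
type
  Blob = object
    data: array[64, byte]

proc leaks() =
  var r: ref Blob
  new(r)                 # compiles under --gc:none, but nothing will ever free it
  r.data[0] = 1

proc reclaims() =
  let p = create(Blob)   # untraced, typed allocation
  defer: dealloc(p)      # freed by us, deterministically
  p.data[0] = 1
```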
Choosing Between Them
| Scenario | Recommended |
|---|---|
| Application code, web services, tooling | ORC (default) |
| Cyclic graphs not present / avoidable | ARC (--gc:arc) |
| Game loop, audio callback, soft real-time | refc + GC_disable() pattern |
| Hard real-time, no allocation budget | ARC + allocation-free design |
| WASM, embedded, C library integration | --gc:none |
| Debugging a space leak | Start with ORC, profile, switch if needed |
The important thing Nim gets right here is that this is a spectrum, not a binary. You don’t have to commit to manual memory management to get deterministic destruction — ARC gets you there. You don’t have to abandon GC entirely to run a real-time loop safely — GC_disable() gets you there. The escape hatch exists at each level without forcing you to abandon the whole abstraction.
Rust enforces memory correctness at compile time through the borrow checker, which is more powerful but demands a different style of programming. Nim’s approach is lighter — closer to the way Go or Swift handle memory — but with finer-grained control when you need it. Whether that tradeoff suits a project depends on what the project is, and how much of the codebase needs to operate close to the metal.
For most Nim code I write, ORC is the right answer and never comes up again. For anything touching network I/O at throughput, ARC is worth the discipline of avoiding cycles. --gc:none I reach for when compiling to WASM, where the question of who owns the heap matters in a very concrete way.