Array handling in JS with functional purity

by Alexis Hope

Functional programming has influenced JavaScript style significantly over the last decade. Pure functions — those that produce no side effects and return the same output for the same input — are now a widely adopted convention. But JavaScript is not a pure functional language, and the path to purity with composite types like arrays and objects has some real nuance worth understanding before you reach for a pattern wholesale.

TL;DR

Clone before mutating when your function receives an array or object and side effects would be a problem. Prefer shallow clones for flat structures; reach for deep clones only when you have nested data you genuinely need to isolate. Don’t clone for read-only operations — it’s unnecessary cost with no defensive benefit.


Why References Matter

JavaScript passes primitive types by value. For composite types, the value passed is a reference to the same underlying object (sometimes called call by sharing). This is a runtime optimisation: rather than copying a potentially large array into a new scope on every function call, the engine passes a pointer to the original.

Primitive types in JavaScript are:

  • number (covers both integers and floats)
  • bigint
  • string
  • boolean
  • undefined
  • null
  • symbol

Composite types (everything that is not a primitive) are objects; the two you will handle most often are:

  • Object
  • Array

For primitives, mutation within a function is scoped — the original is unaffected:

const a = 1;

function increment(n) {
  n++;
  return n;
}

increment(a); // 2
increment(a); // 2
a;            // 1 — unchanged

For composite types, the function receives a reference to the original. Mutations inside the function affect the value outside of it:

const b = [];

function push(arr, value) {
  arr.push(value); // mutates the original
  return arr;
}

push(b, 1); // [1]
push(b, 1); // [1, 1]
b;          // [1, 1] — the caller's array has changed

This behaviour is correct by design. The problem is when it’s unexpected.
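A common real-world instance of this surprise is `Array.prototype.sort`, which sorts in place and returns the same array. A helper that sorts its input to answer a question quietly reorders the caller's data (the function and variable names here are illustrative):

```javascript
const scores = [3, 1, 2];

function topScore(arr) {
  // sort() mutates arr in place, so the caller's array is reordered too
  return arr.sort((a, b) => b - a)[0];
}

console.log(topScore(scores)); // 3
console.log(scores);           // [3, 2, 1], no longer in insertion order
```

The fix is exactly the cloning pattern covered next: sort a copy, not the original.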


Shallow Clones

The spread operator creates a shallow clone: a new array whose top-level slots are copied. (If an element is itself an object, the copied slot holds the same reference; more on that in the deep-clone section below.)

const c = [];

function pushPure(arr, value) {
  const clone = [...arr];
  clone.push(value);
  return clone;
}

pushPure(c, 2); // [2]
pushPure(c, 2); // [2]
c;              // [] — original untouched

This is the right pattern when mutating a flat array inside a function. The clone is cheap, and the function no longer has side effects on its input.

When not to clone

A shallow clone costs memory allocation and a full iteration over the array. For read-only operations — predicates, lookups, aggregations — it provides no benefit:

// Unnecessary — Array.some() does not mutate
const result = [...input].some(item => item.active);

// Correct
const result = input.some(item => item.active);

Being indiscriminate about cloning signals a misunderstanding of the pattern. Clone when mutation is happening; don’t clone as a general defensive habit.


Deep Clones and Their Cost

A shallow clone only copies the top-level references. Nested objects and arrays within the clone still point to the same memory as the original:

const budgetData = [
  ['foo'],
  ['bar'],
  ['baz'],
];

const cloned = [...budgetData];
cloned[0][0] = 'abc';

console.log(budgetData[0][0]); // 'abc' — the original was mutated

If you need full isolation of nested structures, a deep clone is required:

const cloned = _.cloneDeep(budgetData);
cloned[0][0] = 'abc';

console.log(budgetData[0][0]); // 'foo' — original preserved

structuredClone is now available natively in all modern environments (Node 17+ and all evergreen browsers) and removes the Lodash dependency for most cases:

const cloned = structuredClone(budgetData);

Deep cloning is significantly more expensive than shallow cloning — it traverses the entire object graph. Use it only when you have confirmed nested mutation is a real concern, not as a default safety measure.
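One caveat worth knowing: structuredClone implements the structured clone algorithm, so it fully isolates plain data but throws a DataCloneError on non-serialisable values such as functions. A quick sketch (variable names are illustrative):

```javascript
const budget = { rows: [['foo'], ['bar']] };

const copy = structuredClone(budget);
copy.rows[0][0] = 'abc';
console.log(budget.rows[0][0]); // 'foo', deeply isolated

try {
  // functions cannot pass through the structured clone algorithm
  structuredClone({ handler: () => {} });
} catch (e) {
  console.log(e.name); // 'DataCloneError'
}
```

If your data contains functions, class instances with methods, or DOM nodes, a library deep clone (or a hand-rolled copy) remains necessary.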


Enforcing Purity at Scale

Relying on convention alone is fragile. In large codebases it produces inconsistency and PR debates that rarely converge. Two tools are worth knowing if you want to enforce purity structurally:

eslint-plugin-fp — adds a no-mutation rule and other FP constraints at the linting layer. Low overhead, integrates with your existing ESLint setup.

Immutable.js — replaces native arrays and objects with persistent, immutable data structures that enforce purity by construction rather than convention. Higher overhead and a steeper adoption curve, but eliminates the entire class of problem.

Both carry real integration cost. Introducing either into an existing codebase requires buy-in across the team and a clear understanding of the trade-offs.
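If neither tool is an option, native `Object.freeze` offers a minimal, dependency-free way to catch accidental mutation at a module boundary. Note that, like spread, it is shallow: nested objects remain mutable.

```javascript
'use strict';

// A frozen array rejects top-level mutation: push() throws a TypeError
const config = Object.freeze([1, 2, 3]);

try {
  config.push(4);
} catch (e) {
  console.log(e instanceof TypeError); // true
}

console.log(config.length); // 3, unchanged
```

This does not give you persistent data structures the way Immutable.js does, but it turns silent mutation bugs into loud errors.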


Semantic vs. Strict Purity

Strict functional purity in JavaScript — every function truly side-effect free — is impractical without immutable data structures. What most teams are actually after is semantic purity: functions that are predictable, don’t surprise their callers, and make mutation explicit when it occurs.

That’s an achievable and valuable goal with native JavaScript. The cloning conventions above are the right tool for it. The key discipline is being intentional: clone when you’re mutating and the caller’s data should be preserved; don’t clone as a reflex.
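One lightweight way to "make mutation explicit", offered here as an illustration rather than a standard (the function names are hypothetical), is to encode it in the name, so callers know which variant they are getting:

```javascript
// Pure variant: returns a new array, input preserved
function append(arr, value) {
  return [...arr, value];
}

// Explicitly in-place variant: the name signals mutation
function appendInPlace(arr, value) {
  arr.push(value);
  return arr;
}

const xs = [1];
console.log(append(xs, 2)); // [1, 2]
console.log(xs);            // [1], untouched
appendInPlace(xs, 2);
console.log(xs);            // [1, 2], deliberately mutated
```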
