
How React Turns Chaos Into a Commit

How does React turn a storm of updates into one clean commit? This piece breaks down the path from chaos to a predictable UI pipeline.


We’re examining how React’s reconciler turns a flood of updates, async waits, and transitions into a single predictable commit. The core of this behavior lives in ReactFiberWorkLoop.js, the file that coordinates when to render, when to pause, and when to finally mutate the host environment. I’m Mahmoud Zalt, an AI software engineer, and we’ll walk through how this control room uses a small set of states, priorities, and phases to keep React’s UI updates sane—and how you can reuse these patterns in your own architectures.

The work loop as a control room

Inside the React reconciler, ReactFiberWorkLoop.js is the orchestrator. It doesn’t know about DOM APIs or native widgets—that’s delegated to the host config. Instead, it decides when to render, how to schedule work, and when to commit effects into the host.

react-reconciler/
  src/
    ReactFiberWorkLoop.js        <-- work loop & commit orchestrator
    ReactFiberRootScheduler.js   (when to call performWorkOnRoot)
    ReactFiberBeginWork.js       (per-fiber beginWork logic)
    ReactFiberCompleteWork.js    (per-fiber completeWork logic)
    ReactFiberCommitWork.js      (mutation/layout/passive effects)
    ReactFiberLane.js            (lane priorities & operations)
    ReactFiberConfig.js          (host-specific config)
    ReactProfilerTimer.js        (timing & profiling)
The work loop sits between the scheduler above it and the per-fiber/host details below it.

A helpful image is air‑traffic control: updates originate from user events, async completions, or transitions; lanes encode their priority; the work loop decides who lands first, who circles, and who is diverted. Almost everything in this file is that one job under different conditions.

From updates to render strategies

With the control-room role in mind, the next step is to see how work flows through it: a prepare phase (render) that computes the next tree, and a commit phase that applies it. The work loop enforces this split strictly.

Execution context: where are we right now?

The file starts by tracking which phase React is currently in via a tiny bitmask. This guards against illegal re‑entrancy, such as trying to flush work while already committing.

type ExecutionContext = number;

export const NoContext = /*             */ 0b000;
const BatchedContext = /*               */ 0b001;
export const RenderContext = /*         */ 0b010;
export const CommitContext = /*         */ 0b100;

let executionContext: ExecutionContext = NoContext;
let workInProgressRoot: FiberRoot | null = null;
let workInProgress: Fiber | null = null;
let workInProgressRootRenderLanes: Lanes = NoLanes;

Execution context here is a compact state machine: are we currently rendering, committing, or inside a batched update? Many helpers check this flag before acting. For example, sync flushes are refused while already in RenderContext or CommitContext, which prevents subtle re‑entrancy bugs.

The key idea: every major transition in the work loop is guarded by explicit context, not by scattered assumptions. That’s the same pattern the commit pipeline will use later.
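
As a minimal sketch (not React's actual implementation), the same bitmask-guard pattern looks like this in plain JavaScript. The context names mirror the snippet above, while flushSyncWork is a hypothetical stand-in for the guarded entry points:

```javascript
// Bitmask execution-context guard (simplified sketch, not React's code).
const NoContext = 0b000;
const BatchedContext = 0b001;
const RenderContext = 0b010;
const CommitContext = 0b100;

let executionContext = NoContext;

// Refuse a sync flush while rendering or committing, mirroring the
// re-entrancy guard the work loop applies before starting new work.
function flushSyncWork(fn) {
  if ((executionContext & (RenderContext | CommitContext)) !== NoContext) {
    return false; // illegal re-entrancy: defer instead of flushing
  }
  const prev = executionContext;
  executionContext |= BatchedContext;
  try {
    fn();
  } finally {
    executionContext = prev; // always restore, even if fn throws
  }
  return true;
}
```

The point is not the bit values but the single, explicit place where "may we do this right now?" is answered.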

From update to scheduled work

All userland state updates eventually reach scheduleUpdateOnFiber. This is the entry gate where the control room hears “new work just arrived” and decides what to do with it.

export function scheduleUpdateOnFiber(
  root: FiberRoot,
  fiber: Fiber,
  lane: Lane,
) {
  // If a render is suspended, this update might unblock it.
  if (
    (root === workInProgressRoot &&
      (workInProgressSuspendedReason === SuspendedOnData ||
        workInProgressSuspendedReason === SuspendedOnAction)) ||
    root.cancelPendingCommit !== null
  ) {
    prepareFreshStack(root, NoLanes);
    const didAttemptEntireTree = false;
    markRootSuspended(
      root,
      workInProgressRootRenderLanes,
      workInProgressDeferredLane,
      didAttemptEntireTree,
    );
  }

  // Mark that the root has a pending update.
  markRootUpdated(root, lane);

  if (
    (executionContext & RenderContext) !== NoContext &&
    root === workInProgressRoot
  ) {
    // Render-phase update: track separately
    workInProgressRootRenderPhaseUpdatedLanes = mergeLanes(
      workInProgressRootRenderPhaseUpdatedLanes,
      lane,
    );
  } else {
    // Normal (event) update path
    ensureRootIsScheduled(root);

    // Legacy sync root: flush right now
    if (
      lane === SyncLane &&
      executionContext === NoContext &&
      !disableLegacyMode &&
      (fiber.mode & ConcurrentMode) === NoMode
    ) {
      resetRenderTimer();
      flushSyncWorkOnLegacyRootsOnly();
    }
  }
}

Two design choices matter here:

  • Lanes encode priority. A Lane is React’s priority unit. Sync lanes are emergencies, transitions are normal traffic, retries and idle work are lower. The work loop never reasons about “this is a click” vs “this is a retry”; it reasons about lanes.
  • Behavior is context-aware. The same function behaves differently if we’re already rendering this root, if we’re suspended, or if we’re idle. Legacy roots short‑circuit to synchronous flushes; concurrent roots stay cooperative.
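
The lane idea can be sketched with plain bitmasks. This toy model is illustrative only; the lane names and helper shapes are assumptions, though the lowest-set-bit trick matches how ReactFiberLane.js finds the highest-priority lane:

```javascript
// Toy lane model: each lane is one bit; lower bits mean higher priority.
const SyncLane = 0b0001;
const InputLane = 0b0010;
const TransitionLane = 0b0100;
const RetryLane = 0b1000;

// A set of lanes is just the OR of its members.
const mergeLanes = (a, b) => a | b;
const includesLane = (set, lane) => (set & lane) !== 0;

// Isolate the lowest set bit: the highest-priority pending lane.
const getHighestPriorityLane = (lanes) => lanes & -lanes;
```

Because priority is encoded positionally, "what should land first?" is a single bitwise operation rather than a chain of if/else branches.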

Driving the render factory line

Once work is scheduled and the root scheduler decides it’s time, control funnels into performWorkOnRoot, the top-level driver for a single render attempt on a root.

export function performWorkOnRoot(
  root: FiberRoot,
  lanes: Lanes,
  forceSync: boolean,
): void {
  if ((executionContext & (RenderContext | CommitContext)) !== NoContext) {
    throw new Error('Should not already be working.');
  }

  const shouldTimeSlice =
    (!forceSync &&
      !includesBlockingLane(lanes) &&
      !includesExpiredLane(root, lanes)) ||
    checkIfRootIsPrerendering(root, lanes);

  const exitStatus: RootExitStatus = shouldTimeSlice
    ? renderRootConcurrent(root, lanes)
    : renderRootSync(root, lanes, true);

  // Handle in-progress, errors, or success; possibly retry synchronously
  // ...
  finishConcurrentRender(
    root,
    exitStatus,
    finishedWork,
    lanes,
    renderEndTime,
  );

  ensureRootIsScheduled(root);
}

Here the control room makes two strategic decisions:

  • Choose a render strategy. Based on lanes and timeouts, React picks renderRootConcurrent or renderRootSync. These are two implementations with the same contract, selected at runtime—a straightforward strategy pattern.
  • Loop and verify. After a render pass, React may re‑do work synchronously if it detects inconsistencies or error retries, before it ever commits. The work loop is “prepare until consistent,” not “prepare once and hope.”

Underneath, both render functions iterate the fiber tree via performUnitOfWork / completeUnitOfWork to produce a finished tree. The interesting part for architecture is how the loop reacts when that straight path is interrupted by Suspense, errors, or pings—which is where we turn next.
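
The strategy selection itself can be sketched as two functions behind one contract. Everything here, including the task-queue shape and the shouldYield callback, is a hypothetical simplification of performWorkOnRoot's decision:

```javascript
// Two render strategies with the same contract, chosen at runtime.
function renderSync(tasks) {
  // Run everything to completion in one uninterrupted pass.
  tasks.forEach((t) => t());
  return "complete";
}

function renderConcurrent(tasks, shouldYield) {
  // Run cooperatively, yielding back to the scheduler when asked.
  while (tasks.length > 0) {
    if (shouldYield()) return "in-progress";
    tasks.shift()();
  }
  return "complete";
}

// The "control room" picks a strategy from context, not call sites.
function performWork(tasks, { forceSync, hasBlockingWork }, shouldYield) {
  const timeSlice = !forceSync && !hasBlockingWork;
  return timeSlice ? renderConcurrent(tasks, shouldYield) : renderSync(tasks);
}
```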

Suspense, pings, and controlled retries

Real applications don’t just compute; they wait. Network, images, hydration, user actions—these all introduce pauses. The work loop integrates Suspense into the same state-driven model, so the factory line can “pause with intent” instead of stalling chaotically.

Handle throws by turning them into state

When a component throws—either a real error or one of Suspense’s special exceptions—control flows into handleThrow. Its main task is to classify what happened into a small set of suspended reasons and record the thrown value.

How handleThrow classifies exceptions
function handleThrow(root: FiberRoot, thrownValue: any): void {
  resetHooksAfterThrow();

  if (
    thrownValue === SuspenseException ||
    thrownValue === SuspenseActionException
  ) {
    thrownValue = getSuspendedThenable();
    workInProgressSuspendedReason = SuspendedOnImmediate;
  } else if (thrownValue === SuspenseyCommitException) {
    thrownValue = getSuspendedThenable();
    workInProgressSuspendedReason = SuspendedOnInstance;
  } else if (thrownValue === SelectiveHydrationException) {
    workInProgressSuspendedReason = SuspendedOnHydration;
  } else {
    const isWakeable =
      thrownValue !== null &&
      typeof thrownValue === 'object' &&
      typeof thrownValue.then === 'function';

    workInProgressSuspendedReason = isWakeable
      ? SuspendedOnDeprecatedThrowPromise
      : SuspendedOnError;
  }

  workInProgressThrownValue = thrownValue;

  const erroredWork = workInProgress;
  if (erroredWork === null) {
    workInProgressRootExitStatus = RootFatalErrored;
    logUncaughtError(
      root,
      createCapturedValueAtFiber(thrownValue, root.current),
    );
    return;
  }

  // ... profiling and DevTools markers
}

Instead of letting thrown values bubble as arbitrary control-flow jumps, the work loop normalizes them into SuspendedReason states. The rest of the system switches on these reasons, not on raw exceptions.

This is a reusable pattern: treat exceptional paths as explicit state transitions in your core loop, not as scattered try/catch blocks with ad‑hoc branching.
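
A minimal sketch of that pattern, with simplified sentinels and reason names standing in for React's internals:

```javascript
// Normalize thrown values into a closed set of reasons (illustrative only).
const SuspenseException = Symbol("SuspenseException");

function classifyThrow(thrownValue) {
  // Sentinel exception: an intentional suspend, not an error.
  if (thrownValue === SuspenseException) {
    return { reason: "SuspendedOnImmediate", value: null };
  }
  // Thenable: legacy "throw a promise" suspension.
  const isWakeable =
    thrownValue !== null &&
    typeof thrownValue === "object" &&
    typeof thrownValue.then === "function";
  return isWakeable
    ? { reason: "SuspendedOnDeprecatedThrowPromise", value: thrownValue }
    : { reason: "SuspendedOnError", value: thrownValue };
}
```

Downstream code switches on `reason`, never on the raw thrown value, which keeps the exceptional paths enumerable and testable.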

Pings: the wake-up mechanism

When work is suspended on a promise or resource, React needs to know when to retry. That’s handled by attachPingListener and pingSuspendedRoot. Conceptually, the root subscribes to a “pager” that fires when the resource is ready.

export function attachPingListener(
  root: FiberRoot,
  wakeable: Wakeable,
  lanes: Lanes,
) {
  let pingCache = root.pingCache;
  let threadIDs;
  if (pingCache === null) {
    pingCache = root.pingCache = new PossiblyWeakMap();
    threadIDs = new Set();
    pingCache.set(wakeable, threadIDs);
  } else {
    threadIDs = pingCache.get(wakeable);
    if (threadIDs === undefined) {
      threadIDs = new Set();
      pingCache.set(wakeable, threadIDs);
    }
  }
  if (!threadIDs.has(lanes)) {
    workInProgressRootDidAttachPingListener = true;

    // Memoize by lanes to prevent redundant listeners.
    threadIDs.add(lanes);
    const ping = pingSuspendedRoot.bind(null, root, wakeable, lanes);
    wakeable.then(ping, ping);
  }
}

Each wakeable + lanes combination gets at most one listener. When the promise resolves, pingSuspendedRoot clears the cache entry, marks the root as pinged for those lanes, and schedules work. If the ping affects the currently rendered lanes, the work loop may restart; otherwise, it simply adds lower-priority work.
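
A stripped-down sketch of that memoization, keeping the WeakMap-of-Sets shape from attachPingListener but with a hypothetical attachPing helper:

```javascript
// One listener per (wakeable, lanes) pair, memoized in a WeakMap of Sets.
const pingCache = new WeakMap();

function attachPing(wakeable, lanes, onPing) {
  let lanesSet = pingCache.get(wakeable);
  if (lanesSet === undefined) {
    lanesSet = new Set();
    pingCache.set(wakeable, lanesSet);
  }
  if (lanesSet.has(lanes)) {
    return false; // already subscribed for these lanes
  }
  lanesSet.add(lanes);
  // Ping on resolve OR reject: either way, it's time to retry.
  wakeable.then(onPing, onPing);
  return true;
}
```

The WeakMap keying means the cache entry disappears with the wakeable itself, so there is no explicit cleanup path to get wrong.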

Retries are throttled, not frantic

The work loop also controls how aggressively to react to pings. In finishConcurrentRender, it distinguishes normal updates from retry-only renders and may throttle commits for retry lanes using constants like FALLBACK_THROTTLE_MS and flags such as alwaysThrottleRetries.

When a render exits as RootSuspended for retry lanes only, React can delay committing a fallback via commitRootWhenReady or a timeout. That balances two UX extremes: constantly swapping in fallbacks (janky) versus waiting too long (feels frozen). Centralizing this policy in the work loop keeps transitions and Suspense behavior coherent across the app.
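
A rough sketch of that throttling policy. FALLBACK_THROTTLE_MS is a real constant in the file; the helper below and its signature are assumptions for illustration:

```javascript
// Throttle fallback commits for retry-only renders (illustrative sketch).
const FALLBACK_THROTTLE_MS = 300;

function msUntilCommit(isRetryOnly, lastFallbackCommitTime, now) {
  if (!isRetryOnly) {
    return 0; // normal updates commit immediately
  }
  // Delay the next fallback swap until the throttle window has elapsed,
  // so rapid retries don't flicker fallbacks in and out.
  const elapsed = now - lastFallbackCommitTime;
  return Math.max(0, FALLBACK_THROTTLE_MS - elapsed);
}
```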

The commit pipeline as a state machine

Once render produces a consistent tree—or decides to show fallbacks—the work loop hands off to the commit pipeline. This is where the article’s core lesson crystallizes: React treats commit as a multi-phase state machine, not a monolithic “do everything” function.

Capturing commit context

The entry point is commitRoot. Before it performs any new commit work, it flushes any leftover pending effects, ensures we’re not already in a commit, and captures everything needed for this commit into a set of pendingEffects* fields.

function commitRoot(
  root: FiberRoot,
  finishedWork: null | Fiber,
  lanes: Lanes,
  recoverableErrors: null | Array<CapturedValue<mixed>>,
  transitions: Array<Transition> | null,
  didIncludeRenderPhaseUpdate: boolean,
  spawnedLane: Lane,
  updatedLanes: Lanes,
  suspendedRetryLanes: Lanes,
  exitStatus: RootExitStatus,
  suspendedState: null | SuspendedState,
  suspendedCommitReason: SuspendedCommitReason,
  completedRenderStartTime: number,
  completedRenderEndTime: number,
): void {
  root.cancelPendingCommit = null;

  do {
    flushPendingEffects();
  } while (pendingEffectsStatus !== NO_PENDING_EFFECTS);

  if ((executionContext & (RenderContext | CommitContext)) !== NoContext) {
    throw new Error('Should not already be working.');
  }

  // Capture commit state in module-level "pendingEffects*" fields
  pendingFinishedWork = finishedWork;
  pendingEffectsRoot = root;
  pendingEffectsLanes = lanes;
  pendingEffectsRemainingLanes = remainingLanes;
  pendingPassiveTransitions = transitions;
  pendingRecoverableErrors = recoverableErrors;
  pendingDidIncludeRenderPhaseUpdate = didIncludeRenderPhaseUpdate;
  pendingEffectsStatus = PENDING_MUTATION_PHASE;

  // Decide whether to run gesture/view transition path or regular pipeline
  // ...
}

These pendingEffects* variables form a commit context—a transaction record that all commit phases consume. The file keeps this as module-level state instead of a single object, which is powerful but heavy: it’s a clear candidate for refactoring into a dedicated commit orchestrator.

Five explicit phases, one pipeline

Rather than doing everything in one shot, the commit pipeline advances through a small enum, pendingEffectsStatus. Each value represents a distinct phase with narrow responsibilities.

Each phase, its status flag, and its responsibilities:

  • Before mutation (PENDING_MUTATION_PHASE, on entry): run snapshot logic (getSnapshotBeforeUpdate) and read the host tree before changes.
  • Mutation (still PENDING_MUTATION_PHASE): run commitMutationEffects, manipulate the DOM/native tree, and swap root.current.
  • Layout (PENDING_LAYOUT_PHASE): run commitLayoutEffects (class lifecycles, layout effects).
  • After mutation / spawned work (PENDING_AFTER_MUTATION_PHASE, PENDING_SPAWNED_WORK): handle spawned work and integrate animation/view-transition hooks.
  • Passive (PENDING_PASSIVE_PHASE): run passive effects (useEffect) via flushPassiveEffectsImpl.

This is a classic pipeline pattern: a shared context flows through a sequence of phases, and a simple enum tracks “we are exactly here.” If an error occurs in any phase, React routes it to error boundaries with captureCommitPhaseError, which itself schedules new work through the same loop.
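
As a sketch, the pipeline pattern reduces to an enum, a context object, and narrow phase functions. The status names echo the phases above, but the code is illustrative, not React's:

```javascript
// Enum-driven commit pipeline: a shared context advances one phase at a time.
const NO_PENDING_EFFECTS = 0;
const PENDING_MUTATION_PHASE = 1;
const PENDING_LAYOUT_PHASE = 2;
const PENDING_PASSIVE_PHASE = 3;

function createCommit(finishedWork) {
  // The "transaction record" all phases consume.
  return { finishedWork, status: PENDING_MUTATION_PHASE, log: [] };
}

// Each flush* checks the status and advances exactly one step; calling
// them out of order is a harmless no-op rather than a corruption.
function flushMutation(commit) {
  if (commit.status !== PENDING_MUTATION_PHASE) return;
  commit.log.push("mutation");
  commit.status = PENDING_LAYOUT_PHASE;
}

function flushLayout(commit) {
  if (commit.status !== PENDING_LAYOUT_PHASE) return;
  commit.log.push("layout");
  commit.status = PENDING_PASSIVE_PHASE;
}

function flushPassive(commit) {
  if (commit.status !== PENDING_PASSIVE_PHASE) return;
  commit.log.push("passive");
  commit.status = NO_PENDING_EFFECTS;
}
```

Because each function guards on the status, a forced flush can simply call them all in order and rely on the enum to skip whatever has already run.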

Performance, operations, and advanced features are just states

The same state-machine approach powers performance tracking and advanced features. Profiling helpers (ReactProfilerTimer, ReactFiberPerformanceTrack) are threaded through render and commit, guarded by feature flags, without changing the core algorithm. The commit and passive phases already expose durations you can export as metrics.

The integration of view transitions and gesture transitions follows the same pattern: they hook into the commit pipeline and its status flags instead of living off to the side. For example, flushPendingEffects() aborts any in-progress view transition before flushing synchronously:

export function flushPendingEffects(): boolean {
  if (enableViewTransition && pendingViewTransition !== null) {
    stopViewTransition(pendingViewTransition);
    pendingViewTransition = null;
    pendingDelayedCommitReason = ABORTED_VIEW_TRANSITION_COMMIT;
  }
  flushGestureMutations();
  flushGestureAnimations();
  flushMutationEffects();
  flushLayoutEffects();
  flushSpawnedWork();
  return flushPassiveEffects();
}

Operationally, that means core invariants win over smooth transitions: a forced synchronous flush will abort an in-flight transition but keep the state machine consistent and emit warnings in development. Even power APIs are constrained by the same central loop.

The file also guards against infinite update loops: render and commit increment counters when they schedule new sync updates, and throwIfInfiniteUpdateLoopDetected() throws once the threshold is exceeded. This is the engine behind “Maximum update depth exceeded.” Again, a simple piece of state in the core loop guards an otherwise invisible failure mode.
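
A minimal sketch of that guard; the limit and function shapes are simplified stand-ins for the real counters:

```javascript
// Nested-update guard behind "Maximum update depth exceeded" (illustrative).
const NESTED_UPDATE_LIMIT = 50;
let nestedUpdateCount = 0;

// Called whenever a commit schedules another synchronous update.
function markNestedSyncUpdate() {
  nestedUpdateCount++;
}

function throwIfInfiniteUpdateLoopDetected() {
  if (nestedUpdateCount > NESTED_UPDATE_LIMIT) {
    nestedUpdateCount = 0; // reset so the app can recover after the throw
    throw new Error("Maximum update depth exceeded.");
  }
}
```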

Architectural lessons you can reuse

Stepping back, ReactFiberWorkLoop.js is not just a collection of tricks; it’s a coherent design for turning asynchronous, conflicting inputs into reliable, observable commits. You don’t need anything as complex in most systems, but the underlying patterns scale down well.

1. Separate prepare from commit

React’s strict render/commit boundary is the basis for concurrency, Suspense, and testing. In your own systems:

  • Have a pure “prepare” phase that computes the next state or plan without touching external systems.
  • Have a “commit” phase that applies that plan in a controlled order.
  • Prevent commit code from reentering prepare arbitrarily; route transitions through a central gate, like the work loop does.

2. Turn exceptions and edge cases into explicit state

The combination of SuspendedReason and RootExitStatus compresses many edge cases into a small set of states. The rest of the code switches on those enums rather than re-decoding every thrown value.

Whenever you see repeated boolean combos like isRetry, isHydrating, hasFallback, consider a closed enum or tagged union. Let one classifier function translate messy inputs into those states, and let your core loop reason in that higher-level vocabulary.
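
For example, a single classifier can collapse those booleans into a closed set of states (the names here are hypothetical, not React's):

```javascript
// One classifier translates messy boolean combos into a closed vocabulary.
function classifyRender({ isRetry, isHydrating, hasFallback }) {
  if (isHydrating) return "hydration";
  if (isRetry) return hasFallback ? "retry-with-fallback" : "retry";
  return "normal";
}

// The core loop then switches on one string, not three booleans.
function describeRender(flags) {
  switch (classifyRender(flags)) {
    case "hydration": return "resuming server HTML";
    case "retry-with-fallback": return "retrying behind a fallback";
    case "retry": return "retrying in place";
    default: return "ordinary update";
  }
}
```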

3. Centralize priority policy

React’s lanes give the work loop a single language for priority. Helpers like includesBlockingLane, includesOnlyTransitions, and includesRetryLane encode the app’s scheduling policy once and reuse it everywhere.

For production systems under load:

  • Define a small set of priority classes (sync, interactive, background, retry).
  • Route all work scheduling through a central function that understands those priorities.
  • Give that function enough context (similar to executionContext) to behave differently in test, degraded, or legacy modes.
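
A tiny sketch of such a central gate, with assumed priority classes and queue shape:

```javascript
// All work flows through one priority-aware scheduling function.
const PRIORITY = { sync: 0, interactive: 1, background: 2, retry: 3 };

const queue = [];

function scheduleWork(task, priority) {
  queue.push({ task, priority: PRIORITY[priority] });
  // Keep the queue ordered: lowest number = highest priority.
  queue.sort((a, b) => a.priority - b.priority);
}

function flushNext() {
  const entry = queue.shift();
  if (entry) entry.task();
  return entry !== undefined;
}
```

The payoff is the same as React's: call sites say *what class* of work this is, and one function decides *when* it runs.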

4. Use a pipeline state machine for multi-step side effects

The commit pipeline’s pendingEffectsStatus is a valuable template for any multi-stage side-effect flow—payments, provisioning, data migrations, rollouts:

  • Define a small enum for your phases.
  • Keep a “transaction context” object (React’s is spread across pendingEffects* fields).
  • Write narrow functions that check and advance the status one step at a time.

This makes it much easier to pause, resume, retry, or partially replay work without inventing a new code path each time.

5. Big core modules need strong guardrails

The report that this article is based on calls out real costs: ReactFiberWorkLoop.js is huge, relies on many globals (workInProgress*, pendingEffects*, counters, flags), and requires deep context to modify safely. React counters that with strong invariants and rich DEV-only warnings.

In your own “control room” modules:

  • Enforce invariants with runtime assertions (“only one active transaction at a time”), not just comments.
  • Expose observability hooks (logs, metrics, traces) at phase boundaries so you can see what the loop is doing under load.
  • Extract focused orchestrators once a single file starts to carry multiple intertwined concerns, like a dedicated commit orchestrator for multi-phase effects.

The primary lesson from React’s work loop is this: treat your update system as an explicit state machine with clear phases, priorities, and exceptional states. Once you do that, coordinating async work, retries, and side effects stops being ad‑hoc glue code and becomes a predictable, debuggable pipeline.

If you’re designing your own control room, start by naming your lanes (priorities), your phases (prepare and commit stages), and your exceptional states (reasons to pause or abort). From there, the rest of the architecture tends to fall into place.

Full Source Code

Direct source from the upstream repository.

packages/react-reconciler/src/ReactFiberWorkLoop.js

facebook/react • main


Thanks for reading! I hope this was useful. If you have questions or thoughts, feel free to reach out.

Content Creation Process: This article was generated via a semi-automated workflow using AI tools. I prepared the strategic framework, including specific prompts and data sources. From there, the automation system conducted the research, analysis, and writing. The content passed through automated verification steps before being finalized and published without manual intervention.

Mahmoud Zalt

About the Author

I’m Zalt, a technologist with 16+ years of experience, passionate about designing and building AI systems that move us closer to a world where machines handle everything and humans reclaim wonder.

Let's connect if you're working on interesting AI projects, looking for technical advice or want to discuss anything.
