We’re examining how three.js’s WebGLRenderer turns the low-level WebGL API into a coherent rendering engine. three.js is a JavaScript library for building 3D experiences in the browser, and WebGLRenderer is its several-thousand-line core, the file that decides what gets drawn, how, and when. I’m Mahmoud Zalt, an AI solutions architect, and we’ll use this file as a case study in designing a central "conductor" for a complex, stateful system: how it orchestrates helpers, where complexity leaks, and how to keep a necessary "God class" from becoming unmanageable.
From Scene Graph to Orchestra Pit
WebGLRenderer is best understood as an orchestra conductor. It doesn’t "play" instruments itself—textures, buffers, shaders, and GPU state live in helper modules—but it decides who plays, when, and with which score.
three.js project (simplified)
```text
src/
  math/
    Color.js
    Matrix4.js
    Vector3.js
    Vector4.js
    ColorManagement.js
  renderers/
    WebGLRenderer.js          <-- conductor (high-level facade)
    WebGLRenderTarget.js
    shaders/
      DFGLUTData.js
    webgl/
      WebGLState.js
      WebGLTextures.js
      WebGLPrograms.js
      WebGLBackground.js
      WebGLRenderLists.js
      WebGLRenderStates.js
      WebGLShadowMap.js
      WebGLObjects.js
      WebGLGeometries.js
      WebGLAttributes.js
      WebGLBindingStates.js
      WebGLBufferRenderer.js
      WebGLIndexedBufferRenderer.js
      WebGLMaterials.js
      WebGLInfo.js
      WebGLCapabilities.js
      WebGLClipping.js
      WebGLEnvironments.js
      WebGLAnimation.js
      WebGLUtils.js
      WebGLUniforms.js
      WebGLUniformsGroups.js
    webxr/
      WebXRManager.js
```
```text
Application code
  -> creates Scene, Camera, Meshes
  -> creates WebGLRenderer
  -> calls renderer.render( scene, camera )
  -> WebGLRenderer orchestrates helper modules and WebGL2
```
Architecturally this is a classic Facade: application code touches a small, friendly surface:
- `render( scene, camera )` – draw a frame
- `setSize()`, `setPixelRatio()` – configure output
- `setRenderTarget()` – render to a texture
- `readRenderTargetPixels()` / `readRenderTargetPixelsAsync()` – read pixels back
- `compile()` / `compileAsync()` – pre‑warm shaders
Under the hood, the renderer wires together helpers for capabilities, textures, shader programs, render states, shadows, environments, XR, and more. That division of labor is what lets a central file stay understandable: the conductor talks to sections (modules), not individual musicians (raw GL calls).
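To make the facade concrete, here is a minimal consumer that touches only the public surface listed above. This uses standard three.js API; the scene contents are purely illustrative:

```js
import * as THREE from 'three';

// The application only touches the facade; helpers stay invisible.
const renderer = new THREE.WebGLRenderer( { antialias: true } );
renderer.setPixelRatio( window.devicePixelRatio );
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera( 50, window.innerWidth / window.innerHeight, 0.1, 100 );
camera.position.z = 3;

const mesh = new THREE.Mesh(
	new THREE.BoxGeometry(),
	new THREE.MeshStandardMaterial( { color: 0x44aa88 } )
);
scene.add( mesh, new THREE.DirectionalLight( 0xffffff, 3 ) );

renderer.setAnimationLoop( () => renderer.render( scene, camera ) );
```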
How the Frame Pipeline Tells a Story
Once you see WebGLRenderer as a conductor, the render() method becomes the score for each frame. It follows a clear Template Method pipeline: a fixed high‑level sequence with extensibility at specific steps.
In simplified form, each frame does:
- Optionally route output through an internal HDR buffer for post‑processing.
- Update scene and camera matrices.
- Handle XR cameras and array cameras if present.
- Initialize a render state and a render list for this frame.
- Traverse the scene graph (`projectObject`) to cull and fill the render list.
- Sort opaque / transmissive / transparent items.
- Render the background.
- Render shadows.
- Render main scene passes (including transmission if needed).
- Resolve multisampled targets and generate mipmaps.
- Copy HDR output to the canvas when HDR is used.
- Pop state and render list stacks; coordinate XR and node‑based materials.
This is the renderer’s core story each frame: decide what’s visible, decide in what order, render with the right programs and state, then reset.
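As a rough sketch, the skeleton of render() looks like this. It is heavily condensed and paraphrased from the source (XR, HDR buffering, and stack handling are elided), but it shows the fixed sequence described above:

```js
// Paraphrased skeleton of render(); not the verbatim three.js source.
this.render = function ( scene, camera ) {

	// 1. Update matrices.
	if ( scene.matrixWorldAutoUpdate === true ) scene.updateMatrixWorld();
	if ( camera.parent === null && camera.matrixWorldAutoUpdate === true ) camera.updateMatrixWorld();

	// 2. Initialize per-frame render state and render list.
	currentRenderState = renderStates.get( scene, renderStateStack.length );
	currentRenderState.init( camera );
	currentRenderList = renderLists.get( scene, renderListStack.length );
	currentRenderList.init();

	// 3. Collect work: traverse, cull, then sort.
	projectObject( scene, camera, 0, _this.sortObjects );
	currentRenderList.finish();
	if ( _this.sortObjects === true ) currentRenderList.sort( _opaqueSort, _transparentSort );

	// 4. Execute work: shadows, background, then the main passes.
	shadowMap.render( currentRenderState.state.shadowsArray, scene, camera );
	background.render( currentRenderList, scene );
	renderScene( currentRenderList, scene, camera );

	// 5. Resolve MSAA targets, generate mipmaps, pop state/list stacks...

};
```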
Scene traversal as a GPU to‑do list
The heart of this story is projectObject. Think of the render list as a GPU to‑do list. projectObject walks the scene graph, decides which objects matter, and records draw items with enough metadata to render them later.
```js
function projectObject( object, camera, groupOrder, sortObjects ) {
if ( object.visible === false ) return;
const visible = object.layers.test( camera.layers );
if ( visible ) {
if ( object.isGroup ) {
groupOrder = object.renderOrder;
} else if ( object.isLOD ) {
if ( object.autoUpdate === true ) object.update( camera );
} else if ( object.isLightProbeGrid ) {
currentRenderState.pushLightProbeGrid( object );
} else if ( object.isLight ) {
currentRenderState.pushLight( object );
if ( object.castShadow ) currentRenderState.pushShadow( object );
} else if ( object.isSprite ) {
// ...frustum test and push into currentRenderList...
} else if ( object.isMesh || object.isLine || object.isPoints ) {
// ...frustum test, bounding sphere, groups, materials...
}
}
const children = object.children;
for ( let i = 0, l = children.length; i < l; i ++ ) {
projectObject( children[ i ], camera, groupOrder, sortObjects );
}
}
```
This embeds the main concepts every renderer needs:
- Visibility rules: `visible` flags and layer masks gate participation.
- Frustum culling: meshes, lines, and points are tested against the camera frustum via bounding volumes.
- Ordering hints: groups and `renderOrder` tweak draw ordering beyond depth.
- Per-type behavior: lights, LODs, sprites, and probes each feed different parts of the render state.
Crucially, traversal only collects work. It doesn’t bind programs, buffers, or issue draw calls. That keeps traversal logic testable and hot draw loops lean.
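What gets collected is deliberately small. Paraphrasing the entry structure that WebGLRenderLists builds (field names follow the real source, but treat this as a sketch):

```js
// Paraphrased shape of one render-list entry: just enough metadata
// to sort the item and draw it later.
const renderItem = {
	id: object.id,
	object: object,
	geometry: geometry,
	material: material,
	groupOrder: groupOrder,          // inherited from enclosing Groups
	renderOrder: object.renderOrder, // user-supplied ordering hint
	z: z,                            // view-space depth, used for sorting
	group: group                     // geometry group for multi-material meshes
};
```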
Rendering lists with predictable phases
Once the list is built and sorted, renderScene orchestrates actual drawing in phases:
```js
function renderScene( currentRenderList, scene, camera, viewport ) {
const { opaque, transmissive, transparent } = currentRenderList;
currentRenderState.setupLightsView( camera );
if ( _clippingEnabled === true )
clipping.setGlobalState( _this.clippingPlanes, camera );
if ( viewport ) state.viewport( _currentViewport.copy( viewport ) );
if ( opaque.length > 0 ) renderObjects( opaque, scene, camera );
if ( transmissive.length > 0 ) renderObjects( transmissive, scene, camera );
if ( transparent.length > 0 ) renderObjects( transparent, scene, camera );
state.buffers.depth.setTest( true );
state.buffers.depth.setMask( true );
state.buffers.color.setMask( true );
state.setPolygonOffset( false );
}
```
Lights are configured once per camera view, clipping is configured once, and item categories are rendered in a fixed order. That separation—"prepare common state" then "render sorted lists"—is what makes later additions (transmission, XR, array cameras) possible without rewriting render().
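The ordering policy itself is concrete and overridable. Paraphrased from the default comparators (consumers can swap them via setOpaqueSort / setTransparentSort):

```js
// Opaque items draw front-to-back (smaller z first) to maximize early-z rejection.
function painterSortStable( a, b ) {
	if ( a.groupOrder !== b.groupOrder ) return a.groupOrder - b.groupOrder;
	if ( a.renderOrder !== b.renderOrder ) return a.renderOrder - b.renderOrder;
	if ( a.material.id !== b.material.id ) return a.material.id - b.material.id;
	if ( a.z !== b.z ) return a.z - b.z;
	return a.id - b.id;
}

// Transparent items draw back-to-front (larger z first) so blending composes correctly.
function reversePainterSortStable( a, b ) {
	if ( a.groupOrder !== b.groupOrder ) return a.groupOrder - b.groupOrder;
	if ( a.renderOrder !== b.renderOrder ) return a.renderOrder - b.renderOrder;
	if ( a.z !== b.z ) return b.z - a.z;
	return a.id - b.id;
}
```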
The Shader Tailor: setProgram & getProgram
Traversal and sorting decide what to draw. The subtle part is deciding how to draw each item: which shader program, which uniforms, and which feature flags are active. That’s the job of getProgram and setProgram.
Think of setProgram as a tailor fitting suits. Every combination of material, geometry features, lights, fog, camera, and environment needs a "suit"—a compiled shader program with specific defines and uniforms. The tailor wants to reuse suits when possible and only sew a new one when something important changes.
Building and caching programs with getProgram
getProgram computes a parameter object that captures all relevant features (lights, shadows, environment maps, fog, clipping, morph targets, instancing, light probe grids, and more) and uses it as a cache key:
```js
function getProgram( material, scene, object ) {
if ( scene.isScene !== true ) scene = _emptyScene;
const materialProperties = properties.get( material );
const lights = currentRenderState.state.lights;
const shadowsArray = currentRenderState.state.shadowsArray;
const lightsStateVersion = lights.state.version;
const parameters = programCache.getParameters(
material,
lights.state,
shadowsArray,
scene,
object,
currentRenderState.state.lightProbeGridArray
);
const programCacheKey = programCache.getProgramCacheKey( parameters );
let programs = materialProperties.programs;
materialProperties.environment =
( material.isMeshStandardMaterial || material.isMeshLambertMaterial || material.isMeshPhongMaterial )
? scene.environment
: null;
materialProperties.fog = scene.fog;
const usePMREM = material.isMeshStandardMaterial ||
( material.isMeshLambertMaterial && ! material.envMap ) ||
( material.isMeshPhongMaterial && ! material.envMap );
materialProperties.envMap = environments.get(
material.envMap || materialProperties.environment,
usePMREM
);
if ( programs === undefined ) {
material.addEventListener( 'dispose', onMaterialDispose );
programs = new Map();
materialProperties.programs = programs;
}
let program = programs.get( programCacheKey );
if ( program !== undefined ) {
if (
materialProperties.currentProgram === program &&
materialProperties.lightsStateVersion === lightsStateVersion
) {
updateCommonMaterialProperties( material, parameters );
return program;
}
} else {
parameters.uniforms = programCache.getUniforms( material );
if ( _nodesHandler !== null && material.isNodeMaterial ) {
_nodesHandler.build( material, object, parameters );
}
material.onBeforeCompile( parameters, _this );
program = programCache.acquireProgram( parameters, programCacheKey );
programs.set( programCacheKey, program );
materialProperties.uniforms = parameters.uniforms;
}
materialProperties.currentProgram = program;
materialProperties.uniformsList = null;
return program;
}
```
Design choices worth copying:

- Programs are cached per material, keyed by a rich parameter object that includes scene and light state.
- Light state versioning (`lights.state.version`) skips work when lights haven’t changed.
- Node materials act as plug-ins via a `nodesHandler`, allowing custom shader graphs without forking the renderer.
- A hook-based escape hatch (`material.onBeforeCompile`) lets consumers tweak shaders without touching internals; see the sketch after this list.
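Since onBeforeCompile is the main consumer-facing escape hatch, a short usage sketch helps. This injects a uniform and patches one of three.js’s standard shader chunks; the wobble effect itself is just an example:

```js
const material = new THREE.MeshStandardMaterial( { color: 0x8899ff } );

material.onBeforeCompile = function ( shader ) {

	// Add a custom uniform without forking the renderer or the material.
	shader.uniforms.uTime = { value: 0 };

	shader.vertexShader = 'uniform float uTime;\n' + shader.vertexShader.replace(
		'#include <begin_vertex>',
		'#include <begin_vertex>\ntransformed.y += 0.1 * sin( uTime + position.x );'
	);

	material.userData.shader = shader; // keep a handle so uTime can be updated per frame

};
```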
The complexity smell in setProgram
setProgram is where complexity concentrates. It’s large, and its core is a long, intertwined feature‑check chain that decides whether the program must change:
```js
let needsProgramChange = false;
if ( material.version === materialProperties.__version ) {
if ( materialProperties.needsLights &&
( materialProperties.lightsStateVersion !== lights.state.version ) ) {
needsProgramChange = true;
} else if ( materialProperties.outputColorSpace !== colorSpace ) {
needsProgramChange = true;
} else if ( object.isBatchedMesh && materialProperties.batching === false ) {
needsProgramChange = true;
} else if ( ! object.isBatchedMesh && materialProperties.batching === true ) {
needsProgramChange = true;
} else if ( object.isBatchedMesh && materialProperties.batchingColor === true &&
object.colorTexture === null ) {
needsProgramChange = true;
}
// ...many more else-if blocks for instancing, skinning, morphs,
// envMap, fog, clipping planes, tone mapping, light probe grids...
} else {
needsProgramChange = true;
materialProperties.__version = material.version;
}
```
This is a hand‑rolled feature key comparison: "did any shader‑affecting feature change since last time?" It works, but it has predictable problems:
- New feature flags are easy to forget in one of the many branches.
- Combinations (instancing + morph + transmission + XR) become hard to reason about.
- Subtle bugs appear when a condition should trigger a program change but doesn’t.
It’s the usual smell: a central function becoming a "feature flag crossroads" instead of delegating to a focused component.
A cleaner direction: feature state helper
The analysis proposes a dedicated helper—conceptually a ProgramFeatureState—that encapsulates these comparisons:
```diff
- let needsProgramChange = false;
-
- if ( material.version === materialProperties.__version ) {
-
- if ( materialProperties.needsLights &&
- ( materialProperties.lightsStateVersion !== lights.state.version ) ) {
- needsProgramChange = true;
- } else if ( materialProperties.outputColorSpace !== colorSpace ) {
- needsProgramChange = true;
- }
- // ... many more else-if branches
-
- } else {
- needsProgramChange = true;
- materialProperties.__version = material.version;
- }
+ const featureState = new ProgramFeatureState(
+ materialProperties,
+ {
+ lights,
+ colorSpace,
+ clipping,
+ object,
+ envMap,
+ fog,
+ toneMapping,
+ morphTargets,
+ morphNormals,
+ morphColors,
+ morphTargetsCount,
+ lightProbeGridCount: currentRenderState.state.lightProbeGridArray.length
+ }
+ );
+
+ const needsProgramChange = featureState.needsProgramChange( material );
+
+ if ( material.version !== materialProperties.__version ) {
+ materialProperties.__version = material.version;
+ }
```
Behavior stays the same, but knowledge moves: "these are the inputs that influence program reuse" becomes data and methods on a dedicated object instead of a tangle of ifs inside setProgram.
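A minimal sketch of that direction follows. ProgramFeatureState does not exist in three.js; the class name, constructor shape, and predicates are all illustrative:

```js
// Hypothetical helper -- not part of three.js. Each shader-affecting input
// becomes one small, named predicate instead of one branch inside setProgram.
class ProgramFeatureState {

	constructor( materialProperties, inputs ) {

		this.props = materialProperties;
		this.inputs = inputs;

	}

	needsProgramChange( material ) {

		if ( material.version !== this.props.__version ) return true;

		const p = this.props, i = this.inputs;

		const checks = [
			() => p.needsLights && p.lightsStateVersion !== i.lights.state.version,
			() => p.outputColorSpace !== i.colorSpace,
			() => p.fog !== i.fog,
			() => p.envMap !== i.envMap
			// ...one predicate per feature: batching, instancing, skinning, morphs...
		];

		return checks.some( ( check ) => check() );

	}

}
```

Because every input arrives through the constructor, adding a new shader-affecting feature means adding one predicate in one place, and forgetting it becomes much easier to spot in review.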
Reading and Copying Pixels Safely
Beyond submitting work to the GPU, WebGLRenderer has to read from and copy between textures. These operations are deceptively simple in the API but are common sources of latency and complexity.
Async readback: avoiding frame hitches
readRenderTargetPixels wraps gl.readPixels synchronously and can stall the main thread badly. To avoid this, the renderer exposes readRenderTargetPixelsAsync, which uses WebGL2’s PIXEL_PACK_BUFFER and GPU fences so the CPU doesn’t block while the GPU reads:
```js
this.readRenderTargetPixelsAsync = async function (
renderTarget,
x, y,
width, height,
buffer,
activeCubeFaceIndex,
textureIndex = 0
) {
if ( ! ( renderTarget && renderTarget.isWebGLRenderTarget ) ) {
throw new Error(
'THREE.WebGLRenderer.readRenderTargetPixels: renderTarget is not THREE.WebGLRenderTarget.'
);
}
let framebuffer = properties.get( renderTarget ).__webglFramebuffer;
if ( renderTarget.isWebGLCubeRenderTarget && activeCubeFaceIndex !== undefined ) {
framebuffer = framebuffer[ activeCubeFaceIndex ];
}
if ( framebuffer ) {
if ( ( x >= 0 && x <= ( renderTarget.width - width ) ) &&
( y >= 0 && y <= ( renderTarget.height - height ) ) ) {
state.bindFramebuffer( _gl.FRAMEBUFFER, framebuffer );
const texture = renderTarget.textures[ textureIndex ];
const textureFormat = texture.format;
const textureType = texture.type;
if ( renderTarget.textures.length > 1 )
_gl.readBuffer( _gl.COLOR_ATTACHMENT0 + textureIndex );
if ( ! capabilities.textureFormatReadable( textureFormat ) ) {
throw new Error(
'THREE.WebGLRenderer.readRenderTargetPixelsAsync: renderTarget is not in RGBA or implementation defined format.'
);
}
if ( ! capabilities.textureTypeReadable( textureType ) ) {
throw new Error(
'THREE.WebGLRenderer.readRenderTargetPixelsAsync: renderTarget is not in UnsignedByteType or implementation defined type.'
);
}
const glBuffer = _gl.createBuffer();
_gl.bindBuffer( _gl.PIXEL_PACK_BUFFER, glBuffer );
_gl.bufferData( _gl.PIXEL_PACK_BUFFER, buffer.byteLength, _gl.STREAM_READ );
_gl.readPixels(
x, y, width, height,
utils.convert( textureFormat ),
utils.convert( textureType ),
0
);
const currFramebuffer = _currentRenderTarget !== null
? properties.get( _currentRenderTarget ).__webglFramebuffer
: null;
state.bindFramebuffer( _gl.FRAMEBUFFER, currFramebuffer );
const sync = _gl.fenceSync( _gl.SYNC_GPU_COMMANDS_COMPLETE, 0 );
_gl.flush();
await probeAsync( _gl, sync, 4 );
_gl.bindBuffer( _gl.PIXEL_PACK_BUFFER, glBuffer );
_gl.getBufferSubData( _gl.PIXEL_PACK_BUFFER, 0, buffer );
_gl.deleteBuffer( glBuffer );
_gl.deleteSync( sync );
return buffer;
} else {
throw new Error(
'THREE.WebGLRenderer.readRenderTargetPixelsAsync: requested read bounds are out of range.'
);
}
}
};
```
Key aspects:

- Strong validation of target type, pixel format, and bounds.
- Framebuffers are restored immediately after queuing `readPixels`, not after waiting for completion.
- A helper (`probeAsync`, sketched below) polls `gl.clientWaitSync` until the GPU finishes.
- Only after the fence signals is data pulled into the CPU buffer.
This doesn’t make readback cheaper, but it decouples GPU latency from your main thread. For interactive apps, that decoupling matters more than raw cost.
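The polling helper is worth seeing in full. Paraphrased from three.js’s internal probeAsync utility, it turns a GPU fence into a promise by re-checking gl.clientWaitSync on a timer instead of blocking:

```js
// Paraphrased from the internal utility: resolve once the fence signals,
// re-polling every `interval` milliseconds so the main thread never blocks.
function probeAsync( gl, sync, interval ) {

	return new Promise( function ( resolve, reject ) {

		function probe() {

			switch ( gl.clientWaitSync( sync, gl.SYNC_FLUSH_COMMANDS_BIT, 0 ) ) {

				case gl.WAIT_FAILED:
					reject();
					break;

				case gl.TIMEOUT_EXPIRED:
					setTimeout( probe, interval );
					break;

				default:
					resolve();

			}

		}

		setTimeout( probe, interval );

	} );

}
```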
The analysis pushes this further: in debug builds, the synchronous API could emit warnings for large regions or repeated use, nudging teams toward async paths in hot code.
Texture copying as a separable responsibility
copyTextureToTexture handles a lot of surface area: 2D, 3D, and array textures; depth vs color; compressed and uncompressed formats; CPU‑driven copies; GPU blits; and multi‑render‑target layouts. Currently this logic lives inline on WebGLRenderer, directly manipulating pixel store state:
```js
state.pixelStorei( _gl.UNPACK_ROW_LENGTH, image.width );
state.pixelStorei( _gl.UNPACK_IMAGE_HEIGHT, image.height );
state.pixelStorei( _gl.UNPACK_SKIP_PIXELS, minX );
state.pixelStorei( _gl.UNPACK_SKIP_ROWS, minY );
state.pixelStorei( _gl.UNPACK_SKIP_IMAGES, minZ );
// ...a lot of logic...
state.pixelStorei( _gl.UNPACK_ROW_LENGTH, currentUnpackRowLen );
state.pixelStorei( _gl.UNPACK_IMAGE_HEIGHT, currentUnpackImageHeight );
state.pixelStorei( _gl.UNPACK_SKIP_PIXELS, currentUnpackSkipPixels );
state.pixelStorei( _gl.UNPACK_SKIP_ROWS, currentUnpackSkipRows );
state.pixelStorei( _gl.UNPACK_SKIP_IMAGES, currentUnpackSkipImages );
```
This is a "God‑method" inside the God class: technically correct, but mixing framebuffer setup, pixel store bookkeeping, and type branching. The suggested fix is to extract a helper like WebGLTextureCopier:
```diff
- this.copyTextureToTexture = function ( srcTexture, dstTexture, srcRegion, dstPosition, srcLevel, dstLevel ) {
-
- // ~200 lines of logic mixing pixel store state, framebuffer setup,
- // and texture type branching
-
- };
+ const textureCopier = new WebGLTextureCopier( _gl, state, textures, utils, properties );
+
+ this.copyTextureToTexture = function ( srcTexture, dstTexture, srcRegion, dstPosition, srcLevel, dstLevel ) {
+
+ textureCopier.copy( srcTexture, dstTexture, srcRegion, dstPosition, srcLevel, dstLevel );
+
+ };
```
Texture copying is:
- Conceptually separate from the render pipeline.
- Easy to test via focused pixel comparisons.
- Likely to evolve as new texture types and hardware tricks appear.
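From the consumer’s side the extraction would be invisible. A hedged usage sketch following the signature in the diff above (in recent three.js the source region is a Box2 and the destination offset a Vector2, but verify against your version):

```js
// Copy a 64x64 region of srcTexture into dstTexture at offset (16, 16).
const srcRegion = new THREE.Box2(
	new THREE.Vector2( 0, 0 ),
	new THREE.Vector2( 64, 64 )
);

renderer.copyTextureToTexture( srcTexture, dstTexture, srcRegion, new THREE.Vector2( 16, 16 ) );
```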
Living With (and Taming) a God Class
WebGLRenderer is a textbook "God class": it knows about context management, pipeline orchestration, programs, readback, texture copying, XR, and more. Yet it remains maintainable, because it is big but disciplined.
Why this God class works better than most
Several practices keep this large file under control:
- Helper‑heavy design. It coordinates helpers (`WebGLTextures`, `WebGLPrograms`, `WebGLState`, `WebGLShadowMap`, `WebGLEnvironments`, `WebGLRenderLists`, `WebXRManager`) instead of owning all low-level details.
- Internal sectioning. Within the file, responsibilities are grouped: initialization, sizing/clearing, rendering, program management, render targets, copy/read helpers, the animation loop, and XR.
- Rich documentation. JSDoc and typedefs document the public API and many side effects, which is rare for a renderer core.
- Extension points. Hooks like `setOpaqueSort`, `setTransparentSort`, `setNodesHandler`, `material.onBeforeCompile`, `scene.overrideMaterial`, and `onBeforeRender`/`onAfterRender` callbacks allow customization without touching core logic.
So while the renderer is large by necessity—everything rendering flows through it—its responsibilities still cluster around a single purpose: "coordinate rendering and GPU resources." That cohesion is what saves it.
Encapsulation leaks and how to fix them
Some encapsulation leaks are still worth calling out because they generalize well:
- Direct access to nested internals such as `currentRenderState.state.lights`, `currentRenderState.state.shadowsArray`, or `currentRenderState.state.transmissionRenderTarget[ camera.id ]` couples the renderer to the exact shape of those internals.
- Use of IDs (e.g., camera IDs) as keys for internal maps makes changing ID semantics risky.

The proposed remedy is modest but effective: expose accessors on WebGLRenderStates and related helpers, e.g.:

- `renderState.getLights()` instead of `renderState.state.lights`
- `renderState.getTransmissionTarget( camera )` instead of indexing arrays directly
That way, WebGLRenderer depends on behavior, not representation. Future reshaping of state objects doesn’t cascade everywhere.
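A sketch of those accessors, with hypothetical method names (today the helper exposes the raw state object instead):

```js
// Hypothetical accessors on the render-state helper. WebGLRenderer would call
// these methods instead of reaching into `renderState.state` directly.
class WebGLRenderState {

	// ...existing construction and setup elided...

	getLights() {

		return this.state.lights;

	}

	getShadows() {

		return this.state.shadowsArray;

	}

	getTransmissionTarget( camera ) {

		// Camera-id keying becomes an implementation detail that can change
		// without touching WebGLRenderer.
		return this.state.transmissionRenderTarget[ camera.id ];

	}

}
```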
Refactoring a central renderer without breaking the world
The refactor suggestions stay intentionally incremental, which is the only realistic approach for a central, widely used component:
- Extract narrowly focused helpers (e.g., `WebGLTextureCopier`, `ProgramFeatureState`, possibly a readback helper) while leaving the public API untouched.
- Wrap direct state access in methods on existing helper modules instead of introducing new layers.
- Improve debug behavior and documentation for dangerous APIs like synchronous readback.
This is a general pattern: for mature cores, think "carve out organs" instead of "replace the heart." You gradually move complex responsibilities outward, behind new seams, and only later consider changing public contracts.
Operating at Scale: Hot Paths, XR, and Context Loss
Beyond architecture, the renderer is built to survive large scenes, XR sessions, and awkward events like context loss. The analysis highlights a few operational lessons that apply to any performance‑critical system.
Hot paths and scaling behavior
The main hot paths are:
- `render()` – runs every frame; coordinates everything.
- `projectObject()` – O(N) traversal over scene objects.
- `setProgram()` – somewhat proportional to unique material/object combinations and features.
- `renderBufferDirect()` – per draw call; binds buffers and issues GL calls.
- `copyTextureToTexture()` and the readback helpers – heavy in post-processing or analysis workflows.
Frame time scales roughly linearly with the number of visible render items plus lights and shadow‑casting lights. Sorting is O(N log N) but tends to matter only for very large N.
What to measure in production
You don’t need to expose every internal counter to run this at scale. A small set of metrics gives a usable "rendering SLO" view:
| Metric | Why it matters | Typical target hint |
|---|---|---|
| `renderer.info.render.calls` | Draw call count; correlates with CPU overhead and driver latency. | Keep modest per frame, especially on mobile/XR. |
| `renderer.info.render.triangles` | Geometry complexity; stresses vertex processing and bandwidth. | Track against frame time as scenes grow. |
| Frame time (ms) | End-to-end frame duration. | Match your FPS target (e.g., ~16.6 ms for 60 FPS). |
| Shader compile time (ms, aggregate) | First-frame or on-demand hitches. | Push compilation into loading or `compileAsync()` phases. |
| Async readback latency (ms) | Impact of readbacks on responsiveness. | Keep low and out of critical loops. |
Even without deep WebGL knowledge, these few metrics are enough to guide profiling and capacity decisions.
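The first two rows come from renderer.info, a real, always-on three.js API, so they cost almost nothing to collect. A minimal sampling sketch (only the metrics.record sink is invented):

```js
renderer.info.autoReset = false; // accumulate stats per frame; reset manually

function sampleFrameStats( frameTimeMs ) {

	const { calls, triangles } = renderer.info.render;
	const { geometries, textures } = renderer.info.memory;

	metrics.record( { calls, triangles, geometries, textures, frameTimeMs } ); // hypothetical sink

	renderer.info.reset();

}
```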
XR integration and the animation loop
The renderer centralizes the animation loop to coordinate regular and XR rendering via setAnimationLoop. Internally it forwards the callback to WebXRManager as well as a WebGLAnimation helper:
```js
const animation = new WebGLAnimation();
animation.setAnimationLoop( onAnimationFrame );
this.setAnimationLoop = function ( callback ) {
onAnimationFrameCallback = callback;
xr.setAnimationLoop( callback );
( callback === null ) ? animation.stop() : animation.start();
};
```
By owning the loop, WebGLRenderer can coordinate XR frame timing, HDR output, and other pipeline details transparently. Consumers just set a callback.
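Consumer usage is deliberately trivial, and the same code works with or without an active XR session:

```js
// Hand the loop to the renderer instead of calling requestAnimationFrame
// yourself; during an XR session the callback follows the session's timing.
renderer.setAnimationLoop( ( time ) => {

	mesh.rotation.y = time / 1000; // `mesh` from the earlier facade example
	renderer.render( scene, camera );

} );

// renderer.setAnimationLoop( null ); // stops the loop
```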
Surviving context loss
WebGL contexts can be lost and later restored. The renderer prepares for this by registering handlers on the canvas before creating the context:
```js
canvas.addEventListener( 'webglcontextlost', onContextLost, false );
canvas.addEventListener( 'webglcontextrestored', onContextRestore, false );
canvas.addEventListener( 'webglcontextcreationerror', onContextCreationError, false );
function onContextLost( event ) {
event.preventDefault();
log( 'WebGLRenderer: Context Lost.' );
_isContextLost = true;
}
function onContextRestore( /* event */ ) {
log( 'WebGLRenderer: Context Restored.' );
_isContextLost = false;
const infoAutoReset = info.autoReset;
const shadowMapEnabled = shadowMap.enabled;
const shadowMapAutoUpdate = shadowMap.autoUpdate;
const shadowMapNeedsUpdate = shadowMap.needsUpdate;
const shadowMapType = shadowMap.type;
initGLContext();
info.autoReset = infoAutoReset;
shadowMap.enabled = shadowMapEnabled;
shadowMap.autoUpdate = shadowMapAutoUpdate;
shadowMap.needsUpdate = shadowMapNeedsUpdate;
shadowMap.type = shadowMapType;
}
```
The pattern is simple and reusable: explicitly capture a small set of "semantic" settings you care about across resets (shadow map configuration, info flags), re‑initialize low‑level state from scratch, then restore those semantics.
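You can exercise this path deliberately in development with the standard WEBGL_lose_context extension (a real WebGL extension; the one-second delay is arbitrary):

```js
// Force a context loss, then restore it to test the handlers above.
const ext = renderer.getContext().getExtension( 'WEBGL_lose_context' );

if ( ext ) {

	ext.loseContext();
	setTimeout( () => ext.restoreContext(), 1000 );

}
```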
Architectural Takeaways
The main lesson from WebGLRenderer is that you can have a necessary central class without turning it into an unmanageable blob—if you treat it as a conductor for helpers and constantly carve complexity outward.
- Centralize orchestration, not implementation. Let your renderer-equivalent coordinate specialized modules (programs, textures, state, XR) instead of owning all low-level details. The core file stays readable even as capabilities grow.
- Separate "collect work" from "execute work". The render list pattern—`projectObject` collects visible items, `renderScene` renders sorted lists—generalizes to any pipeline where discovery and execution can be decoupled.
- Represent feature combinations explicitly. As soon as you have many feature flags influencing behavior (like shader programs), move that knowledge into a dedicated feature state or key object instead of letting a central method accumulate `if`/`else` trees.
- Expose slow paths clearly and offer better alternatives. Pair synchronous APIs (`readRenderTargetPixels`) with async or buffered equivalents (`readRenderTargetPixelsAsync`) and make their trade-offs explicit, ideally with debug-time guidance.
- Keep observability near the metal. A handful of metrics—draw calls, triangles, frame time, shader compile time, readback latency—are enough to operate a renderer-class system effectively without drowning users in details.
- Refactor the center in small, safe steps. For a widely used core, start by extracting helpers like `WebGLTextureCopier` or `ProgramFeatureState` and by wrapping internal state behind methods. Only after those seams are proven should you consider changing public APIs.
If you treat WebGLRenderer as a blueprint rather than a curiosity, it shows how to turn a noisy, stateful API like WebGL into a predictable, extensible engine. The next time you face a large central class that "has to" know about everything, the question isn’t "how do we avoid it entirely?" but "how do we make it a conductor with strong sections and clean cues?" This renderer shows that answer in working code.