We’re examining how Node’s internal HTTP/2 engine turns nghttp2 sessions into familiar Node streams and events. If you’ve ever called http2.connect() or createSecureServer() and everything “just worked”, you were leaning on this adapter. I’m Mahmoud Zalt, an AI solutions architect, and we’ll use lib/internal/http2/core.js as a case study in designing a clean, reliable protocol adapter around a high‑performance native core.
We’ll treat this file as a story about translating low‑level HTTP/2 frames into a developer‑friendly API—and how Node keeps that translation maintainable, efficient, and observable at scale.
The HTTP/2 Engine Mental Model
Node’s HTTP/2 implementation sits between raw sockets and nghttp2 on one side, and the user‑facing HTTP/2 API on the other. Understanding that middle layer is the key to understanding the rest of the file.
project-root/
lib/
internal/
http2/
core.js <-- HTTP/2 sessions, streams, servers, connect()
util.js (header/settings utilities)
compat.js (HTTP/1-style API on top of HTTP/2)
stream_base_commons.js (stream/native bridge helpers)
src/
node_http2.* (native http2 binding, nghttp2 integration)
Call graph (simplified):
createSecureServer/createServer/connect
| | \
v v v
Http2SecureServer Http2Server ClientHttp2Session
| | |
| connectionListener request()
v | |
ServerHttp2Session <----+------> Http2Stream (Server/Client)
| ^ ^ |
| | | |
v | | v
native Http2Session <--------- native Http2Stream
^ ^
| callbacks via binding.setCallbackFunctions
+-- onSessionHeaders, onStreamClose, onSettings, onGoawayData, ...
The main roles:
- Http2Session: owns a TCP/TLS socket and all HTTP/2 streams on it. This is the connection‑level dispatcher.
- Http2Stream: represents a single bidirectional HTTP/2 exchange, exposed as a Node Duplex.
- Http2Server/Http2SecureServer: wrap the session layer and expose events like 'stream' to your application.
- connect(): client entry point that builds a ClientHttp2Session on top of an appropriate socket.
From Socket to Session
With the roles clear, we can follow how a raw socket becomes an HTTP/2 session on both server and client. This is where Node hides TLS, ALPN, and protocol selection behind simple APIs.
Server side: ALPN, fallback, and session creation
On the server, createServer() and createSecureServer() eventually delegate to a connectionListener. That listener decides whether a socket should speak HTTP/2, fall back to HTTP/1.1, or be rejected.
function connectionListener(socket) {
const options = this[kOptions] || {};
if (socket.alpnProtocol === false || socket.alpnProtocol === 'http/1.1') {
// Fallback to HTTP/1.1
if (options.allowHTTP1 === true) {
socket.server[kIncomingMessage] = options.Http1IncomingMessage;
socket.server[kServerResponse] = options.Http1ServerResponse;
return httpConnectionListener.call(this, socket);
}
// Unknown or disallowed protocol: 'unknownProtocol' handling elided here;
// if nothing handles it, the socket is eventually destroyed.
return;
}
// HTTP/2: set up the session
const session = new ServerHttp2Session(options, socket, this);
session.on('stream', sessionOnStream);
session.on('error', sessionOnError);
session.on('priority', sessionOnPriority);
// Don't count the internal 'priority' listener registered just above.
session[kNativeFields][kSessionPriorityListenerCount]--;
if (this.timeout)
session.setTimeout(this.timeout, sessionOnTimeout);
socket[kServer] = this;
this.emit('session', session);
}
connectionListener routes a new TLS connection to HTTP/1.1 or HTTP/2 and constructs a ServerHttp2Session when appropriate.

Key ideas in this entry point:
- ALPN drives protocol selection: if TLS ALPN reports h2, the socket becomes an HTTP/2 session. Otherwise, the server may fall back to HTTP/1.1 via httpConnectionListener if allowHTTP1 is set.
- Fallback is explicit, not magical: the same server object can serve HTTP/1.1 and HTTP/2, but only when allowHTTP1 is enabled and the socket actually negotiated HTTP/1.1.
- Sessions are tracked per server: each server keeps a set of its sessions (kSessions), enabling later features like graceful shutdown and resource accounting.
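The routing policy above can be distilled into a tiny pure function. This is a sketch for illustration only — routeProtocol and its return values are invented names; the real logic lives inline in connectionListener():

```javascript
// Toy model of connectionListener()'s routing policy. The function name and
// return strings are illustrative, not Node internals.
function routeProtocol(alpnProtocol, options = {}) {
  // No ALPN result (false), or HTTP/1.1 negotiated: fall back only if allowed.
  if (alpnProtocol === false || alpnProtocol === 'http/1.1') {
    return options.allowHTTP1 === true ? 'http1-fallback' : 'reject';
  }
  // Anything else (normally 'h2') becomes an HTTP/2 session.
  return 'http2-session';
}
```

The point of modeling it this way: the decision depends on exactly two inputs (the ALPN result and one option), which is what makes the fallback behavior easy to reason about.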
Client side: connect() as protocol router
On the client, connect() plays the same role in reverse. It validates options, resolves authority, chooses TCP vs TLS, and then wires a ClientHttp2Session onto the resulting socket.
function connect(authority, options, listener) {
if (typeof options === 'function') {
listener = options;
options = undefined;
}
assertIsObject(options, 'options');
options = { ...options };
assertIsArray(options.remoteCustomSettings, 'options.remoteCustomSettings');
if (options.remoteCustomSettings) {
options.remoteCustomSettings = [ ...options.remoteCustomSettings ];
if (options.remoteCustomSettings.length > MAX_ADDITIONAL_SETTINGS)
throw new ERR_HTTP2_TOO_MANY_CUSTOM_SETTINGS();
}
if (typeof authority === 'string')
authority = new URL(authority);
const protocol = authority.protocol || options.protocol || 'https:';
const port = '' + (authority.port !== '' ?
authority.port : (authority.protocol === 'http:' ? 80 : 443));
let host = 'localhost';
// host resolution elided...
let socket;
if (typeof options.createConnection === 'function') {
socket = options.createConnection(authority, options);
} else {
switch (protocol) {
case 'http:':
socket = net.connect({ port, host, ...options });
break;
case 'https:':
socket = tls.connect(port, host, initializeTLSOptions(options, net.isIP(host) ? undefined : host));
break;
default:
throw new ERR_HTTP2_UNSUPPORTED_PROTOCOL(protocol);
}
}
const session = new ClientHttp2Session(options, socket);
session[kAuthority] = `${options.servername || host}:${port}`;
session[kProtocol] = protocol;
if (typeof listener === 'function')
session.once('connect', listener);
return session;
}
connect() centralizes URL handling, socket creation, and ClientHttp2Session wiring into a single API.

The important part here isn’t the branching itself, but the fact that the branching is contained. Application code works with sessions and streams; connect() is the one place that knows about schemes, ports, TLS options, and custom createConnection() hooks.
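The scheme and port defaulting at the top of connect() can be isolated into a standalone sketch. resolveTarget is an invented helper name, but the defaulting rules mirror the excerpt above:

```javascript
// Sketch of connect()'s authority handling: scheme and port defaults.
// resolveTarget is a hypothetical name; core.js does this inline.
function resolveTarget(authority, options = {}) {
  if (typeof authority === 'string')
    authority = new URL(authority);
  // Scheme: explicit URL protocol, then options.protocol, then 'https:'.
  const protocol = authority.protocol || options.protocol || 'https:';
  // Port: explicit URL port, else the scheme's well-known default.
  const port = '' + (authority.port !== '' ?
    authority.port : (authority.protocol === 'http:' ? 80 : 443));
  return { protocol, port };
}
```

Note that WHATWG URL elides default ports (new URL('http://x.test').port is ''), which is why the empty‑string check is the right test here.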
Streams: Where HTTP/2 Meets Node Streams
Once a session exists, the core problem becomes: how do we turn nghttp2 callbacks and HTTP/2 frames into Node streams and events? This is where the adapter work happens in earnest.
HEADERS → streams and events
The native binding registers callbacks like onSessionHeaders into JS. Whenever nghttp2 delivers a HEADERS block, this function decides whether to create a new Http2Stream, which events to emit, and how to treat the readable side.
function onSessionHeaders(handle, id, cat, flags, headers, sensitiveHeaders) {
const session = this[kOwner];
if (session.destroyed)
return;
const type = session[kType];
session[kUpdateTimer]();
const streams = session[kState].streams;
const endOfStream = !!(flags & NGHTTP2_FLAG_END_STREAM);
let stream = streams.get(id);
const obj = toHeaderObject(headers, sensitiveHeaders);
if (stream === undefined) {
if (session.closed) {
handle.rstStream(NGHTTP2_REFUSED_STREAM);
handle.destroy();
return;
}
if (type === NGHTTP2_SESSION_SERVER) {
stream = new ServerHttp2Stream(session, handle, id, {}, obj);
if (endOfStream) {
stream.push(null);
}
if (obj[HTTP2_HEADER_METHOD] === HTTP2_METHOD_HEAD) {
stream.end();
stream[kState].flags |= STREAM_FLAGS_HEAD_REQUEST;
}
} else {
stream = new ClientHttp2Stream(session, handle, id, {});
if (endOfStream) {
stream.push(null);
}
stream.end();
}
if (endOfStream)
stream[kState].endAfterHeaders = true;
process.nextTick(emit, session, 'stream', stream, obj, flags, headers);
  } else {
    // Subsequent HEADERS: map to 'headers' | 'response' | 'push' | 'trailers'.
    if (endOfStream) {
      stream.push(null);
    }
  }
}
onSessionHeaders is the bridge between nghttp2 callbacks and Node’s 'stream'/'headers'/'response' events.

The adapter work here is deliberate:
- Raw header pairs become a plain object via toHeaderObject().
- First HEADERS for an ID create either a ServerHttp2Stream or ClientHttp2Stream, then emit a 'stream' event on the session (which the server forwards to your handler).
- HEAD requests are special‑cased: the writable side ends immediately so no body is sent.
- END_STREAM is handled by pushing null into the readable side, closing it at the right time.
Subsequent HEADERS for an existing stream are mapped to a small set of high‑level events ('headers', 'response', 'push', 'trailers') based on category, status code, and flags. Low‑level HTTP/2 semantics stay inside this adapter; your application sees a predictable event vocabulary.
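That mapping can be sketched as a small lookup. The HCAT_* constants here are illustrative stand‑ins for nghttp2’s header‑category values, and the function is a simplification of the real switch in core.js:

```javascript
// Sketch of the subsequent-HEADERS event mapping. Constants are illustrative
// stand-ins for NGHTTP2_HCAT_* categories and the END_STREAM flag.
const HCAT_RESPONSE = 1;
const HCAT_PUSH_RESPONSE = 2;
const HCAT_HEADERS = 3;
const FLAG_END_STREAM = 0x1;

function headersEvent(cat, status, flags) {
  switch (cat) {
    case HCAT_RESPONSE:
      // Interim (1xx) responses surface as 'headers'; final ones as 'response'.
      return (status >= 100 && status < 200) ? 'headers' : 'response';
    case HCAT_PUSH_RESPONSE:
      return 'push';
    case HCAT_HEADERS:
      // A HEADERS block carrying END_STREAM after data carries trailers.
      return (flags & FLAG_END_STREAM) ? 'trailers' : 'headers';
    default:
      return undefined;
  }
}
```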
Write path: data + shutdown as a single operation
The write side of Http2Stream is another subtle adapter: it has to decide when to send the final DATA frame with END_STREAM set, and it has to coordinate that with Node’s writable stream lifecycle.
[kWriteGeneric](writev, data, encoding, cb) {
if (this.pending) {
this.once(
'ready',
this[kWriteGeneric].bind(this, writev, data, encoding, cb),
);
return;
}
if (this.destroyed)
return;
this[kUpdateTimer]();
if (!this.headersSent)
this[kProceed]();
let waitingForWriteCallback = true;
let waitingForEndCheck = true;
let writeCallbackErr;
let endCheckCallbackErr;
const done = () => {
if (waitingForEndCheck || waitingForWriteCallback) return;
const err = aggregateTwoErrors(endCheckCallbackErr, writeCallbackErr);
if (err) {
this.destroy(err);
}
cb(err);
};
const writeCallback = (err) => {
waitingForWriteCallback = false;
writeCallbackErr = err;
done();
};
const endCheckCallback = (err) => {
waitingForEndCheck = false;
endCheckCallbackErr = err;
done();
};
// After the last chunk is buffered, maybe close the writable side.
process.nextTick(() => {
if (writeCallbackErr ||
!this._writableState.ending ||
this._writableState.buffered.length ||
(this[kState].flags & STREAM_FLAGS_HAS_TRAILERS))
return endCheckCallback();
shutdownWritable.call(this, endCheckCallback);
});
const req = writev ?
writevGeneric(this, data, writeCallback) :
writeGeneric(this, data, encoding, writeCallback);
trackWriteState(this, req.bytes);
}
This is representative of how core.js handles complexity: it doesn’t build a huge explicit state machine, but it does treat related async actions (write and shutdown) as a unit by aggregating their errors in one place and using process.nextTick() to order them correctly.
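The two‑callback join at the heart of [kWriteGeneric] is reusable on its own. Here is a distilled sketch — joinTwo is an invented name, and where core.js combines both errors via aggregateTwoErrors, this version simply reports the first one present:

```javascript
// Sketch of the join pattern: done() fires exactly once, only after both
// callbacks have completed, with their errors reconciled in one place.
function joinTwo(cb) {
  let pendingWrite = true;
  let pendingEndCheck = true;
  let writeErr = null;
  let endCheckErr = null;
  const done = () => {
    if (pendingWrite || pendingEndCheck) return;
    cb(endCheckErr ?? writeErr ?? null);
  };
  const writeCallback = (err) => {
    pendingWrite = false;
    writeErr = err ?? null;
    done();
  };
  const endCheckCallback = (err) => {
    pendingEndCheck = false;
    endCheckErr = err ?? null;
    done();
  };
  return { writeCallback, endCheckCallback };
}
```

Either callback may fire first, in either tick; the flags guarantee the caller sees a single completion with a single error value.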
File Responses Without Loading Into RAM
Real HTTP/2 servers serve a lot of files. core.js includes a focused mini‑subsystem for this: respondWithFile(), respondWithFD(), and helpers that stream files directly from disk into HTTP/2 streams without pulling them through JS buffers.
Plugging a file descriptor into an HTTP/2 stream
The core helper, processRespondWithFD(), turns a file descriptor and headers into a native‑driven data flow over the stream.
function processRespondWithFD(self, fd, headers, offset = 0, length = -1,
streamOptions = 0) {
const state = self[kState];
state.flags |= STREAM_FLAGS_HEADERS_SENT;
let headersList;
try {
headersList = buildNgHeaderString(headers, assertValidPseudoHeaderResponse);
} catch (err) {
self.destroy(err);
return;
}
self[kSentHeaders] = headers;
// Close the writable side from the JS perspective.
self._final = null;
self.end();
const ret = self[kHandle].respond(headersList, streamOptions);
if (ret < 0) {
self.destroy(new NghttpError(ret));
return;
}
defaultTriggerAsyncIdScope(self[async_id_symbol], startFilePipe,
self, fd, offset, length);
}
processRespondWithFD() sends headers, ends the JS writable side, then lets native code stream the file contents.

Once headers are sent, startFilePipe() uses internal bindings to stream from the file descriptor into the HTTP/2 stream entirely at the native layer. That keeps memory usage bounded and avoids copying large buffers through JS, while still letting your code control headers and status.
User‑facing helpers and a design smell
Two public‑ish helpers sit on top of this primitive:
- respondWithFD(fd, headers, options): respond from an existing file descriptor (the caller owns closing it).
- respondWithFile(path, headers, options): open the path, stat it, then respond and manage the file descriptor lifecycle.
Internally they funnel through doSendFD and doSendFileFD. Both helpers:
- Build a statOptions object.
- Verify the descriptor represents a regular file.
- Apply an optional statCheck hook to validate or modify headers.
- Compute and set Content-Length from stat.size, offset, and length.
- Eventually call processRespondWithFD().
There is a design smell worth calling out here: much of this logic is duplicated between doSendFD and doSendFileFD. Conceptually, both need the same algorithm for “turn (fd, stat, headers, options) into a streamed response”, but they differ in ownership (who closes the descriptor) and how the fd is obtained.
| Current shape | Cleaner shape |
|---|---|
| doSendFD and doSendFileFD each repeat the regular‑file check, statCheck hook, and Content‑Length computation | One shared helper implements the algorithm once, parameterized by how the fd is obtained and who closes it |
Despite the duplication, the design gets the important part right: files are streamed from disk directly to the network, your code gets a statCheck hook for custom behavior, and the HTTP/2 stream abstraction remains intact.
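To make the “cleaner shape” concrete, here is a hypothetical sketch of the shared core — buildFileResponseHeaders is not Node code, and the toy stat object uses a boolean isFile field rather than fs.Stats#isFile():

```javascript
// Hypothetical shared helper for the duplicated doSendFD/doSendFileFD logic:
// validate the stat result and compute the response byte count once, leaving
// fd acquisition and ownership to thin wrappers.
function buildFileResponseHeaders(headers, stat, offset = 0, length = -1) {
  // (toy: `isFile` is a plain boolean here, not the fs.Stats method)
  if (!stat.isFile)
    throw new Error('responding with a file requires a regular file');
  // Clamp the requested byte range to what the file actually contains,
  // mirroring the Content-Length rules described above.
  const available = stat.size - offset;
  const bytes = length < 0 ? available : Math.min(length, available);
  return { ...headers, 'content-length': bytes };
}
```

Each wrapper would then only open/validate its descriptor and decide whether to close it, calling this helper for everything else.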
Timeouts, Backpressure, and Reliability
The most interesting parts of core.js aren’t in the happy path; they’re in how sessions and streams die. The file centralizes teardown, treats timeouts as signals of stalled I/O rather than just “too much time passed”, and keeps backpressure visible at both JS and native layers.
Centralized session teardown
Http2Session.destroy() and multiple error paths converge on a single function: closeSession(). This function owns the rules for how a session shuts down, which streams get which errors, and how the native handle and socket are cleaned up.
function closeSession(session, code, error) {
const state = session[kState];
state.flags |= SESSION_FLAGS_DESTROYED;
state.destroyCode = code;
// Clear timeout and remove timeout listeners.
session.setTimeout(0);
session.removeAllListeners('timeout');
// Destroy any pending and open streams.
if (state.pendingStreams.size > 0 || state.streams.size > 0) {
const cancel = new ERR_HTTP2_STREAM_CANCEL(error);
state.pendingStreams.forEach((stream) => stream.destroy(cancel));
state.streams.forEach((stream) => stream.destroy(error));
}
const socket = session[kSocket];
const handle = session[kHandle];
if (handle !== undefined) {
handle.ondone = finishSessionClose.bind(null, session, error);
handle.destroy(code, socket.destroyed);
} else {
finishSessionClose(session, error);
}
}
closeSession() is the authoritative shutdown path for sessions and their streams.

This centralization carries several guarantees:
- The destroyed flag and code are set in one place, so higher‑level logic can reliably ask “is this session dead, and why?”
- Pending streams (never got an ID) are cancelled with a specific ERR_HTTP2_STREAM_CANCEL, distinguishing them from streams that started and then failed.
- Socket and native handle cleanup is sequenced through finishSessionClose(), avoiding dangling references or double‑destroy bugs.
Timeouts that understand progress
Timeouts are implemented with backpressure in mind. Instead of “if this timer fires, kill the session”, core.js asks: is there buffered data, and has any of it actually moved? That logic lives in callTimeout().
function callTimeout(self, session) {
if (self.destroyed)
return;
if (self[kState].writeQueueSize > 0) {
const handle = session[kHandle];
const chunksSentSinceLastWrite = handle !== undefined ?
handle.chunksSentSinceLastWrite : null;
if (chunksSentSinceLastWrite !== null &&
chunksSentSinceLastWrite !== handle.updateChunksSent()) {
self[kUpdateTimer]();
return;
}
}
self.emit('timeout');
}
The behavior is:
- If there is no write backlog (writeQueueSize == 0), a timeout really means “idle for too long”.
- If there is a backlog, Node consults native counters (chunksSentSinceLastWrite and updateChunksSent()). If bytes are moving, the timeout is refreshed instead of emitted.
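The decision itself reduces to three inputs. A distilled sketch (shouldEmitTimeout is an invented name for illustration):

```javascript
// Distilled decision from callTimeout(): emit 'timeout' only when the
// connection is idle, or has a backlog that is making no progress.
function shouldEmitTimeout(writeQueueSize, lastChunksSent, currentChunksSent) {
  if (writeQueueSize === 0)
    return true;                               // no backlog: truly idle
  return lastChunksSent === currentChunksSent; // backlog, but stalled
}
```

In core.js the “refresh instead of emit” branch calls kUpdateTimer, so a slow but live peer keeps its connection.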
This is a small but powerful adapter pattern: using a tiny bit of native state to implement smarter semantics in JS, without burdening the public API with protocol‑specific concepts.
Backpressure and native/JS coordination
Beyond timeouts, the file tracks backpressure and listener state carefully to keep the HTTP/2 engine efficient under load:
- Per‑session and per‑stream write queue sizes are maintained for smarter timeouts and for observability.
- Hot paths avoid per‑call allocations: helpers like emit() live at top level instead of allocating closures in loops.
- Listener counts and bitfields (e.g., kSessionHasPingListeners) let the native side skip expensive JS callbacks when nobody is listening for certain events.
Combined with nghttp2’s multiplexing, this makes the adapter layer scale well beyond typical development loads without protocol logic bleeding into application code.
Lessons You Can Steal For Your Own Code
Stepping back, lib/internal/http2/core.js is an exercise in building a disciplined adapter around a complex native engine. The same patterns apply to databases, queues, or any binary protocol you wrap in Node.
1. Start from a clear mental model
The “socket → session → streams” model shows up in names, data structures, and call graphs. When you wrap a protocol, make sure your JS objects match the mental model you want maintainers to think in. That makes callbacks, flags, and fields easier to justify.
2. Use adapters to hide protocol quirks
Callbacks like onSessionHeaders() absorb the messiness of HTTP/2—categories, flags, HEAD semantics, GOAWAY conditions—and present a tiny vocabulary of events and streams. When you integrate a protocol, resist the urge to surface every flag. Decide what your application needs to know, then encode the rest into your adapter.
3. Centralize lifecycle transitions
Functions like closeSession() and the shared write/shutdown logic on Http2Stream keep lifecycle rules in one place. If your objects can die via timeouts, remote errors, or user calls, route all of those paths through a small number of helpers and give them clear invariants.
4. Treat file and I/O paths as first‑class
Node’s HTTP/2 layer treats static file responses as a core use case, not an afterthought. It streams from disk, sets headers correctly, and gives you hooks like statCheck for customization. In many backends, the “boring” I/O paths drive the majority of traffic—model them explicitly and keep them memory‑efficient.
5. Make timeouts smarter than “sleep then kill”
The write‑aware timeout logic shows how a little extra state can distinguish between a dead connection and a slow but healthy one. If you’re dealing with slow downstreams or variable networks, track whether progress is happening before dropping connections.
Underneath all the internal symbols, core.js is a clean example of turning low‑level frames into high‑level flows. It keeps protocol complexity behind an adapter, centralizes lifecycle and error handling, and treats performance and observability as part of the design—not an afterthought. Those are patterns worth copying into any serious Node system, whether or not you ever touch HTTP/2 internals directly.