In the MongoDB Go driver, the Client type is not just a connection handle—it’s a control tower coordinating topology, sessions, encryption, and high‑level operations. We’ll dissect how mongo/client.go turns this complexity into a coherent façade, and what design patterns we can reuse in our own client libraries.
The MongoDB Go driver is the official driver for talking to MongoDB from Go applications. At its core is the Client type: the object your application holds onto while it discovers servers, manages sessions, applies encryption, and exposes operations like Ping, ListDatabases, and BulkWrite. I’m Mahmoud Zalt, an AI solutions architect, and we’ll use this file as a concrete case study in treating a client as a deliberate control tower rather than a thin wrapper around sockets.
The Client as façade and control tower
The file under the microscope is mongo/client.go. It defines the public Client type used by applications and wires it to the internal driver stack: topology management, low‑level operations, sessions, and MongoCrypt integration.
mongo-go-driver/
    mongo/
        client.go       <-- Public Client facade
        database.go     (Database type, uses *Client)
        collection.go   (Collection type, uses *Database)
    x/mongo/driver/
        topology/       (Deployment, server selection, connection pool)
        operation/      (Low-level operations: ListDatabases, EndSessions, etc.)
        session/        (ClusterClock, session pool)
        mongocrypt/     (MongoCrypt integration)
    internal/
        logger/         (Logger implementation)
        serverselector/ (ReadPref, Latency, Composite selectors)
        httputil/       (Default HTTP client helpers)
Application
     |
     v
mongo.Client (client.go)
     |-- deployment  (topology.Deployment)
     |-- sessionPool (session.Pool)
     |-- cryptFLE    (driver.Crypt)
     |-- logger      (internal/logger)
     |
     +--> Database / Collection / ChangeStream / BulkWrite operations
Client sits between the application and the internal driver components.

Conceptually, Client has three core responsibilities:
- Lifecycle: constructing, connecting, and disconnecting the client and its underlying topology.
- Control: managing sessions, read/write semantics, retries, and server selection.
- Integration: hiding encryption and logging complexity behind a simple public API.
The struct fields make these roles explicit:
type Client struct {
    id                uuid.UUID
    deployment        driver.Deployment
    localThreshold    time.Duration
    retryWrites       bool
    retryReads        bool
    clock             *session.ClusterClock
    readPreference    *readpref.ReadPref
    readConcern       *readconcern.ReadConcern
    writeConcern      *writeconcern.WriteConcern
    bsonOpts          *options.BSONOptions
    registry          *bson.Registry
    monitor           *event.CommandMonitor
    serverAPI         *driver.ServerAPIOptions
    serverMonitor     *event.ServerMonitor
    sessionPool       *session.Pool
    timeout           *time.Duration
    httpClient        *http.Client
    logger            *logger.Logger
    currentDriverInfo *atomic.Pointer[options.DriverInfo]
    seenDriverInfo    sync.Map

    // encryption-related
    isAutoEncryptionSet bool
    keyVaultClientFLE   *Client
    keyVaultCollFLE     *Collection
    mongocryptdFLE      *mongocryptdClient
    cryptFLE            driver.Crypt
    metadataClientFLE   *Client
    internalClientFLE   *Client
    encryptedFieldsMap  map[string]any

    authenticator driver.Authenticator
}
This is a classic façade: a single public type shielding the application from a large internal subsystem. The control tower analogy fits: it owns global knobs, watches the deployment, and routes every operation through consistent policies.
Client construction shows this orchestration role clearly. Connect separates configuration from activation:
func Connect(opts ...*options.ClientOptions) (*Client, error) {
    c, err := newClient(opts...)
    if err != nil {
        return nil, err
    }
    if err := c.connect(); err != nil {
        return nil, err
    }
    return c, nil
}
Connect configures a client, then brings it online.

newClient interprets ClientOptions, builds the authenticator, wires encryption, and constructs the topology. It's powerful but complex: around 110 SLOC with many branches. That centralization is useful, since there is one place to add features, but it also concentrates risk if responsibilities aren't factored into smaller helpers.
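The configure-then-activate split is worth copying in your own libraries. Below is a minimal, self-contained sketch of the same pattern; the Options, Client, newClient, and connect names are hypothetical stand-ins, not the driver's types:

```go
package main

import (
	"errors"
	"fmt"
)

// Options is a hypothetical stand-in for options.ClientOptions.
type Options struct {
	URI string
}

// Client mirrors the two-phase pattern: construction only interprets
// configuration, while connect() performs side effects.
type Client struct {
	uri       string
	connected bool
}

// newClient validates and stores configuration without touching the network.
func newClient(opts Options) (*Client, error) {
	if opts.URI == "" {
		return nil, errors.New("a URI is required")
	}
	return &Client{uri: opts.URI}, nil
}

// connect activates the configured client (trivially, in this sketch).
func (c *Client) connect() error {
	c.connected = true
	return nil
}

// Connect composes the two phases, mirroring the shape of mongo.Connect.
func Connect(opts Options) (*Client, error) {
	c, err := newClient(opts)
	if err != nil {
		return nil, err
	}
	if err := c.connect(); err != nil {
		return nil, err
	}
	return c, nil
}

func main() {
	c, err := Connect(Options{URI: "mongodb://localhost:27017"})
	fmt.Println(c.connected, err)
}
```

Keeping validation in the constructor means a configuration mistake fails before any side effects happen, which simplifies cleanup paths.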
The activation step, connect(), turns capabilities on based on what’s configured:
func (c *Client) connect() error {
    if connector, ok := c.deployment.(driver.Connector); ok {
        if err := connector.Connect(); err != nil {
            return wrapErrors(err)
        }
    }
    if c.mongocryptdFLE != nil {
        if err := c.mongocryptdFLE.connect(); err != nil {
            return err
        }
    }
    if c.internalClientFLE != nil {
        if err := c.internalClientFLE.connect(); err != nil {
            return err
        }
    }
    if c.keyVaultClientFLE != nil && c.keyVaultClientFLE != c.internalClientFLE && c.keyVaultClientFLE != c {
        if err := c.keyVaultClientFLE.connect(); err != nil {
            return err
        }
    }
    if c.metadataClientFLE != nil && c.metadataClientFLE != c.internalClientFLE && c.metadataClientFLE != c {
        if err := c.metadataClientFLE.connect(); err != nil {
            return err
        }
    }
    var updateChan <-chan description.Topology
    if subscriber, ok := c.deployment.(driver.Subscriber); ok {
        sub, err := subscriber.Subscribe()
        if err != nil {
            return wrapErrors(err)
        }
        updateChan = sub.Updates
    }
    c.sessionPool = session.NewPool(updateChan)
    return nil
}
connect() activates topology, encryption sub-clients, and the session pool.

Every conditional here reflects a capability: connector deployment, encryption sidecars, and topology notifications feeding the session pool. The overarching design: one high-level type owns lifecycle and cross-cutting concerns, and delegates low-level work to specialized components.
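The capability checks in connect() rely on Go's optional-interface idiom: assert whether a component implements an extra interface and activate it only if so. A minimal sketch of that idiom, with illustrative types (Deployment, Connector, activate are not the driver's actual definitions):

```go
package main

import "fmt"

// Deployment is the minimal interface every deployment satisfies.
type Deployment interface {
	Kind() string
}

// Connector is an optional capability, discovered by type assertion,
// in the spirit of driver.Connector.
type Connector interface {
	Connect() error
}

// staticDeployment has no Connect capability.
type staticDeployment struct{}

func (staticDeployment) Kind() string { return "static" }

// connectableDeployment additionally implements Connector.
type connectableDeployment struct{ up bool }

func (d *connectableDeployment) Kind() string   { return "connectable" }
func (d *connectableDeployment) Connect() error { d.up = true; return nil }

// activate turns the capability on only when the concrete type supports it.
func activate(d Deployment) (bool, error) {
	if c, ok := d.(Connector); ok {
		if err := c.Connect(); err != nil {
			return false, err
		}
		return true, nil
	}
	return false, nil // no Connector capability: nothing to do
}

func main() {
	fmt.Println(activate(staticDeployment{}))
	fmt.Println(activate(&connectableDeployment{}))
}
```

This keeps the facade's dependency narrow (only Deployment) while letting richer implementations opt in to extra behavior.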
Sessions, bulk writes, and invariants
Once the client is online, the control tower has to keep higher‑level guarantees: session correctness and safe write behavior. This file encodes those rules close to the public API.
Sessions as managed conversations
A MongoDB session is a logical conversation that backs transactions and causally consistent reads. The client maintains a pool and exposes two layers:
- Explicit sessions: StartSession returns a *Session you manage.
- Implicit sessions: methods like ListDatabases and BulkWrite quietly create and end sessions when needed.
StartSession merges client defaults with per‑call overrides:
func (c *Client) StartSession(opts ...options.Lister[options.SessionOptions]) (*Session, error) {
    sessArgs, err := mongoutil.NewOptions(opts...)
    if err != nil {
        return nil, err
    }
    if sessArgs.CausalConsistency == nil && (sessArgs.Snapshot == nil || !*sessArgs.Snapshot) {
        sessArgs.CausalConsistency = &options.DefaultCausalConsistency
    }
    coreOpts := &session.ClientOptions{
        DefaultReadConcern:    c.readConcern,
        DefaultReadPreference: c.readPreference,
        DefaultWriteConcern:   c.writeConcern,
    }
    sess, err := session.NewClientSession(c.sessionPool, c.id, coreOpts)
    if err != nil {
        return nil, wrapErrors(err)
    }
    return &Session{clientSession: sess, client: c, deployment: c.deployment}, nil
}
StartSession applies smart defaults, then hands back a managed session.

The defaulting logic is deliberate: unless you explicitly ask for snapshot reads, the driver enables causal consistency by default. That's the kind of policy decision that belongs in the control tower, not at every call site.
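The defaulting rule can be isolated into a few lines. A minimal sketch of the same pointer-based option defaulting; SessionOptions and applyDefaults here are illustrative, not the driver's definitions:

```go
package main

import "fmt"

// SessionOptions mimics the pointer-based optional fields used by the driver:
// nil means "the caller did not set this".
type SessionOptions struct {
	CausalConsistency *bool
	Snapshot          *bool
}

var defaultCausalConsistency = true

// applyDefaults mirrors StartSession's rule: enable causal consistency
// unless the caller set it explicitly, or asked for snapshot reads.
func applyDefaults(o *SessionOptions) {
	snapshot := o.Snapshot != nil && *o.Snapshot
	if o.CausalConsistency == nil && !snapshot {
		o.CausalConsistency = &defaultCausalConsistency
	}
}

func main() {
	var plain SessionOptions
	applyDefaults(&plain)
	fmt.Println(*plain.CausalConsistency) // defaulted to true

	snap := true
	withSnapshot := SessionOptions{Snapshot: &snap}
	applyDefaults(&withSnapshot)
	fmt.Println(withSnapshot.CausalConsistency == nil) // left unset
}
```

Using pointers for options makes "unset" distinguishable from "explicitly false", which is what allows the facade to apply policy without overriding user intent.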
At shutdown, endSessions collects open session IDs from the pool and sends batched endSessions commands, up to 10,000 per batch, deliberately ignoring errors. Server‑side cleanup should be best effort; stuck cleanup must not block process termination.
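The batching-with-ignored-errors shape is easy to reuse. A minimal sketch, assuming integer session IDs and a hypothetical send callback (the real driver sends endSessions commands with BSON session documents):

```go
package main

import "fmt"

// endSessionsBatchSize mirrors the 10,000-ID batches described above.
const endSessionsBatchSize = 10000

// endAll sends IDs in fixed-size batches, deliberately ignoring per-batch
// errors so that shutdown is never blocked by cleanup failures.
// It returns the number of batches sent.
func endAll(ids []int, send func(batch []int) error) int {
	batches := 0
	for len(ids) > 0 {
		n := len(ids)
		if n > endSessionsBatchSize {
			n = endSessionsBatchSize
		}
		_ = send(ids[:n]) // best effort: errors are intentionally dropped
		ids = ids[n:]
		batches++
	}
	return batches
}

func main() {
	ids := make([]int, 25000)
	fmt.Println(endAll(ids, func([]int) error { return nil })) // 3 batches
}
```

The deliberate `_ = send(...)` makes the best-effort policy visible in the code rather than hiding it in a swallowed error path.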
BulkWrite: enforcing write semantics
BulkWrite demonstrates how the client encodes invariants around write concern, transactions, and encryption instead of delegating blindly to lower‑level operations.
func (c *Client) BulkWrite(ctx context.Context, writes []ClientBulkWrite,
    opts ...options.Lister[options.ClientBulkWriteOptions],
) (*ClientBulkWriteResult, error) {
    // QE unsupported for Client.bulkWrite (for now).
    if c.isAutoEncryptionSet {
        return nil, errors.New("bulkWrite does not currently support automatic encryption")
    }
    if len(writes) == 0 {
        return nil, fmt.Errorf("invalid writes: %w", ErrEmptySlice)
    }
    bwo, err := mongoutil.NewOptions(opts...)
    if err != nil {
        return nil, err
    }
    if ctx == nil {
        ctx = context.Background()
    }
    sess := sessionFromContext(ctx)
    if sess == nil && c.sessionPool != nil {
        sess = session.NewImplicitClientSession(c.sessionPool, c.id)
        defer sess.EndSession()
    }
    if err := c.validSession(sess); err != nil {
        return nil, err
    }
    transactionRunning := sess.TransactionRunning()
    wc := c.writeConcern
    if transactionRunning {
        wc = nil
    }
    if bwo.WriteConcern != nil {
        if transactionRunning {
            return nil, errors.New("cannot set write concern after starting a transaction")
        }
        wc = bwo.WriteConcern
    }
    acknowledged := wc.Acknowledged()
    if !acknowledged {
        if bwo.Ordered == nil || *bwo.Ordered {
            return nil, errors.New("cannot request unacknowledged write concern and ordered writes")
        }
        sess = nil
    }
    // ... build selector, writePairs, and execute underlying bulk operation ...
}
BulkWrite is a guard rail: it encodes what combinations are allowed.

Key rules are enforced centrally:
- Automatic encryption with client-level BulkWrite is currently unsupported, so the method fails fast when isAutoEncryptionSet is true.
- Empty write sets are rejected with an error that wraps ErrEmptySlice.
- If a transaction is already running on the session, you cannot change the write concern; that would violate transactional guarantees.
- Unacknowledged writes cannot be ordered. If you don't wait for acknowledgements, pretending the order is meaningful would be misleading, so the call is rejected.
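The same rules can be distilled into a single validation function. A minimal sketch under simplifying assumptions: validateBulkWrite and its parameters are illustrative names, and the write concern is modeled as a plain string rather than the driver's writeconcern type:

```go
package main

import (
	"errors"
	"fmt"
)

// validateBulkWrite mirrors BulkWrite's centralized guard rails.
// perCallWC is a per-call write concern override; "w:0" stands in for an
// unacknowledged write concern in this simplified model.
func validateBulkWrite(numWrites int, autoEncryption, inTxn bool, perCallWC *string, ordered bool) error {
	if autoEncryption {
		return errors.New("bulkWrite does not currently support automatic encryption")
	}
	if numWrites == 0 {
		return errors.New("invalid writes: empty slice")
	}
	if perCallWC != nil && inTxn {
		return errors.New("cannot set write concern after starting a transaction")
	}
	unacknowledged := perCallWC != nil && *perCallWC == "w:0"
	if unacknowledged && ordered {
		return errors.New("cannot request unacknowledged write concern and ordered writes")
	}
	return nil
}

func main() {
	w0 := "w:0"
	fmt.Println(validateBulkWrite(1, false, false, nil, true)) // allowed
	fmt.Println(validateBulkWrite(1, false, false, &w0, true)) // rejected: unack + ordered
}
```

Concentrating these checks in one function keeps the invariants greppable and testable in isolation, rather than re-derived at every call site.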
Client-side encryption as a security hub
Encryption adds another dimension to the control tower: it needs keys, schemas, KMS providers, and sometimes sidecar processes. client.go centralizes this into a security hub, wired via auto‑encryption options.
Auto‑encryption is assembled through helpers like configureAutoEncryption and newMongoCrypt. The client creates:
- A key vault client and collection with suitable read/write concern.
- A metadata client used to look up schema information for auto‑encryption.
- A MongoCrypt instance that knows about schemas, encrypted fields, and KMS providers.
- Either the shared library (crypt_shared) or a mongocryptd process for command marking.
The heavy lifting and validation happen in newMongoCrypt:
func (c *Client) newMongoCrypt(opts *options.AutoEncryptionOptions) (*mongocrypt.MongoCrypt, error) {
    // normalize SchemaMap to bsoncore.Document
    cryptSchemaMap := make(map[string]bsoncore.Document)
    for k, v := range opts.SchemaMap {
        schema, err := marshal(v, c.bsonOpts, c.registry)
        if err != nil {
            return nil, err
        }
        cryptSchemaMap[k] = schema
    }
    // normalize EncryptedFieldsMap
    cryptEncryptedFieldsMap := make(map[string]bsoncore.Document)
    for k, v := range opts.EncryptedFieldsMap {
        encryptedFields, err := marshal(v, c.bsonOpts, c.registry)
        if err != nil {
            return nil, err
        }
        cryptEncryptedFieldsMap[k] = encryptedFields
    }
    kmsProviders, err := marshal(opts.KmsProviders, c.bsonOpts, c.registry)
    if err != nil {
        return nil, fmt.Errorf("error creating KMS providers document: %w", err)
    }
    cryptSharedLibPath := ""
    if val, ok := opts.ExtraOptions["cryptSharedLibPath"]; ok {
        str, ok := val.(string)
        if !ok {
            return nil, fmt.Errorf(
                `expected AutoEncryption extra option "cryptSharedLibPath" to be a string, but is a %T`, val)
        }
        cryptSharedLibPath = str
    }
    cryptSharedLibDisabled := false
    if v, ok := opts.ExtraOptions["__cryptSharedLibDisabledForTestOnly"]; ok {
        cryptSharedLibDisabled = v.(bool)
    }
    bypassAutoEncryption := opts.BypassAutoEncryption != nil && *opts.BypassAutoEncryption
    bypassQueryAnalysis := opts.BypassQueryAnalysis != nil && *opts.BypassQueryAnalysis
    mc, err := mongocrypt.NewMongoCrypt(&mcopts.MongoCryptOptions{
        KmsProviders:               kmsProviders,
        LocalSchemaMap:             cryptSchemaMap,
        BypassQueryAnalysis:        bypassQueryAnalysis,
        EncryptedFieldsMap:         cryptEncryptedFieldsMap,
        CryptSharedLibDisabled:     cryptSharedLibDisabled || bypassAutoEncryption,
        CryptSharedLibOverridePath: cryptSharedLibPath,
        HTTPClient:                 opts.HTTPClient,
        KeyExpiration:              opts.KeyExpiration,
    })
    if err != nil {
        return nil, err
    }
    var cryptSharedLibRequired bool
    if val, ok := opts.ExtraOptions["cryptSharedLibRequired"]; ok {
        b, ok := val.(bool)
        if !ok {
            return nil, fmt.Errorf(
                `expected AutoEncryption extra option "cryptSharedLibRequired" to be a bool, but is a %T`, val)
        }
        cryptSharedLibRequired = b
    }
    if cryptSharedLibRequired && mc.CryptSharedLibVersionString() == "" {
        return nil, errors.New(
            `AutoEncryption extra option "cryptSharedLibRequired" is true, but we failed to load the crypt_shared library`)
    }
    return mc, nil
}
newMongoCrypt normalizes options, validates types, and enforces encryption policies.

There are a few reusable patterns here:
- Normalize external configuration into internal representations early (bsoncore.Document maps for schemas and encrypted fields).
- Type-check every dynamic option (e.g., ExtraOptions) and fail with precise error messages.
- Derive flags like CryptSharedLibDisabled from a small set of inputs so that the rest of the code only sees a clean configuration.
The cryptSharedLibRequired check is a concrete enforcement hook: if the environment or policy requires the shared library, the client refuses to start when it’s not available. That’s exactly the kind of policy the control tower should own.
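The type-check-every-dynamic-option pattern generalizes well. A minimal sketch of extracting a typed value from a `map[string]any` with a precise error, in the spirit of the ExtraOptions handling above (extraString is a hypothetical helper, not driver API):

```go
package main

import "fmt"

// extraString looks up a dynamic key, type-checks it, and fails with a
// message that names the key and the offending concrete type.
func extraString(extra map[string]any, key string) (string, error) {
	val, ok := extra[key]
	if !ok {
		return "", nil // absent is fine; the zero value applies
	}
	s, ok := val.(string)
	if !ok {
		return "", fmt.Errorf("expected extra option %q to be a string, but is a %T", key, val)
	}
	return s, nil
}

func main() {
	extra := map[string]any{"cryptSharedLibPath": "/opt/lib.so", "bad": 42}
	fmt.Println(extraString(extra, "cryptSharedLibPath"))
	fmt.Println(extraString(extra, "bad"))
}
```

Including the %T of the actual value in the error message is a small touch that turns a vague "invalid option" into an immediately actionable diagnostic.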
Operations, lifecycle, and observability
With sessions and encryption in place, the client’s day‑to‑day work is orchestrating operations and managing lifecycle. The code paths for ListDatabases, Ping, and Disconnect illustrate how the control tower pattern extends into performance and observability.
ListDatabases: orchestration over a low‑level operation
ListDatabases is conceptually simple: run a command and return a result. In practice, the method composes session handling, server selection, retries, and encryption on top of a lower‑level operation object.
func (c *Client) ListDatabases(ctx context.Context, filter any,
    opts ...options.Lister[options.ListDatabasesOptions],
) (ListDatabasesResult, error) {
    if ctx == nil {
        ctx = context.Background()
    }
    sess := sessionFromContext(ctx)
    if err := c.validSession(sess); err != nil {
        return ListDatabasesResult{}, err
    }
    if sess == nil && c.sessionPool != nil {
        sess = session.NewImplicitClientSession(c.sessionPool, c.id)
        defer sess.EndSession()
    }
    filterDoc, err := marshal(filter, c.bsonOpts, c.registry)
    if err != nil {
        return ListDatabasesResult{}, err
    }
    selector := &serverselector.Composite{
        Selectors: []description.ServerSelector{
            &serverselector.ReadPref{ReadPref: readpref.Primary()},
            &serverselector.Latency{Latency: c.localThreshold},
        },
    }
    selector = makeReadPrefSelector(sess, selector, c.localThreshold)
    lda, err := mongoutil.NewOptions(opts...)
    if err != nil {
        return ListDatabasesResult{}, err
    }
    op := operation.NewListDatabases(filterDoc).
        Session(sess).
        ReadPreference(c.readPreference).
        CommandMonitor(c.monitor).
        ServerSelector(selector).
        ClusterClock(c.clock).
        Database("admin").
        Deployment(c.deployment).
        Crypt(c.cryptFLE).
        ServerAPI(c.serverAPI).
        Timeout(c.timeout).
        Authenticator(c.authenticator)
    if lda.NameOnly != nil {
        op = op.NameOnly(*lda.NameOnly)
    }
    if lda.AuthorizedDatabases != nil {
        op = op.AuthorizedDatabases(*lda.AuthorizedDatabases)
    }
    retry := driver.RetryNone
    if c.retryReads {
        retry = driver.RetryOncePerCommand
    }
    op.Retry(retry)
    if err := op.Execute(ctx); err != nil {
        return ListDatabasesResult{}, wrapErrors(err)
    }
    return newListDatabasesResultFromOperation(op.Result()), nil
}
ListDatabases composes sessions, selectors, retries, and encryption into one call.

Patterns worth copying:
- Draw sessions from context, fall back to implicit sessions, and always defer EndSession() when the client created them.
- Compose server selectors to encode read preference and latency requirements.
- Translate option builders into operation flags at the point of operation construction, not scattered across the codebase.
- Configure retries per operation based on client-wide knobs.
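The selector composition can be modeled in isolation. Below is a minimal sketch; Server, ServerSelector, Composite, primaryOnly, and withinLatency are simplified stand-ins for the driver's description and serverselector types, not its actual API:

```go
package main

import "fmt"

// Server is a simplified stand-in for description.Server.
type Server struct {
	Name      string
	Primary   bool
	LatencyMS int
}

// ServerSelector narrows a candidate set of servers.
type ServerSelector func([]Server) []Server

// Composite applies selectors in sequence, like serverselector.Composite:
// each selector sees only the servers that survived the previous one.
func Composite(selectors ...ServerSelector) ServerSelector {
	return func(servers []Server) []Server {
		for _, sel := range selectors {
			servers = sel(servers)
		}
		return servers
	}
}

// primaryOnly models a read-preference selector for primary reads.
func primaryOnly(servers []Server) []Server {
	var out []Server
	for _, s := range servers {
		if s.Primary {
			out = append(out, s)
		}
	}
	return out
}

// withinLatency models the latency selector driven by localThreshold.
func withinLatency(maxMS int) ServerSelector {
	return func(servers []Server) []Server {
		var out []Server
		for _, s := range servers {
			if s.LatencyMS <= maxMS {
				out = append(out, s)
			}
		}
		return out
	}
}

func main() {
	servers := []Server{{"a", true, 5}, {"b", false, 1}, {"c", true, 40}}
	sel := Composite(primaryOnly, withinLatency(15))
	fmt.Println(sel(servers)) // only "a" survives both filters
}
```

Composing small, single-purpose selectors keeps each policy testable on its own while the Composite expresses the combined requirement.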
Ping is an intentionally slimmer variant: it chooses a read preference (the argument or the client default) and runs a ping command against the admin database. One notable decision is that Connect does not implicitly ping; connectivity is validated only when the caller invokes Ping explicitly. That avoids hard-failing processes when the cluster is temporarily unreachable at startup.
Disconnect: mirroring connect, at scale
Disconnect is the mirror image of connect(), plus resource cleanup. A production‑ready client must make shutdown predictable, even with many sessions and encryption sub‑clients in play.
func (c *Client) Disconnect(ctx context.Context) error {
    if c.logger != nil {
        defer c.logger.Close()
    }
    if ctx == nil {
        ctx = context.Background()
    }
    if c.httpClient == httputil.DefaultHTTPClient {
        defer httputil.CloseIdleHTTPConnections(c.httpClient)
    }
    c.endSessions(ctx)
    if c.mongocryptdFLE != nil {
        if err := c.mongocryptdFLE.disconnect(ctx); err != nil {
            return err
        }
    }
    if c.internalClientFLE != nil {
        if err := c.internalClientFLE.Disconnect(ctx); err != nil {
            return err
        }
    }
    if c.keyVaultClientFLE != nil && c.keyVaultClientFLE != c.internalClientFLE && c.keyVaultClientFLE != c {
        if err := c.keyVaultClientFLE.Disconnect(ctx); err != nil {
            return err
        }
    }
    if c.metadataClientFLE != nil && c.metadataClientFLE != c.internalClientFLE && c.metadataClientFLE != c {
        if err := c.metadataClientFLE.Disconnect(ctx); err != nil {
            return err
        }
    }
    if c.cryptFLE != nil {
        c.cryptFLE.Close()
    }
    if disconnector, ok := c.deployment.(driver.Disconnector); ok {
        return wrapErrors(disconnector.Disconnect(ctx))
    }
    return nil
}
Disconnect tears down sessions, HTTP resources, encryption, and topology.

A few subtle choices make this robust:
- Default HTTP client resources are explicitly drained to avoid idle connection leaks.
- Sub-clients used for encryption are disconnected carefully, with identity checks to avoid double-closing when they alias the main client or each other.
- Session cleanup is best effort; endSessions ignores errors so that shutdown isn't blocked by transient network issues.
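The identity checks can be generalized to any teardown involving possibly-aliased resources. A minimal sketch with hypothetical closer and closeOnce names (not driver API), showing each distinct resource closed at most once:

```go
package main

import "fmt"

// closer is a stand-in for any sub-client with a Close/Disconnect method.
type closer struct {
	name   string
	closed bool
}

func (c *closer) close() { c.closed = true }

// closeOnce mirrors Disconnect's identity checks: sub-clients that alias the
// primary client, or each other, must not be closed twice. It returns how
// many distinct sub-clients were actually closed.
func closeOnce(primary *closer, subs ...*closer) int {
	seen := map[*closer]bool{primary: true}
	closed := 0
	for _, s := range subs {
		if s == nil || seen[s] {
			continue // nil, an alias of primary, or already closed
		}
		seen[s] = true
		s.close()
		closed++
	}
	return closed
}

func main() {
	primary := &closer{name: "primary"}
	internal := &closer{name: "internal"}
	// The second "internal" and the aliased primary are skipped.
	fmt.Println(closeOnce(primary, internal, internal, primary)) // 1
}
```

A seen-set by pointer identity is more scalable than chained inequality checks once there are more than two or three potentially aliased sub-clients.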
Observability from the control tower
The way Client routes work suggests a natural set of metrics and traces. Even though the driver doesn’t define these metrics directly in this file, the paths are clear:
| Metric | What it reflects | How it maps to the code |
|---|---|---|
| mongo.client.sessions.checked_out | Current number of sessions in use. | Session pool usage around StartSession, implicit session creation, and endSessions. |
| mongo.client.operations.latency_ms | End-to-end latency for client operations. | Timing around calls like op.Execute in ListDatabases, Ping, and BulkWrite. |
| mongo.client.bulk_write.error_rate | Fraction of bulk writes that fail. | Errors returned from BulkWrite after validation and operation execution. |
| mongo.client.disconnect.end_sessions_duration_ms | Time spent ending sessions on shutdown. | Duration of endSessions invoked inside Disconnect. |
Design lessons you can reuse
The primary lesson from mongo/client.go is that a client type should be a deliberate control tower: one cohesive façade that owns lifecycle, semantics, encryption, and guard rails, while delegating low‑level work to specialized components.
This file shows that pattern in practice:
- Construction (newClient and connect()) wires topology, authentication, encryption, and the session pool in one place.
- Session APIs combine smart defaults with explicit escape hatches, and implicit sessions keep call sites simple.
- High-level methods such as BulkWrite and ListDatabases encode invariants and policies before handing off to the operation layer.
- Auto-encryption is treated as a separate security hub, with strict config normalization and policy enforcement in newMongoCrypt.
- Lifecycle is carefully mirrored: what connect() wires up, Disconnect tears down, including sessions, HTTP resources, and encryption sub-clients.
Concretely, when you design your own client libraries:
- Centralize cross‑cutting concerns in the client type. Timeouts, retries, read/write semantics, logging, and encryption should live behind a single façade instead of being repeated at every call site.
- Let public methods enforce invariants. Follow the BulkWrite pattern: validate option combinations and session state before invoking low-level operations.
- Normalize and validate configuration up front. Use the newMongoCrypt approach: convert everything into internal types and check dynamic options early, so the rest of the codebase deals with clean, typed configs.
If we treat our clients as control towers with clear responsibilities, we can make powerful systems safe and predictable to use, while still accommodating features like transactions, retries, and client‑side encryption without overwhelming application code.