Every popular library eventually hits the same wall: the API grows faster than the startup time budget. The more power you expose, the heavier a simple import becomes. Yet when we run import transformers, it feels surprisingly light for such a massive ecosystem. That is not an accident.
In this article, we’ll use the top-level __init__.py file as a blueprint for how the transformers package turns a huge, multi-backend codebase into a fast, resilient import. Along the way, we’ll extract patterns you can reuse: separating runtime from tooling, using lazy loading, and handling optional dependencies without breaking users.
How a Giant Library Feels Small
The transformers package is a facade: a single, friendly entry point hiding dozens of subpackages and backends. To understand why importing it feels light, we need to see what the top-level __init__.py actually does.
```
transformers/ (package root)
└── src/
    └── transformers/
        ├── __init__.py                 # This file: builds lazy import structure and public API
        ├── utils/
        │   ├── __init__.py
        │   ├── import_utils.py         # define_import_structure, _LazyModule
        │   ├── dummy_pt_objects.py
        │   ├── dummy_tokenizers_objects.py
        │   └── ...
        ├── models/
        │   ├── __init__.py
        │   ├── bert/
        │   ├── gpt2/
        │   └── ... (discovered via define_import_structure)
        ├── data/
        ├── generation.py
        ├── pipelines.py
        └── ...
```
The `__init__.py` file sits at the top, orchestrating imports, not doing model work itself.

When Python executes `transformers/__init__.py`, it:
- Checks dependency versions.
- Builds an `_import_structure` mapping of submodule → exported symbols.
- Determines which optional backends (PyTorch, tokenizers, vision, etc.) are available.
- Installs a special `_LazyModule` that defers heavy imports until someone actually touches a symbol.
- Exposes real imports to static type checkers via a separate branch.
This file’s job is to let users import everything while Python actually imports almost nothing.
To pull this off, the file maintains two views of the same public API—one optimized for runtime behavior, one for tooling—and keeps them aligned.
The core comment at the top makes this explicit:
```python
# When adding a new object to this init, remember to add it twice: once inside the `_import_structure` dictionary and
# once inside the `if TYPE_CHECKING` branch. The `TYPE_CHECKING` should have import statements as usual, but they are
# only there for type checking. The `_import_structure` is a dictionary submodule to list of object names, and is used
# to defer the actual importing for when the objects are requested. This way `import transformers` provides the names
# in the namespace without actually importing anything (and especially none of the backends).
```
There are two parallel realities:
- Runtime reality – Driven by `_import_structure` and `_LazyModule`; it only imports modules when an attribute is accessed.
- Type-checking reality – Driven by `if TYPE_CHECKING:` imports; all concrete objects are eagerly imported so tools like MyPy or Pyright can “see” real classes and functions.
In Python, `TYPE_CHECKING` from `typing` is `False` at runtime and treated as `True` by type checkers. Code inside an `if TYPE_CHECKING:` block is visible to tools but skipped during execution. This separation is what lets transformers feel light in production while still feeling rich inside an editor.
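In miniature, inside a package’s `__init__.py`, the pattern looks like this (a minimal sketch; `_heavy` and `HeavyModel` are hypothetical names, not transformers internals):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen by MyPy/Pyright and IDEs; never executed, so the heavy
    # import costs nothing at runtime.
    from ._heavy import HeavyModel


def load(name: str) -> "HeavyModel":
    # The real import is deferred until the first call.
    from ._heavy import HeavyModel

    return HeavyModel(name)
```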
Lazy Loading and Optional Backends
With the two API views in mind, we can look at how transformers actually achieves fast imports and resilient behavior when dependencies are missing. Both rely on the same idea: declare what exists up front, decide what to load and how at the last possible moment.
Declaring the import map
The runtime view is driven by _import_structure, a dictionary mapping submodule names to the symbols each should export:
```python
# Base objects, independent of any specific backend
_import_structure = {
    "audio_utils": [],
    "cli": [],
    "configuration_utils": ["PreTrainedConfig", "PretrainedConfig"],
    "convert_slow_tokenizers_checkpoints_to_fast": [],
    "data": [
        "DataProcessor",
        "InputExample",
        "InputFeatures",
        # ... many more
    ],
    "data.data_collator": [
        "DataCollator",
        "DataCollatorForLanguageModeling",
        # ...
        "default_data_collator",
    ],
    # ... many other entries
}
```
Instead of importing each submodule and pulling objects out, the file simply declares names. It’s a sitemap for the package: it shows where everything will live without loading the pages yet.
Later, once optional backends are accounted for, this map is combined with dynamically discovered model modules and handed to _LazyModule:
```python
else:
    import sys

    _import_structure = {k: set(v) for k, v in _import_structure.items()}

    import_structure = define_import_structure(Path(__file__).parent / "models", prefix="models")
    import_structure[frozenset({})].update(_import_structure)

    sys.modules[__name__] = _LazyModule(
        __name__,
        globals()["__file__"],
        import_structure,
        module_spec=__spec__,
        extra_objects={"__version__": __version__},
    )
```
Here:

- `define_import_structure` scans the `models/` directory and returns its own mapping.
- The static mapping (`_import_structure`) is merged into that dynamic mapping.
- The real module object in `sys.modules` is replaced with `_LazyModule`, which uses this combined structure.
From that point on, when you access transformers.PreTrainedModel or transformers.pipeline, _LazyModule consults the map, imports the underlying submodule on demand, and returns the attribute.
This design scales as the library grows. Building the map is effectively O(N + M), where N is the number of static submodules and symbols listed in `_import_structure` and M is the number of model modules under `models/`. For any given process, most of these will never be used. A small microservice might only need `pipeline("text-generation")`; a research notebook might touch dozens of classes. The cost you always pay is building the map, not loading all model code.
The core pattern is: separate “what exists” from “what is loaded now.” Declare everything in a side structure, then let a lazy module turn declarations into behavior on demand.
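Here is a stripped-down sketch of that mechanism (not transformers’ actual `_LazyModule`, which handles many more edge cases):

```python
import importlib
import types


class LazyModule(types.ModuleType):
    """Knows where every public symbol lives; imports the owning
    submodule only on first attribute access."""

    def __init__(self, name, import_structure):
        super().__init__(name)
        # Invert {submodule: [symbols]} into {symbol: submodule}.
        self._symbol_to_module = {
            symbol: module
            for module, symbols in import_structure.items()
            for symbol in symbols
        }
        self.__all__ = list(self._symbol_to_module)

    def __getattr__(self, name):
        mapping = self.__dict__.get("_symbol_to_module", {})
        if name not in mapping:
            raise AttributeError(f"module {self.__name__!r} has no attribute {name!r}")
        submodule = importlib.import_module(f"{self.__name__}.{mapping[name]}")
        value = getattr(submodule, name)
        setattr(self, name, value)  # cache so __getattr__ fires only once per symbol
        return value


# At the bottom of a package's __init__.py (hypothetical layout):
# sys.modules[__name__] = LazyModule(__name__, {"core": ["Engine"], "io": ["Reader"]})
```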
Keeping imports working when dependencies are missing
Lazy loading keeps startup time under control, but not everyone has the same backends installed. Despite that, import transformers must still succeed. The file follows a repeated pattern: check availability, wire either the real module or a dummy, and keep the public API shape stable.
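The availability checks themselves (`is_torch_available`, `is_tokenizers_available`, and friends) live in `utils/import_utils.py`. A simplified version of such a check, using only the standard library, might look like this:

```python
import importlib.util


def is_tokenizers_available() -> bool:
    # find_spec locates the package without importing it,
    # so the probe stays cheap at import time.
    return importlib.util.find_spec("tokenizers") is not None
```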
Tokenizers: one pattern, many backends
For the Rust-backed tokenizers, the code looks like this:
```python
# tokenizers-backed objects
try:
    if not is_tokenizers_available():
        raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
    from .utils import dummy_tokenizers_objects

    _import_structure["utils.dummy_tokenizers_objects"] = [
        name for name in dir(dummy_tokenizers_objects) if not name.startswith("_")
    ]
else:
    # Fast tokenizers structure
    _import_structure["tokenization_utils_tokenizers"] = [
        "TokenizersBackend",
        "PreTrainedTokenizerFast",
    ]
```
The flow is:

- Check whether the dependency is available via `is_tokenizers_available()`.
- If not, raise a sentinel `OptionalDependencyNotAvailable` and catch it immediately.
- On failure, import `dummy_tokenizers_objects` and export every public name it contains.
- On success, export the real fast tokenizer classes from `tokenization_utils_tokenizers`.
From a user’s perspective, transformers remains importable in both cases. The difference appears later, when they try to construct something that actually needs that backend—dummy classes can then fail with a clear error message pointing to the missing dependency.
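Conceptually, a dummy object can be as small as this sketch (a simplification; transformers generates its real dummy modules rather than writing them by hand):

```python
class PreTrainedTokenizerFast:
    """Stand-in exported when the real backend is missing."""

    _required_backend = "tokenizers"

    def __init__(self, *args, **kwargs):
        # Importing succeeded, but using the class fails with a clear,
        # actionable message instead of a confusing late ImportError.
        raise ImportError(
            f"{type(self).__name__} requires the `{self._required_backend}` "
            f"library. Install it with `pip install {self._required_backend}`."
        )
```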
PyTorch: graceful degradation of capabilities
PyTorch availability is even more critical, but the pattern is the same:
```python
# PyTorch-backed objects
try:
    if not is_torch_available():
        raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
    from .utils import dummy_pt_objects

    _import_structure["utils.dummy_pt_objects"] = [
        name for name in dir(dummy_pt_objects) if not name.startswith("_")
    ]
else:
    _import_structure["model_debugging_utils"] = [
        "model_addition_debugger_context",
    ]
    _import_structure["activations"] = []
    _import_structure["cache_utils"] = [
        "CacheLayerMixin",
        "DynamicLayer",
        # ... many more
    ]
    # ... lots of training, optimization, and trainer symbols
```
Then, regardless of which branch ran, the module emits a single advisory:
```python
if not is_torch_available():
    logger.warning_advice(
        "PyTorch was not found. Models won't be available and only tokenizers, "
        "configuration and file/data utilities can be used."
    )
```
Imports always succeed, but the library sets expectations early through logging. Users learn that something is missing before they hit a confusing error while trying to instantiate a model.
The implicit contract with dummy modules
The initializer assumes that dummy modules export the same public names as the real implementations (anything not starting with _), but nothing in this file enforces that contract.
| Backend | Real module | Dummy module | Expected guarantee |
|---|---|---|---|
| Tokenizers | `tokenization_utils_tokenizers` | `utils.dummy_tokenizers_objects` | Exports stand-in versions of fast tokenizer classes. |
| SentencePiece + tokenizers | `convert_slow_tokenizer` | `utils.dummy_sentencepiece_and_tokenizers_objects` | Exports stand-ins for conversion utilities. |
| PyTorch | various `modeling_*`, `trainer`, etc. | `utils.dummy_pt_objects` | Exports placeholders for `Trainer`, models, etc. |
In your own libraries, if you mirror this pattern, it’s worth adding automated tests (like the sketch after this list) that:
- Import both the real and dummy modules.
- Compare their public attribute sets (minus allowed exceptions).
- Fail CI if the dummy loses sync with the real interface.
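For example, a contract test along these lines (module paths are hypothetical):

```python
import mylib.tokenization as real
import mylib.utils.dummy_tokenizer_objects as dummy

ALLOWED_REAL_ONLY = {"internal_helper"}  # documented exceptions


def public_names(module):
    return {name for name in dir(module) if not name.startswith("_")}


def test_dummy_mirrors_real_interface():
    # Every public name in the real module must have a dummy stand-in.
    missing = public_names(real) - public_names(dummy) - ALLOWED_REAL_ONLY
    assert not missing, f"dummy module lacks stand-ins for: {sorted(missing)}"
```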
The pattern to copy is: “import never fails, capabilities degrade gracefully.” If something optional is missing, you still export symbols and tell the truth through clear error messages and logs.
Operational Behavior at Scale
So far we’ve looked at structure. To really appreciate why this design matters, we should connect it to how transformers behaves in real systems: startup time, observability, and reliability.
Import cost and scalability
Two main hot paths matter operationally:
- The first import of `transformers` in a process.
- The first access to a heavy symbol, which triggers its lazy import.
At import time, we pay for:
- Dependency checks (e.g., `is_torch_available`, `is_tokenizers_available`).
- Building `_import_structure` and merging it with the dynamically discovered `models/` structure.
- Installing `_LazyModule` and the logger.
To keep this under control as the library grows, it’s worth tracking a metric such as:

- `transformers_import_time_seconds` – a histogram measuring how long `import transformers` takes in your environment.
With a target like “p95 < 0.3s in typical server environments,” you can detect regressions when someone adds a very expensive check or directory scan. For services that import heavy libraries on startup, treating import time as a small SLI (Service Level Indicator) helps keep cold starts and autoscaling behavior predictable.
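One way to enforce such a budget is a CI regression test. This sketch is illustrative, not from transformers: the 0.3s threshold is an example, and the subprocess also pays interpreter startup, so calibrate the number for your environment.

```python
import subprocess
import sys
import time


def test_import_time_budget():
    # Import in a fresh interpreter so module caches don't hide the cost.
    start = time.perf_counter()
    subprocess.run([sys.executable, "-c", "import transformers"], check=True)
    elapsed = time.perf_counter() - start
    assert elapsed < 0.3, f"`import transformers` took {elapsed:.2f}s"
```

When the budget is blown, CPython’s `-X importtime` flag prints a per-module timing tree that shows exactly which import became expensive.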
Lazy imports: success and failure modes
Because attribute access triggers imports lazily through `_LazyModule`, some failures only appear when a specific symbol is touched. To keep this observable in production, it helps to track metrics like:

- `transformers_lazy_import_failures_total` – counts failures in lazy attribute resolution (for example, a misconfigured import structure).
- `transformers_optional_dependency_missing_total` – counts how often optional dependencies are unavailable at runtime.
These metrics answer questions such as:
- “Did we accidentally break lazy loading for a new model family?”
- “Did a deployment miss installing the tokenizers or vision backends that our pipelines expect?”
Concurrency and reliability
CPython serializes module initialization with import locks (per-module since Python 3.3), so this initializer executes safely even if multiple threads import transformers at the same time. The same applies to `_LazyModule`’s internal imports, assuming its implementation is careful.
On reliability, the initializer takes a clear stance:
- Never fail import due to optional dependencies. Instead, use `OptionalDependencyNotAvailable` and dummy modules.
- Log warnings when critical backends are absent (for example, when PyTorch is missing).
- Keep risky work out of `__init__.py`. Model loading, I/O, and network access live in submodules behind this facade.
Operationally, the story is: import is fast, idempotent, and robust. All the complex, failure-prone work is pushed behind a thin but carefully designed boundary.
Keeping the Facade Maintainable
The patterns we’ve seen so far make imports feel lightweight and resilient, but they come with maintainability costs. The file is long, dense, and requires discipline to update. Two smells stand out, along with refactors that preserve behavior while improving readability.
Extracting the base import structure
Right now, _import_structure is built directly at the top level. One suggested refactor is to wrap the backend-agnostic part in a helper:
```diff
--- a/src/transformers/__init__.py
+++ b/src/transformers/__init__.py
@@ -39,7 +39,10 @@
-# Base objects, independent of any specific backend
-_import_structure = {
+def _build_base_import_structure():
+    """Return the base import structure independent of optional backends."""
+    return {
     "audio_utils": [],
     "cli": [],
     "configuration_utils": ["PreTrainedConfig", "PretrainedConfig"],
@@ -119,7 +122,10 @@
-    "video_utils": [],
-    "utils.kernel_config": ["KernelConfig"],
-}
+        "video_utils": [],
+        "utils.kernel_config": ["KernelConfig"],
+    }
+
+
+_import_structure = _build_base_import_structure()
```
This keeps the public surface exactly the same but:
- Makes the “base mapping” a clear, testable unit.
- Separates static declarations (the plain mapping) from logic (availability checks and dummy wiring).
- Reduces cognitive load when scanning the initializer.
DRYing up dummy module exports
The initializer repeats the same pattern for dummy modules:
```python
from .utils import dummy_tokenizers_objects

_import_structure["utils.dummy_tokenizers_objects"] = [
    name for name in dir(dummy_tokenizers_objects) if not name.startswith("_")
]
```
and similarly for other backends. A tiny helper can collapse this duplication:
```diff
--- a/src/transformers/__init__.py
+++ b/src/transformers/__init__.py
@@ -167,8 +167,13 @@
+def _export_public(module):
+    """Return the public attribute names of a (dummy) module."""
+    return [name for name in dir(module) if not name.startswith("_")]
+
+
 # tokenizers-backed objects
 try:
     if not is_tokenizers_available():
         raise OptionalDependencyNotAvailable()
 except OptionalDependencyNotAvailable:
     from .utils import dummy_tokenizers_objects
 
-    _import_structure["utils.dummy_tokenizers_objects"] = [
-        name for name in dir(dummy_tokenizers_objects) if not name.startswith("_")
-    ]
+    _import_structure["utils.dummy_tokenizers_objects"] = _export_public(dummy_tokenizers_objects)
@@ -181,9 +188,7 @@
 except OptionalDependencyNotAvailable:
     from .utils import dummy_sentencepiece_and_tokenizers_objects
 
-    _import_structure["utils.dummy_sentencepiece_and_tokenizers_objects"] = [
-        name for name in dir(dummy_sentencepiece_and_tokenizers_objects) if not name.startswith("_")
-    ]
+    _import_structure["utils.dummy_sentencepiece_and_tokenizers_objects"] = _export_public(
+        dummy_sentencepiece_and_tokenizers_objects
+    )
```
Functionally nothing changes, but intent (“export public names from this module”) is now explicit and centralized. Note that the helper is defined once, before the backend checks, so every dummy-wiring branch can reuse it.
Aligning runtime and TYPE_CHECKING views
The hardest maintenance challenge is keeping _import_structure and the TYPE_CHECKING imports in sync. Whenever a symbol is added to the public API, it must appear in both places. The comment at the top is a reminder, but humans are fallible.
There are two broad approaches:

- Procedural generation – Store a single canonical data structure (for example, a mapping of submodule → symbols) and generate both the mapping and the import statements from it, either at runtime or via a code generation script.
- Static checking – Add CI tests that import the package under normal conditions and under `TYPE_CHECKING`-like analysis, then compare exposed symbols.
An illustrative (not from transformers) approach for a smaller project could look like:
```python
# illustrative example, not from transformers
_PUBLIC_API = {
    "foo": ["Foo", "make_foo"],
    "bar": ["Bar"],
}

_import_structure = _PUBLIC_API.copy()

if TYPE_CHECKING:
    from .foo import Foo, make_foo  # generated from _PUBLIC_API
    from .bar import Bar
```
For a library as large as transformers, you’d likely want a script that reads a single source of truth and updates __init__.py accordingly, or a helper in utils.import_utils that can generate imports for the type-checking branch.
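A sketch of the code-generation flavor, assuming a canonical `_PUBLIC_API` mapping like the one above:

```python
def render_type_checking_imports(public_api: dict[str, list[str]]) -> str:
    """Render the `if TYPE_CHECKING:` block from the canonical mapping."""
    lines = ["if TYPE_CHECKING:"]
    for submodule, symbols in sorted(public_api.items()):
        lines.append(f"    from .{submodule} import {', '.join(sorted(symbols))}")
    return "\n".join(lines)


print(render_type_checking_imports({"foo": ["Foo", "make_foo"], "bar": ["Bar"]}))
# if TYPE_CHECKING:
#     from .bar import Bar
#     from .foo import Foo, make_foo
```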
The broader lesson is: when you must duplicate information for different consumers (runtime vs tooling), centralize the data and automate the duplication as much as possible.
What to Steal for Your Own Libraries
We started with a simple question: why does import transformers feel so lightweight for such a huge library? By walking through its __init__.py, we’ve seen how a carefully designed facade separates declaration from execution, runtime from tooling, and capabilities from environment.
1. Design a facade, not a dump
Create a curated facade at your package root. Use a mapping like _import_structure to declare which symbols are part of your public contract instead of exposing every internal module directly. This makes navigation easier and evolution safer.
2. Embrace lazy loading for heavy pieces
If your library has heavy components (ML backends, database drivers, compression libraries), consider a lazy module pattern. Centralize where you decide what exists and let attribute access decide when it is imported. This can turn multi-second cold starts into predictable, fast imports.
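If replacing the module in `sys.modules` feels heavy for your case, PEP 562’s module-level `__getattr__` (Python 3.7+) achieves the same deferral with less machinery. A minimal sketch, with hypothetical symbol and submodule names:

```python
# yourlib/__init__.py
import importlib

_LAZY = {
    "HeavyModel": "._models",
    "FastCodec": "._codecs",
}


def __getattr__(name):
    if name in _LAZY:
        module = importlib.import_module(_LAZY[name], __name__)
        value = getattr(module, name)
        globals()[name] = value  # cache: __getattr__ won't fire for it again
        return value
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")


def __dir__():
    # Keep dir() and tab completion honest about the lazy names.
    return sorted(set(globals()) | set(_LAZY))
```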
3. Make optional dependencies truly optional
Don’t punish users with import errors because they don’t have a particular backend installed. Instead:
- Guard backend-dependent pieces with availability checks.
- Provide dummy implementations that raise clear, actionable errors when called.
- Log warnings when critical backends are missing so expectations are set upfront.
4. Serve both runtime and tooling
Optimize for both production and developer experience:
- Use `if TYPE_CHECKING:` to expose real imports to type checkers and IDEs without slowing down runtime.
- Keep a single source of truth for what’s public, and generate or validate both views (runtime vs type-checking) against it.
5. Measure and monitor your import path
If your library ends up in production services, treat it like a small system:
- Track import time as a metric (for example, `yourlib_import_time_seconds`).
- Count lazy import failures and missing optional dependencies.
- Use logs or tracing around the first heavy imports for latency attribution.
When we design our own packages with the same care—controlling what’s declared versus what’s loaded, keeping imports robust, and serving both runtime and tooling—we can give users a similar experience: a powerful library that still feels lightweight to import.
A practical next step is to sketch your own _import_structure-style map for a library you maintain and ask: what would it take to make this import fast, resilient, and friendly to both humans and tools? That is the journey this __init__.py has already taken for transformers.



