Export PyTorch models with torch.export -- dynamic shapes, symbolic tracing, control flow operators, debugging export failures, and making untraceable code traceable
Capture a PyTorch nn.Module into an ExportedProgram -- a fully traced
ATen-dialect graph with no Python runtime dependency. This is the standard
entry point for deploying PyTorch models to any runtime (ExecuTorch,
TensorRT, ONNX, custom backends).
```python
from torch.export import export

exported = export(model.train(False), (example_input,))
```
By default every dimension is static. Use dynamic_shapes to allow variable
sizes at runtime:
```python
from torch.export import Dim, export

batch = Dim("batch", min=1, max=32)
seq_len = Dim("seq_len", min=1, max=2048)
exported = export(
    model.train(False),
    (example_input,),
    dynamic_shapes={"x": {0: batch, 1: seq_len}},
)
```
Rules:
- Reusing the same `Dim` object across inputs asserts that those sizes are equal
- Use `Dim.AUTO` for best-effort dynamic marking

Inspect and run the result:

```python
print(exported.graph)       # torch.fx.Graph -- the traced DAG
gm = exported.graph_module  # torch.fx.GraphModule wrapping the graph
out = exported.module()(x)  # run eagerly for correctness testing
```
| IR | How to get it | Op count |
|---|---|---|
| Training IR | `export()` (default) | ~3000 |
| Inference IR | `ep.run_decompositions(decomp_table={})` | ~2000 |
| Core ATen IR | `ep.run_decompositions(decomp_table=None)` | ~180 |
Most deployment backends use Core ATen IR.
Serialize and reload:

```python
torch.export.save(exported, "model.pt2")
loaded = torch.export.load("model.pt2")
```
Data-dependent control flow: use `torch._check` to assert that only one branch is reachable, or `torch.cond` to capture both branches in the graph. Higher-order operators: `torch.cond`, `torch.while_loop`, `torch.map`, `torch.scan`, `torch.associative_scan`.
Debugging export failures:
- `torch.export.draft_export(model, args)` -- always produces a graph and reports all issues
- `TORCH_LOGS="+dynamo,+export" python -m your_export_module 2>&1 | tlparse`
- `refine_dynamic_shapes_from_suggested_fixes(str(e), ds)` -- rebuild a dynamic_shapes spec from the suggested fixes in an export error

Load guides/torch-export.md when the user needs a deep dive on tracing internals, symbolic shapes, guards, control flow operators, common failure patterns, or making untraceable code traceable.