11.1 Custom Operators
PyTorch offers a large library of operators that work on Tensors (e.g. torch.add, torch.sum, etc.). However, you might wish to use a new customized operator with PyTorch, perhaps written by a third-party library. This tutorial shows how to wrap Python functions so that they behave like PyTorch native operators. Reasons why you may wish to create a custom operator in PyTorch include:

- Treating an arbitrary Python function as an opaque callable with respect to torch.compile (that is, preventing torch.compile from tracing into the function).
- Adding training support to an arbitrary Python function.
Use torch.library.custom_op() to create Python custom operators. Use the C++ TORCH_LIBRARY APIs to create C++ custom operators (these work in Python-less environments). See the Custom Operators Landing Page for more details.
Please note that if your operation can be expressed as a composition of existing PyTorch operators, then there is usually no need to use the custom operator API – everything (for example torch.compile, training support) should just work.
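For instance, a function built purely out of existing operators needs none of the machinery below. The following sketch (the function name and shapes are illustrative) compiles and trains without any custom-operator registration:

```python
import torch

# Composed entirely of existing PyTorch operators, so torch.compile and
# autograd handle it without any custom-operator registration.
def scaled_softmax(x: torch.Tensor, scale: float) -> torch.Tensor:
    return torch.softmax(x * scale, dim=-1)

compiled = torch.compile(scaled_softmax, fullgraph=True)
out = compiled(torch.randn(4, 8, requires_grad=True), 0.5)
out.sum().backward()  # training support just works
```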
11.1.1 Example: Wrapping PIL's crop into a Custom Operator
Let’s say that we are using PIL’s crop operation.

crop is not handled effectively out-of-the-box by torch.compile: torch.compile induces a "graph break" on functions it is unable to handle, and graph breaks are bad for performance. The following code demonstrates this by raising an error (torch.compile with fullgraph=True raises an error if a graph break occurs).
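A minimal sketch of such a demonstration, assuming a plain-Python crop wrapper built on torchvision's to_pil_image/pil_to_tensor helpers (the helper choice and the example shapes are illustrative):

```python
from typing import Sequence

import torch
from torchvision.transforms.functional import pil_to_tensor, to_pil_image

# A plain Python wrapper around PIL's crop: convert to a PIL image, crop,
# then convert back to a Tensor on the original device/dtype.
def crop(pic: torch.Tensor, box: Sequence[int]) -> torch.Tensor:
    img = to_pil_image(pic.cpu())
    cropped_img = img.crop(box)
    return (pil_to_tensor(cropped_img) / 255.0).to(pic.device, pic.dtype)

@torch.compile(fullgraph=True)
def f(img):
    return crop(img, (10, 10, 50, 50))

img = torch.ones(3, 64, 64)
f(img)  # raises: torch.compile cannot trace into the PIL-based crop
```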
In order to black-box crop for use with torch.compile, we need to do two things, as shown in the sketch after this list:

1. Wrap the function into a PyTorch custom operator.
2. Add a "FakeTensor kernel" (aka "meta kernel") to the operator. Given some FakeTensor inputs (dummy Tensors that don’t have storage), this function should return dummy Tensors of your choice with the correct Tensor metadata (shape/strides/dtype/device).
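A sketch of both steps using torch.library.custom_op (the mylib namespace is an arbitrary choice for illustration):

```python
from typing import Sequence

import torch
from torchvision.transforms.functional import pil_to_tensor, to_pil_image

# Step 1: wrap the PIL-based crop into a PyTorch custom operator.
@torch.library.custom_op("mylib::crop", mutates_args=())
def crop(pic: torch.Tensor, box: Sequence[int]) -> torch.Tensor:
    img = to_pil_image(pic.cpu())
    cropped_img = img.crop(box)
    return (pil_to_tensor(cropped_img) / 255.0).to(pic.device, pic.dtype)

# Step 2: the FakeTensor (meta) kernel. Given fake inputs, return an empty
# Tensor with the metadata (shape/dtype/device) the real crop would produce.
@crop.register_fake
def _(pic, box):
    channels = pic.shape[0]
    x0, y0, x1, y1 = box
    return pic.new_empty(channels, y1 - y0, x1 - x0)
```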
After this, crop now works without graph breaks:
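With the custom operator and FakeTensor kernel from the sketch above in place, the compiled call no longer graph-breaks:

```python
@torch.compile(fullgraph=True)
def f(img):
    return crop(img, (10, 10, 50, 50))

img = torch.ones(3, 64, 64)
cropped_img = f(img)  # no graph break: crop is now an opaque custom operator
print(cropped_img.shape)  # torch.Size([3, 40, 40])
```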
11.1.2 Adding training support for crop
Use torch.library.register_autograd to add training support for an operator. Prefer this over directly using torch.autograd.Function; some compositions of autograd.Function with PyTorch operator registration APIs can lead to (and have led to) silent incorrectness when composed with torch.compile.
If you don't need training support, there is no need to use torch.library.register_autograd. If you end up training with a custom_op that doesn’t have an autograd registration, we’ll raise an error message.
The gradient formula for crop is essentially PIL.paste (we’ll leave the derivation as an exercise to the reader). Let’s first wrap paste into a custom operator:
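A sketch of how this might look, building on the crop operator defined above (the mylib namespace and the conversion helpers are again illustrative). The backward formula pastes the incoming gradient into a zero Tensor shaped like the original picture:

```python
from typing import Sequence

import torch
from torchvision.transforms.functional import pil_to_tensor, to_pil_image

@torch.library.custom_op("mylib::paste", mutates_args=())
def paste(im1: torch.Tensor, im2: torch.Tensor, coord: Sequence[int]) -> torch.Tensor:
    assert im1.device == im2.device
    assert im1.dtype == im2.dtype
    im1_pil = to_pil_image(im1.cpu())
    im2_pil = to_pil_image(im2.cpu())
    # Paste im2 onto im1 at the given (left, upper) coordinate.
    im1_pil.paste(im2_pil, tuple(coord))
    return (pil_to_tensor(im1_pil) / 255.0).to(im1.device, im1.dtype)

@paste.register_fake
def _(im1, im2, coord):
    return torch.empty_like(im1)

# Gradient of crop: paste the incoming gradient into a zero Tensor shaped
# like the original picture; the box argument gets no gradient.
def backward(ctx, grad_output):
    grad_input = grad_output.new_zeros(ctx.pic_shape)
    grad_input = paste(grad_input, grad_output, ctx.coords)
    return grad_input, None

def setup_context(ctx, inputs, output):
    pic, box = inputs
    ctx.coords = box[:2]
    ctx.pic_shape = pic.shape

crop.register_autograd(backward, setup_context=setup_context)
```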
11.1.3 Testing Python Custom operators
Use torch.library.opcheck to test that the custom operator was registered correctly. This does not test that the gradients are mathematically correct; please write separate tests for that (either manual ones or torch.autograd.gradcheck).
To use opcheck, pass it a set of example inputs to test against. If your operator supports training, then the examples should include Tensors that require grad. If your operator supports multiple devices, then the examples should include Tensors from each device.
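For the crop operator above, a test along these lines (the particular shapes and boxes are arbitrary) might look like:

```python
import torch

# Example argument lists: a mix of shapes, with and without requires_grad,
# to exercise both the forward and the autograd registration.
examples = [
    [torch.randn(3, 64, 64), [0, 0, 10, 10]],
    [torch.randn(3, 91, 91, requires_grad=True), [10, 0, 20, 10]],
    [torch.randn(3, 60, 60, requires_grad=True), [3, 4, 32, 20]],
]

for example in examples:
    torch.library.opcheck(crop, example)
```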
11.1.4 Mutable Python Custom operators
You can also wrap a Python function that mutates its inputs into a custom operator. Functions that mutate inputs are common because that is how many low-level kernels are written; for example, a kernel that computes sin may take in the input and an output tensor and write input.sin() to the output tensor.
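A sketch of such an operator, assuming a NumPy-backed sin kernel that writes into a caller-provided output Tensor (the mylib::numpy_sin name is illustrative). Declaring the mutated argument in mutates_args is what tells PyTorch about the in-place write:

```python
import numpy as np
import torch

@torch.library.custom_op("mylib::numpy_sin", mutates_args={"output"}, device_types="cpu")
def numpy_sin(input: torch.Tensor, output: torch.Tensor) -> None:
    # View both Tensors as NumPy arrays (shared memory) and write sin(input)
    # into the output buffer in place.
    input_np = input.numpy()
    output_np = output.numpy()
    np.sin(input_np, out=output_np)

x = torch.randn(3)
out = torch.empty(3)
numpy_sin(x, out)
assert torch.allclose(out, x.sin())
```

Because the operator does not return anything, there is no need to register a FakeTensor kernel for it to work with torch.compile.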
11.1.5 Conclusion
In this tutorial, we learned how to use torch.library.custom_op to create a custom operator in Python that works with PyTorch subsystems such as torch.compile and autograd.
This tutorial provides a basic introduction to custom operators. For more detailed information, see:

- the torch.library documentation
- the Custom Operators Manual