Rollup merge of #137880 - EnzymeAD:autodiff-batching, r=oli-obk

Autodiff batching

Enzyme supports batching, which is best known from the ML side, where it is used when training neural networks.
There we would normally have a training loop where, in each iteration, we pass in some data (e.g. an image) and a target vector. Based on how close the prediction is, we compute the loss and then use backpropagation to compute the gradients and update the weights.
Doing this one sample at a time is quite inefficient, so what you normally do is pass in a batch of 8/16/... images and targets and compute the gradients for all of them at once, which allows better optimizations.

Enzyme supports batching in two ways. The first one (which I implemented here) just accepts a batch size N,
and then each Dual/Duplicated argument has not one but N shadow arguments. So instead of
```rs
for i in 0..100 {
    df(x[i], y[i], 1234);
}
```
you can now do
```rs
for i in (0..100).step_by(4) {
    df(x[i], x[i + 1], x[i + 2], x[i + 3], y[i], y[i + 1], y[i + 2], y[i + 3], 1234);
}
```
which gives the same results but allows better compiler optimizations. See the testcase for details.
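
Sketched after the included testcase (names are illustrative and the surface syntax may still evolve), declaring a width-4 forward-mode derivative looks roughly like this:
```rs
#![feature(autodiff)]
use std::autodiff::autodiff;

// Width 4: `d_square` takes the primal argument plus four tangent seeds,
// one per batch lane, instead of a single one.
#[autodiff(d_square, Forward, 4, Dual, Dual)]
fn square(x: &f32) -> f32 {
    x * x
}
```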

There is a second variant, where we can mark certain arguments: instead of having to pass in N shadow arguments, Enzyme then assumes that the argument is N times longer. I.e. instead of accepting 4 slices with 12 floats each, we would accept one slice with 48 floats. I'll implement this over the next few days.
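
For comparison, here is a shape-only sketch of the two calling conventions (hypothetical signatures, since the second variant is not implemented yet):
```rs
// Variant 1 (this PR): width 4, each shadow is its own 12-float slice.
fn df_v1(x: &[f32; 12], s1: &mut [f32; 12], s2: &mut [f32; 12],
         s3: &mut [f32; 12], s4: &mut [f32; 12]) { /* ... */ }

// Variant 2 (planned): width 4, one shadow buffer that is 4x longer.
fn df_v2(x: &[f32; 12], s: &mut [f32; 48]) { /* ... */ }
```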

I will also add more tests for both modes.

For anyone preferring a more interactive explanation, here's a video of Tim's LLVM dev talk, where he presents his work: https://www.youtube.com/watch?v=edvaLAL5RqU
I'll also add some other docs to the dev guide and user docs in another PR.

r? ghost

Tracking:

- https://github.com/rust-lang/rust/issues/124509
- https://github.com/rust-lang/rust/issues/135283

@@ -77,6 +77,17 @@ pub struct AutoDiffAttrs {
/// e.g. in the [JAX
/// Documentation](https://jax.readthedocs.io/en/latest/_tutorials/advanced-autodiff.html#how-it-s-made-two-foundational-autodiff-functions).
pub mode: DiffMode,
/// A user-provided batching width. If not given, we will default to 1 (no batching).
/// Calling a differentiated, non-batched function in a loop 100 times is equivalent to:
/// - calling the function 50 times with a batch size of 2, or
/// - calling the function 25 times with a batch size of 4,
/// etc. A batched function takes more (or longer) arguments and might be able to benefit
/// from cache locality, better reuse of primal values, and other optimizations.
/// We will (before LLVM's vectorizer runs) just generate most LLVM-IR instructions `width`
/// times, so this massively increases code size. As such, values like 1024 are unlikely to
/// work. We should consider limiting this to u8 or u16, but will leave it at u32 for
/// experiments for now, and focus on documenting the implications of a large width.
pub width: u32,
pub ret_activity: DiffActivity,
pub input_activity: Vec<DiffActivity>,
}
@@ -222,6 +233,7 @@ impl AutoDiffAttrs {
pub const fn error() -> Self {
AutoDiffAttrs {
mode: DiffMode::Error,
width: 0,
ret_activity: DiffActivity::None,
input_activity: Vec::new(),
}
@@ -229,6 +241,7 @@ impl AutoDiffAttrs {
pub fn source() -> Self {
AutoDiffAttrs {
mode: DiffMode::Source,
width: 0,
ret_activity: DiffActivity::None,
input_activity: Vec::new(),
}