# Presets & autotune
Longpipe ships seven model presets covering different speed/quality tradeoffs. The default (`'auto'`) picks one for the user's hardware automatically.
## The presets
| Preset | Input | Encoder | Decoder | Target |
|---|---|---|---|---|
| xl | 512×288 | full | 2× channels | M-series Mac, dGPU laptops |
| large | 256×144 | full | standard | Mid-range laptops |
| medium | 256×144 | full | standard, fp16 | Mid-range laptops |
| compact | 256×144 | full | small | Constrained laptops |
| small | 256×144 | small | standard | Chromebooks, low-end Windows |
| xs | 192×108 | small | standard | Lowest-end constrained |
| xxs | 128×72 | small | standard | Fallback / mobile |
All presets are trained on the same data, exported with the same op fusions, and evaluated against the same validation set. See MODEL_PLAN.md in the repo for architecture details.
## Autotune (`preset: 'auto'`)
On init, the worker microbenchmarks each preset on the real device. It picks the largest preset whose model + composite cost fits the target framerate budget (~15 ms of work per frame, which leaves headroom under the 16.7 ms a 60 fps display allows).
Autotune runs once at startup. Cost: ~200ms additional init time for the sweep.
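The selection step can be sketched as a walk down the preset ladder, largest first. This is an illustrative sketch, not Longpipe's actual internals: `PRESETS`, `FRAME_BUDGET_MS`, and `pickPreset` are hypothetical names, and the per-preset cost map stands in for the real microbenchmark results.

```javascript
// Presets ordered largest-first, mirroring the table above.
const PRESETS = ['xl', 'large', 'medium', 'compact', 'small', 'xs', 'xxs'];

// ~15 ms of work per frame leaves headroom under the 16.7 ms of a 60 fps display.
const FRAME_BUDGET_MS = 15;

// Pick the largest preset whose microbenchmarked model + composite cost (in ms)
// fits the frame budget; fall back to the smallest preset if none does.
function pickPreset(costsMs) {
  for (const preset of PRESETS) {
    if (costsMs[preset] <= FRAME_BUDGET_MS) return preset;
  }
  return 'xxs';
}
```

The ladder walk is why the sweep stays cheap: each preset is benchmarked once, and the first one under budget wins.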
## Adaptive (runtime)
When `preset: 'auto'` and `adaptive: true` (the default), Longpipe also adjusts at runtime:
- Downgrade if the actual framerate drops below the target (typically because another tab started doing heavy work, the user switched to a battery-saver power plan, etc.).
- Upgrade if `modelMs` shows consistent headroom (WebGPU only; WebGL upgrades are too expensive to be worth doing live).
Disable with `adaptive: false`.
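The two rules above can be sketched as a single decision function. The names (`decide`, `TARGET_FPS`, `HEADROOM_MS`) and the specific thresholds are illustrative assumptions, not Longpipe's documented behavior:

```javascript
const TARGET_FPS = 60;
const HEADROOM_MS = 10; // assumed: modelMs consistently below this permits an upgrade

// Downgrade when measured fps falls below the target; upgrade only on WebGPU,
// and only when modelMs shows consistent headroom. WebGL never upgrades live.
function decide({ fps, modelMs }, backend) {
  if (fps < TARGET_FPS * 0.9) return 'downgrade'; // 10% tolerance, illustrative
  if (backend === 'webgpu' && modelMs < HEADROOM_MS) return 'upgrade';
  return 'hold';
}
```

In practice a check like this would run over a smoothed window of recent frames, not a single sample, so one slow frame does not trigger a downgrade.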
## Pinning a preset
```js
new EffectsPipeline(stream, { preset: 'large' })                 // disables autotune & adaptive
new EffectsPipeline(stream, { preset: 'auto', adaptive: false }) // autotune once, then stay
```