API reference
EffectsPipeline
The single public class. Hand it a MediaStream, get a MediaStream back with the effect applied. The pipeline owns its own worker, weight loading, autotune, frame transport selection, audio passthrough, and adaptive preset swaps — designed so you don’t have to wire any of that yourself.
Construction returns synchronously: pipeline.stream is wired immediately and emits the unprocessed input until the model is ready (~1–3s on cold start, depending on hardware). Use await pipeline.ready if you want to wait for the effect to actually be live before consuming the output.
const pipeline = new EffectsPipeline(inputStream, options?)
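A minimal end-to-end sketch of the pattern described above. Names here (the `<video>` element, the import source for `EffectsPipeline`) are illustrative:

```typescript
// Sketch: webcam in, processed stream out.
// Assumes EffectsPipeline is imported from the SDK package.
const inputStream = await navigator.mediaDevices.getUserMedia({
  video: true,
  audio: true,
});

const pipeline = new EffectsPipeline(inputStream, { background: 'blur' });

// pipeline.stream is usable immediately; it emits the raw input
// until the model finishes loading.
const videoEl = document.querySelector('video')!;
videoEl.srcObject = pipeline.stream;

// Optionally wait until the effect is actually live.
await pipeline.ready;
```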
Constructor parameters
inputStream
Type: MediaStream · Required
Any video stream — webcam from getUserMedia(), screen capture from getDisplayMedia(), or a captured stream from a <video> or <canvas> element via .captureStream(). Audio tracks are passed through unchanged by default; pass audio: 'drop' to strip them.
options
Type: PipelineOptions · Optional
Shape:
interface PipelineOptions {
  background?: BackgroundInput                // default: 'blur'
  preset?: PresetName | ManualPreset          // default: 'auto'
  weightsBaseUrl?: string                     // default: cdn.longpipe.dev/.../v/0.0.1/
  audio?: 'passthrough' | 'drop'              // default: 'passthrough'
  enabled?: boolean                           // default: true
  outputResolution?: { w: number, h: number } // default: input track size
  adaptive?: boolean                          // default: true
  onReady?: () => void
  onError?: (err: PipelineError) => void
}
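For reference, here is an options literal spelling the documented defaults out explicitly. The object itself is illustrative; omitting any of these fields gives the same behavior:

```typescript
// An explicit PipelineOptions literal mirroring the documented defaults.
const options = {
  background: 'blur',   // default background effect
  preset: 'auto',       // init-time microbenchmark picks the model
  audio: 'passthrough', // forward input audio tracks unmodified
  enabled: true,        // start with the effect active
  adaptive: true,       // allow runtime preset swaps in 'auto' mode
};

// const pipeline = new EffectsPipeline(inputStream, options);
```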
Each property in detail:
background
Blur
To add a blur, either set background to 'blur' to use the default blur settings:

{
  background: 'blur'
}

or pass an object to control the strength:

{
  background: {
    blur: {
      strength?: number // 0 to 1
    }
  }
}

Image
Set background to a URL, an ImageBitmap, or an <img> element:

{
  background: string | HTMLImageElement | ImageBitmap
}

Video
Set a video background with a video URL or a <video> element. The video plays in a loop, with each frame composited as the background:

{
  background: string | HTMLVideoElement
}

Swappable at runtime via pipeline.setBackground(...) — no flicker, no re-init. See Backgrounds for the full input surface.
preset
Model size selection. 'auto' runs an init-time microbenchmark and picks the largest preset that hits a 15fps budget on the user’s hardware; while in 'auto' mode the adaptive controller (see below) can swap up or down at runtime. Explicit choices — 'fast', 'balanced', 'quality' — are always respected and never auto-overridden. Pass a ManualPreset object to pin a specific model + dtype + resolution combination. See Presets & autotune.
weightsBaseUrl
Where the SDK fetches model weight files. The default points at a public, versioned CDN (https://cdn.longpipe.dev/models/v/0.0.1/) — fine for prototyping. Override to self-host: offline use, locked-down corporate networks, or to avoid the public CDN dependency. The SDK fetches files of the form model_${preset}.bin. See Self-hosting weights.
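The weight-file layout described above can be illustrated with a small helper. The helper function and the self-hosted base URL are illustrative, not part of the SDK:

```typescript
// Given a base URL and a preset name, the SDK requests model_${preset}.bin.
const weightsBaseUrl = 'https://assets.example.com/effects-weights/';

function weightUrl(base: string, preset: string): string {
  return new URL(`model_${preset}.bin`, base).toString();
}

weightUrl(weightsBaseUrl, 'balanced');
// → 'https://assets.example.com/effects-weights/model_balanced.bin'
```

Passing `weightsBaseUrl` in the constructor options points every weight fetch at the self-hosted copy.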
audio
Whether the output MediaStream includes audio tracks from the input. 'passthrough' forwards them unmodified — useful for video-call scenarios where audio is part of the same stream. 'drop' strips audio, useful when you only care about the video (e.g., screen recording, frame export, server-side processing).
enabled
Whether the effect is active. false makes the pipeline emit the raw input stream verbatim — no model load, no inference, no compositing. Toggle at runtime via pipeline.setEnabled(true | false) — cheap to re-enable since the worker stays alive.
outputResolution
Dimensions of the output canvas. By default matches the input video track’s intrinsic size — preserves aspect ratio and avoids pointless rescale. Falls back to 1280×720 if the track hasn’t reported its size yet. Set explicitly to force a specific output dimension regardless of input.
adaptive
Whether the SDK auto-adjusts the preset at runtime when conditions change. Only applies when preset: 'auto' — explicit preset choices are always respected. When enabled, downgrades to a smaller preset if framerate drops below target, and upgrades when modelMs shows consistent headroom (WebGPU only — WebGL upgrades are too expensive to swap live).
onReady
Fires once after the first composited frame lands. Equivalent to await pipeline.ready resolving — pick whichever pattern fits your code. Fires exactly once per pipeline lifetime.
onError
Fires for async / runtime errors after the constructor returns: weight 404, GPU context loss (webglcontextlost or device.lost), worker crashes, adaptive preset swap failures, runtime background-resolution errors. Synchronous construction errors (no input video track, transport setup throws) propagate out of new EffectsPipeline(...) directly — wrap construction in try/catch if you want one handler for both.
Properties
pipeline.stream: MediaStream — the output stream. Available synchronously; emits passthrough video until the effect is live.
pipeline.ready: Promise<void> — resolves when the worker emits its first composited frame. Optional to await.
Methods
pipeline.setBackground(bg: BackgroundInput): Promise<void> — swap the background at runtime. Same input shape as the option. No flicker, no re-init.
pipeline.setEnabled(enabled: boolean): void — toggle the effect. false puts the pipeline in passthrough mode.
pipeline.setPreset(preset: PresetName | ManualPreset): void — manually swap presets. Disables the adaptive controller.
pipeline.getStats(): Promise<PipelineStats> — runtime counters (model time, fps, current preset, etc.). Single postMessage round-trip.
pipeline.destroy(): void — terminates the worker and releases GPU resources. The pipeline is not reusable after destroy.
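A sketch tying the methods together over a pipeline's lifetime. `pipeline` is assumed to be an existing instance; PipelineStats fields beyond those listed above are not guaranteed:

```typescript
// Toggle off (passthrough) and back on; the worker stays alive,
// so re-enabling is cheap.
pipeline.setEnabled(false);
pipeline.setEnabled(true);

// Pin a preset explicitly; this disables the adaptive controller.
pipeline.setPreset('quality');

// One postMessage round-trip for runtime counters.
const stats = await pipeline.getStats();
console.log(stats);

// Tear down: terminates the worker, releases GPU resources.
// The instance is not reusable afterwards.
pipeline.destroy();
```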
Error handling
onError and pipeline.ready rejection both fire for async errors after the constructor returns:
- Init failures — weight fetch 404, normalizeBackgroundURL fetch fail, worker init exception.
- GPU context loss — WebGL webglcontextlost event, WebGPU device.lost promise.
- Worker uncaught failures — pipe broken, command threw.
- Adaptive swap failures — marked recoverable: true on the error; pipeline keeps running on the prior preset.
- Runtime background errors — e.g., setBackground() called with an invalid URL.
onError does not fire for synchronous constructor errors — those propagate out of new EffectsPipeline(...) directly. Wrap in try/catch if you want a single handler for both.
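One pattern for covering both error paths with a single handler, per the rules above. The `reportError` helper is illustrative:

```typescript
// Sketch: one reportError covers both synchronous construction
// failures and async runtime errors.
function reportError(err: unknown): void {
  console.error('pipeline error', err);
}

let pipeline: EffectsPipeline | undefined;
try {
  pipeline = new EffectsPipeline(inputStream, {
    onError: (err) => {
      // Recoverable adaptive-swap failures keep the pipeline running
      // on the prior preset; only escalate the rest.
      if (!err.recoverable) reportError(err);
    },
  });
} catch (err) {
  // e.g., no input video track, transport setup threw
  reportError(err);
}
```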