## Use cases
- Real-time processing — separate stems live from a microphone or audio stream
- Offline applications — process audio without an internet connection
- Low-latency pipelines — integrate into DAWs, game engines, or embedded systems
- Privacy-sensitive workflows — keep audio on-device
## Supported platforms
| Platform | Architecture | GPU backend | CPU supported |
|---|---|---|---|
| Linux | x86_64, ARM64 | CUDA 11.8+ | Yes |
| Windows | x86_64, ARM64 | DirectX 12, CUDA | Yes |
| Android | ARM64 | OpenGL ES 3.1+ | Yes |
| iOS | ARM64 | Metal, Neural Engine | Yes |
| macOS | ARM64, x86_64 | Metal, Neural Engine | Yes |
## How it works
The SDK loads an encrypted `.crypt` model file provided by AudioShake. You pass audio buffers in, and separated stem buffers come out. Processing runs on the CPU or GPU, depending on platform and configuration.
| Interface | Best for |
|---|---|
| `SourceSeparationTask` | File-to-file separation with progress tracking |
| `AudioShakeSeparator` | Stream- and buffer-based processing for real-time or custom I/O |
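The buffer-in/stems-out flow described above might look like the following sketch. The class, method names, and stem labels here are illustrative assumptions, not the SDK's actual API; the stub simulates separation so the example is self-contained and runnable.

```python
import numpy as np

# Hypothetical stand-in for a buffer-based separator such as
# AudioShakeSeparator. The real SDK interface may differ; separate()
# here takes a float32 audio buffer and returns one buffer per stem.
class FakeSeparator:
    def __init__(self, stems):
        self.stems = stems

    def separate(self, buffer: np.ndarray) -> dict[str, np.ndarray]:
        # Placeholder "separation": split the signal evenly across stems.
        return {name: buffer / len(self.stems) for name in self.stems}

# Typical real-time loop: feed fixed-size chunks from your audio I/O,
# collect per-stem output buffers of the same length.
sep = FakeSeparator(["vocals", "instrumental"])
chunk = np.zeros(1024, dtype=np.float32)  # one buffer from a mic or stream
stems = sep.separate(chunk)
print(sorted(stems))  # ['instrumental', 'vocals']
```

In a low-latency pipeline this `separate()` call would sit inside the audio callback, processing each incoming buffer as it arrives.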
## Get access
The SDK requires a Client ID and Client Secret for authentication and model decryption.
Contact info@audioshake.ai to get started with the Local Inference SDK.
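Once you have credentials, one common pattern is to read them from environment variables so they stay out of source control. The variable names below are illustrative assumptions, not part of the SDK's documented configuration.

```python
import os

def load_credentials(env=os.environ):
    # Hypothetical helper: pull the Client ID and Client Secret from the
    # environment. These variable names are assumptions for illustration.
    client_id = env.get("AUDIOSHAKE_CLIENT_ID")
    client_secret = env.get("AUDIOSHAKE_CLIENT_SECRET")
    if not client_id or not client_secret:
        raise RuntimeError(
            "Set AUDIOSHAKE_CLIENT_ID and AUDIOSHAKE_CLIENT_SECRET"
        )
    return client_id, client_secret
```

Failing fast with a clear error when credentials are missing avoids confusing decryption failures later in the pipeline.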