The Local Inference SDK runs AudioShake’s separation models directly on-device: no network calls, no cloud processing. It ships as a native library for Linux, Windows, Android, iOS, and macOS.

Use cases

  • Real-time processing — separate stems live from a microphone or audio stream
  • Offline applications — process audio without an internet connection
  • Low-latency pipelines — integrate into DAWs, game engines, or embedded systems
  • Privacy-sensitive workflows — keep audio on-device

Supported platforms

| Platform | Architecture | GPU backend | CPU supported |
|----------|--------------|-------------|---------------|
| Linux | x86_64, ARM64 | CUDA 11.8+ | Yes |
| Windows | x86_64, ARM64 | DirectX 12, CUDA | Yes |
| Android | ARM64 | OpenGL ES 3.1+ | Yes |
| iOS | ARM64 | Metal, Neural Engine | Yes |
| macOS | ARM64, x86_64 | Metal, Neural Engine | Yes |

Demo applications are available for each platform.

How it works

The SDK loads an encrypted .crypt model file provided by AudioShake. You pass audio buffers in, and separated stem buffers come out. Processing runs on CPU or GPU depending on platform and configuration.
| Interface | Best for |
|-----------|----------|
| SourceSeparationTask | File-to-file separation with progress tracking |
| AudioShakeSeparator | Stream- and buffer-based processing for real-time or custom I/O |

Get access

The SDK requires a Client ID and Client Secret for authentication and model decryption.


Contact info@audioshake.ai to get started with the Local Inference SDK.