Separate a recording with multiple speakers into one stem per speaker, even when speakers overlap. Use the outputs for transcription, speaker-specific editing, or feeding clean single-speaker audio into downstream AI models.
Documentation Index
Fetch the complete documentation index at: https://developer.audioshake.ai/llms.txt
Use this file to discover all available pages before exploring further.
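A minimal sketch of discovering pages from the index, using only the Python standard library. The URL comes from the docs above; the link-extraction regex is an assumption about the index's format (llms.txt files typically contain markdown links or bare URLs).

```python
import re
import urllib.request

INDEX_URL = "https://developer.audioshake.ai/llms.txt"  # from the docs above

def page_urls(index_text: str) -> list[str]:
    """Extract page URLs from an llms.txt index.

    Assumes the index contains markdown-style links or bare URLs;
    trailing ')' from markdown link syntax is excluded.
    """
    return re.findall(r"https?://[^\s)]+", index_text)

def fetch_index(url: str = INDEX_URL) -> list[str]:
    """Download the index and return the documentation page URLs."""
    with urllib.request.urlopen(url) as resp:
        return page_urls(resp.read().decode("utf-8"))
```

For example, `page_urls("- [Tasks](https://developer.audioshake.ai/tasks)")` returns `["https://developer.audioshake.ai/tasks"]`.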
Create a Task
Use cases
- Clean per-speaker audio for transcription and diarization
- Isolate individual voices in meetings, interviews, or panel discussions
- Prepare training data for speech AI models
- Enable speaker-specific editing in podcast post-production
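The flow above starts from a task-creation request. The sketch below is illustrative only: the base URL, endpoint path, and field names are assumptions, not the documented AudioShake API; consult the documentation index for the real endpoints and schema.

```python
import json
import urllib.request

# Placeholder base URL; the real one is in the AudioShake docs.
API_BASE = "https://api.audioshake.example"

def build_task_payload(asset_id: str) -> dict:
    """Build a request body for a multi-speaker separation task.

    Field names ("assetId", "model", "multi-voice") are hypothetical
    placeholders, not the documented request schema.
    """
    return {"assetId": asset_id, "model": "multi-voice"}

def create_task(asset_id: str, token: str) -> dict:
    """POST a separation task and return the parsed JSON response (sketch)."""
    req = urllib.request.Request(
        f"{API_BASE}/tasks",  # hypothetical endpoint path
        data=json.dumps(build_task_payload(asset_id)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Once the task completes, each returned stem contains one speaker's audio, ready for the use cases listed above.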
Speech Denoising
Clean up individual speaker stems after separation.
Dialogue Separation
Separate all speech from music and effects instead.