Find every music segment in a podcast, video, or broadcast — with start and end timestamps. Use the results for licensing review, content indexing, or triggering downstream processing only where music is present.

Create a Task

import requests

API_KEY = "your_api_key"
HEADERS = {"Content-Type": "application/json", "x-api-key": API_KEY}

response = requests.post(
    "https://api.audioshake.ai/tasks",
    headers=HEADERS,
    json={
        "assetId": "your_asset_id",
        "targets": [
            {"model": "music_detection", "formats": ["json"]}
        ]
    }
)
response.raise_for_status()  # surface HTTP errors before reading the body

task_id = response.json()["id"]
print(f"Task created: {task_id}")
Check Task status to monitor progress and download results, or use webhooks to be notified when each target completes.
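Polling for completion can be as simple as the sketch below. The `GET /tasks/{task_id}` endpoint and the status values `"completed"` and `"failed"` are assumptions here; confirm the exact field names and values against the Task reference.

```python
import time

def wait_for_task(fetch_status, task_id, interval=5.0, timeout=600.0):
    """Poll until a task reaches a terminal status.

    fetch_status(task_id) should return the task's status string, e.g. by
    calling GET https://api.audioshake.ai/tasks/{task_id} with your API key.
    The terminal values "completed" and "failed" are assumptions; check the
    Task reference for the real schema.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status(task_id)
        if status in ("completed", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")
```

Injecting `fetch_status` as a callable keeps the retry loop separate from the HTTP client, so the same loop works with `requests`, an async client, or a test stub.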

Output format

Music is detected in 10-second intervals. The output JSON contains an array of segments where music is present, each with a confidence score:
[
  {
    "start_time": 20.0,
    "end_time": 30.0,
    "confidence": 0.18
  },
  {
    "start_time": 40.0,
    "end_time": 60.0,
    "confidence": 0.32
  }
]
Field        Description
start_time   Start of the music segment (seconds)
end_time     End of the music segment (seconds)
confidence   Detection confidence score (0–1)
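A few lines of Python can turn the segment list into summary numbers, such as total music duration at or above a confidence threshold (the 0.3 threshold below is illustrative, not an API default):

```python
def music_duration(segments, min_confidence=0.0):
    """Total seconds of detected music at or above a confidence threshold."""
    return sum(
        s["end_time"] - s["start_time"]
        for s in segments
        if s["confidence"] >= min_confidence
    )

# The sample output from above
segments = [
    {"start_time": 20.0, "end_time": 30.0, "confidence": 0.18},
    {"start_time": 40.0, "end_time": 60.0, "confidence": 0.32},
]
print(music_duration(segments))                      # 30.0 seconds of music
print(music_duration(segments, min_confidence=0.3))  # 20.0 seconds
```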

Use cases

  • Flag content that requires music licensing review
  • Build searchable timelines of music usage across archives
  • Trigger stem separation or transcription only on segments containing music
  • Monitor broadcast compliance with music usage policies
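Because detection runs in 10-second intervals, back-to-back intervals can be merged into continuous spans before triggering downstream work such as stem separation or transcription. A minimal sketch (the `gap` tolerance is an assumption for bridging short silences, not part of the API output):

```python
def merge_segments(segments, gap=0.0):
    """Merge segments whose start follows the previous end by <= `gap` seconds."""
    merged = []
    for seg in sorted(segments, key=lambda s: s["start_time"]):
        if merged and seg["start_time"] - merged[-1]["end_time"] <= gap:
            # Extend the previous span instead of starting a new one
            merged[-1]["end_time"] = max(merged[-1]["end_time"], seg["end_time"])
        else:
            merged.append({"start_time": seg["start_time"], "end_time": seg["end_time"]})
    return merged
```

Adjacent 10-second hits at 20–30 s and 30–40 s collapse into a single 20–40 s span, which is usually what you want when cutting audio for further processing.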

Dialogue Separation

Separate speech from music and effects in your content.

Models

See all available detection and analysis models.