Commit 4c1a63b4 authored by Steve Tjoa

exercise, feature sonification

parent c80afad0
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import numpy, scipy, matplotlib.pyplot as plt, librosa, IPython.display, mir_eval, urllib"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[← Back to Index](index.html)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Exercise: Understanding Audio Features through Sonification"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is an *exercise* notebook. It's a playground for your Python code. Feel free to write and execute your code without fear.\n",
"\n",
"When you see a cell that looks like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"plt.plot?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"that is a cue to use a particular command, in this case, `plot`. Run the cell to see documentation for that command. (To quickly close the Help window, press `q`.) \n",
"\n",
"For more documentation, visit the links in the Help menu above. Also see the other notebooks; all the exercises here are covered somewhere else in separate notebooks."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This exercise is loosely based upon \"Lab 1\" from previous MIR workshops ([2010](https://ccrma.stanford.edu/workshops/mir2010/Lab1_2010.pdf))."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Goals"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this exercise, you will segment, feature extract, and analyze audio files."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"1. Detect onsets in an audio signal.\n",
"2. Segment the audio signal at each onset.\n",
"3. Compute features for each segment.\n",
"4. Gain intuition into the features by listening to each segment separately."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 1: Retrieve Audio"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Download the file `simpleLoop.wav` onto your local machine."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"filename = '125_bounce.wav'\n",
"#filename = 'conga_groove.wav'\n",
"#filename = '58bpm.wav'\n",
"url = 'http://audio.musicinformationretrieval.com/' + filename\n",
"\n",
"urllib.urlretrieve?"
]
},
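{
"cell_type": "markdown",
"metadata": {},
"source": [
"One possible solution, using the Python 2 `urllib` API that this notebook imports (a sketch, not the only way):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Download the audio file into the current working directory.\n",
"urllib.urlretrieve(url, filename=filename)"
]
},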
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Make sure the download worked:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%ls"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Save the audio signal into an array."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"librosa.load?"
]
},
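{
"cell_type": "markdown",
"metadata": {},
"source": [
"One possible call; the names `x` and `fs` are the ones that later cells in this notebook assume, and `sr=None` preserves the file's native sample rate:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Load the audio signal and its sample rate.\n",
"x, fs = librosa.load(filename, sr=None)"
]
},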
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Show the sample rate:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print fs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Listen to the audio signal."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"IPython.display.Audio?"
]
},
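{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Embed an audio player for the raw signal.\n",
"IPython.display.Audio(x, rate=fs)"
]
},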
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Display the audio signal."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"librosa.display.waveplot?"
]
},
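{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch, using the `waveplot` function from the version of librosa this notebook was written against:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Plot amplitude versus time.\n",
"librosa.display.waveplot(x, sr=fs)"
]
},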
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Compute the short-time Fourier transform:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"librosa.stft?"
]
},
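{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, saving the result into `X` (a name chosen here and reused in the next sketches):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# X is a complex-valued matrix: frequency bins by frames.\n",
"X = librosa.stft(x)"
]
},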
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For display purposes, compute the log amplitude of the STFT:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"librosa.logamplitude?"
]
},
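{
"cell_type": "markdown",
"metadata": {},
"source": [
"One common pattern with this (since-deprecated) librosa function, assuming the STFT `X` from the previous sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Convert the power spectrogram to decibels, referenced to its maximum.\n",
"S = librosa.logamplitude(numpy.abs(X)**2, ref_power=numpy.max)"
]
},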
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Display the spectrogram."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Play with the parameters, including x_axis and y_axis\n",
"librosa.display.specshow?"
]
},
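{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, assuming the log-amplitude spectrogram `S` from the sketch above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Time on the x-axis, log-spaced frequency on the y-axis.\n",
"librosa.display.specshow(S, sr=fs, x_axis='time', y_axis='log')"
]
},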
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 2: Detect Onsets"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Find the times, in seconds, when onsets occur in the audio signal."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"librosa.onset.onset_detect?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"librosa.frames_to_time?"
]
},
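{
"cell_type": "markdown",
"metadata": {},
"source": [
"One possible solution; the names `onset_frames` and `onset_times` are chosen here and reused in the sketches below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Detect onsets as frame indices, then convert frames to seconds.\n",
"onset_frames = librosa.onset.onset_detect(x, sr=fs)\n",
"onset_times = librosa.frames_to_time(onset_frames, sr=fs)\n",
"print onset_times"
]
},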
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Convert the onset times into sample indices."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"librosa.frames_to_samples?"
]
},
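{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example; `onset_samples` is the name assumed by the segmentation code in Step 3:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Convert onset frame indices into sample indices.\n",
"onset_samples = librosa.frames_to_samples(onset_frames)"
]
},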
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Play a \"beep\" at each onset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Use the `length` parameter so the click track is the same length as the original signal\n",
"mir_eval.sonify.clicks?"
]
},
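{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sketch; the name `clicks` is chosen here for the synthesized click track:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Synthesize a click at each onset time, padded to the length of x.\n",
"clicks = mir_eval.sonify.clicks(onset_times, fs, length=len(x))"
]
},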
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Play the click track \"added to\" the original signal\n",
"IPython.display.Audio?"
]
},
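{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, assuming the `clicks` signal from the previous sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Mix the click track with the original signal and listen.\n",
"IPython.display.Audio(x + clicks, rate=fs)"
]
},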
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 3: Segment the Audio"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Save into an array, `segments`, 100-ms segments beginning at each onset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Assuming these variables exist:\n",
"# x: array containing the audio signal\n",
"# fs: corresponding sampling frequency\n",
"# onset_samples: array of onsets in units of samples\n",
"frame_sz = int(0.100*fs)\n",
"segments = numpy.array([x[i:i+frame_sz] for i in onset_samples])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is a function that adds 300 ms of silence onto the end of each segment and concatenates them into one signal.\n",
"\n",
"Later, we will use this function to listen to each segment, perhaps sorted in a different order."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def concatenate_segments(segments, fs=44100, pad_time=0.300):\n",
" padded_segments = [numpy.concatenate([segment, numpy.zeros(int(pad_time*fs))]) for segment in segments]\n",
" return numpy.concatenate(padded_segments)\n",
"concatenated_signal = concatenate_segments(segments, fs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Listen to the newly concatenated signal."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"IPython.display.Audio?"
]
},
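{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Each segment is followed by 300 ms of silence.\n",
"IPython.display.Audio(concatenated_signal, rate=fs)"
]
},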
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 4: Extract Features"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For each segment, compute the zero crossing rate."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# returns a boolean array\n",
"librosa.core.zero_crossings?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# you'll need this to actually count the number of zero crossings per segment\n",
"sum?"
]
},
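{
"cell_type": "markdown",
"metadata": {},
"source": [
"One possible solution; the name `zcrs` matches the variable assumed by the `argsort` cell below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Count zero crossings per segment by summing the boolean array.\n",
"zcrs = numpy.array([sum(librosa.core.zero_crossings(segment)) for segment in segments])\n",
"print zcrs"
]
},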
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Use `argsort` to find an index array, `ind`, such that `segments[ind]` is sorted by zero crossing rate."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# zcrs: array, number of zero crossings in each frame\n",
"ind = numpy.argsort(zcrs)\n",
"print ind"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Sort the segments by zero crossing rate, and concatenate the sorted segments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"concatenated_signal = concatenate_segments(segments[ind], fs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 5: Listen to Segments"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Listen to the sorted segments. What do you hear?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"IPython.display.Audio?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## More Exercises"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Repeat the steps above for the following audio files:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"#url = 'http://audio.musicinformationretrieval.com/conga_groove.wav'\n",
"#url = 'http://audio.musicinformationretrieval.com/58bpm.wav'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[← Back to Index](index.html)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 2",
"language": "python",
"name": "python2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
...
@@ -243,7 +243,7 @@ div#notebook {
 <li><a href="signal_representations.html">Signal Representations</a> (<a href="signal_representations.ipynb">ipynb</a>)</li>
 <li><a href="onset_detection.html">Onset Detection</a> (<a href="onset_detection.ipynb">ipynb</a>)</li>
 <li><a href="beat_tracking.html">Beat Tracking</a> (<a href="beat_tracking.ipynb">ipynb</a>)</li>
-<li><a href="exercises/feature_sonification.ipynb">Exercise: Understanding Audio Features through Sonification</a></li>
+<li><a href="feature_sonification.html">Exercise: Understanding Audio Features through Sonification</a> (<a href="feature_sonification.ipynb">ipynb</a>)</li>
 </ol>
 </div>
...
@@ -46,7 +46,7 @@
 "1. [Signal Representations](signal_representations.html) ([ipynb](signal_representations.ipynb))\n",
 "1. [Onset Detection](onset_detection.html) ([ipynb](onset_detection.ipynb))\n",
 "1. [Beat Tracking](beat_tracking.html) ([ipynb](beat_tracking.ipynb))\n",
-"1. [Exercise: Understanding Audio Features through Sonification](exercises/feature_sonification.ipynb)"
+"1. [Exercise: Understanding Audio Features through Sonification](feature_sonification.html) ([ipynb](feature_sonification.ipynb))"
 ]
 },
 {
...