"This is an *exercise* notebook. It's a playground for your Python code. Feel free to write and execute your code without fear.\n",
"\n",
"Detect onsets. Save the onset times.\n",
"When you see a cell that looks like this:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"plot?"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": 1
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"that is a cue to use a particular command, in this case, `plot`. Run the cell to see documentation for that command. (To quickly close the Help window, press `q`.) \n",
"\n",
"At each onset, extract one 100-ms segment from the audio signal.\n",
"For more documentation, visit the links in the Help menu above. Also see the other notebooks; all the exercises here are covered somewhere else in separate notebooks."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This exercise is loosely based upon \"Lab 1\" from previous MIR workshops ([2010](https://ccrma.stanford.edu/workshops/mir2010/Lab1_2010.pdf))."
]
},
{
"cell_type": "heading",
"level": 2,
"metadata": {},
"source": [
"Goals"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"1. Extract spectral features from an audio signal.\n",
"2. Train a K-Nearest Neighbor classifier.\n",
"3. Use the classifier to classify beats in a drum loop."
]
},
{
"cell_type": "heading",
"level": 2,
"metadata": {},
"source": [
"Step 1: Retrieve Audio, Detect Onsets, and Segment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[Follow the same steps here.](feature_sonification.ipynb#Step-1:-Retrieve-Audio)\n",
"\n",
"For each segment, compute the MFCCs.\n",
"1. Download the file `simpleLoop.wav` onto your local machine.\n",
"1. Save the audio signal into an array.\n",
"1. Find the times, in seconds, when onsets occur in the audio signal.\n",
"1. Save into an array, `segments`, 100-ms segments beginning at each onset."
"Train a K-NN classifier using test signals. When training, discard the 0th MFCC coefficient, because it only represents the energy in the frame and does not add any discriminative power. \n",
"imshow(mfccs[:,1:].T, origin='lower', aspect='auto', interpolation='nearest') # Ignore the 0th MFCC\n",
"yticks(range(12), range(1,13)) # Ignore the 0th MFCC\n",
"Read a training set of drum samples. For each test signal, extract MFCCs, and use `mean` to obtain one MFCC vector per signal.\n",
"\n",
"Train a K-NN classifier using test signals. When training, discard the 0th MFCC coefficient, because it only represents the energy in the frame and does not add any discriminative power. \n",
"\n",
"In addition to the MFCCs, extract the delta-MFCCs. Re-train the classifier, and re-run the classifier over the test audio signal. Do the results change?\n",
"\n",
"Repeat for other audio files."
"\n",
"For each segment in the test audio signal, feed it into the trained K-NN classifier, and save the label."
]
},
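{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the classification step, assuming the `segments` array from Step 1 and the trained `model` from Step 3:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"import numpy\n",
"import librosa\n",
"\n",
"# Extract the same features used in training: averaged MFCCs without the 0th coefficient.\n",
"test_features = []\n",
"for segment in segments:\n",
"    mfccs = librosa.feature.mfcc(y=segment, sr=fs)\n",
"    test_features.append(mfccs[1:].mean(axis=1))\n",
"\n",
"# Predict and save one label per segment.\n",
"labels = model.predict(numpy.array(test_features))\n",
"print(labels)"
],
"language": "python",
"metadata": {},
"outputs": []
},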
{
"cell_type": "heading",
"level": 2,
"metadata": {},
"source": [
"Bonus"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In addition to the MFCCs, extract the following features:\n",