Commit 249c91cd authored by Steve Tjoa

initial commit: old workshop material

parent 112c18c1
{
"metadata": {
"name": "",
"signature": "sha256:0f1abdcd8499eb9b02b57d097d07a0fe316258b0b6c00214a178b1242ab2a07c"
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Lab 1 \n",
"=====\n",
"\n",
"**Basic Feature Extraction and Classification**\n",
"\n",
"Purpose: Introduce you to the practice of analyzing, segmenting, feature extracting, and applying basic classifications to audio files. Our future labs will build upon this essential work - but will use more sophisticated training sets, features, and classifiers. \n",
"\n",
"We'll first need to setup some additional Matlab folders, toolboxes, and scripts that we'll use later. \n",
"\n",
"Directory\n",
"---------\n",
"\n",
"Course related code, toolboxes, and audio are stored at: \n",
"\n",
" /usr/ccrma/courses/mir2013\n",
" \n",
"A large collection of audio files for your experimentation are located at \n",
"\n",
" /usr/ccrma/courses/mir2013/audio\n",
"\n",
"Matlab Setup\n",
"------------\n",
"\n",
"1. Launch Matlab\n",
"2. Configure your Path: Add the folder `/usr/ccrma/courses/mir2013/Toolboxes` to your local Matlab path (including all subfolders).\n",
"3. Set the \"Java Heap Memory\" to 900 MB via : File > Preferences > General > Java Heap Memory.\n",
" This allows us to load large audio files and feature vectors into memory. Click on \"OK\". Click Apply.\n",
"4. Restart Matlab.\n",
"\n",
"Why are the Paste / Save keys different? Why does Paste default to Control-Y? \n",
"On Linux, Matlab defaults to using Emacs key bindings. If you want Mac or Windows bindings, go: File menu > Preferences > Keyboard\n",
"Switch the Editor/Debugger key bindings to \"Windows\".\n",
"\n",
"You can easily comment and uncomment code by hitting Cntr-R, Cntrl-T. \n",
"\n",
"To read MP3 files into Matlab, we have a function called `mp3read`. It is used just like `wavread`. \n",
"\n",
" \n",
"Section 1: Segmentation and Zero-Crossing Rate\n",
"----------------------------------------------\n",
"\n",
"Purpose: We'll experiment with the different features for known frames and see if we can build a basic understanding of what they are doing. \n",
" \n",
"1. Make sure to save all of your development code in an .m file. You can build upon and reuse much of this code over the workshop. To create a new .m file, choose:\n",
" * File > New > Script...\n",
" * Save the file as Lab1.m\n",
"\n",
" You can execute the code in `Lab1.m` via any of the below options:\n",
" * Type Lab1.m in the command window\n",
" * press F5 in the Editor to execute the current selected script.\n",
" * You can execute 1 or more commands selected in the Editor window at a time. Select the code and press F9. Note the Command Window will update. \n",
"\n",
"2. Tab Completion. \n",
"\n",
" Tab Completion works in Command Window and the Editor.\n",
" After you type a few letters, hit the Tab key and a popup will appear and show you all of the possible completions, including variable names and functions.\n",
" This prevents you to mistyping the names of variables - a big time (and aggravation) saver! \n",
" \n",
" For example, in the Command Line or Editor , try typing `wavr` and then hitting Tab! (\"wavread\" should appear)\n",
"\n",
"3. Load the audio file simpleLoop.wav into Matlab, storing it in the variable x and sampling rate in fs. \n",
"\n",
" [x,fs] = wavread('/usr/ccrma/courses/mir2013/audio/simpleLoop.wav');\n",
"\n",
"4. In this course, we will convert all stereo files to mono. \n",
" Include this code after your read in WAV files to automatically detect if a file is stereo and convert it to mono.\n",
"\n",
" % MAKING MONO\n",
" % If your audio files (x) are stereo, here's how to make them mono:\n",
" if size(x,2) == 2\n",
" x= (x(:,1)+x(:,2) ) ./ max(abs(x(:,1)+x(:,2))) ;\n",
" disp('Making your file mono\u2026');\n",
" end\n",
"\n",
"5. You can play the audio file by typing using typing\n",
"\n",
" sound(x,fs)\n",
"\n",
" To stop listening to a long audio file, press Control-C. Audio snippets less than ~8000 samples will often not play out Matlab. (known bug on Linux machines)\n",
"\n",
"6. Run an onset detector to determine the approximate onsets in the audio file. \n",
"\n",
" [onsets] = onset_times(x,fs); % leighs onset detector with signal and sample_rate as input\n",
" onsets=round(fs*onsets); % convert onset times in seconds to to samples - round to nearest integer sample\n",
" numonsets = length(onsets);\n",
"\n",
" For debugging, we have function which generates mixes of the original audio file and onset times. This is demonstrated with `test_onsets.m`.\n",
"\n",
" One of Matlab's greatest features is its rich and easy visualization functions. Visualizing your data at every possible step in the algorithm development process not only builds a practical understanding of the variables, parameters and results, but it greatly aids debugging. \n",
" \n",
"8. Plot the audio file in a figure window. \n",
"\n",
" plot(x)\n",
"\n",
"9. Now, add a marker showing the position of each onset on top of the waveforms. \n",
"\n",
" plot(x); hold on; plot(onsets,0.2,'rx')\n",
"\n",
"10. Adding text markers to your plots can further aid in debugging or visualizing problems. Label each onset with it's respective onset number with the following simple loop:\n",
"\n",
" for i=1:numonsets\n",
" text(onsets(i),0.2,num2str(i)); % num2st converts an number to a string for display purposes\n",
" end\n",
"\n",
" Labeling the data is crucial. Add a title and axis to the figures. (ylabel, xlabel, title.)\n",
"\n",
" xlabel('seconds')\n",
" ylabel('magnitude')\n",
" title('my onset plot')\n",
"\n",
"11. Now that we can view the various onsets, try out the onset detector and visualization on a variety of other audio examples located in `/usr/ccrma/courses/mir2013/audio`. Continue to load the various audio files and run the onset detector - does it seem like it works well? If not, yell at Leigh.\n",
"\n",
" Segmenting audio in Frames\n",
" As we learned in lecture, it's common to chop up the audio into fixed-frames. These frames are then further analyzed, processed, or feature extracted. We're going to analyze the audio in 100 ms frames starting at each onset. \n",
"\n",
"12. Create a loop which carves up the audio in fixed-size frames (100ms), starting at the onsets.\n",
"\n",
"13. Inside of your loop, plot each frame, and play the audio for each frame. \n",
"\n",
" % Loop to carve up audio into onset-based frames\n",
" frameSize = 0.100 *fs; % sec\n",
" for i=1:numonsets\n",
" frames{i}= x(onsets(i):onsets(i)+frameSize);\n",
" figure(1);\n",
" plot(frames{i}); title(['frame ' num2str(i)]); \n",
" sound(frames{i} ,fs);\n",
" pause(0.5)\n",
" end\n",
" \n",
" Feature extract your frames\n",
"\n",
"14. Create a loop which extracts the Zero Crossing Rate for each frame, and stores it in an array. Your loop will select 100ms (in samples, this value is = fs * 0.1) , starting at the onsets, and obtain the number of zero crossings in that frame. \n",
"\n",
" The command `[z] = zcr(x)` returns the number of zero crossings for a vector x.\n",
" Don't forget to store the value of z in a feature array for each frame.\n",
"\n",
" clear features\n",
" % Extract Zero Crossing Rate from all frames and store it in \"features(i,1)\"\n",
" for i=1:numonsets\n",
" features(i,1) = zcr(frames{i})\n",
" end\n",
" \n",
" For simpleLoop.wav, you should now have a feature array of 5 x 1 - which is the 5 frames (one at each detected onset) and 1 feature (zcr) for each frame. \n",
" \n",
" Sort the audio file by its feature array. \n",
" Let's test out how well our features characterize the underlying audio signal. \n",
" To build intuition, we're going to sort the feature vector by it's zero crossing rate, from low value to highest value. \n",
"\n",
"15. If we sort and re-play the audio that corresponds with these sorted frames, what do you think it will sound like? (e.g., same order as the loop, reverse order of the loop, snares followed by kicks, quiet notes followed by loud notes, or ??? ) Pause and think about this. \n",
"\n",
"16. Now, we're going to play these sorted audio frames, from lowest to highest. (The pause command will be quite useful here, too.) How does it sound? Does it sort them how you expect them to be sorted? \n",
"\n",
" [y,index] = sort(features);\n",
"\n",
" for i=1:numonsets\n",
" sound(frames{index(i)},fs)\n",
" figure(1); plot(frames{index(i)});title(i);\n",
" pause(0.5)\n",
" end\n",
"\n",
" You'll notice how trivial this drum loop is - always use familiar and predictable audio files when you're developing your algorithms. \n",
"\n",
"17. Now that you have this file loading, playing , and sorting working, try this with out files, such as:\n",
"\n",
" /usr/ccrma/courses/mir2013/audio/CongaGroove-mono.wav\n",
" /usr/ccrma/courses/mir2013/audio/125BOUNC-mono.WAV"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [],
"language": "python",
"metadata": {},
"outputs": []
}
],
"metadata": {}
}
]
}
\ No newline at end of file
{
"metadata": {
"name": "",
"signature": "sha256:c5ceab4dd15cd4c672761897b42ca810c7e32f1ca641cc1b0ee141dc0df0a57f"
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Section 2: Spectral Features & k-NN\n",
"------------------------------------\n",
"\n",
"My first audio classifier: introducing K-NN! We can now appreciate why we need additional intelligence in our systems - heuristics can't very far in the world of complex audio signals. We'll be using Netlab's implementation of the k-NN for our work here. It proves be a straight-forward and easy to use implementation. The steps and skills of working with one classifier will scale nicely to working with other, more complex classifiers. \n",
"\n",
"We're also going to be using the new features in our arsenal: cherishing those \"spectral moments\" (centroid, bandwidth, skewness, kurtosis) and also examining other spectral statistics. \n",
" \n",
"### TRAINING DATA\n",
"\n",
"First off, we want to analyze and feature extract a small collection of audio samples - storing their feature data as our \"training data\". The below commands read all of the .wav files in a directory into a structure, snareFileList. \n",
"\n",
"1. Use these commands to read in a list of filenames (samples) in a directory, replacing the path with the actual directory that the audio \\ drum samples are stored in.\n",
"\n",
" snareDirectory = ['/usr/ccrma/courses/mir2013/audio/drum samples/snares/'];\n",
" snareFileList = getFileNames(snareDirectory ,'wav')\n",
"\n",
" kickDirectory = ['/usr/ccrma/courses/mir2013/audio/drum samples/kicks/'];\n",
" kickFileList = getFileNames(kickDirectory ,'wav')\n",
"\n",
"2. To access the filenames contained in the cell array, use the brackets { } to get to the element that you want to access. \n",
"\n",
" For example, to access the text file name of the 1st file in the list, you would type:\n",
"\n",
" snareFileList{1}\n",
"\n",
" When we feature extract a sample collection, we need to sequentially access audio files, segment them (or not), and feature extract them. Loading a lot of audio files into memory is not always a feasible or desirable operation, so you will create a loop which loads an audio file, feature extracts it, and closes the audio file. Note that the only information that we retain in memory are the features that are extracted.\n",
"\n",
"3. Create a loop which reads in an audio file, extracts the zero crossing rate, and some spectral statistics. The feature information for each audio file (the \"feature vector\") should be stored as a feature array, with columns being the features and rows for each file. \n",
" \n",
" Or in Matlab, for example:\n",
"\n",
" featuresSnare =\n",
"\n",
" 1.0e+003 *\n",
" \n",
" 0.5730 1.9183 2.9713 0.0004 0.0002\n",
" 0.4750 1.4834 2.4463 0.0004 0.0012\n",
" 0.5900 2.2857 3.1788 0.0003 0.0041\n",
" 0.5090 1.6622 2.6369 0.0004 0.0051\n",
" 0.4860 1.4758 2.2085 0.0004 0.0021\n",
" 0.6060 2.2119 3.2798 0.0004 0.0651\n",
" 0.4990 2.0607 2.7654 0.0004 0.0721\n",
" 0.6360 2.3153 3.0256 0.0003 0.0221\n",
" 0.5490 2.0137 3.0342 0.0004 0.0016\n",
" 0.5900 2.2857 3.1788 0.0003 0.0012\n",
" \n",
" In your loop, here's how to read in your wav files, using a structure of file names:\n",
" [x,fs]=wavread([snareDirectory snareFileList{i}]); %note the use of brackets for snareFileList\n",
" \n",
" Here's an example of how to feature extract for the current audio file..\n",
" frameSize = 0.100 * fs; % 100ms\n",
" currentFrame = x(1:frameSize)\n",
" featuresSnare(i,1) = zcr(currentFrame);\n",
" [centroid, bandwidth, skew, kurtosis]=spectralMoments(currentFrame,fs,8192)\n",
" featuresSnare(i,2:5) = [centroid, bandwidth, skew, kurtosis];\n",
" \n",
"4. First, extract all of the feature data for the kick drums and store it in a feature array. (For my example, above, I'd put it in \"featuresKick\")\n",
"\n",
"5. Next, extract all of the feature data for the snares, storing them in a different array. \n",
"Again, the kick and snare features should be separated in two different arrays!\n",
" \n",
" OK, no more help. The rest is up to you! \n",
"\n",
"### Building Models\n",
"\n",
"1. Examine the feature array for the various snare samples. What do you notice? \n",
"\n",
"2. Since the features are different scales, we will want to normalize each feature vector to a common range - storing the scaling coefficients for later use. Many techniques exist for scaling your features. We'll use linear scaling, which forces the features into the range -1 to 1.\n",
"\n",
" For this, we'll use a custom-created function called scale. Scale returns an array of scaled values, as well as the multiplication and subtraction values which were used to conform each column into -1 to 1. Use this function in your code. \n",
" \n",
" [trainingFeatures,mf,sf]=scale([featuresSnare; featuresKick]);\n",
"\n",
"3. Build a k-NN model for the snare drums in Netlab, using the function knn. \n",
"\n",
" We'll the implementation of from the Matlab toolbox \"netlab\":\n",
"\n",
" >help knn\n",
" NET = KNN(NIN, NOUT, K, TR_IN, TR_TARGETS) creates a KNN model NET\n",
" with input dimension NIN, output dimension NOUT and K neighbours.\n",
" The training data is also stored in the data structure and the\n",
" targets are assumed to be using a 1-of-N coding.\n",
"\n",
" The fields in NET are\n",
"\n",
" type = 'knn'\n",
" nin = number of inputs\n",
" nout = number of outputs\n",
" tr_in = training input data\n",
" tr_targets = training target data\n",
"\n",
" Here's an example...\n",
" \n",
" labels=[[ones(10,1) zeros(10,1)]; [zeros(10,1) ones(10,1) ]];\n",
"\n",
" Which is an array of ones and zeros to correspond to the 10 snares and 10 kicks in our training sample set:\n",
"\n",
" labels=\n",
" 1 0\n",
" 1 0\n",
" 1 0\n",
" 1 0\n",
" 1 0\n",
" 1 0\n",
" 1 0\n",
" 1 0\n",
" 1 0\n",
" 1 0\n",
" 0 1\n",
" 0 1\n",
" 0 1\n",
" 0 1\n",
" 0 1\n",
" 0 1\n",
" 0 1\n",
" 0 1\n",
" 0 1\n",
" 0 1\n",
" \n",
" [trainingFeatures,mf,sf]=scale([featuresSnare; featuresKick]);\n",
"\n",
" model_snare = knn(5,2,1,trainingFeatures,labels); \n",
" \n",
" This k-NN model uses 5 features, 2 classes for output (the label), uses k-NN = 1, and takes in the feature data via a feature array called trainingFeatures.\n",
"\n",
" These labels indicate which sample in our feature data is a snare, vs. a non-snare. The k-NN model uses this information to build a means of comparison and classification. It is really important that you get these labels correct - because they are the crux of all future classifications that are made later on. (Trust me, I've made many mistakes in this area - training models with incorrect label data.)\n",
"\n",
"4. Create a script which extracts features for a single file, re-scales its feature values, and evaluates them with your kNN classifier. \n",
"\n",
"Evaluating samples with your k-NN\n",
"Now that the hard part is done, it's time to throw some feature data through the trained k-NN and see what it outputs. \n",
" \n",
"RESCALING.\n",
"In evaluating a new audio file, we need to extract it's features, re-scale them to the same range as the trained feature values, and then send them through the knn.\n",
"\n",
"Some helpful commands:\n",
"featuresScaled = rescale(features,mf,sf) ; % This uses the previous calculated linear scaling parameters to adjust the incoming features to the same range. \n",
"\n",
"EVALUTING WITH KNN\n",
"\n",
" [voting,model_output]=knnfwd(model_snare , featuresScaled )\n",
"\n",
"The output voting gives you a breakdown of how many nearest neighbors were closest to the test feature vector. \n",
" \n",
"The `model_output` provides a list of whether output is Class 1 or Class 2.\n",
"\n",
" output = zeros(size(model_output),2)\n",
" output(find(model_output==1),1)=1\n",
" output(find(model_output==2),2)=1\n",
"\n",
"Now you can visually compare the output to trainlabels\n",
" \n",
"Once you have completed function, first, test it with your training examples. Since a k-NN model has exact representations of the training data, it will have 100% training accuracy - meaning that every training example should be predicted correctly, when fed back into the trained model. \n",
"\n",
"Now, test out with the examples in the folder \"test kicks\" and \"test snares\", located in the drum samples folder. These are real-world testing samples\u2026\n",
"\n",
"If the output labels \"1\" or \"0\" aren't insightful for you, you can add an if statement to display them as strings \"snare\" and \"kick\".\n",
"\n",
" \n",
"NEED HELP?\n",
"Tricks of the trade\n",
"Select code in Matlab editor and then press F9. This will execute the currently selected code.\n",
"To run a Matlab \"cell\" (multiline block of code), press Control-Enter with the text cursor in the current cell.\n",
"\n",
"The clear command re-initializes a variable. To avoid confusion, you mind find it helpful to clear arrays and structures at the beginning of your scripts.\n",
"\n",
"Common Errors\n",
"\n",
" >??? Index exceeds matrix dimensions.\n",
"\n",
"Are you trying to access, display, plot, or play past the end of the file / frame? \n",
"For example, if an audio file is 10,000 samples long, make sure that the index is not greater than this maximum value. If the value is > than the length of your file, use an if statement to catch the problem.\n",
"\n",
" "
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [],
"language": "python",
"metadata": {},
"outputs": []
}
],
"metadata": {}
}
]
}
\ No newline at end of file
{
"metadata": {
"name": "",
"signature": "sha256:417a167478ba4dfbc3a0ee4785668d8f8953045aa65b6a192f488670ef2b691d"
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Lab 2\n",
"=====\n",
"\n",
"Purpose: To gain an understanding of feature extraction, windowing, MFCCs.\n",
"\n",
"SECTION 1 SEGMENTING INTO EVERY N ms FRAMES\n",
"-------------------------------------------\n",
"\n",
"Segmenting: Chopping up into frames every N seconds\n",
"\n",
"Previously, we've either chopped up a signal by the location of it's onsets (and taking the following 100 ms) or just analyzing the entire file. \n",
"Analyzing the audio file by \"frames\" is another technique for your arsenal that is good for analyzing entire songs, phrases, or non-onset-based audio examples.\n",
"You easily chop up the audio into frames every, say, 100ms, with a for loop. \n",
"\n",
" frameSize = 0.100 * fs; % 100ms\n",
" for i = 1: frameSize : (length(x)-frameSize+1) \n",
" currentFrame = x(i:i+frameSize-1); % this is the current audio frame \n",
" % Now, do your feature extraction here and store the features in some matrix / array\n",
" end\n",
"\n",
"Very often, you will want to have some overlap between the audio frames - taking an 100ms long frame but sliding it 50 ms each time. To do a 100ms frame and have it with 50% overlap, try: \n",
"\n",
" frameSize = 0.100 * fs; % 100ms\n",
" hop = 0.5; % 50%overlap\n",
" for i = 1: hop * frameSize : (length(x)-frameSize+1) \n",
" ...\n",
" end\n",
"\n",
"Note that it's also important to multiple the signal by a window (e.g., Hamming / Hann window) equal to the frame size to smoothly transition between the frames. \n",
"\n",
"SECTION 2 MFCC\n",
"--------------\n",
"\n",
"Load an audio file of your choosing from the audio folder on `/usr/ccrma/courses/mir2012/audio`.\n",
"Use this as an opportunity to explore this collection.\n",
"\n",
"BAG OF FRAMES\n",
"\n",
"Test out MFCC to make sure that you know how to call it. We'll use the CATbox implementation of MFCC.\n",
"\n",
" currentFrameIndex = 1; \n",
" for i = 1: frameSize : (length(x)-frameSize+1)\n",
" currentFrame = x(i:i+frameSize-1) + eps ; % this is the current audio frame\n",
" % Note that we add EPS to prevent divide by 0 errors % Now, do your other feature extraction here \n",
" % The code generates MFCC coefficients for the audio signal given in the current frame.\n",
" [mfceps] = mfcc(currentFrame ,fs)' ; %note the transpose operator!\n",
" delta_mfceps = mfceps - [zeros(1,size(mfceps,2)); mfceps(1:end-1,:)]; %first delta\n",
" % Calculate the mean and std of the MFCCs, MFCC-deltas.\n",
" MFCC_mean(currentFrameIndex,:) = mean(mfceps);\n",
" MFCC_std(currentFrameIndex,:) = std(mfceps);\n",
" MFCC_delta_mean (currentFrameIndex,:)= mean(delta_mfceps);\n",
" MFCC_delta_std(currentFrameIndex,:)= std(delta_mfceps);\n",
" currentFrameIndex = currentFrameIndex + 1;\n",
" end\n",
"\n",
" features = [MFCC_mean MFCC_delta_mean ]; % In this case, we'll only store the MFCC and delta-MFCC means\n",
" % NOTE: You might want to toss out the FIRST MFCC coefficient and delta-coefficient since it's much larger than \n",
" others and only describes the total energy of the signal.\n",
"\n",
"You can include this code inside of your frame-hopping loop to extract the MFCC-values for each frame. \n",
"\n",
"Once MFCCs per frame have been calculated, consider how they can be used as features for expanding the k-NN classification and try implementing it!\n",
"\n",
"Extract the mean of the 12 MFCCs (coefficients 1-12, do not use the \"0th\" coefficient) for each onset using the code that you wrote. Add those to the feature vectors, along with zero crossing and centroid. We should now have 14 features being extracted - this is starting to get \"real world\"! With this simple example (and limited collection of audio slices, you probably won't notice a difference - but at least it didn't break, right?) Try it with the some other audio to truly appreciate the power of timbral classification. \n",
"\n",
"\n",
"\n",
"SECTION 3 CROSS VALIDATION\n",
"--------------------------\n",
"\n",
"You'll need some of this code and information to calculate your accuracy rate on your classifiers.\n",
"\n",
"EXAMPLE\n",
"\n",
"Let's say we have 10-fold cross validation...\n",
"\n",
"1. Divide test set into 10 random subsets.\n",
"2. 1 test set is tested using the classifier trained on the remaining 9.\n",
"3. We then do test/train on all of the other sets and average the percentages. \n",
"\n",
"To achieve the first step (divide our training set into k disjoint subsets), use the function crossvalind.m (posted in the Utilities)\n",
"\n",
" INDICES = CROSSVALIND('Kfold',N,K) returns randomly generated indices\n",
" for a K-fold cross-validation of N observations. INDICES contains equal\n",
" (or approximately equal) proportions of the integers 1 through K that\n",
" define a partition of the N observations into K disjoint subsets.\n",
"\n",
" You can type help crossvalind to look at all the other options. This code is also posted as a template in \n",
" `/usr/ccrma/courses/mir2010/Toolboxes/crossValidation.m`\n",
"\n",
" % This code is provided as a template for your cross-validation\n",
" % computation. Replace the variables \"features\", \"labels\" with your own\n",
" % data. \n",
" % As well, you can replace the code in the \"BUILD\" and \"EVALUATE\" sections\n",
" % to be useful with other types of Classifiers.\n",
" %\n",
" %% CROSS VALIDATION \n",
" numFolds = 10; % how many cross-validation folds do you want - (default=10)\n",
" numInstances = size(features,1); % this is the total number of instances in our training set\n",
" numFeatures = size(features,2); % this is the total number of instances in our training set\n",
" indices = crossvalind('Kfold',numInstances,numFolds) % divide test set into 10 random subsets\n",
" clear errors\n",
" for i = 1:10\n",
" % SEGMENT DATA INTO FOLDS\n",
" disp(['fold: ' num2str(i)]) \n",
" test = (indices == i) ; % which points are in the test set\n",
" train = ~test; % all points that are NOT in the test set\n",
" % SCALE\n",
" [trainingFeatures,mf,sf]=scale(features(train,:));\n",
" % BUILD NEW MODEL - ADD YOUR MODEL BUILDING CODE HERE...\n",
" model = knn(numFeatures,2,3,trainingFeatures,labels(train,:)); \n",
" % RESCALE TEST DATA TO TRAINING SCALE SPACE\n",
" [testingFeatures]=rescale(features(test,:),mf,sf);\n",
" % EVALUATE WITH TEST DATA - ADD YOUR MODEL EVALUATION CODE HERE\n",
" [voting,model_output] = knnfwd(model ,testingFeatures);\n",
" % CONVERT labels(test,:) LABELS TO SAME FORMAT TO COMPUTE ERROR \n",
" labels_test = zeros(size(model_output,1),1); % create array of 0s\n",
" labels_test(find(labels(test,1)==1))=1; % convert column 1 to class 1 \n",
" labels_test(find(labels(test,2)==1))=2; % convert column 2 to class 2 \n",
" % COUNT ERRORS \n",
" errors(i) = mean ( model_output ~= labels_test )\n",
" end\n",
" disp(['cross validation error: ' num2str(mean(errors))])\n",
" disp(['cross validation accuracy: ' num2str(1-mean(errors))])"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [],
"language": "python",
"metadata": {},
"outputs": []
}
],
"metadata": {}
}
]
}
\ No newline at end of file
{
"metadata": {
"name": "",
"signature": "sha256:1ffd98b7c0e2505fce2d22c46232fa4cd491976740e5a8d1c6eec175cb6f3566"
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Lab 3\n",
"=====\n",
"\n",
"PURPOSE\n",
"Sometimes, an unsupervised learning technique is preferred. Perhaps you do not have access to adequate training data. Or perhaps the classifications for the training data's labels events are not completely clear. Or perhaps you just want to quickly sort real-world, unseen, data into groups based on it's feature similarity. Regardless of your situation, clustering is a great option!\n",
"\n",
"Section 1: Clustering\n",
"Now we're going to try clustering with a familiar bunch of audio files and code. Sorry, the simple drum loop is going to make an appearance again. However, once we prove that it works - you can experiment with other audio collections that are posted. \n",
"\n",
"Create a new .m file for your code. \n",
"\n",
"Load simpleLoop.wav. (Sorry - we'll use other audio files soon! It's best to start simple - because if it don't work for this file, we have a problem.)\n",
"\n",
"Segment this file into 100ms frames based on the onsets.\n",
"\n",
"Now, feature extract the frames using only zero crossing and centroid. Store the feature values in one matrix for both the kick and the snares\u2026 remember, we don't care about the labels with clustering - we just want to create some clustered groups of data.\n",
"\n",
"Scale the features (using the scale function) from -1 to 1. (See Lab 2 if you need a reminder.)\n",
"\n",
"It's cluster time! We're using NETLAB's implementation of the kmeans algorithm. \n",
"\n",
"Use the kmeans algorithm to create clusters of your feature. kMeans will output 2 things of interest to you: \n",
"(1) The center-points of clusters. You can use the coordinates of the center of the cluster to measure the distance of any point from the center. This not only provides you with a distance metric of how \"good\" a point fits into a given cluster, but this allows you to sort by the points which are closest to the center of a given frame! Quite useful. \n",
"\n",
"(2) Each point will be assigned a label, or cluster #. You can then use this label to produce a transcription, do creative stuff, or further train another downstream classifier.\n",
"\n",
"Attention:\n",
"There are 2 functions called kmeans - one from the CATBox and another from Netlab. You should be using the one from Netlab. Verify that you are by typing which kmeans in your command line to verify...\n",
"\n",
"Here's the help function for kmeans: \n",
"\n",
"> help kmeans\n",
"\n",
" KMEANS\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Trains a k means cluster model.\n",
"\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Description\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 CENTRES = KMEANS(CENTRES, DATA, OPTIONS) uses the batch K-means\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0algorithm to set the centres of a cluster model. The matrix DATA\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0represents the data which is being clustered, with each row\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0corresponding to a vector. The sum of squares error function is used.\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0The point at which a local minimum is achieved is returned as\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0CENTRES. The error value at that point is returned in OPTIONS(8).\n",
"\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0[CENTRES, OPTIONS, POST, ERRLOG] = KMEANS(CENTRES, DATA, OPTIONS)\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0also returns the cluster number (in a one-of-N encoding) for each\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0data point in POST and a log of the error values after each cycle in\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0ERRLOG. The optional parameters have the following\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0interpretations.\n",
"\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0OPTIONS(1) is set to 1 to display error values; also logs error\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0values in the return argument ERRLOG. If OPTIONS(1) is set to 0, then\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0only warning messages are displayed. If OPTIONS(1) is -1, then\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0nothing is displayed.\n",
"\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0OPTIONS(2) is a measure of the absolute precision required for the\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0value of CENTRES at the solution. If the absolute difference between\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0the values of CENTRES between two successive steps is less than\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0OPTIONS(2), then this condition is satisfied.\n",
"\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0OPTIONS(3) is a measure of the precision required of the error\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0function at the solution. If the absolute difference between the\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0error functions between two successive steps is less than OPTIONS(3),\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0then this condition is satisfied. Both this and the previous\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0condition must be satisfied for termination.\n",
"\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0OPTIONS(14) is the maximum number of iterations; default 100.\n",
"\n",
" Now, simply put, here are some examples of how you use it: \n",
"\n",
" % Initialize # of clusters that you want to find and their initial conditions.\n",
" numCenters = 2; % the size of the initial centers; this is passed to k-means to determine the value of k.\n",
" numFeatures = 2; % replace the \"2\" with however many features you have extracted\n",
" centers = zeros(numCenters , numFeatures ); % inits center points to 0\n",
" \u00a0\n",
" % setup vector of options for kmeans trainer\n",
" options(1) = 1; \n",
" options(5) = 1;\n",
" options(14) = 50; % num of steps to wait for convergence\n",
" \u00a0\n",
" % train centers from data\n",
" [centers,options,post] = kmeans(centers , your_feature_data_matrix , options);\n",
" \u00a0\n",
" %Output: \n",
" % Centers contains the center coordinates of the clusters - we can use this to calculate the distance for each point \n",
" in the distance to the cluster center.\n",
" % Post contains the assigned cluster number for each point in your feature matrix. (from 1 to k) \n",
"\n",
"Write a script to list which audio slices (or audio files) were categorized as Cluster # 1. Do the same or Cluster # 2. Do the clusters make sense? Now, modify the script to play the audio slices that in each cluster - listening to the clusters will help us build intuition of what's in each cluster. \n",
"\n",
"Repeat this clustering (steps 3-7), and listening to the contents of the clusters with CongaGroove-mono.wav. \n",
"\n",
"Repeat this clustering (steps 3-7) using the CongaGroove and 3 clusters. Listen to the results. Try again with 4 clusters. Listen to the results. (etc, etc\u2026)\n",
"\n",
"Once you complete this, try out some of the many, many other audio loops in the audio loops. (Located In audio\\Miscellaneous Loops Samples and SFX)\n",
"\n",
"Let's add MFCCs to the mix. Extract the mean of the 12 MFCCs (coefficients 1-12, do not use the \"0th\" coefficient) for each onset using the code that you wrote. Add those to the feature vectors, along with zero crossing and centroid. We should now have 14 features being extracted - this is started to get \"real world\"! With this simple example (and limited collection of audio slices, you probably won't notice a difference - but at least it didn't break, right?) Let's try it with the some other audio to truly appreciate the power of timbral clustering.\n",
"\n",
"BONUS (ONLY IF YOU HAVE EXTRA TIME\u2026)\n",
"Now that we can take ANY LOOP, onset detect, feature extract, and cluster it, let's have some fun. \n",
"Choose any audio file from our collection and use the above techniques break it up into clusters. \n",
"Listen to those clusters.\n",
"\n",
"Some rules of thumb: since you need to pick the number of clusters ahead of time, listen to your audio files first. \n",
"You can break a drum kit or percussion loop into 3 - 6 clusters for it to segment well. More is OK too.\n",
"Musical loops: 3-6 clusters should work nicely. \n",
"Songs - lots of clusters for them to segment well. Try 'em out!\n",
"\n",
"BONUS (ONLY IF YOU REALLY HAVE EXTRA TIME\u2026)\n",
"Review your script that PLAYs all of the audio files that were categorized as Cluster # 1 or Cluster # 2. \n",
"Now, modify your script to play and plot the audio files which are closest to the center of your clusters.\n",
"\n",
"This hopefully provides you with which files are representative of your cluster. \n",
"\n",
"Helpful Commands for sorting or measuring distance: \n",
"\n",
"d = dist2( featureVector1 ,featureVector2 ) % measures the Euclidean distance betw/ point 1 and point 1\n",
"\n",
"DIST2\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Calculates squared distance between two sets of points.\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Description\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0D = DIST2(X, C) takes two matrices of vectors and calculates the\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0squared Euclidean distance between them. Both matrices must be of\n",
"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0the same column dimension. If X has M rows and N columns, and C has\n",
" \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0L rows and N columns, then the result has M rows and L columns. The\n",
" \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0I, Jth entry is the squared distance from the Ith row of X to the\n",
" \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Jth row of C.\n",
"\n",
"[y,ind] = sort( ) \n",
">> help sort\n",
"SORT Sort in ascending or descending order.\n",
"For vectors, SORT(X) sorts the elements of X in ascending order.\n",
"For matrices, SORT(X) sorts each column of X in ascending order.\n",
"For N-D arrays, SORT(X) sorts the along the first non-singleton\n",
"dimension of X. When X is a cell array of strings, SORT(X) sorts\n",
"the strings in ASCII dictionary order.\n",
"\n",
"Y = SORT(X,DIM,MODE)\n",
"has two optional parameters. \n",
"DIM selects a dimension along which to sort.\n",
"MODE selects the direction of the sort\n",
"'ascend' results in ascending order\n",
" 'descend' results in descending order\n",
" The result is in Y which has the same shape and type as X.\n",
"\n",
"[Y,I] = SORT(X,DIM,MODE) also returns an index matrix I.\n",
"If X is a vector, then Y = X(I). \n",
"If X is an m-by-n matrix and DIM=1, then\n",
"for j = 1:n, Y(:,j) = X(I(:,j),j); end\n",
"\n",
"When X is complex, the elements are sorted by ABS(X). Complex\n",
"matches are further sorted by ANGLE(X).\n",
"\n",
"When more than one element has the same value, the order of the\n",
"elements are preserved in the sorted result and the indexes of\n",
"equal elements will be ascending in any index matrix.\n",
"\n",
"Example: If X = [3 7 5\n",
"0 4 2]\n",
"\n",
"then sort(X,1) is [0 4 2 and sort(X,2) is [3 5 7\n",
"3 7 5] 0 2 4];\n",
"\u00a0\n",
"[y,ind] = sortrows ( ) \n",
"\u00a0\n",
"SORTROWS Sort rows in ascending order.\n",
"Y = SORTROWS(X) sorts the rows of the matrix X in ascending order as a\n",
"group. X is a 2-D numeric or char matrix. For a char matrix containing\n",
"strings in each row, this is the familiar dictionary sort. When X is\n",
"complex, the elements are sorted by ABS(X). Complex matches are further\n",
"sorted by ANGLE(X). X can be any numeric or char class. Y is the same\n",
"size and class as X.\n",
"\n",
"SORTROWS(X,COL) sorts the matrix based on the columns specified in the\n",
"vector COL. If an element of COL is positive, the corresponding column\n",
"in X will be sorted in ascending order; if an element of COL is negative,\n",
"the corresponding column in X will be sorted in descending order. For \n",
"example, SORTROWS(X,[2 -3]) sorts the rows of X first in ascending order \n",
"for the second column, and then by descending order for the third\n",
"column.\n",
"\n",
"[Y,I] = SORTROWS(X) and [Y,I] = SORTROWS(X,COL) also returns an index \n",
"matrix I such that Y = X(I,:).\n",
"[y,ind] = sortrows (featureData_from_a_particular_cluster, clusterNum)\n",
" \u00a0\n"
]
}
],
"metadata": {}
}
]
}
\ No newline at end of file
{
"metadata": {
"name": "",
"signature": "sha256:8cb92bce0fc8cd2698239681515f405f57245c6def0ac2620d221a3f49fcad54"
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"Lab 4\n",
"=====\n",
"\n",
"Summary:\n",
"\n",
"1. Separate sources.\n",
"2. Separate noisy sources.\n",
"3. Classify separated sources.\n",
"\n",
"Matlab Programming Tips\n",
"* Pressing the up and down arrows let you scroll through command history.\n",
"* A semicolon at the end of a line simply means ``suppress output''.\n",
"* Type `help <command>` for instant documentation. For example, `help wavread`, `help plot`, `help sound`. Use `help` liberally!\n",
"\n",
"\n",
"Section 1: Source Separation\n",
"----------------------------\n",
"\n",
"1. In Matlab: Select File > Set Path. \n",
"\n",
"Select \"Add with Subfolders\". \n",
"\n",
"Select `/usr/ccrma/courses/mir2011/lab3skt`.\n",
"\n",
"2. As in Lab 1, load the file, listen to it, and plot it.\n",
"\n",
" [x, fs] = wavread('simpleLoop.wav');\n",
" sound(x, fs)\n",
" t = (0:length(x)-1)/fs;\n",
" plot(t, x)\n",
" xlabel('Time (seconds)')\n",
"\n",
"3. Compute and plot a short-time Fourier transform, i.e., the Fourier transform over consecutive frames of the signal.\n",
"\n",
" frame_size = 0.100;\n",
" hop = 0.050;\n",
" X = parsesig(x, fs, frame_size, hop);\n",
" imagesc(abs(X(200:-1:1,:)))\n",
"\n",
" Type `help parsesig`, `help imagesc`, and `help abs` for more information.\n",
"\n",
" This step gives you some visual intuition about how sounds (might) overlap.\n",
"\n",
"4. Let's separate sources!\n",
"\n",
" K = 2;\n",
" [y, W, H] = sourcesep(x, fs, K);\n",
"\n",
" Type `help sourcesep` for more information.\n",
"\n",
"5. Plot and listen to the separated signals.\n",
"\n",
" plot(t, y)\n",
" xlabel('Time (seconds)')\n",
" legend('Signal 1', 'Signal 2')\n",
" sound(y(:,1), fs)\n",
" sound(y(:,2), fs)\n",
"\n",
" Feel free to replace `Signal 1` and `Signal 2` with `Kick` and `Snare` (depending upon which is which). \n",
"\n",
"6. Plot the outputs from NMF.\n",
"\n",
" figure\n",
" plot(W(1:200,:))\n",
" legend('Signal 1', 'Signal 2')\n",
" figure\n",
" plot(H')\n",
" legend('Signal 1', 'Signal 2')\n",
"\n",
" What do you observe from `W` and `H`? \n",
"\n",
" Does it agree with the sounds you heard?\n",
"\n",
"7. Repeat the earlier steps for different audio files.\n",
"\n",
" * `125BOUNC-mono.WAV`\n",
" * `58BPM.WAV` \n",
" * `CongaGroove-mono.wav`\n",
" * `Cstrum chord_mono.wav`\n",
"\n",
" ... and more.\n",
"\n",
"8. Experiment with different values for the number of sources, `K`. \n",
"\n",
" Where does this separation method succeed? \n",
"\n",
" Where does it fail?\n",
"\n",
"\n",
"Section 2: Noise Robustness\n",
"---------------------------\n",
"\n",
"1. Begin with `simpleLoop.wav`. Then try others.\n",
"\n",
" Add noise to the input signal, plot, and listen.\n",
"\n",
" xn = x + 0.01*randn(length(x),1);\n",
" plot(t, xn)\n",
" sound(xn, fs)\n",
"\n",
"2. Separate, plot, and listen.\n",
"\n",
" [yn, Wn, Hn] = sourcesep(xn, fs, K);\n",
" plot(t, yn)\n",
" sound(yn(:,1), fs)\n",
" sound(yn(:,2), fs)\n",
" \n",
" How robust to noise is this separation method? \n",
"\n",
" Compared to the noisy input signal, how much noise is left in the output signals? \n",
"\n",
" Which output contains more noise? Why?\n",
"\n",
"\n",
"Section 3: Classification\n",
"-------------------------\n",
"\n",
"Follow the K-NN example in Lab 1, but classify the *separated* signals.\n",
"\n",
"As in Lab 1, extract features from each training sample in the kick and snare drum directories.\n",
"\n",
"1. Train a K-NN model using the kick and snare drum samples.\n",
"\n",
" labels=[[ones(10,1) zeros(10,1)];\n",
" [zeros(10,1) ones(10,1)]];\n",
" model_snare = knn(5, 2, 1, trainingFeatures, labels);\n",
" [voting, model_output] = knnfwd(model_snare, featuresScaled)\n",
"\n",
"2. Extract features from the drum signals that you separated in Lab 4 Section 1. \n",
"\n",
"3. Classify them using the K-NN model that you built.\n",
"\n",
" Does K-NN accurately classify the separated signals?\n",
"\n",
"4. Repeat for different numbers of separated signals (i.e., the parameter `K` in NMF). \n",
"\n",
"5. Overseparate the signal using `K = 20` or more. For those separated components that are classified as snare, add them together using `sum}. The listen to the sum signal. Is it coherent, i.e., does it sound like a single separated drum?\n",
"\n",
"...and more!\n",
"\n",
"* If you have another idea that you would like to try out, please ask me!\n",
"* Feel free to collaborate with a partner. Together, brainstorm your own problems, if you want!\n",
"\n",
"Good luck!\n"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [],
"language": "python",
"metadata": {},
"outputs": []
}
],
"metadata": {}
}
]
}
\ No newline at end of file