Commit 9da3c372 authored by Steve Tjoa

clarify about

parent 50972ea9
@@ -66,6 +66,8 @@ To edit musicinformationretrieval.com:
$ git commit
$ git push
+You may need to wait 1-2 minutes before the changes are live on GitHub Pages.
Appendix
--------
......
@@ -169,6 +169,16 @@ div#notebook {
<div tabindex="-1" id="notebook" class="border-box-sizing">
<div class="container" id="notebook-container">
+<div class="cell border-box-sizing text_cell rendered">
+<div class="prompt input_prompt">
+</div>
+<div class="inner_cell">
+<div class="text_cell_render border-box-sizing rendered_html">
+<p><a href="index.html">&larr; Back to Index</a></p>
+</div>
+</div>
+</div>
<div class="cell border-box-sizing text_cell rendered">
<div class="prompt input_prompt">
</div>
@@ -183,7 +193,7 @@ div#notebook {
</div>
<div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
-<p>musicinformationretrieval.com is a collection of instructional materials for music information retrieval. These materials contain a mix of text, technical and quantitative discussion, and Python code.</p>
+<p><strong>musicinformationretrieval.com</strong> is a collection of instructional materials for music information retrieval. These materials contain a mix of text, technical and quantitative discussion, and Python code.</p>
<p>These pages, including the one you're reading, are authored using <a href="http://jupyter.org/">Jupyter notebooks</a>, formerly known as <a href="https://ipython.org/notebook.html">IPython notebooks</a>. They are statically hosted using <a href="https://pages.github.com/">GitHub Pages</a>. The GitHub repository is found here: <a href="https://github.com/stevetjoa/stanford-mir">stevetjoa/stanford-mir</a>.</p>
<p>This material is used during the annual Summer Workshop on Music Information Retrieval at CCRMA, Stanford University (<a href="https://ccrma.stanford.edu/workshops/music-information-retrieval-mir">description and registration</a>).</p>
@@ -205,9 +215,9 @@ div#notebook {
<div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>The MIR workshop teaches the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using MIR algorithms. It lasts five full days, Monday through Friday. It was founded by <a href="https://www.linkedin.com/in/jayleboeuf">Jay LeBoeuf</a> (Real Industry, CCRMA consulting professor) in 2008.</p>
-<p>The workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of music information retrieval (MIR). We demonstrate the exciting technologies enabled by basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. Our goal is to make the understanding and application of highly-interdisciplinary technologies and complex algorithms approachable.</p>
+<p>The workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of music information retrieval (MIR). We demonstrate the technologies enabled by signal processing and machine learning. Lectures cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search and retrieval, and design and evaluation of classification systems. Our goal is to make the understanding and application of highly-interdisciplinary technologies and complex algorithms approachable.</p>
<p>Knowledge of basic digital audio principles is recommended. Experience with a scripting language such as Python or Matlab is desired. Students are encouraged to bring their own audio source material for course labs and demonstrations.</p>
-<p>The workshop consists of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs allow students to design basic "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.</p>
+<p>The workshop consists of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs allow students to design basic "intelligent audio systems" leveraging existing MIR toolboxes, programming environments, and applications. Labs include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems.</p>
</div>
</div>
@@ -301,6 +311,16 @@ div#notebook {
<li>2015: Eric Raymond, Stelios Andrew Stavroulakis, Richard Mendelsohn, Naithan Bosse, Alessio Bazzica, Karthik Yadati, Martha Larson, Stephen Hartzog, Philip Lee, Jaeyoung Choi, Matthew Gallagher, Yule Wu, Mark Renker, Rohit Ainapure, Eric Tarr, Allen Wu, Aaron Hipple</li>
</ul>
+</div>
+</div>
+</div>
+<div class="cell border-box-sizing text_cell rendered">
+<div class="prompt input_prompt">
+</div>
+<div class="inner_cell">
+<div class="text_cell_render border-box-sizing rendered_html">
+<p><a href="index.html">&larr; Back to Index</a></p>
</div>
</div>
</div>
......
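The hunks above are the statically rendered HTML output of the notebook being edited. As a minimal sketch (assuming the nbformat and nbconvert packages, with "index.ipynb" and "index.html" as placeholder filenames), a page like this could be regenerated for GitHub Pages as follows:

```python
# Sketch: render a Jupyter notebook to static HTML for GitHub Pages.
# Assumes nbformat/nbconvert are installed; the filenames are placeholders.
import nbformat
from nbconvert import HTMLExporter

nb = nbformat.read("index.ipynb", as_version=4)           # load the notebook JSON
body, resources = HTMLExporter().from_notebook_node(nb)   # render cells to an HTML page

with open("index.html", "w") as f:                        # write the page to be served statically
    f.write(body)
```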
{
"cells": [
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"[&larr; Back to Index](index.html)"
+]
+},
{
"cell_type": "markdown",
"metadata": {},
@@ -11,7 +18,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"musicinformationretrieval.com is a collection of instructional materials for music information retrieval. These materials contain a mix of text, technical and quantitative discussion, and Python code. \n",
+"**musicinformationretrieval.com** is a collection of instructional materials for music information retrieval. These materials contain a mix of text, technical and quantitative discussion, and Python code. \n",
"\n",
"These pages, including the one you're reading, are authored using [Jupyter notebooks](http://jupyter.org/), formerly known as [IPython notebooks](https://ipython.org/notebook.html). They are statically hosted using [GitHub Pages](https://pages.github.com/). The GitHub repository is found here: [stevetjoa/stanford-mir](https://github.com/stevetjoa/stanford-mir).\n",
"\n",
@@ -31,11 +38,11 @@
"source": [
"The MIR workshop teaches the underlying ideas, approaches, technologies, and practical design of intelligent audio systems using MIR algorithms. It lasts five full days, Monday through Friday. It was founded by [Jay LeBoeuf](https://www.linkedin.com/in/jayleboeuf) (Real Industry, CCRMA consulting professor) in 2008.\n",
"\n",
-"The workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of music information retrieval (MIR). We demonstrate the exciting technologies enabled by basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search, and retrieval techniques, and design and evaluation of machine classification systems. Our goal is to make the understanding and application of highly-interdisciplinary technologies and complex algorithms approachable.\n",
+"The workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of music information retrieval (MIR). We demonstrate the technologies enabled by signal processing and machine learning. Lectures cover topics such as low-level feature extraction, generation of higher-level features such as chord estimations, audio similarity clustering, search and retrieval, and design and evaluation of classification systems. Our goal is to make the understanding and application of highly-interdisciplinary technologies and complex algorithms approachable.\n",
"\n",
"Knowledge of basic digital audio principles is recommended. Experience with a scripting language such as Python or Matlab is desired. Students are encouraged to bring their own audio source material for course labs and demonstrations.\n",
"\n",
-"The workshop consists of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs allow students to design basic \"intelligent audio systems\", leveraging existing MIR toolboxes, programming environments, and applications. Labs include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems."
+"The workshop consists of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs allow students to design basic \"intelligent audio systems\" leveraging existing MIR toolboxes, programming environments, and applications. Labs include creation and evaluation of basic instrument recognition, transcription, and real-time audio analysis systems."
]
},
{
@@ -105,6 +112,13 @@
"- 2014: Krishna Kumar, Owen Campbell, Dan Cartoon, Rob Miller, Davide Fossati, Biagio Gallo, Joel Hunt, Shinobu Yamada, Fredom Luo, Sejin Oh, Phaedon Sinis, Xinyuan Lai, Greg Mertz, Matt Mitchell\n",
"- 2015: Eric Raymond, Stelios Andrew Stavroulakis, Richard Mendelsohn, Naithan Bosse, Alessio Bazzica, Karthik Yadati, Martha Larson, Stephen Hartzog, Philip Lee, Jaeyoung Choi, Matthew Gallagher, Yule Wu, Mark Renker, Rohit Ainapure, Eric Tarr, Allen Wu, Aaron Hipple"
]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"[&larr; Back to Index](index.html)"
+]
}
],
"metadata": {
......
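The notebook changes above insert the same "[&larr; Back to Index](index.html)" markdown cell at both the top and the bottom of the notebook. A minimal sketch of making this kind of edit programmatically with nbformat (the cell structure matches the JSON in the diff; "index.ipynb" is a placeholder filename, not necessarily the file changed in this commit):

```python
# Sketch: prepend and append a navigation cell to a notebook using nbformat.
# The markdown source matches the cell added in the diff; the filename is a placeholder.
import nbformat
from nbformat.v4 import new_markdown_cell

NAV = "[&larr; Back to Index](index.html)"

nb = nbformat.read("index.ipynb", as_version=4)
nb.cells.insert(0, new_markdown_cell(NAV))   # navigation link at the top of the page
nb.cells.append(new_markdown_cell(NAV))      # navigation link at the bottom of the page
nbformat.write(nb, "index.ipynb")
```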