{"id":1104,"date":"2014-07-10T01:10:21","date_gmt":"2014-07-10T01:10:21","guid":{"rendered":"http:\/\/www.trivedigaurav.com\/blog\/?p=1104"},"modified":"2020-01-10T03:41:00","modified_gmt":"2020-01-10T03:41:00","slug":"quoc-les-lectures-on-deep-learning","status":"publish","type":"post","link":"https:\/\/www.trivedigaurav.com\/blog\/quoc-les-lectures-on-deep-learning\/","title":{"rendered":"Quoc Le&#8217;s Lectures on Deep Learning"},"content":{"rendered":"<blockquote><p><strong>Update:&nbsp;<\/strong> Dr. Le has posted tutorials on this topic: <a href=\"http:\/\/cs.stanford.edu\/~quocle\/tutorial1.pdf\">Part 1<\/a> and <a href=\"http:\/\/cs.stanford.edu\/~quocle\/tutorial2.pdf\">Part 2<\/a>.<\/p><\/blockquote>\n<p><a href=\"http:\/\/cs.stanford.edu\/~quocle\/\">Dr. Quoc Le<\/a> from the <a href=\"http:\/\/en.wikipedia.org\/wiki\/Google_Brain\">Google Brain<\/a> project team (yes, the one that made <a href=\"http:\/\/www.nytimes.com\/2012\/06\/26\/technology\/in-a-big-network-of-computers-evidence-of-machine-learning.html?pagewanted=all&amp;_r=0\">headlines<\/a> for creating a cat recognizer) presented a series of lectures at the <a href=\"http:\/\/www.mlss2014.com\/\">Machine Learning Summer School (MLSS &#8217;14)<\/a> in Pittsburgh this week. This has been my favorite lecture series at the event so far, and I was glad to be able to attend it.<\/p>\n<p>The good news is that the organizers have made <a href=\"http:\/\/www.mlss2014.com\/materials.html\">available<\/a> the <a href=\"https:\/\/www.youtube.com\/watch?v=4myTpLua0EM&amp;index=1&amp;list=PLZSO_6-bSqHQCIYxE3ycGLXHMjK3XV7Iz\">entire set<\/a> of video lectures in 4K for you to watch. But since Dr. Le delivered most of them on a blackboard, without any accompanying slides, I decided to put brief descriptions of each lecture's contents along with the videos here. 
I hope this helps you navigate the videos better.<\/p>\n<h2>Lecture 1: Neural Networks Review<\/h2>\n<div id=\"ytlecture1\"><em>[JavaScript needed to view this video.]<\/em><\/div>\n<p>Dr. Le begins his lecture with the fundamentals of neural networks, in case you&#8217;d like to brush up on them. Otherwise, feel free to skim through the initial sections, but I promise there are interesting things later on. You may use the links below to skip to the relevant parts. The links use an experimental script, so let me know in the comments if they don&#8217;t work.<\/p>\n<h4>Contents<\/h4>\n<ul>\n<li><a href=\"#\">Introduction<\/a><\/li>\n<li><a href=\"#\">Why Neural Networks:<\/a> Motivation, Non-linear classification<\/li>\n<li><a href=\"#\">Mathematical Expression for NN:<\/a> Decision function, Minimizing loss and gradient descent (<em><a href=\"#\">Correction<\/a> in derivative<\/em>), Making decisions<\/li>\n<li><a href=\"#\">Backpropagation:<\/a> Audience questions, <a href=\"#\">Derivation for backpropagation<\/a>, <a href=\"#\">Backpropagation algorithm<\/a><\/li>\n<\/ul>\n<hr>\n<h2>Lecture 2: NNs in Practice<\/h2>\n<div id=\"ytlecture2\"><em>[JavaScript needed to view this video.]<\/em><\/div>\n<p>If you have already covered neural networks in the past, the first lecture may have felt a bit dry, but the real fun begins in this one, when Dr. 
Le starts talking about his experience of using deep learning in practice.<\/p>\n<h4>Contents<\/h4>\n<ul>\n<li><a href=\"#\">Stochastic gradient descent<\/a><\/li>\n<li><a href=\"#\">Clarifications from Lecture 1:<\/a> Data partitioning is not needed, Derivative of the loss function, Tip &#8211; <a href=\"#\">Write unit tests!<\/a><\/li>\n<li><a href=\"#\">Ideas for practical implementations:<\/a> Breaking symmetry, Monitoring progress during training, Underfitting and overfitting, How to select NN architecture and hyper-parameters, Other tips for improvements<\/li>\n<li><a href=\"#\">Deep Neural Networks:<\/a> Review of why NN, Shallow vs. deep, <a href=\"#\">Rectified Linear Units<\/a>, <a href=\"#\">Definitions for deep NN<\/a>, <a href=\"#\">History of NN<\/a><\/li>\n<li><a href=\"#\">Deep NN Architectures:<\/a> Autoencoder, <a href=\"#\">Intuition for using autoencoders for initialization<\/a> (<em>Continued in the next lecture<\/em>)<\/li>\n<\/ul>\n<hr>\n<h2>Lecture 3: Deep NN Architectures<\/h2>\n<div id=\"ytlecture3\"><em>[JavaScript needed to view this video.]<\/em><\/div>\n<p>In this lecture, Dr. Le finishes his description of NN architectures. 
He also talks a bit about how they are being used at Google &#8211; for applications in image and speech recognition, and language modelling.<\/p>\n<h4>Contents<\/h4>\n<ul>\n<li><a href=\"#\">Pre-training with autoencoders<\/a><\/li>\n<li><a href=\"#\">Convolutional NN (Convnets)<\/a>: Local receptive field, Why are Convnets useful?, Image classification, General Pipeline<\/li>\n<li><a href=\"#\">Recurrent NN:<\/a> <a href=\"#\">Word Vectors<\/a><\/li>\n<li><a href=\"#\">Applications:<\/a> Google Brain and other ongoing work<\/li>\n<\/ul>\n<p><script src=\"https:\/\/www.youtube.com\/iframe_api\" type=\"text\/javascript\"><\/script><script src=\"\/exp\/youtube\/youtube.js\" type=\"text\/javascript\"><\/script><script type=\"text\/javascript\">\/\/ <![CDATA[\nwindow.onYouTubeIframeAPIReady =  function () {\nprepareVideo('[{\"id\":\"ytlecture1\", \"v\":\"IxflKHX7aes\", \"ratio\":1.77778},{\"id\":\"ytlecture2\", \"v\":\"0EM1v6jDD_E\", \"ratio\":1.77778},{\"id\":\"ytlecture3\", \"v\":\"6yHO8pi0GZ8\", \"ratio\":1.77778}]');\n}\n\/\/ ]]><\/script><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Update:&nbsp; Dr. Le has posted tutorials on this topic: Part 1 and Part 2. Dr. Quoc Le from the Google Brain project team (yes, the one that made headlines for creating a cat recognizer) presented a series of lectures at the Machine Learning Summer School (MLSS &#8217;14) in Pittsburgh this week. 
This is my favorite &hellip; <a href=\"https:\/\/www.trivedigaurav.com\/blog\/quoc-les-lectures-on-deep-learning\/\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">Quoc Le&#8217;s Lectures on Deep Learning<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[13,14],"tags":[],"class_list":["post-1104","post","type-post","status-publish","format-standard","hentry","category-machine-learning","category-talks"],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p46eol-hO","jetpack-related-posts":[],"jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/www.trivedigaurav.com\/blog\/wp-json\/wp\/v2\/posts\/1104","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.trivedigaurav.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.trivedigaurav.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.trivedigaurav.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.trivedigaurav.com\/blog\/wp-json\/wp\/v2\/comments?post=1104"}],"version-history":[{"count":127,"href":"https:\/\/www.trivedigaurav.com\/blog\/wp-json\/wp\/v2\/posts\/1104\/revisions"}],"predecessor-version":[{"id":2870,"href":"https:\/\/www.trivedigaurav.com\/blog\/wp-json\/wp\/v2\/posts\/1104\/revisions\/2870"}],"wp:attachment":[{"href":"https:\/\/www.trivedigaurav.com\/blog\/wp-json\/wp\/v2\/media?parent=1104"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.trivedigaurav.com\/blog\/wp-json\/wp\/v2\/categories?post=1104"},{"
taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.trivedigaurav.com\/blog\/wp-json\/wp\/v2\/tags?post=1104"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}