Update: Dr. Le has posted tutorials on this topic: Part 1 and Part 2.

Dr. Quoc Le from the Google Brain project team (yes, the one that made headlines for creating a cat recognizer) presented a series of lectures at the Machine Learning Summer School (MLSS ’14) in Pittsburgh this week. This is my favorite lecture series from the event so far, and I was glad to be able to attend it.

The good news is that the organizers have made the entire set of video lectures available in 4K for you to watch. But since Dr. Le did most of them on a blackboard and did not provide any accompanying slides, I decided to post brief content descriptions of the lectures along with the videos here. I hope this helps you navigate the videos.

## Lecture 1: Neural Networks Review

*(Video: Lecture 1, embedded in the original post.)*

Dr. Le begins with the fundamentals of neural networks, in case you’d like to brush up on them. Otherwise, feel free to skim quickly through the initial sections; I promise there are interesting things later on. You can use the links below to skip to the relevant parts. The links use an experimental script, so let me know in the comments if they don’t work.

#### Contents

- Introduction
- Why Neural Networks: Motivation, Non-linear classification
- Mathematical Expression for NN: Decision function, Minimizing loss and gradient descent (*correction in derivative*), Making decisions
- Backpropagation: Audience questions, Derivation for backpropagation, Backpropagation algorithm
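To make the first lecture’s thread concrete, here is a toy sketch of the pieces it covers: a one-hidden-layer network as the decision function, a squared-error loss, gradients computed by backpropagation, and plain gradient-descent updates. This is my own illustration (the XOR data, layer sizes, and learning rate are arbitrary choices), not code from the lectures.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic example of a problem a linear classifier cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Small random initial weights for a 2 -> 4 -> 1 network.
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
lr = 1.0
for step in range(3000):
    # Forward pass: decision function out = sigmoid(W2 h + b2), h = sigmoid(W1 x + b1).
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(0.5 * np.mean((out - y) ** 2))

    # Backward pass (backpropagation): chain rule applied layer by layer.
    d_out = (out - y) * out * (1 - out) / len(X)   # dLoss/d(pre-activation of output)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * h * (1 - h)             # propagate error to hidden layer
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)

    # Gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The backward pass is just the derivative of the loss pushed through each layer in reverse; every `dW` has the same shape as the weight matrix it updates.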

## Lecture 2: NNs in Practice

*(Video: Lecture 2, embedded in the original post.)*

If you have already covered NNs in the past, the first lecture may have been a bit dry for you, but the real fun begins in this lecture, when Dr. Le starts talking about his experiences using deep learning in practice.

#### Contents

- Stochastic gradient descent
- Clarifications from Lecture 1: Data partitioning is not needed, Derivative of the loss function, Tip – Write unit tests!
- Ideas for practical implementations: Breaking Symmetry, Monitoring Progress on training, Underfitting and overfitting, How to select NN architecture and hyper-parameters, Other tips for improvements
- Deep Neural Networks: Review of why NN, Shallow vs. Deep, Rectified Linear Units, Definitions for deep NN, History of NN
- Deep NN Architectures: Autoencoder, Intuition for using autoencoders for initialization (*continued in the next lecture*)
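Since stochastic gradient descent is the first topic above, here is a minimal sketch of the idea on a problem where it is easy to see it work: least-squares linear regression, updated from shuffled mini-batches rather than the full dataset. The data, batch size, and learning rate are my own arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data from y = 3x - 2 plus a little noise.
X = rng.uniform(-1, 1, size=(200, 1))
y = 3 * X[:, 0] - 2 + rng.normal(scale=0.1, size=200)

w, b = 0.0, 0.0
lr = 0.1
for epoch in range(50):
    idx = rng.permutation(len(X))            # reshuffle the data each epoch
    for batch in np.array_split(idx, 20):    # mini-batches of ~10 examples
        xb, yb = X[batch, 0], y[batch]
        err = (w * xb + b) - yb              # prediction error on this batch only
        w -= lr * np.mean(err * xb)          # noisy gradient step from the batch
        b -= lr * np.mean(err)

print(f"learned w={w:.2f}, b={b:.2f}")
```

Each update uses a noisy estimate of the gradient from a handful of examples, which is what makes SGD cheap per step and practical for large datasets.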

## Lecture 3: Deep NN Architectures

*(Video: Lecture 3, embedded in the original post.)*

In this lecture, Dr. Le finishes his description of NN architectures. He also talks a bit about how they are being used at Google, for applications in image and speech recognition and language modelling.

#### Contents

- Pre-training with autoencoders
- Convolutional NN (Convnets): Local receptive field, Why are Convnets useful?, Image classification, General Pipeline
- Recurrent NN: Word Vectors
- Applications: Google Brain and other ongoing work
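The “local receptive field” idea from the convnet portion can be sketched in a few lines: each output unit sees only a small patch of the input, and the same filter weights are reused at every location. The function and filter below are my own toy example, not from the lecture.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 2-D 'valid' convolution (strictly, cross-correlation, as used in convnets)."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]   # the local receptive field
            out[i, j] = np.sum(patch * kernel)  # same shared weights everywhere
    return out

# A vertical-edge detector applied to an image with one sharp vertical edge.
image = np.zeros((5, 5))
image[:, 3:] = 1.0                         # dark on the left, bright on the right
edge_filter = np.array([[-1.0, 1.0]] * 3)  # responds where brightness jumps left-to-right

response = conv2d_valid(image, edge_filter)
print(response)
```

The response is zero everywhere except the column where the edge sits, which is why such filters make useful learned features: weight sharing lets one small detector cover the whole image.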
