Deep Learning and Neural Networks
Advanced Research Seminar I/III
Graduate School of Information Science
Nara Institute of Science and Technology
Kevin Duh, IS Building Room A-705
Office hours: after class, or appointment by email (email@example.com where x=kevinduh)
Deep Learning is a family of methods that exploit deep architectures to learn high-level feature representations from data. Recently, these methods have helped researchers achieve impressive results in various fields within Artificial Intelligence, such as speech recognition, computer vision, and natural language processing. This course provides an overview of Deep Learning and Neural Networks; the goal is to establish a foundational understanding sufficient for students to start reading research papers in this exciting and growing area.
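To make the idea of a deep architecture concrete, here is a minimal illustrative sketch (not course material): a small feed-forward network that transforms raw inputs into higher-level features by stacking several layers, each applying a linear map followed by a ReLU nonlinearity. The layer sizes and random weights are arbitrary assumptions for the example.

```python
import numpy as np

def relu(x):
    # Elementwise rectified linear unit: max(0, x)
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass input x through a stack of (W, b) layers with ReLU activations.

    Each layer computes h = relu(W @ h + b); stacking several such layers
    is what makes the architecture "deep"."""
    h = x
    for W, b in layers:
        h = relu(W @ h + b)
    return h

# A toy deep architecture: 4 inputs -> 8 hidden -> 8 hidden -> 3 output features.
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 3]
layers = [(0.1 * rng.standard_normal((m, n)), np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]

features = forward(rng.standard_normal(4), layers)
print(features.shape)  # (3,)
```

In practice the weights are learned from data (e.g. by backpropagation) rather than left random; this sketch only shows how depth composes simple transformations into a feature hierarchy.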
Prerequisites: basic calculus, probability, and linear algebra.
Jan 14, 16, 21, 23 (9:20-10:50am) @ IS Building Room L2
- Lecture 4 (Jan 23): Advanced Topics in Optimization (Hessian-free optimization, Dropout, Large-scale distributed training, Hyper-parameter search)
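Among the Lecture 4 topics, Dropout is simple enough to preview in a few lines. The sketch below shows the standard "inverted dropout" formulation (a common implementation choice, assumed here): at training time each unit is zeroed with probability p_drop and the survivors are rescaled, so no change is needed at test time.

```python
import numpy as np

def dropout(h, p_drop, rng, train=True):
    """Inverted dropout on an activation vector h.

    At training time, each unit is dropped (set to 0) with probability
    p_drop; surviving units are scaled by 1/(1 - p_drop) so the expected
    activation matches test time, when the layer is the identity."""
    if not train:
        return h
    mask = rng.random(h.shape) >= p_drop  # True = keep the unit
    return h * mask / (1.0 - p_drop)

rng = np.random.default_rng(0)
h = np.ones(10000)
h_train = dropout(h, p_drop=0.5, rng=rng)
# Roughly half the units are zero; the mean stays near 1.0 in expectation.
```

The lecture covers why this acts as a regularizer; the code only fixes the mechanics of the forward pass.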
Two video options are available: Video (HD) includes slide synchronization and requires Adobe Flash Player version 10 or above; Video (YouTube) may load faster and is recommended if you have trouble with Video (HD).
If you find errors, typos, or bugs in the slides/video, please let me know.
- Short surveys and tutorials:
- Yoshua Bengio’s monograph (available online): Learning Deep Architectures for AI
- Yann LeCun & Marc’Aurelio Ranzato’s ICML2013 tutorial (computer vision perspective)
- Richard Socher et al.'s NAACL2013 tutorial (natural language processing perspective)
- Li Deng’s talk at Johns Hopkins University CSLP (speech recognition perspective)
- In-depth lectures and books:
- To go even deeper:
Other reference: deep learning videos on YouTube