Introduction to Machine Learning

Course No. 9070
Professor Michael L. Littman, PhD
Brown University

What Will You Learn?

  • Learn how machine learning works and how it is transforming society
  • Survey the three broad types of machine learning: supervised, unsupervised, and reinforcement
  • Dig into the procedures for writing and testing a machine learning program
  • Use Python to try out sample programs for all flavors of machine learning
  • Explore the many applications of machine learning, from medical diagnosis to games to media special effects

Course Overview

We live on a planet with billions of people—but also billions of computers, many of them programmed to evaluate and make decisions much as humans do. We don’t yet reside among truly intelligent machines, but they are getting there, and knowing how machines learn is crucial for everyone from professionals to students to ordinary citizens. Machine learning pervades our culture in a multitude of ways, through tools and practices from medical diagnosis and data management to speech synthesis and search engines.

An offshoot of artificial intelligence, machine learning takes programming a giant step beyond the traditional role of computers in routine data processing, such as scheduling, keeping accounts, and making calculations. Now computers are being programmed to figure out how to solve problems by themselves—problems that are so complex that humans often don’t know where to begin. Indeed, machine learning has become so advanced that, often, even the experts don’t know how a computer arrives at the solution it does.

Introduction to Machine Learning demystifies this revolutionary discipline in 25 try-it-yourself lessons taught by award-winning educator and researcher Michael L. Littman, the Royce Family Professor of Teaching Excellence in Computer Science at Brown University. Dr. Littman guides you through the history, concepts, and techniques of machine learning, using the popular computer language Python to give you hands-on experience with the most widely used programs and specialized libraries.

For those new to Python, this course includes a lecture that is a dedicated tutorial on how to get started with this versatile, easy-to-use language. Professor Littman includes approximately one Python demonstration in each lesson. Even if you have never written code in Python, or any language, you can still run these programs for yourself to get a feeling for the amazing power of machine learning.

Get Started with Machine Learning

Backed by Bach-inspired music composed by a machine learning program, Professor Littman opens the course with playful displays of the technology: automatic voice transcription, word prediction, face aging, foreign language translation, voice simulation, and more. Then he launches into a real-world example: how to use machine learning to listen to heartbeats and diagnose heart disease. Traditional computer programs only do what you tell them to, and medical software would typically match a set of symptoms to already well-established diagnoses. But the advantage of machine learning is that the computer is set loose to find patterns that may have escaped human observation.

How does it do it? Professor Littman walks you through the process, which starts with choosing a “representational space”—a formal description that defines how to approach the problem. The representational space is the domain of all possible rules, or algorithms, which the machine-learning program should consider. It’s called a space because it encompasses an array of possibilities that can be made more or less expansive depending on the data and time available. The next step is defining the “loss function,” which determines how the possible rules in the representational space are assessed; better rules get better scores. Finally, a program called the “optimizer” rummages through the representational space to find the rules that score well. One or more of these rules become the preferred solution to the problem.
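The three-part recipe above can be sketched in a few lines of Python. This is a toy illustration invented for this description, not code from the course: the representational space is the set of threshold rules on a single number, the loss function counts misclassifications, and the optimizer simply tries every candidate threshold.

```python
# Toy illustration of the three-part machine-learning recipe.
# Representational space: rules of the form "predict 1 if x >= t".
# Loss function: the number of misclassified examples.
# Optimizer: exhaustive search over candidate thresholds.

data = [(1, 0), (2, 0), (3, 0), (6, 1), (7, 1), (9, 1)]  # (x, label) pairs

def loss(threshold):
    """Count how many examples the rule 'x >= threshold' gets wrong."""
    return sum((x >= threshold) != bool(y) for x, y in data)

# The optimizer rummages through the space and keeps the best-scoring rule.
candidates = [x for x, _ in data]
best = min(candidates, key=loss)
print(best, loss(best))  # prints: 6 0
```

Real optimizers are far cleverer than exhaustive search, but the division of labor is the same: a space of rules, a score for each rule, and a search for high-scoring rules.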

Dig into the Details

In Introduction to Machine Learning, you investigate three major types of representational spaces, focusing on the types of problems they excel at solving.

  • Decision Trees: Anyone who has dealt with a phone menu has faced a decision tree. “For sales, press 1. For accounts, press 2.” Each choice is followed by additional choices, until you get the person or department you want. Decision trees are a natural fit for machine-learning problems that require “if-then” reasoning, such as many medical diagnoses.
  • Bayesian Networks: In contrast to decision trees, which rely on a sequence of deductions, Bayesian networks involve inferences from probability. They are well-suited to cases where you need to work backwards from the data to their likely causes. A prominent example is software that identifies probable spam messages.
  • Neural Networks: Designed to work like neurons in the brain, neural networks excel at perceptual tasks, such as image recognition, language processing, and data classification. Deep neural networks are composed of networks of networks and are the heart of the “deep learning” revolution that Professor Littman covers in detail.
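To give a flavor of the first of these, a decision tree is just nested if-then logic. Here is a hand-written miniature (a hypothetical triage rule invented for illustration; a machine-learning program would learn such a tree from data rather than have it written by hand):

```python
def triage(has_fever, has_cough, short_of_breath):
    """A hand-built decision tree: each branch asks one yes/no question."""
    if has_fever:
        if short_of_breath:
            return "urgent care"
        return "see a doctor"
    if has_cough:
        return "rest and monitor"
    return "no action"

print(triage(True, False, True))  # prints: urgent care
```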

You delve into the mechanics of each of these strategies as well as their pitfalls, especially overfitting, which occurs when a rule works too well. Overfitting may sound like a good thing, but it is a sign that the rule is tailored too closely to the original data and may fail on new data, where a more general rule is needed. Professor Littman explains how to steer clear of this hazard and deal with other problems, such as hidden biases, sampling flaws, and false positives.
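The danger is easy to demonstrate in miniature. In this contrived sketch (with made-up data), a rule that simply memorizes its training examples scores perfectly on them yet stumbles on held-out data, while a simpler rule generalizes:

```python
# Contrived sketch of overfitting: a memorizer vs. a simple threshold rule.
train = [(1, 0), (2, 0), (8, 1), (9, 1)]
holdout = [(3, 0), (7, 1)]  # new data the learner has never seen

memory = dict(train)
def memorizer(x):
    # "Overfit" rule: perfect on the training data, clueless elsewhere.
    return memory.get(x, 0)

def threshold_rule(x):
    # Simpler, more general rule consistent with the same training data.
    return int(x >= 5)

def error(rule, examples):
    return sum(rule(x) != y for x, y in examples)

print(error(memorizer, train), error(memorizer, holdout))            # prints: 0 1
print(error(threshold_rule, train), error(threshold_rule, holdout))  # prints: 0 0
```

Measuring error on held-out data, as the last two lines do, is the standard way to catch a rule that has merely memorized its training set.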

Embark on Your Own Coding Adventures

Another way to classify machine-learning programs is by the degree of human input involved. Does the programmer specify a desired outcome or leave it to the computer—or is the approach something in between? These different strategies are:

  • Supervised Learning: Here, the desired answer is supplied by the programmer as a training dataset that acts as a teacher to guide the learning process. Recommender systems where the user rates a product work like this. So do a host of other machine-learning programs where examples are labeled with their relevant attribute.
  • Unsupervised Learning: This approach is like having no teacher at all. There is no right answer, just training data to be compared to test data in a search for similarity. News story recommender systems typically work this way, since people rarely rate the news.
  • Reinforcement Learning: This hybrid strategy is Dr. Littman’s favorite style of machine learning. Think of it as having access to a critic. You are not being told what to do; you are simply getting feedback on how well you did it. The many examples of reinforcement learning in this course include an entire lesson on game-playing programs.
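To make the “critic” idea concrete, here is a bare-bones reinforcement-learning sketch: a two-armed bandit with made-up reward probabilities, using epsilon-greedy action selection (a standard textbook choice, not necessarily the course’s). The learner is never told which action is correct; it only receives a reward score after each try.

```python
import random

random.seed(0)

# Two actions with unknown average rewards; the critic only reports a score.
true_reward = {"a": 0.2, "b": 0.8}  # hidden from the learner

estimates = {"a": 0.0, "b": 0.0}
counts = {"a": 0, "b": 0}

for step in range(1000):
    # Epsilon-greedy: mostly exploit the best estimate, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(["a", "b"])
    else:
        action = max(estimates, key=estimates.get)
    reward = 1 if random.random() < true_reward[action] else 0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))  # the learner settles on "b"
```

After a thousand rounds of feedback, the estimate for the genuinely better action dominates, even though no one ever labeled either action as “right.”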

Throughout this extraordinary course, you dig deeply into the uses of machine learning for cutting-edge problems in research, education, business, entertainment, and daily life. You also consider the social implications of machine learning, which are likely to loom ever larger as its influence grows. Dr. Littman stresses that it’s up to each of us to ensure that this technology is applied in ways that benefit us all.

Therefore, it’s up to us to boost our machine-learning literacy. Many people regard the subject as a black box, where inscrutable things happen that lead to today’s technological wonders. Fortunately, Professor Littman has a gift for making opaque processes not only clear, but captivating. Introduction to Machine Learning will open your eyes to this thrilling field and, better yet, pave the way for your own coding adventures in machine learning.

25 lectures | Average 29 minutes each
  • 1
    Telling the Computer What We Want
    Professor Littman gives a bird's-eye view of machine learning, covering its history, key concepts, terms, and techniques as a preview for the rest of the course. Look at a simple example involving medical diagnosis. Then focus on a machine-learning program for a video green screen, used widely in television and film. Contrast this with a traditional program to solve the same problem.
  • 2
    Starting with Python Notebooks and Colab
    The demonstrations in this course use the Python programming language, the most popular and widely supported language in machine learning. Dr. Littman shows you how to run programming examples from your web browser, which avoids the need to install the software on your own computer, saving installation headaches and giving you more processing power than is available on a typical home computer.
  • 3
    Decision Trees for Logical Rules
    Can machine learning beat a rhyming rule, taught in elementary school, for determining whether a word is spelled with an I-E or an E-I, as in "diet" and "weigh"? Discover that a decision tree is a convenient tool for approaching this problem. After experimenting, use Python to build a decision tree for predicting the likelihood for an individual to develop diabetes based on eight health factors.
  • 4
    Neural Networks for Perceptual Rules
    Graduate to a more difficult class of problems: learning from images and auditory information. Here, it makes sense to address the task more or less the way the brain does, using a form of computation called a neural network. Explore the general characteristics of this powerful tool. Among the examples, compare decision-tree and neural-network approaches to recognizing handwritten digits.
  • 5
    Opening the Black Box of a Neural Network
    Take a deeper dive into neural networks by working through a simple algorithm implemented in Python. Return to the green screen problem from the first lecture to build a learning algorithm that places the professor against a new backdrop.
  • 6
    Bayesian Models for Probability Prediction
    A program need not understand the content of an email to know with high probability that it's spam. Discover how machine learning does so with the Naive Bayes approach, which is a simplified application of Bayes' theorem to a simplified model of language generation. The technique illustrates a very useful strategy: going backwards from effects (in this case, words) to their causes (spam).
  • 7
    Genetic Algorithms for Evolved Rules
    When you encounter a new type of problem and don't yet know the best machine learning strategy to solve it, a ready first approach is a genetic algorithm. These programs apply the principles of evolution to artificial intelligence, employing natural selection over many generations to optimize your results. Analyze several examples, including finding where to aim.
  • 8
    Nearest Neighbors for Using Similarity
    Simple to use and speedy to execute, the nearest neighbor algorithm works on the principle that adjacent elements in a dataset are likely to share similar characteristics. Try out this strategy for determining a comfortable combination of temperature and humidity in a house. Then dive into the problem of malware detection, seeing how the nearest neighbor rule can sort good software from bad.
  • 9
    The Fundamental Pitfall of Overfitting
    Having covered the five fundamental classes of machine learning in the previous lessons, now focus on a risk common to all: overfitting. This is the tendency to model training data too well, which can harm the performance on the test data. Practice avoiding this problem using the diabetes dataset from lecture 3. Hear tips on telling the difference between real signals and spurious associations.
  • 10
    Pitfalls in Applying Machine Learning
    Explore pitfalls that loom when applying machine learning algorithms to real-life problems. For example, see how survival statistics from a boating disaster can easily lead to false conclusions. Also, look at cases from medical care and law enforcement that reveal hidden biases in the way data is interpreted. Since an algorithm is doing the interpreting, understanding what is happening can be a challenge.
  • 11
    Clustering and Semi-Supervised Learning
    See how a combination of labeled and unlabeled examples can be exploited in machine learning, specifically by using clustering to learn about the data before making use of the labeled examples.
  • 12
    Recommendations with Three Types of Learning
    Recommender systems are ubiquitous, from book and movie tips to work aids for professionals. But how do they function? Look at three different approaches to this problem, focusing on Professor Littman's dilemma as an expert reviewer for conference paper submissions, numbering in the thousands. Also, probe Netflix's celebrated one-million-dollar prize for an improved recommender algorithm.
  • 13
    Games with Reinforcement Learning
    In 1959, computer pioneer Arthur Samuel popularized the term "machine learning" for his checkers-playing program. Delve into strategies for the board game Othello as you investigate today's sophisticated algorithms for improving play, at least for the machine. Also explore game-playing tactics for chess, Jeopardy!, poker, and Go, which have been a hotbed for machine-learning research.
  • 14
    Deep Learning for Computer Vision
    Discover how the ImageNet challenge helped revive the field of neural networks through a technique called deep learning, which is ideal for tasks such as computer vision. Consider the problem of image recognition and the steps deep learning takes to solve it. Dr. Littman throws out his own challenge: Train a computer to distinguish foot files from cheese graters.
  • 15
    Getting a Deep Learner Back on Track
    Roll up your sleeves and debug a deep-learning program. The software is a neural net classifier designed to separate pictures of animals and bugs. In this case, fix the bugs in the code to find the bugs in the images! Professor Littman walks you through diagnostic steps relating to the representational space, the loss function, and the optimizer. It's an amazing feeling when you finally get the program working well.
  • 16
    Text Categorization with Words as Vectors
    Previously, you saw how machine learning is used in spam filtering. Dig deeper into problems of language processing, such as how a computer guesses the word you are typing, even when it's badly misspelled. Focus on the concept of word embeddings, which "define" the meanings of words using vectors in high-dimensional space, a method that involves techniques from linear algebra.
  • 17
    Deep Networks That Output Language
    Continue your study of machine learning and language by seeing how computers not only read text, but how they can also generate it. Explore the current state of machine translation, which rivals the skill of human translators. Also, learn how algorithms handle a game that Professor Littman played with his family, where a given phrase is expanded piecemeal to create a story. The results can be quite poetic!
  • 18
    Making Stylistic Images with Deep Networks
    One way to think about the creative process is as a two-stage operation, involving an idea generator and a discriminator. Study two approaches to image generation using machine learning. In the first, a target image of a pig serves as the discriminator. In the second, the discriminator is programmed to recognize the general characteristics of a pig, which is more how people recognize objects.
  • 19
    Making Photorealistic Images with GANs
    A new approach to image generation and discrimination pits both processes against each other in a "generative adversarial network," or GAN. The technique can produce a new image based on a reference class, for example making a person look older or younger, or automatically filling in a landscape after a building has been removed. GANs have great potential for creativity and, unfortunately, fraud.
  • 20
    Deep Learning for Speech Recognition
    Consider the problem of speech recognition and the quest, starting in the 1950s, to program computers for this task. Then delve into the machine-learning algorithms behind today's sophisticated speech recognition systems. Get a taste of the technology by training with deep-learning software for recognizing simple words. Finally, look ahead to the prospect of conversing computers.
  • 21
    Inverse Reinforcement Learning from People
    Are you no good at programming? Machine learning can, given a demonstration, predict what you want and suggest improvements. For example, inverse reinforcement turns the tables on the following logical relation: "If you are a horse and like carrots, go to the carrot." Inverse reinforcement looks at it like this: "If you see a horse go to the carrot, it might be because the horse likes carrots."
  • 22
    Causal Inference Comes to Machine Learning
    Get acquainted with a powerful new tool in machine learning, causal inference, which addresses a key limitation of classical methods, namely the focus on correlation to the exclusion of causation. Practice with a historic problem of causation: the link between cigarette smoking and cancer, which will always be obscured by confounding factors. Also look at other cases of correlation versus causation.
  • 23
    The Unexpected Power of Over-Parameterization
    Probe the deep-learning revolution that took place around 2015, conquering worries about overfitting data due to the use of too many parameters. Dr. Littman sets the stage by taking you back to his undergraduate psychology class, taught by one of The Great Courses' original professors. Chart the breakthrough that paved the way for deep networks that can tackle hard, real-world learning problems.
  • 24
    Protecting Privacy within Machine Learning
    Machine learning is both a cause and a cure for privacy concerns. Hear about two notorious cases where de-identified data was unmasked. Then, step into the role of a computer security analyst, evaluating different threats, including pattern recognition and compromised medical records. Discover how to think like a digital snoop and evaluate different strategies for thwarting an attack.
  • 25
    Mastering the Machine Learning Process
    Finish the course with a lightning tour of meta-learning-algorithms that learn how to learn, making it possible to solve problems that are otherwise unmanageable. Examine two approaches: one that reasons about discrete problems using satisfiability solvers and another that allows programmers to optimize continuous models. Close with a glimpse of the future for this astounding field.


What's Included

What Does Each Format Include?

Instant Video Includes:
  • Ability to download 25 video lectures from your digital library
  • Downloadable PDF of the course guidebook
  • FREE video streaming of the course from our website and mobile apps
DVD Includes:
  • 25 lectures on 4 DVDs
  • Printed course guidebook
  • Downloadable PDF of the course guidebook
  • FREE video streaming of the course from our website and mobile apps
  • Closed captioning available

What Does The Course Guidebook Include?

Course Guidebook Details:
  • Printed course guidebook
  • Photos & illustrations
  • Suggested readings
  • Questions to consider

Enjoy This Course On-the-Go with Our Mobile Apps!*

  • App Store: iPhone + iPad
  • Google Play: Android Devices
  • Kindle Fire: Kindle Fire Tablet + Fire Phone
*Courses can be streamed from anywhere you have an internet connection. Standard carrier data rates may apply in areas that do not have Wi-Fi connections, pursuant to your carrier contract.

About Your Professor

Michael L. Littman, PhD
Brown University
Michael L. Littman is the Royce Family Professor of Teaching Excellence in Computer Science at Brown University. He earned his bachelor’s and master’s degrees in Computer Science from Yale University and his PhD in Computer Science from Brown University. Professor Littman’s teaching has received numerous awards, including the Robert B. Cox Award from Duke University, the Warren I. Susman Award for...


Introduction to Machine Learning is rated 3.4 out of 5 by 5 reviewers.
Rated 5 out of 5: The mystique of Machine Learning dispelled! I am working with a team of engineers to develop a web-based AI system for my startup, and not being an IT guy myself, feel it is imperative to understand at least the basics of Machine Learning (ML) and other elements of AI. After watching the course, I can state unequivocally that most of the mystique surrounding the subject has been dispelled. I am not saying that writing the software for ML analysis and other functions is easy, but at least the process by which such fantastic technology is made possible is clearer to me, thanks to the course. So, inasmuch as I might not be writing the next blockbuster ML algorithm, I can now participate fully in the creation of my system. The professor’s presentation was simply superb!
Date published: 2020-11-20
Rated 1 out of 5: The greatness is over... I used to be a fan of The Great Courses. But now their courses are more mind numbing than reality TV. They don't care anymore about teaching, just pandering to the masses. That's probably why the professor babbles on and on about what we're going to learn instead of actually teaching it. So much more could be said, but no need. The course is just as bad as the streaming services they're trying to copy.
Date published: 2020-11-16
Rated 1 out of 5: So much unrelated fluff! The Great Courses is on a streak of dumbing down things to make people feel smart without earning any real knowledge. This is not an exception. If you want to learn machine learning, this course is not for you. The professor mostly talks about learning machine learning instead of teaching it. YouTube gives free and better sources. This course is only good for people who want to pretend they know anything about it, just like their differential equations course... Probably one of their worst technical courses ever. They didn't even try on this one.
Date published: 2020-11-16
