- Lectures: Tue, Thu, 9:45am-11:15am, Nvidia Auditorium
- Office Hours and Sections: Google Calendar
Please use Ed for all questions related to lectures and coursework. For SCPD students, please email firstname.lastname@example.org or call 650-741-1542.
ermon [at] cs.stanford.edu
cundy [at] stanford.edu
avelu [at] stanford.edu
kellyyhe [at] stanford.edu
xiyan [at] stanford.edu
Probabilistic graphical models are a powerful framework for representing complex domains using probability distributions, with numerous applications in machine learning, computer vision, natural language processing and computational biology. Graphical models bring together graph theory and probability theory, and provide a flexible framework for modeling large collections of random variables with complex interactions. This course will provide a comprehensive survey of the topic, introducing the key formalisms and main techniques used to construct them, make predictions, and support decision-making under uncertainty.
The aim of this course is to develop the knowledge and skills necessary to design, implement and apply these models to solve real problems. The course will cover: (1) Bayesian networks, undirected graphical models and their temporal extensions; (2) exact and approximate inference methods; (3) estimation of the parameters and the structure of graphical models.
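To make topic (1) concrete, here is a minimal sketch of what a Bayesian network buys you: a joint distribution written as a product of local conditional distributions, which can then be queried by enumeration. The rain/sprinkler/wet-grass network and all probabilities below are illustrative, not taken from the course materials.

```python
import itertools

# Hypothetical network: Rain -> WetGrass <- Sprinkler (all variables binary).
p_rain = 0.2
p_sprinkler = 0.4
# P(WetGrass = 1 | Rain, Sprinkler), keyed by (rain, sprinkler).
p_wet = {(0, 0): 0.0, (0, 1): 0.9, (1, 0): 0.8, (1, 1): 0.99}

def joint(r, s, w):
    """P(r, s, w) = P(r) P(s) P(w | r, s) -- the Bayes-net factorization."""
    pr = p_rain if r else 1 - p_rain
    ps = p_sprinkler if s else 1 - p_sprinkler
    pw = p_wet[(r, s)] if w else 1 - p_wet[(r, s)]
    return pr * ps * pw

# Sanity check: the factorized joint sums to 1 over all 8 assignments.
total = sum(joint(r, s, w) for r, s, w in itertools.product([0, 1], repeat=3))

# Inference by enumeration: P(Rain = 1 | WetGrass = 1).
num = sum(joint(1, s, 1) for s in [0, 1])
den = sum(joint(r, s, 1) for r, s in itertools.product([0, 1], repeat=2))
posterior = num / den
```

Note the saving even at this tiny scale: a full joint over three binary variables needs 7 free parameters, while the factorization above needs only 1 + 1 + 4 = 6; the gap grows exponentially with the number of variables.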
Students are expected to have background in basic probability theory, statistics, programming, algorithm design and analysis.
Required Textbook: (“PGM”) Probabilistic Graphical Models: Principles and Techniques by Daphne Koller and Nir Friedman. MIT Press.
Course Notes: Available here. Student contributions welcome!
Lecture Videos: [Link TBD]
- (“GEV”) Graphical models, exponential families, and variational inference by Martin J. Wainwright and Michael I. Jordan. Available online.
- Modeling and Reasoning with Bayesian Networks by Adnan Darwiche. Available online (through Stanford).
- Pattern Recognition and Machine Learning by Chris Bishop. Available online.
- Machine Learning: A Probabilistic Perspective by Kevin P. Murphy. Available online (through Stanford).
- Information Theory, Inference, and Learning Algorithms by David J. C. Mackay. Available online.
- Bayesian Reasoning and Machine Learning by David Barber. Available online.
Homeworks (70%): There will be five homeworks with both written and programming parts. Each homework is centered around an application and will also deepen your understanding of the theoretical concepts. Homeworks will be posted on Ed.
Final Exam (30%): There will be a final exam covering the material taught in the course. The exam will be take home, and must be taken in a 48-hour window from 9:00 AM PT on Tuesday, Mar 15th to 9:00 AM PT on Thursday, Mar 17th. The exam details and instructions for the exam can be found on Ed.
Extra Credit (+3%): You will be awarded up to 3% extra credit if you answer other students’ questions on Ed in a substantial and helpful way, or contribute to the course notes on GitHub with pull requests.
Written Assignments: Homeworks should be written up clearly and succinctly; you may lose points if your answers are unclear or unnecessarily complicated. You are encouraged to use LaTeX to write up your homeworks (here is a template), but this is not a requirement.
Homework Submission: All students (non-SCPD and SCPD) should submit their assignments electronically via Gradescope.
Late Homework: You have 6 late days to use at any time during the term without penalty. For a particular homework, you can use only two late days. Once you run out of late days, you will incur a 25% penalty for each extra late day you use. Each late homework should be clearly marked as “Late” on the first page.
Regrade Policy: If you believe that the course staff made an error in grading, you may submit a regrade request through Gradescope within one week of receiving your grade. Please be as specific as possible with your regrade request.
Collaboration Policy and Honor Code: You are free to form study groups and discuss homeworks and projects. However, you must write up homeworks and code from scratch independently without referring to any notes from the joint session. You should not copy, refer to, or look at the solutions from previous years’ homeworks in preparing your answers. It is an honor code violation to intentionally refer to a previous year’s solutions, either official or written up by another student. Anybody violating the honor code will be referred to the Office of Community Standards.
| Week | Dates | Topics | Reading | Homework |
|------|-------|--------|---------|----------|
| 1 | Jan. 4 & 6 | Introduction, Probability Theory, Bayesian Networks | PGM Ch. 1-3 | HW 1 (Jan 4 - Jan 18) |
| 2 | Jan. 11 & 13 | Undirected models | PGM Ch. 4 | |
| 3 | Jan. 18 & 20 | Learning Bayes Nets | PGM Ch. 16-17 | HW 2 (Jan 18 - Jan 28) |
| 4 | Jan. 25 & 27 | Exact Inference; Message Passing | PGM Ch. 9-10 | HW 3 (Jan 27 - Feb 8) |
| 5 | Feb. 1 & 3 | Sampling | PGM Ch. 12 | |
| 6 | Feb. 8 & 10 | MAP Inference; Structured prediction | PGM Ch. 13 | HW 4 (Feb 8 - Feb 22) |
| 7 | Feb. 15 & 17 | Parameter Learning | PGM Ch. 19-20 | |
| 8 | Feb. 22 & 24 | Bayesian Learning; Structure Learning | PGM Ch. 17-18 | HW 5 (Feb 24 - Mar 10) |
| 9 | Mar. 1 & 3 | Exponential families; variational inference | PGM Ch. 8 & 11; GEV Section 3 | |
| 10 | Mar. 8 & 10 | Advanced topics and conclusions | | |
Many thanks to David Sontag, Adnan Darwiche, Vibhav Gogate, and Tamir Hazan for sharing material used in slides and homeworks.
Additional TA Sessions
Attendance is optional but encouraged. Section times are TBD and will be announced near the start of the quarter.
- Week 2: d-separation
- Week 4: Variable elimination
- Week 5: Junction Tree
- Week 6: Metropolis Hastings and Gibbs sampling
- Week 7: EM
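As a preview of the Week 6 session topic, here is a minimal Gibbs-sampling sketch. It estimates a posterior on a hypothetical rain/sprinkler/wet-grass network with made-up probabilities (not course-provided code): with WetGrass clamped to 1, each step resamples one variable from its conditional given the other.

```python
import random

random.seed(0)

# Hypothetical network: Rain -> WetGrass <- Sprinkler (illustrative numbers).
p_rain, p_sprinkler = 0.2, 0.4
p_wet = {(0, 0): 0.0, (0, 1): 0.9, (1, 0): 0.8, (1, 1): 0.99}

def resample(var, r, s):
    """Draw one variable from its conditional given the other and WetGrass = 1."""
    if var == "rain":
        a = p_rain * p_wet[(1, s)]          # proportional to P(R=1) P(W=1 | R=1, s)
        b = (1 - p_rain) * p_wet[(0, s)]    # proportional to P(R=0) P(W=1 | R=0, s)
    else:
        a = p_sprinkler * p_wet[(r, 1)]
        b = (1 - p_sprinkler) * p_wet[(r, 0)]
    return 1 if random.random() < a / (a + b) else 0

r, s = 1, 1          # start from a state with nonzero probability
n, burn_in, hits = 20000, 1000, 0
for i in range(n):
    r = resample("rain", r, s)
    s = resample("sprinkler", r, s)
    if i >= burn_in:  # discard burn-in samples
        hits += r
estimate = hits / (n - burn_in)  # Monte Carlo estimate of P(Rain=1 | WetGrass=1)
```

With enough samples the estimate converges to the exact posterior (about 0.378 for these numbers), which is the point of the method: each Gibbs update only needs the cheap local conditionals, never the full joint.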
There are many software packages available that can greatly simplify the use of graphical models. Here are a few examples:
The Stanford University Honor Code
The Honor Code (https://communitystandards.stanford.edu/policies-and-guidance/honor-code) is a part of this course. The Honor Code is the university's statement on academic integrity written by students in 1921. It articulates university expectations of students and faculty in establishing and maintaining the highest standards in academic work.

The Honor Code is an undertaking of the students, individually and collectively:
- that they will not give or receive aid in examinations;
- that they will not give or receive unpermitted aid in class work, in the preparation of reports, or in any other work that is to be used by the instructor as the basis of grading;
- that they will do their share and take an active part in seeing to it that others as well as themselves uphold the spirit and letter of the Honor Code.

The faculty on its part manifests its confidence in the honor of its students by refraining from proctoring examinations and from taking unusual and unreasonable precautions to prevent the forms of dishonesty mentioned above. The faculty will also avoid, as far as practicable, academic procedures that create temptations to violate the Honor Code.

While the faculty alone has the right and obligation to set academic requirements, the students and faculty will work together to establish optimal conditions for honorable academic work.
Students with Documented Disabilities
Students who may need an academic accommodation based on the impact of a disability must initiate the request with the Office of Accessible Education (OAE). Professional staff will evaluate the request with required documentation, recommend reasonable accommodations, and prepare an Accommodation Letter for faculty dated in the current quarter in which the request is being made. Students should contact the OAE as soon as possible since timely notice is needed to coordinate accommodations. The OAE is located at 563 Salvatierra Walk (phone: 723-1066, URL: http://oae.stanford.edu).
Names and Pronouns
Use the names and pronouns (e.g., they/them, she/her, he/him, just a name, or something else) indicated by your classmates for themselves. If you don’t want to share a set of pronouns for yourself, that is perfectly acceptable, too. If your name or pronouns change during the course, we invite you to share this with us and/or other students, so we may talk with you and refer to your ideas in discussion as you would wish.