Probabilistic Graphical Models 1: Introduction

This course is offered by Prof. Daphne Koller of Stanford University through Coursera. The class webpage is here: class.coursera.org/pgm

Daphne Koller is a professor in the Department of Computer Science at Stanford University and a MacArthur Fellowship recipient. Her general research area is artificial intelligence and its applications in the biomedical sciences. In 2009 she published a textbook on Probabilistic Graphical Models together with Nir Friedman.

Prerequisites for this course: probability and statistics, machine learning, and Matlab or GNU Octave programming.

A probabilistic graphical model (PGM for short) is a probabilistic model in which a graph encodes the conditional independence structure between random variables. It is an advanced topic in machine learning. Since modern machine learning models are nearly all probabilistic and learned statistically, and graph structures are an efficient way to represent template models, this technique is widely applied. Applications of PGMs include medical diagnosis, fault diagnosis, natural language processing, traffic analysis, social network models, message decoding, computer vision (image segmentation, 3D reconstruction, holistic scene analysis), speech recognition, robot localization and mapping, and more.

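To make this concrete, here is a minimal sketch in Python (rather than the course's Matlab/Octave); the network and all numbers are invented for illustration. In a three-variable Bayesian network D -> G <- I, the graph structure says the joint distribution factorizes as P(D, I, G) = P(D) P(I) P(G | D, I):

# Local conditional probability tables (CPDs), one per node.
P_D = {0: 0.6, 1: 0.4}    # P(Difficulty)
P_I = {0: 0.7, 1: 0.3}    # P(Intelligence)
P_G_given_DI = {          # P(Grade | Difficulty, Intelligence)
    (0, 0): {0: 0.3, 1: 0.7},
    (0, 1): {0: 0.05, 1: 0.95},
    (1, 0): {0: 0.8, 1: 0.2},
    (1, 1): {0: 0.5, 1: 0.5},
}

def joint(d, i, g):
    # The graph D -> G <- I says the joint is a product of the local CPDs.
    return P_D[d] * P_I[i] * P_G_given_DI[(d, i)][g]

print(joint(0, 1, 1))  # 0.6 * 0.3 * 0.95 ≈ 0.171

The eight entries of the full joint distribution are encoded with a handful of local parameters, and the saving grows dramatically as the network gets larger.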
This course will teach fundamental methods in PGMs, as well as some real-world applications, such as medical diagnosis systems. Also, by completing the programming assignments, we learn to use these methods in our own work; this is perhaps the most exciting, but at the same time the most time-consuming, part of the course. The course is organized into three parts: representation, inference, and learning. Representation introduces graph structures and some basic terms and properties. Inference is how we use a trained model to predict results and make decisions. Learning refers to the procedure by which we build the model and train its parameters from training data.

Graphical models are divided into two categories: Bayesian networks, which are represented by directed graphs, and Markov networks, which are represented by undirected graphs. We shall see that these two categories differ considerably, and both have many applications.

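To see one difference concretely, here is a hedged Python sketch with invented potentials: a Bayesian network's factors are conditional distributions that are already normalized, while a Markov network multiplies arbitrary non-negative potentials and must divide by a global normalizing constant Z, the partition function:

import itertools

# A Markov network over binary variables in an undirected chain A - B - C.
# The pairwise potentials are arbitrary non-negative numbers, not probabilities.
phi_AB = {(0, 0): 30.0, (0, 1): 5.0, (1, 0): 1.0, (1, 1): 10.0}
phi_BC = {(0, 0): 100.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 100.0}

def unnormalized(a, b, c):
    return phi_AB[(a, b)] * phi_BC[(b, c)]

# Unlike a Bayesian network, the product of potentials does not sum to 1,
# so we normalize by the partition function Z.
Z = sum(unnormalized(a, b, c)
        for a, b, c in itertools.product([0, 1], repeat=3))

def prob(a, b, c):
    return unnormalized(a, b, c) / Z

print(prob(0, 0, 0))  # 3000 / 4646 ≈ 0.646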
The factor is a basic concept. In a joint distribution, we may break the distribution up into smaller components, each over a smaller space of possibilities, and then define the overall joint distribution as a product of these components, or factors. The scope of a factor is the set of random variables it is defined over. Factor marginalization and factor reduction work almost the same way as marginalization and reduction on a joint distribution.

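Here is a minimal Python sketch of these operations (the dictionary representation and function names are my own, not the course's starter code): a factor maps each joint assignment of its scope to a non-negative number; marginalization sums a variable out, and reduction keeps only the entries consistent with observed evidence:

# A factor stored as a dict: "scope" is an ordered tuple of variable names,
# and "values" maps each full assignment (same order as the scope) to a number.
phi = {
    "scope": ("A", "B"),
    "values": {(0, 0): 3.0, (0, 1): 7.0, (1, 0): 1.0, (1, 1): 9.0},
}

def marginalize(factor, var):
    # Sum out `var`, leaving a factor over the remaining variables.
    idx = factor["scope"].index(var)
    new_scope = tuple(v for v in factor["scope"] if v != var)
    new_values = {}
    for assignment, value in factor["values"].items():
        rest = assignment[:idx] + assignment[idx + 1:]
        new_values[rest] = new_values.get(rest, 0.0) + value
    return {"scope": new_scope, "values": new_values}

def reduce_factor(factor, var, observed):
    # Keep only the entries consistent with the evidence var = observed.
    idx = factor["scope"].index(var)
    new_scope = tuple(v for v in factor["scope"] if v != var)
    new_values = {
        assignment[:idx] + assignment[idx + 1:]: value
        for assignment, value in factor["values"].items()
        if assignment[idx] == observed
    }
    return {"scope": new_scope, "values": new_values}

print(marginalize(phi, "B"))       # values over A: {(0,): 10.0, (1,): 10.0}
print(reduce_factor(phi, "B", 1))  # values over A: {(0,): 7.0, (1,): 9.0}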
Reposted from: https://www.cnblogs.com/JVKing/articles/2478304.html
