Machine Learning (Andrew Ng) - 3. Linear Algebra Review

3.1 Matrices and vectors

Matrix: a rectangular array of numbers, typically written with an uppercase letter (A, B, C).

Dimension of a matrix: number of rows × number of columns.

Matrix elements (entries of the matrix): A_{ij} denotes the entry in the i-th row and j-th column.

Vector: an n × 1 matrix, typically written with a lowercase letter (a, b, c).

1-indexed vs 0-indexed:

y=\begin{bmatrix}y_1\\y_2\\y_3\\y_4\end{bmatrix} \quad \text{vs} \quad y=\begin{bmatrix}y_0\\y_1\\y_2\\y_3\end{bmatrix}

In this course we use 1-indexed vectors.
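As a quick sanity check of these definitions, here is a minimal NumPy sketch (NumPy is not part of the original notes; note that NumPy itself is 0-indexed, so the entry the notes write as A_{12} is `A[0, 1]` in code):

```python
import numpy as np

# A is a 4 x 2 matrix; dimension = number of rows x number of columns
A = np.array([[1, 2104],
              [1, 1416],
              [1, 1532],
              [1, 852]])
print(A.shape)   # (4, 2)

# NumPy is 0-indexed, while the course notation is 1-indexed:
# the entry the notes call A_{12} is A[0, 1] here.
print(A[0, 1])   # 2104

# A vector is an n x 1 matrix; a 1-D NumPy array plays that role.
v = np.array([1, 2, 3, 4])
print(v.shape)   # (4,)
```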

3.2 Addition and scalar multiplication
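The notes leave this section without a worked example, so here is a small NumPy sketch with made-up matrices: addition is element-wise (defined only for matrices of the same dimension), and scalar multiplication scales every entry.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [2.0, 5.0]])
B = np.array([[4.0, 0.5],
              [2.0, 5.0]])

# Addition is element-wise; A and B must have the same dimension.
print(A + B)   # [[5. 0.5], [4. 10.]]

# Scalar multiplication scales every entry.
print(3 * A)   # [[3. 0.], [6. 15.]]
```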

3.3 Matrix-vector multiplication

House sizes: 2104, 1416, 1532, 852, with hypothesis h_{\theta}(x)=-40+0.25x.

Prepending a column of 1s for the intercept term, all four predictions come out of one matrix-vector product:

\begin{bmatrix}1 & 2104\\1 & 1416\\1 & 1532\\1 & 852\end{bmatrix} \times \begin{bmatrix}-40\\0.25\end{bmatrix} = \begin{bmatrix}486\\314\\343\\173\end{bmatrix}
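The same computation in a minimal NumPy sketch (the matrix and parameters are the house-size example above):

```python
import numpy as np

# Design matrix: a column of 1s (intercept term) next to the house sizes.
X = np.array([[1, 2104],
              [1, 1416],
              [1, 1532],
              [1, 852]])
# Parameters of h_theta(x) = -40 + 0.25 x
theta = np.array([-40, 0.25])

# One matrix-vector product evaluates the hypothesis on every house at once.
predictions = X @ theta
print(predictions)   # [486. 314. 343. 173.]
```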

3.4 Matrix-matrix multiplication

House sizes: 2104, 1416, 1532, 852, and three competing hypotheses:

  1. h_{\theta}(x)=-40+0.25x
  2. h_{\theta}(x)=200+0.1x
  3. h_{\theta}(x)=-150+0.4x

Putting the parameters of each hypothesis in a column, one matrix-matrix product evaluates all three hypotheses on all four houses; column j of the result holds the predictions of hypothesis j:

\begin{bmatrix}1 & 2104\\1 & 1416\\1 & 1532\\1 & 852\end{bmatrix} \times \begin{bmatrix}-40 & 200 & -150\\0.25 & 0.1 & 0.4\end{bmatrix} = \begin{bmatrix}486 & 410.4 & 691.6\\314 & 341.6 & 416.4\\343 & 353.2 & 462.8\\173 & 285.2 & 190.8\end{bmatrix}
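A NumPy sketch of the same product, using the three hypotheses above:

```python
import numpy as np

X = np.array([[1, 2104],
              [1, 1416],
              [1, 1532],
              [1, 852]])
# Each column holds the (intercept, slope) parameters of one hypothesis.
Theta = np.array([[-40, 200, -150],
                  [0.25, 0.1, 0.4]])

# Column j of the product holds the predictions of hypothesis j.
P = X @ Theta
print(P)
# columns: [486, 314, 343, 173], [410.4, 341.6, 353.2, 285.2],
#          [691.6, 416.4, 462.8, 190.8]
```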

3.5 Matrix multiplication properties

Matrix multiplication is not commutative in general: A \times B \neq B \times A

It is associative: A \times B \times C = A \times (B \times C) = (A \times B) \times C

Identity matrix I_{n\times n}: 1s on the diagonal, 0s elsewhere, so that A \cdot I = I \cdot A = A for any compatible A.
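These properties can be checked numerically with small made-up matrices (a sketch, not part of the original notes):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])
C = np.array([[2, 0],
              [0, 3]])

# Not commutative in general: A x B != B x A
print(np.array_equal(A @ B, B @ A))               # False

# Associative: (A x B) x C == A x (B x C)
print(np.array_equal((A @ B) @ C, A @ (B @ C)))   # True

# Identity: A x I == I x A == A
I = np.eye(2, dtype=int)
print(np.array_equal(A @ I, A))                   # True
```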

3.6 Inverse and transpose

Only square matrices can have an inverse; if A is invertible, then A A^{-1} = A^{-1} A = I. Matrices that don't have an inverse are called "singular" or "degenerate". The transpose A^T swaps rows and columns: (A^T)_{ij} = A_{ji}.
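Both operations are available in NumPy; a short sketch (the 2×2 values are illustrative):

```python
import numpy as np

A = np.array([[3.0, 4.0],
              [2.0, 16.0]])

# Inverse: A @ inv(A) = I (only square, non-singular matrices have one).
A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))   # True

# Transpose: (A^T)_{ij} = A_{ji}
print(A.T)

# A singular (degenerate) matrix has no inverse; inv() raises LinAlgError.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # rows linearly dependent -> det = 0
try:
    np.linalg.inv(S)
except np.linalg.LinAlgError:
    print("singular")
```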
