Machine Learning
The Prestige
Andrew Ng · Machine Learning || chap18 Application example photo OCR & chap19 Conclusion (brief notes)
18 Application example photo OCR. 18-1 Problem description and pipeline — the photo OCR problem: 1. Text detection, 2. Character segmentation, 3. Character classification (recognition), 4. *Spelling correction; the photo OCR pipeline. 18-2 Sliding windows — text detection…
Original · 2021-09-12 20:03:18 · 126 views · 0 comments
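The sliding-window step of the pipeline can be sketched as below. This is a minimal illustration, not the course code: `is_text` is a hypothetical stand-in for a trained text/no-text patch classifier, and the image, window size, and stride are made up.

```python
# Minimal sliding-window sketch for text detection (step 18-2).

def sliding_windows(image, win_h, win_w, stride):
    """Yield (row, col, patch) for every window position."""
    rows, cols = len(image), len(image[0])
    for r in range(0, rows - win_h + 1, stride):
        for c in range(0, cols - win_w + 1, stride):
            patch = [row[c:c + win_w] for row in image[r:r + win_h]]
            yield r, c, patch

def is_text(patch):
    # Hypothetical classifier: here "text" = any nonzero pixel in the patch.
    return any(any(v for v in row) for row in patch)

image = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
detections = [(r, c) for r, c, p in sliding_windows(image, 2, 2, 1) if is_text(p)]
```

In the real pipeline the positive windows would then be merged into text regions and passed on to character segmentation.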
Andrew Ng · Machine Learning || chap17 Large scale machine learning (brief notes)
17 Large scale machine learning. 17-1 Learning with large datasets — machine learning and data: classify between confusable words, e.g. (to, two, too), (then, than). "It's not who has the best algorithm that wins, it's who has the most data." Learning with large…
Original · 2021-09-12 11:44:17 · 118 views · 0 comments
Andrew Ng · Machine Learning || chap16 Recommender System (brief notes)
16 Recommender System. 16-1 Problem formulation — example: predicting movie ratings; a user rates movies using one to five stars. 16-2 Content-based recommendations — content-based recommender systems. Problem formulation: $r(i,j) = 1$ if user $j$ has rated movie $i$ (0 otherw…
Original · 2021-09-11 17:20:26 · 91 views · 0 comments
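In the content-based setting, each movie $i$ gets a feature vector $x^{(i)}$ (with an intercept term) and each user $j$ a parameter vector $\theta^{(j)}$; the predicted rating is the inner product $(\theta^{(j)})^T x^{(i)}$. A minimal sketch, with made-up feature values and parameters:

```python
# Content-based rating prediction: rating ≈ theta(j) · x(i).
# All movie features and user parameters below are illustrative only.

def predict_rating(theta, x):
    return sum(t * f for t, f in zip(theta, x))

# x = [1 (intercept), degree of romance, degree of action]
movies = {
    "Love at last":    [1.0, 0.9, 0.0],
    "Swords vs bombs": [1.0, 0.1, 1.0],
}
theta_alice = [0.0, 5.0, 0.0]   # a user who likes romance

pred = predict_rating(theta_alice, movies["Love at last"])  # 5.0 * 0.9 = 4.5
```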
Andrew Ng · Machine Learning || chap15 Anomaly detection (brief notes)
15 Anomaly detection. 15-1 Problem motivation — anomaly detection example, aircraft engine features: $x_1$ = heat generated, $x_2$ = vibration intensity, $\cdots$. Dataset: $\{x^{(1)}, x^{(2)}, \cdots, x^{(m)}\}$…
Original · 2021-09-10 18:55:48 · 156 views · 0 comments
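The standard density-estimation approach from this chapter fits an independent Gaussian to each feature of the unlabeled training set and flags $x$ as anomalous when $p(x) < \varepsilon$. A minimal sketch with made-up engine data and threshold:

```python
# Gaussian anomaly detection sketch: fit mu_j, sigma_j^2 per feature,
# then p(x) = product over j of N(x_j; mu_j, sigma_j^2).
import math

def fit_gaussian(data):
    m, n = len(data), len(data[0])
    mu = [sum(x[j] for x in data) / m for j in range(n)]
    var = [sum((x[j] - mu[j]) ** 2 for x in data) / m for j in range(n)]
    return mu, var

def p(x, mu, var):
    prob = 1.0
    for j in range(len(x)):
        prob *= math.exp(-(x[j] - mu[j]) ** 2 / (2 * var[j])) \
                / math.sqrt(2 * math.pi * var[j])
    return prob

# x = [heat generated, vibration intensity] — illustrative values
train = [[10.0, 1.0], [10.5, 1.1], [9.5, 0.9], [10.2, 1.0], [9.8, 1.0]]
mu, var = fit_gaussian(train)

epsilon = 1e-3                       # made-up threshold
is_anomaly = p([30.0, 5.0], mu, var) < epsilon   # far from training data
```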
Andrew Ng · Machine Learning || chap14 Dimensionality Reduction (brief notes)
14-1 Motivation I: Data Compression — reduce data from 2D to 1D by projecting onto a line: $x_1, x_2 \longrightarrow z_1$; reduce data from 3D to 2D by projecting onto a plane: $x_1, x_2, x_3 \longrightarrow z_1, z_2$. 14-2 Motiva…
Original · 2021-09-09 17:45:49 · 89 views · 0 comments
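The 2D-to-1D compression idea can be sketched as projecting each point onto a unit direction vector $u$, so $(x_1, x_2) \to z_1 = u^T x$. The direction below is hand-picked for illustration; PCA, introduced later in the chapter, chooses $u$ to minimize the projection error.

```python
# Data compression sketch: project 2D points onto a line, then reconstruct.
import math

def project(points, u):
    return [p[0] * u[0] + p[1] * u[1] for p in points]

def reconstruct(zs, u):
    return [(z * u[0], z * u[1]) for z in zs]

u = (1 / math.sqrt(2), 1 / math.sqrt(2))        # unit vector along the diagonal
points = [(1.0, 1.0), (2.0, 2.1), (3.0, 2.9)]   # roughly on the line x2 = x1

z = project(points, u)          # one number z1 per 2D point
approx = reconstruct(z, u)      # points recovered up to projection error
```

Points that lie exactly on the line (like the first one) are reconstructed exactly; the others come back with a small error, which is what PCA trades for the smaller representation.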
Andrew Ng · Machine Learning || chap13 Clustering (brief notes)
13 Clustering. 13-1 Unsupervised learning introduction — supervised learning training set: $\{(x^{(1)},y^{(1)}), (x^{(2)},y^{(2)}), (x^{(3)},y^{(3)}), \cdots, (x^{(m)},y^{(m)})\}$…
Original · 2021-09-08 16:18:06 · 119 views · 0 comments
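In the unsupervised setting the training set has no labels $y^{(i)}$. The chapter's main algorithm, K-means, alternates a cluster-assignment step and a centroid-update step; a minimal sketch with made-up points and initial centroids:

```python
# Minimal K-means sketch: assign each point to its nearest centroid,
# then move each centroid to the mean of its assigned points.

def kmeans(points, centroids, iters=10):
    for _ in range(iters):
        # Assignment step: index of the closest centroid for each point
        assign = [min(range(len(centroids)),
                      key=lambda k: sum((p[d] - centroids[k][d]) ** 2
                                        for d in range(len(p))))
                  for p in points]
        # Update step: move each centroid to the mean of its members
        for k in range(len(centroids)):
            members = [p for p, a in zip(points, assign) if a == k]
            if members:
                centroids[k] = [sum(c) / len(members) for c in zip(*members)]
    return centroids, assign

points = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
centroids, assign = kmeans(points, [[0.0, 1.0], [4.0, 4.0]])
```

A real implementation would also use random initialization with multiple restarts, which the chapter covers.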
Andrew Ng · Machine Learning || chap12 Support Vector Machines (brief notes)
12-1 Optimization objective — alternative view of logistic regression: $h_\theta(x) = \frac{1}{1 + e^{-\theta^T x}}$. If $y = 1$, we want $h_\theta(x) \approx 1$, $\theta^T x \gg 0$. If…
Original · 2021-09-06 18:55:43 · 89 views · 0 comments
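The behavior described above is easy to verify numerically: when $\theta^T x \gg 0$ the sigmoid output approaches 1, and when $\theta^T x \ll 0$ it approaches 0. A small sketch with made-up parameters:

```python
# The logistic hypothesis h(x) = 1 / (1 + exp(-theta^T x)).
import math

def h(theta, x):
    z = sum(t * v for t, v in zip(theta, x))
    return 1.0 / (1.0 + math.exp(-z))

theta = [0.0, 2.0]            # illustrative parameters; x[0] = 1 is the intercept

positive = h(theta, [1.0, 5.0])    # theta^T x = 10  -> close to 1
negative = h(theta, [1.0, -5.0])   # theta^T x = -10 -> close to 0
```

The SVM objective replaces the smooth sigmoid-based cost with piecewise-linear costs that enforce these margins explicitly.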
Andrew Ng · Machine Learning || chap11 Machine learning system design (brief notes)
11 Machine learning system design. 11-1 Prioritizing what to work on: spam classification example — building a spam classifier. Supervised learning: x = features of email, y = spam (1) or not spam (0). Features x: choose 100 words indicative of spam/not spam, $x_j =$…
Original · 2021-09-04 16:23:17 · 355 views · 0 comments
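The feature construction described above — one binary indicator per vocabulary word — can be sketched as below. The five-word vocabulary and the example email are made up; a real classifier would use on the order of 100 words and proper tokenization.

```python
# Binary word-indicator features: x_j = 1 if vocabulary word j appears
# in the email, else 0.

vocab = ["deal", "buy", "discount", "andrew", "now"]  # illustrative vocabulary

def email_features(email_text):
    # Crude tokenization for the sketch; real code would strip punctuation
    # more carefully and possibly stem words.
    words = set(email_text.lower().replace("!", " ").split())
    return [1 if w in words else 0 for w in vocab]

x = email_features("Buy now! Big discount")
```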
Andrew Ng · Machine Learning || chap10 Advice for applying machine learning (brief notes)
10 Advice for applying machine learning. 10-1 Deciding what to try next — debugging a learning algorithm: suppose you have implemented regularized linear regression to predict housing prices, $J(\theta) = \frac{1}{2m}\left[\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})^2 + \lambda \sum_{j=1}^{n} \theta_j^2\right]$…
Original · 2021-09-03 16:29:22 · 133 views · 0 comments
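The regularized cost above is straightforward to compute directly; note that $\theta_0$ is not regularized (the penalty sum starts at $j = 1$). A minimal sketch with a made-up toy dataset:

```python
# Regularized linear-regression cost J(theta); toy data for illustration.

def h(theta, x):                    # x includes the intercept term x[0] = 1
    return sum(t * v for t, v in zip(theta, x))

def cost(theta, X, y, lam):
    m = len(X)
    sq_err = sum((h(theta, X[i]) - y[i]) ** 2 for i in range(m))
    reg = lam * sum(t ** 2 for t in theta[1:])   # theta_0 excluded
    return (sq_err + reg) / (2 * m)

X = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
y = [1.0, 2.0, 3.0]
theta = [0.0, 1.0]                  # fits this toy data exactly

unregularized = cost(theta, X, y, lam=0.0)   # 0.0: perfect fit, no penalty
regularized = cost(theta, X, y, lam=1.0)     # penalty on theta_1 kicks in
```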
Andrew Ng · Machine Learning || chap9 Neural Network: Learning (brief notes)
9 Neural Network: Learning. 9-1 Cost function — neural network (classification): $(x^{(1)},y^{(1)}), (x^{(2)},y^{(2)}), \cdots, (x^{(m)},y^{(m)})$; $L$ = total no. of layers in network; $s_l$…
Original · 2021-08-16 20:46:22 · 95 views · 0 comments
Andrew Ng · Machine Learning || chap8 Neural Networks: Representation (brief notes)
8 Neural Networks: Representation. 8-1 Non-linear hypotheses — non-linear classification (0, 1). 8-2 Neurons and the brain — neural networks origins: algorithms that try to mimic the brain; was very widely used in the 80s and early 90s; popularity diminished in the late…
Original · 2021-08-13 17:00:08 · 126 views · 0 comments
Andrew Ng · Machine Learning || chap7 Regularization (brief notes)
7 Regularization. 7-1 The problem of overfitting — underfitting: high bias; just right; overfitting: high variance. Overfitting: if we have too many features, the learned hypothesis may fit the training set very well but fail to generalize to new examples…
Original · 2021-08-11 21:34:57 · 125 views · 0 comments
Andrew Ng · Machine Learning || chap5&6 Octave Tutorial & Logistic Regression (brief notes)
5 Octave Tutorial (the Octave material also works in MATLAB; I've personally decided to use Python for implementation. Section 5-6 is worth watching whichever language you learn.) 5-1 Basic operations. 5-2 Moving data around. 5-3 Computing on data. 5-4 Plotting data. 5-5 for, while, if statements, and functions. 5-6 Vectorization — vectorization example: $h_\theta(x) = \sum_{j=0}$…
Original · 2021-08-06 18:19:37 · 104 views · 0 comments
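The vectorization point carries over to Python: the hypothesis $h_\theta(x) = \sum_j \theta_j x_j$ is an inner product $\theta^T x$, so an explicit loop can be replaced by a single dot-product call. A sketch of both forms (with NumPy, the vectorized version would be `np.dot(theta, x)`):

```python
# Unvectorized vs. vectorized computation of h(x) = sum_j theta_j * x_j.

def h_unvectorized(theta, x):
    total = 0.0
    for j in range(len(theta)):
        total += theta[j] * x[j]
    return total

def h_vectorized(theta, x):
    # Stands in for np.dot(theta, x); one expression, no explicit index loop.
    return sum(t * v for t, v in zip(theta, x))

theta = [1.0, 2.0, 3.0]   # illustrative parameters
x = [1.0, 0.5, 2.0]       # x[0] = 1 would be the intercept in course notation
```

Both give the same value; the vectorized form is shorter, less error-prone, and with a real linear-algebra library, much faster.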
Andrew Ng · Machine Learning || chap4 Linear Regression with multiple variables (brief notes)
4 Linear Regression with multiple variables. 4-1 Multiple features — multiple features (variables) notation: $n$ = number of features; $x^{(i)}$ = input (features) of the $i^{th}$ training example; $x_j^{(i)}$ = value of feature $j$ in the $i^{th}$…
Original · 2021-07-31 16:21:25 · 99 views · 0 comments
Andrew Ng · Machine Learning || chap3 Linear Algebra review (optional) (brief notes)
3 Linear Algebra review (optional). 3-1 Matrices and Vectors — matrix: rectangular array of numbers; dimension of matrix: number of rows × number of columns (m × n); matrix elements (entries of matrix). Vector: an n × 1 matrix, 1-indexed. Matrices are referred to as A, B, C, X; vectors as a, b, x, y…
Original · 2021-07-30 18:06:42 · 110 views · 0 comments
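The definitions above map directly onto nested lists; the one trap is that the course uses 1-indexed entries $A_{ij}$ while Python lists are 0-indexed. A tiny sketch:

```python
# A matrix as a list of rows; its dimension is rows x columns.

A = [[1, 2, 3],
     [4, 5, 6]]                 # a 2 x 3 matrix
dim = (len(A), len(A[0]))       # (rows, columns)

def entry(A, i, j):
    """1-indexed entry A_{ij}, matching the course convention."""
    return A[i - 1][j - 1]
```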
Andrew Ng · Machine Learning || chap2 Linear regression with one variable (brief notes)
2 Linear regression with one variable. 2-1 Model representation — training set; m = number of training examples; x's = "input" variable/features; y's = "output" variable/"target" variable; (x, y) = one training example. Hypothesis: $h_\theta(x) = \theta_0 + \theta_1 x$…
Original · 2021-07-28 17:34:16 · 100 views · 0 comments
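The univariate hypothesis above is just a line; a minimal sketch with a made-up training set and hand-picked parameters (gradient descent, covered later in the chapter, would learn them):

```python
# Univariate hypothesis h_theta(x) = theta_0 + theta_1 * x.

def h(theta0, theta1, x):
    return theta0 + theta1 * x

# Toy training set of (x, y) pairs, e.g. house size -> price
training_set = [(1.0, 2.0), (2.0, 3.0), (3.0, 4.0)]
m = len(training_set)          # m = number of training examples

pred = h(1.0, 1.0, 2.0)        # illustrative parameters theta = (1, 1)
```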
Andrew Ng · Machine Learning || chap1 Introduction (brief notes)
Original · 2021-07-26 16:40:16 · 81 views · 0 comments