Machine Learning Model Summary

Algorithms

ML - Classification Algorithms

Regression

We want to choose θ so that h(x) is as close to y as possible. To measure this, we define a cost function over θ in terms of the differences between h(x^{(i)}) and y^{(i)}:

    J(\theta) = \frac{1}{2} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2

  1. Gradient descent
    Now we want to adjust θ so that J(θ) attains its minimum. To do this, we give θ a random initial value (random initialization breaks symmetry) and then iteratively change θ to decrease J(θ), until it converges to a value of θ that minimizes J(θ). Gradient descent uses exactly this idea: set a random initial θ, then repeatedly perform the following update
    \theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta)
    until convergence; here α is called the learning rate.
    The gradient direction is determined by the partial derivative of J(θ) with respect to θ. Since we want a minimum, we take the negative of the partial derivative as the update direction. Substituting J(θ) yields the full update formula.
    The resulting update rule is called the LMS update rule (least mean squares), also known as the Widrow-Hoff learning rule.
    \theta_j := \theta_j + \alpha \left( y^{(i)} - h_\theta(x^{(i)}) \right) x_j^{(i)}
    For the following update algorithm:
    repeat until convergence: \theta_j := \theta_j + \alpha \sum_{i=1}^{m} \left( y^{(i)} - h_\theta(x^{(i)}) \right) x_j^{(i)} \quad (for every j)
    Because every iteration examines all samples in the training set, this is called batch gradient descent.
    If instead the parameter update is computed as follows:
    loop: for i = 1, \ldots, m: \theta_j := \theta_j + \alpha \left( y^{(i)} - h_\theta(x^{(i)}) \right) x_j^{(i)} \quad (for every j)
    then θ is updated using a single training sample at a time, which is called stochastic gradient descent. Comparing the two: batch gradient descent considers the entire data set at every step, so each iteration is expensive, while stochastic gradient descent converges faster in practice, and in real applications the solutions found by both are usually close to the true minimum of J(θ). For large data sets, the more efficient stochastic gradient descent is therefore generally preferred; a minimal code sketch of both variants is given below.
    Reference: http://www.cnblogs.com/fanyabo/p/4060498.html
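The following is a minimal NumPy sketch of the two update rules above for linear regression. The data X, y, the learning rate alpha, and the iteration counts are illustrative placeholders, not values from the original post.

```python
import numpy as np

def batch_gradient_descent(X, y, alpha=0.1, n_iters=1000):
    """Batch update: every iteration uses the whole training set."""
    m, n = X.shape
    theta = np.zeros(n)                      # initialize theta (zeros here, for simplicity)
    for _ in range(n_iters):
        error = y - X @ theta                # y^(i) - h_theta(x^(i)) for all i at once
        theta += alpha * (X.T @ error) / m   # summed update, scaled by 1/m for stability
    return theta

def stochastic_gradient_descent(X, y, alpha=0.01, n_epochs=20):
    """Stochastic update: theta changes after every single sample (LMS / Widrow-Hoff rule)."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(n_epochs):
        for i in np.random.permutation(m):   # visit samples in random order
            error = y[i] - X[i] @ theta      # y^(i) - h_theta(x^(i)) for one sample
            theta += alpha * error * X[i]
    return theta

# Toy usage on synthetic data with true weights [2, 3]
X = np.random.rand(200, 2)
y = X @ np.array([2.0, 3.0]) + 0.01 * np.random.randn(200)
print(batch_gradient_descent(X, y))
print(stochastic_gradient_descent(X, y))
```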

Decision Trees

Reference: http://www.cnblogs.com/hsydj/p/5853954.html

Construction methods:
- Depth-first
- Breadth-first

Stopping criteria for splitting a decision tree node:
(Figure: node split stopping criteria; see the sketch below.)
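To make the construction methods and stopping criteria above concrete, here is a minimal sketch of depth-first (recursive) decision tree construction. The gini helper and the max_depth / min_samples stopping thresholds are illustrative assumptions, not details from the original post; labels are assumed to be small non-negative integers.

```python
import numpy as np

def gini(y):
    """Gini impurity of a label vector (illustrative split criterion)."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def build_tree(X, y, depth=0, max_depth=5, min_samples=5):
    """Depth-first construction: recurse on the best split until a stop condition fires."""
    # Stop conditions: node is pure, too few samples, or maximum depth reached
    if gini(y) == 0.0 or len(y) < min_samples or depth >= max_depth:
        return {"leaf": True, "label": int(np.bincount(y).argmax())}

    # Greedy search over (feature, threshold) for the split with lowest weighted impurity
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or (~left).all():
                continue                       # split must put samples on both sides
            score = (left.sum() * gini(y[left]) + (~left).sum() * gini(y[~left])) / len(y)
            if best is None or score < best[0]:
                best = (score, j, t, left)

    if best is None:                           # no valid split found -> make a leaf
        return {"leaf": True, "label": int(np.bincount(y).argmax())}

    _, j, t, left = best
    return {
        "leaf": False, "feature": j, "threshold": t,
        "left": build_tree(X[left], y[left], depth + 1, max_depth, min_samples),
        "right": build_tree(X[~left], y[~left], depth + 1, max_depth, min_samples),
    }
```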

Bayes

SVM

ANN

Deep-NN

Optimization Algorithms

ML - Clustering Algorithms

ML - Association Analysis

Recommender Systems

Models

Data Preprocessing

Hyperparameter Tuning

Model Evaluation

python

Big Data Platforms
