Algorithm: Logistic Regression and Gradient Descent

One of the most classic models in machine learning: Logistic Regression.

Many problems are two-class (binary) classification problems.

Logistic Regression is a common baseline for classification problems.

When we design a machine learning system, we usually start by building a simple model that serves as the baseline, and then we develop more refined models and compare them against that baseline.

A classification problem:

For the conditional probability P(y|x), we need to compute p(y = 1 | x) and p(y = 0 | x).

For a given sample x, if p(y = 1 | x) > p(y = 0 | x), we classify the sample as category '1'; otherwise as '0'.
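A minimal sketch of this decision rule in Python; since p(y = 0 | x) = 1 - p(y = 1 | x), comparing the two probabilities is the same as thresholding p(y = 1 | x) at 0.5 (the probabilities below are placeholder numbers):

```python
def classify(p_y1: float) -> int:
    """Return 1 if p(y=1|x) exceeds p(y=0|x) = 1 - p(y=1|x), else 0."""
    return 1 if p_y1 > 0.5 else 0  # p(y=1|x) > p(y=0|x)  <=>  p(y=1|x) > 0.5

# Placeholder probabilities for three samples
print([classify(p) for p in (0.9, 0.3, 0.51)])  # [1, 0, 1]
```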

The sigmoid function maps the output of the linear model w^T x + b into the range (0, 1):

σ(z) = 1 / (1 + e^(-z)),  where z = w^T x + b

As an example, writing out the conditional probability explicitly:

p(y = 1 | x; w, b) = σ(w^T x + b)
p(y = 0 | x; w, b) = 1 - σ(w^T x + b)
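A minimal NumPy sketch of the sigmoid mapping and the resulting conditional probability; the weight vector, bias, and sample below are made-up values for illustration:

```python
import numpy as np

def sigmoid(z):
    """Map any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Made-up parameters and one sample, just for illustration
w = np.array([0.5, -1.2])
b = 0.3
x = np.array([2.0, 1.0])

p_y1 = sigmoid(w @ x + b)      # p(y = 1 | x; w, b)
p_y0 = 1.0 - p_y1              # p(y = 0 | x; w, b)
print(p_y1, p_y0)
```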

Summary:

Proof that the logistic regression model is a linear model:

Decision boundary: the boundary is the set of points where p(y = 1 | x) = p(y = 0 | x). Since p(y = 1 | x) = σ(w^T x + b) and p(y = 0 | x) = 1 - σ(w^T x + b), the two are equal exactly when σ(w^T x + b) = 1/2, i.e. when w^T x + b = 0. This is a linear hyperplane in x, so the decision boundary of logistic regression is linear.
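A small, purely illustrative sketch (the 1-D parameters below are made up) checking that thresholding the probability at 0.5 gives the same decision as thresholding the linear score at 0:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Made-up 1-D parameters for illustration
w, b = 2.0, -1.0
xs = np.linspace(-2.0, 2.0, 9)

for x in xs:
    score = w * x + b
    label_from_prob = int(sigmoid(score) > 0.5)   # threshold the probability at 0.5
    label_from_score = int(score > 0.0)           # threshold the linear score at 0
    assert label_from_prob == label_from_score    # same decision everywhere
print("decision flips at x =", -b / w)            # the linear boundary w*x + b = 0
```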

 

The loss function for logistic regression:

Maximum likelihood: the logistic regression model has two parameters, the weight vector w and the intercept (bias) b.

We take the observed training samples, write the conditional probability of the observed labels as a function of these parameters, and maximize that probability to obtain the best parameters for the model.

Maximum likelihood estimation:

(w*, b*) = argmax over (w, b) of ∏_i p(y_i | x_i; w, b)

Taking the negative logarithm turns this maximization into minimizing the cross-entropy loss:

L(w, b) = - Σ_i [ y_i · log σ(w^T x_i + b) + (1 - y_i) · log(1 - σ(w^T x_i + b)) ]
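A minimal NumPy sketch of this cross-entropy (negative log-likelihood) loss; the arrays X, y and the parameters below are illustrative placeholders:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nll_loss(w, b, X, y, eps=1e-12):
    """Negative log-likelihood (cross-entropy) over the whole dataset."""
    p = sigmoid(X @ w + b)              # p(y = 1 | x) for every row of X
    p = np.clip(p, eps, 1.0 - eps)      # avoid log(0)
    return -np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

# Illustrative toy data: 4 samples, 2 features
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
y = np.array([1.0, 0.0, 1.0, 0.0])
print(nll_loss(np.zeros(2), 0.0, X, y))  # loss at w = 0, b = 0
```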

The stochastic gradient descent (SGD) algorithm is a popular optimization algorithm used in machine learning. It is an iterative algorithm that updates the model parameters in small steps based on the gradient of the loss function with respect to the parameters.

The algorithm works as follows:

1. Initialize the model parameters randomly.
2. Set the learning rate, which determines the step size of the updates.
3. For each training example:
   - Compute the gradient of the loss function with respect to the parameters using the current example.
   - Update the model parameters by subtracting the gradient multiplied by the learning rate.

The key difference between SGD and regular gradient descent is that in SGD, the gradient is computed and the parameters are updated for each training example, rather than for the entire training set. This makes the algorithm faster and more scalable for large datasets.

The stochastic aspect of the algorithm comes from the fact that the training examples are sampled randomly from the training set, rather than being processed in a fixed order. This randomness can help the algorithm escape from local minima and find better solutions.

Here is the pseudocode for the SGD algorithm:

```
Input: Training set (X, Y), learning rate α, number of iterations T
Output: Model parameters θ

Initialize θ randomly
for t = 1 to T do
    Sample a training example (x, y) from (X, Y) randomly
    Compute the gradient ∇θ L(θ; x, y) using the current example
    Update the parameters: θ ← θ - α * ∇θ L(θ; x, y)
end for
return θ
```
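A minimal NumPy sketch of this SGD loop applied to the logistic regression loss above; the per-example gradients (p - y) * x and (p - y) follow from the cross-entropy loss, while the toy data, learning rate, and iteration count are illustrative assumptions rather than a definitive implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_logistic_regression(X, y, lr=0.1, n_iters=1000, seed=0):
    """Fit w, b by stochastic gradient descent on the cross-entropy loss."""
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    w = rng.normal(scale=0.01, size=n_features)   # random initialization
    b = 0.0
    for _ in range(n_iters):
        i = rng.integers(n_samples)               # sample one example at random
        p = sigmoid(X[i] @ w + b)                 # p(y = 1 | x_i; w, b)
        error = p - y[i]
        w -= lr * error * X[i]                    # ∇w L = (p - y) * x
        b -= lr * error                           # ∇b L = (p - y)
    return w, b

# Toy data for illustration (label is 1 only when both features are 1)
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 0.0, 0.0, 1.0])
w, b = sgd_logistic_regression(X, y)
print((sigmoid(X @ w + b) > 0.5).astype(int))     # predicted labels
```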