Nonlinear Regression Problem & Ridge Regression

Part One: Basis for Nonlinear Regression

In this part we're going to investigate how to use linear regression to approximate complicated functions. For example, suppose we want to fit the following function on the interval $[0,1]$:

Import Modules

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

Create a problem

def y(x):
    # Two sine components at different frequencies
    ret = 0.15 * np.sin(40 * x)
    ret = ret + 0.25 * np.sin(10 * x)
    # Indicator steps at x = 0.25 and x = 0.75
    step_fn1 = np.zeros(len(x))
    step_fn1[x >= 0.25] = 1
    step_fn2 = np.zeros(len(x))
    step_fn2[x >= 0.75] = 1
    # Combine: a jump of -0.3 at x = 0.25 and +0.8 at x = 0.75
    ret = ret - 0.3 * step_fn1 + 0.8 * step_fn2
    return ret

x = np.arange(0.0, 1.0, 0.001)
plt.plot(x, y(x))
plt.show()


Plot the graph and see the function

Here the input space, shown on the x-axis, is very simple – it's just the interval $[0,1]$. The output space is the reals $\mathcal{R}$, and a graph of the function $\{(x, y(x)) \mid x \in [0,1]\}$ is shown above. Clearly a linear function of the input will not give a good approximation to the function. There are many ways to construct nonlinear functions of the input. Some popular approaches in machine learning are regression trees, neural networks, and local regression (e.g. LOESS). However, our approach here will be to map the input space $[0,1]$ into a "feature space" $\mathcal{R}^d$, and then run standard linear regression in the feature space.
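To make the first point concrete, here is a minimal sketch (assuming the x and y(x) defined above are still in scope) that fits an ordinary least-squares line directly to the raw input. The straight line cannot follow either the oscillations or the jumps.

# Minimal sketch: ordinary least squares on the raw input x.
# Design matrix with an intercept column and the input itself.
X_lin = np.stack([np.ones_like(x), x], axis=1)
w_lin, _, _, _ = np.linalg.lstsq(X_lin, y(x), rcond=None)

plt.plot(x, y(x), label="target y(x)")
plt.plot(x, X_lin @ w_lin, label="best linear fit")
plt.legend()
plt.show()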

Feature extraction, or "featurization", maps an input from some input space $\mathcal{X}$ to a vector in $\mathcal{R}^d$. Here our input space is $\mathcal{X} = [0,1]$, so we could write our feature mapping as a function $\Phi : [0,1] \to \mathcal{R}^d$. The vector $\Phi(x)$ is called a feature vector, and each entry of the vector is called a feature. Our feature mapping is typically defined in terms of a set of functions, each computing a single entry of the feature vector. For example, let's define a feature function $\phi_1(x) = 1(x \ge 0.25)$. Here $1(\cdot)$ denotes an indicator function, which is 1 if the expression in the parentheses is true, and 0 otherwise. So $\phi_1(x)$ is $1$ if $x \ge 0.25$ and $0$ otherwise. This function produces a "feature" of $x$. Let's define two more features: $\phi_2(x) = 1(x \ge 0.5)$ and $\phi_3(x) = 1(x \ge 0.75)$. Now we can define a feature mapping into $\mathcal{R}^3$ as:
$$\Phi(x) = (\phi_1(x), \phi_2(x), \phi_3(x)).$$
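Below is a minimal sketch of this feature mapping, assuming the x and y(x) defined above are in scope (the helper name featurize is just for illustration): each column of the design matrix is one indicator feature, and ordinary least squares is run in the resulting 3-dimensional feature space.

# Minimal sketch: Phi(x) = (1(x >= 0.25), 1(x >= 0.5), 1(x >= 0.75)).
def featurize(x):
    # One row per input point, one column per indicator feature.
    return np.stack([(x >= 0.25).astype(float),
                     (x >= 0.5).astype(float),
                     (x >= 0.75).astype(float)], axis=1)

Phi = featurize(x)                                    # shape (n, 3)
w, _, _, _ = np.linalg.lstsq(Phi, y(x), rcond=None)   # least squares in feature space

plt.plot(x, y(x), label="target y(x)")
plt.plot(x, Phi @ w, label="fit with step features")
plt.legend()
plt.show()

With only these three step features the fit is piecewise constant: it can capture the jumps in the target but not the oscillations, which motivates using a richer feature set.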
