Python optimize.fminbound Method Code Examples

This article takes a close look at Python's scipy.optimize.fminbound method, covering its purpose and common usage through 17 curated code examples. The examples span a range of application scenarios, such as parameter optimization and line search, and are drawn from different projects across data processing, statistical analysis, and machine learning.

This article collects typical usage examples of the scipy.optimize.fminbound method in Python. If you have been wondering what optimize.fminbound does, how to call it, or what real uses of it look like, the curated code examples below should help. You can also explore further usage examples for its parent module, scipy.optimize.

Below are 17 code examples of the optimize.fminbound method, sorted by popularity by default. You can upvote the examples you like or find useful; your votes help the system recommend better Python code examples.
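
Before diving into the project examples, here is a minimal standalone sketch (not taken from any of the projects below) of what fminbound does: it minimizes a scalar function of one variable over a closed interval [x1, x2].

from scipy.optimize import fminbound

# Minimize f(x) = (x - 2)**2 over the interval [0, 5].
f = lambda x: (x - 2) ** 2

xmin = fminbound(f, 0, 5)
print(xmin)  # ~2.0, the bounded minimizer

# With full_output=True the call also returns the objective value at the
# minimum, an error flag (0 on success), and the number of evaluations.
xopt, fval, ierr, numfunc = fminbound(f, 0, 5, full_output=True)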

Example 1: _psturng

Upvotes: 6

# Required import: from scipy import optimize [as alias]
# Or: from scipy.optimize import fminbound [as alias]
def _psturng(q, r, v):
    """scalar version of psturng"""
    if q < 0.:
        raise ValueError('q should be >= 0')

    opt_func = lambda p, r, v: abs(_qsturng(p, r, v) - q)

    if v == 1:
        if q < _qsturng(.9, r, 1):
            return .1
        elif q > _qsturng(.999, r, 1):
            return .001
        return 1. - fminbound(opt_func, .9, .999, args=(r, v))
    else:
        if q < _qsturng(.1, r, v):
            return .9
        elif q > _qsturng(.999, r, v):
            return .001
        return 1. - fminbound(opt_func, .1, .999, args=(r, v))

Developer ID: birforce, Project: vnpy_crypto, Lines of code: 21
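
Example 1 uses fminbound to invert a function numerically: since fminbound only minimizes, it searches the bracketing interval for the probability p whose forward value _qsturng(p, r, v) is closest to the observed q. A generic sketch of this inversion pattern (forward and invert are hypothetical names, not from the project above):

from scipy.optimize import fminbound

def invert(forward, y, lo, hi):
    """Find x in [lo, hi] with forward(x) close to y by minimizing |forward(x) - y|."""
    return fminbound(lambda x: abs(forward(x) - y), lo, hi)

# e.g. recover x from y = x**3 on [0, 2]
x = invert(lambda x: x ** 3, 5.0, 0.0, 2.0)  # ~1.71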

Example 2: _find_estimator_weight

Upvotes: 6

# Required import: from scipy import optimize [as alias]
# Or: from scipy.optimize import fminbound [as alias]
def _find_estimator_weight(self, y, dv_pre, y_pred):
    """Perform a line search to determine estimator weights."""
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")

        def optimization_function(alpha):
            p_ij = self._estimate_instance_probabilities(dv_pre + alpha * y_pred)
            p_i = self._estimate_bag_probabilites(p_ij)
            return self._negative_log_likelihood(p_i)

        # TODO: Add option to choose optimization method.
        alpha, fval, err, n_func = fminbound(optimization_function, 0.0, 5.0,
                                             full_output=True, disp=1)
    if self.learning_rate < 1.0:
        alpha *= self.learning_rate
    return alpha, fval

Developer ID: hbldh, Project: skboost, Lines of code: 18
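
One detail worth noting in the call above: with full_output=True, fminbound returns a 4-tuple instead of just the minimizer, so the unpacking order matters. A minimal sketch with a toy objective (not from skboost):

from scipy.optimize import fminbound

# xopt: minimizer; fval: objective value at xopt;
# ierr: 0 if converged, 1 if maxfun was reached; numfunc: evaluations used.
xopt, fval, ierr, numfunc = fminbound(lambda a: (a - 1.0) ** 2, 0.0, 5.0,
                                      full_output=True)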

Example 3: CA_step

Upvotes: 6

# Required import: from scipy import optimize [as alias]
# Or: from scipy.optimize import fminbound [as alias]
def CA_step(z1, z2, theta, index, min_val, max_val):
    """Take a single coordinate ascent step."""
    inner_theta = theta.copy()

    def f(alpha):
        inner_theta[index] = theta[index] + alpha
        return -calc_gaussian_mix_log_lhd(inner_theta, z1, z2)

    assert theta[index] >= min_val
    min_step_size = min_val - theta[index]
    assert theta[index] <= max_val
    max_step_size = max_val - theta[index]

    alpha = fminbound(f, min_step_size, max_step_size)
    prev_lhd = -f(0)
    new_lhd = -f(alpha)
    if new_lhd > prev_lhd:
        theta[index] += alpha
    else:
        new_lhd = prev_lhd
    return theta, new_lhd

Developer ID: nboley, Project: idr, Lines of code: 24
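
CA_step optimizes one coordinate of theta at a time, letting fminbound do the inner one-dimensional search; the step bounds are shifted by theta[index] so the updated coordinate stays inside [min_val, max_val]. A standalone sketch of the same coordinate-wise pattern, using a toy quadratic objective instead of the project's Gaussian-mixture log-likelihood:

import numpy as np
from scipy.optimize import fminbound

def coordinate_step(f, theta, index, lo, hi):
    """Minimize f along coordinate `index`, keeping theta[index] in [lo, hi]."""
    def along(alpha):
        trial = theta.copy()
        trial[index] = theta[index] + alpha
        return f(trial)
    alpha = fminbound(along, lo - theta[index], hi - theta[index])
    if along(alpha) < f(theta):  # accept the step only if it improves
        theta[index] += alpha
    return theta

theta = np.array([3.0, -2.0])
f = lambda t: (t[0] - 1) ** 2 + (t[1] + 1) ** 2
for i in range(len(theta)):
    theta = coordinate_step(f, theta, i, -5.0, 5.0)
# theta is now close to [1.0, -1.0]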

Example 4: learn_rmp

Upvotes: 6

# Required import: from scipy import optimize [as alias]
# Or: from scipy.optimize import fminbound [as alias]
def learn_rmp(subpops, D):
    K = len(subpops)
    rmp_matrix = np.eye(K)
    models = learn_models(subpops)
    for k in range(K - 1):
        for j in range(k + 1, K):
            probmatrix = [np.ones([models[k].num_sample, 2]),
                          np.ones([models[j].num_sample, 2])]
            probmatrix[0][:, 0] = models[k].density(subpops[k])
            probmatrix[0][:, 1] = models[j].density(subpops[k])
            probmatrix[1][:, 0] = models[k].density(subpops[j])
            probmatrix[1][:, 1] = models[j].density(subpops[j])
            rmp = fminbound(lambda rmp: log_likelihood(rmp, probmatrix, K), 0, 1)
            rmp += np.random.randn() * 0.01
            rmp = np.clip(rmp, 0, 1)
            rmp_matrix[k, j] = rmp
            rmp_matrix[j, k] = rmp
    return rmp_matrix

# OPTIMIZATION RESULT HELPERS

Developer ID: thanhbok26b, Project: mfea-ii, Lines of code: 25
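
The pattern in Example 4 is a bounded maximum-likelihood fit: the log-likelihood is wrapped in a lambda that closes over the data, and fminbound searches the parameter's natural range [0, 1]; a small random jitter plus np.clip then keeps the result inside the interval. A toy sketch of the same idea, estimating a Bernoulli success probability (not from the mfea-ii project):

import numpy as np
from scipy.optimize import fminbound

data = np.array([1, 0, 1, 1, 0, 1, 1, 1])  # toy Bernoulli samples

def neg_log_likelihood(p, x):
    p = np.clip(p, 1e-12, 1 - 1e-12)  # guard log(0) at the interval ends
    return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

p_hat = fminbound(lambda p: neg_log_likelihood(p, data), 0, 1)
# p_hat is close to data.mean() == 0.75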

Example 5: test_var

Upvotes: 5

# Required import: from scipy import optimize [as alias]
# Or: from scipy.optimize import fminbound [as alias]
