Scala Data Structures and Algorithms 02: Implementing Merge Sort with Pattern Matching


 

package data

object MergeSortUsePattern {

  // Sort `source` with merge sort; `comparator(a, b)` returns true when `a` should come before `b`.
  def mergeSort[T](comparator: (T, T) => Boolean)(source: List[T]): List[T] =
    source match {
      case Nil      => Nil      // empty list is already sorted
      case List(_)  => source   // single element is already sorted
      case _ =>
        // split in half, sort each half recursively, then merge
        val (left, right) = source.splitAt(source.length / 2)
        merge(comparator)(mergeSort(comparator)(left), mergeSort(comparator)(right))
    }

  // Merge two already-sorted lists into one sorted list.
  def merge[T](comparator: (T, T) => Boolean)(left: List[T], right: List[T]): List[T] =
    (left, right) match {
      case (Nil, rest) => rest
      case (rest, Nil) => rest
      case (lHead :: lTail, rHead :: rTail) =>
        if (comparator(lHead, rHead)) lHead :: merge(comparator)(lTail, right)
        else rHead :: merge(comparator)(left, rTail)
    }

  def main(args: Array[String]): Unit = {
    val source = List(1, 3, 9, 8, 4, 7, 5, 6)
    println(mergeSort[Int]((x, y) => x < y)(source)) // List(1, 3, 4, 5, 6, 7, 8, 9)
  }
}
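Note that `merge` above makes one non-tail recursive call per merged element, so merging very long lists can overflow the stack. A stack-safe variant can accumulate the result in reverse and reverse it once at the end; this is a sketch (the object name `MergeTailRec` and the helper `loop` are introduced here for illustration):

```scala
import scala.annotation.tailrec

object MergeTailRec {
  // Accumulator-based merge: tail-recursive, so it works for arbitrarily long lists.
  def merge[T](comparator: (T, T) => Boolean)(left: List[T], right: List[T]): List[T] = {
    @tailrec
    def loop(l: List[T], r: List[T], acc: List[T]): List[T] = (l, r) match {
      case (Nil, rest) => acc.reverse ::: rest
      case (rest, Nil) => acc.reverse ::: rest
      case (lHead :: lTail, rHead :: rTail) =>
        if (comparator(lHead, rHead)) loop(lTail, r, lHead :: acc)
        else loop(l, rTail, rHead :: acc)
    }
    loop(left, right, Nil)
  }
}
```

The `acc.reverse ::: rest` at the end costs one extra pass, but keeps every recursive call in tail position so the compiler can turn the recursion into a loop.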


 

The following is sample Scala code implementing the Newton-Raphson algorithm for logistic regression (using the Breeze linear-algebra library):

```scala
import breeze.linalg.{DenseMatrix, DenseVector}

import scala.annotation.tailrec

object LogisticRegression {

  /** Sigmoid function: 1 / (1 + e^(-z)). */
  def sigmoid(z: Double): Double = 1.0 / (1.0 + math.exp(-z))

  /** Gradient of the negative log-likelihood: X^T (sigmoid(Xw) - y). */
  def gradient(X: DenseMatrix[Double], y: DenseVector[Double],
               weights: DenseVector[Double]): DenseVector[Double] = {
    val activation = (X * weights).map(sigmoid)
    X.t * (activation - y)
  }

  /** Hessian of the negative log-likelihood: X^T diag(a * (1 - a)) X. */
  def hessian(X: DenseMatrix[Double],
              weights: DenseVector[Double]): DenseMatrix[Double] = {
    val activation = (X * weights).map(sigmoid)
    val diagonal = activation.map(a => a * (1.0 - a))
    // Scale row i of X by diagonal(i), i.e. diag(d) * X, then left-multiply by X^T.
    val weighted = DenseMatrix.tabulate(X.rows, X.cols)((i, j) => diagonal(i) * X(i, j))
    X.t * weighted
  }

  /** Negative log-likelihood, with activations clipped away from 0 and 1. */
  def negLogLikelihood(X: DenseMatrix[Double], y: DenseVector[Double],
                       weights: DenseVector[Double]): Double = {
    val epsilon = 1e-16
    val a = (X * weights).map(sigmoid)
      .map(v => math.min(math.max(v, epsilon), 1.0 - epsilon))
    val ll = (y.t * a.map(math.log)) +
      (y.map(1.0 - _).t * a.map(v => math.log(1.0 - v)))
    -ll
  }

  /**
   * Train a logistic regression model with the Newton-Raphson algorithm.
   *
   * @param X             design matrix
   * @param y             target variable (0/1 labels)
   * @param maxIterations maximum number of iterations
   * @param tolerance     convergence tolerance on the loss improvement
   * @return weight vector
   */
  def train(X: DenseMatrix[Double], y: DenseVector[Double],
            maxIterations: Int = 100, tolerance: Double = 1e-6): DenseVector[Double] = {
    @tailrec
    def loop(weights: DenseVector[Double], iteration: Int): DenseVector[Double] = {
      val delta = hessian(X, weights) \ gradient(X, y, weights) // Newton step: H^{-1} g
      val next = weights - delta
      val improvement = negLogLikelihood(X, y, weights) - negLogLikelihood(X, y, next)
      if (iteration >= maxIterations || improvement < tolerance) next
      else loop(next, iteration + 1)
    }
    loop(DenseVector.zeros[Double](X.cols), 0)
  }
}
```

This sample code defines the sigmoid function, the gradient, the Hessian matrix, the negative log-likelihood, and a training function. The training function iterates with tail recursion until either the maximum number of iterations is reached or the improvement in the loss falls below the convergence tolerance, and returns the weight vector as the model output.
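To see the Newton-Raphson update concretely without the Breeze dependency, here is a minimal one-feature sketch of the same idea (the object name `Newton1D`, the function `fit`, and the toy data are illustrative, not from the original):

```scala
object Newton1D {
  def sigmoid(z: Double): Double = 1.0 / (1.0 + math.exp(-z))

  // One-feature logistic regression without intercept.
  // Newton update: w <- w - g / h, where
  //   g = sum_i x_i * (sigmoid(w * x_i) - y_i)      (gradient of the negative log-likelihood)
  //   h = sum_i x_i^2 * a_i * (1 - a_i)             (Hessian, a scalar in 1D)
  def fit(xs: Array[Double], ys: Array[Double], iterations: Int = 25): Double = {
    var w = 0.0
    for (_ <- 0 until iterations) {
      val as = xs.map(x => sigmoid(w * x))
      val g = xs.indices.map(i => xs(i) * (as(i) - ys(i))).sum
      val h = xs.indices.map(i => xs(i) * xs(i) * as(i) * (1.0 - as(i))).sum
      if (h != 0.0) w -= g / h
    }
    w
  }
}
```

On a small non-separable data set (one deliberately "mislabeled" point keeps the optimum finite), the fitted weight is positive and classifies the clearly positive and negative inputs correctly. Note that on perfectly separable data the maximum-likelihood weight diverges, which is one reason real implementations add regularization or a convergence check like the `tolerance` above.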