Machine Learning Parameter Server (PS): What Exactly Is Distributed Computing?

1. Overview

The parameter server aims for high-performance distributed machine learning applications. In this framework, multiple nodes run over multiple machines to solve machine learning problems. There is typically a single scheduler node, and several worker and server nodes.

(Figure: parameter server architecture)

  • Worker. A worker node performs the main computations such as reading the data and computing the gradient. It communicates with the server nodes via push and pull: for example, it pushes the computed gradient to the servers, or pulls the recent model from them.
  • Server. A server node maintains and updates the model weights. Each server node maintains only a part of the model (a small partitioning sketch follows this list).
  • Scheduler. The scheduler node monitors the aliveness of the other nodes. It can also be used to send control signals to other nodes and to collect their progress.
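To make "each server maintains only a part of the model" concrete, here is a minimal sketch (not part of ps-lite; the function name and layout are hypothetical) of splitting the parameter keys evenly across server nodes by contiguous key range:

#include <cstdint>
#include <iostream>

// Hypothetical illustration: server s owns the contiguous key range
// [s * range, (s + 1) * range), so each server holds only its segment of w.
int ServerForKey(uint64_t key, uint64_t dim, int num_servers) {
  uint64_t range = (dim + num_servers - 1) / num_servers;  // keys per server
  return static_cast<int>(key / range);
}

int main() {
  const uint64_t dim = 10;      // toy model with 10 weights
  const int num_servers = 3;
  for (uint64_t k = 0; k < dim; ++k)
    std::cout << "key " << k << " -> server "
              << ServerForKey(k, dim, num_servers) << "\n";
  return 0;
}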

1.1. Distributed Optimization

Assume we are going to solve the following problem:

\[ \min_w \sum_{i=1}^{n} f(x_i, y_i, w) \]

where (x_i, y_i) are example pairs and w is the weight.

We consider solving the above problem by minibatch stochastic gradient descent (SGD) with batch size b. At time t, the algorithm first randomly picks b examples, and then updates the weight w by

\[ w \leftarrow w - \eta_t \sum_{i=1}^{b} \nabla f(x_{k_i}, y_{k_i}, w) \]
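For reference, here is a minimal single-machine sketch of this update, using the squared loss f(x, y, w) = (w·x − y)²/2 on 1-D examples purely for illustration; all names in it are hypothetical:

#include <cstddef>
#include <random>
#include <vector>

// Single-machine minibatch SGD: at each step, sum the gradients of b randomly
// picked examples and apply w -= eta_t * grad, with a decaying learning rate.
void MinibatchSGD(const std::vector<double>& x, const std::vector<double>& y,
                  double& w, std::size_t b, std::size_t num_steps) {
  std::mt19937 rng(0);
  std::uniform_int_distribution<std::size_t> pick(0, x.size() - 1);
  for (std::size_t t = 0; t < num_steps; ++t) {
    double grad = 0.0;
    for (std::size_t i = 0; i < b; ++i) {        // sum gradients over the minibatch
      std::size_t k = pick(rng);                 // randomly pick an example
      grad += (w * x[k] - y[k]) * x[k];          // d/dw of 0.5*(w*x - y)^2
    }
    double eta = 0.1 / (1.0 + t);                // learning rate eta_t at time t
    w -= eta * grad;                             // the update shown above
  }
}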

We give two examples to illustrate the basic idea of how to implement a distributed optimization algorithm in ps-lite.

1.1.1. Asynchronous SGD

In the first example, we extend SGD into asynchronous SGD. We let the servers maintain w, where server k holds the k-th segment of w, denoted by w_k. Once it receives a gradient from a worker, server k updates the weights it maintains:

t = 0;
while (Received(&grad)) {     // blocks until a gradient arrives from any worker
  w_k -= eta(t) * grad;       // apply the update with learning rate eta(t)
  t++;
}

where Received returns once a gradient has been received from any worker node, and eta returns the learning rate at time t.
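In ps-lite itself there is no explicit Received call; a server instead registers a request handler that is invoked whenever a worker pushes or pulls. The following is only a rough sketch of what server k's update could look like with ps-lite's KVServer; the type and member names follow ps-lite's kv_app header, but the exact signatures are version dependent:

#include <cstddef>
#include <unordered_map>
#include "ps/ps.h"

// Sketch of an asynchronous SGD handle for ps-lite's KVServer. On a push it
// applies w_k -= eta(t) * grad to the keys it owns; on a pull it returns the
// current weights. It would be registered via server->set_request_handle(...).
struct AsyncSGDHandle {
  std::unordered_map<ps::Key, float> w;   // the segment of the model this server owns
  int t = 0;
  void operator()(const ps::KVMeta& req, const ps::KVPairs<float>& data,
                  ps::KVServer<float>* server) {
    if (req.push) {                                    // a worker pushed gradients
      float eta = 0.1f / (1.0f + t++);                 // learning rate eta(t)
      for (std::size_t i = 0; i < data.keys.size(); ++i)
        w[data.keys[i]] -= eta * data.vals[i];         // w_k -= eta(t) * grad
      server->Response(req);
    } else {                                           // a worker pulled weights
      ps::KVPairs<float> res;
      res.keys = data.keys;
      for (std::size_t i = 0; i < data.keys.size(); ++i)
        res.vals.push_back(w[data.keys[i]]);
      server->Response(req, res);
    }
  }
};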

A worker, on the other hand, does four things each time:

Read(&X, &Y);  // read a minibatch X and Y
Pull(&w);      // pull the recent weight from the servers
ComputeGrad(X, Y, w, &grad);  // compute the gradient
Push(grad);    // push the gradients to the servers

where ps-lite provides the Push and Pull functions, which communicate with the servers holding the corresponding part of the data.
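A rough sketch of this worker loop on top of ps-lite's key-value interface (KVWorker::Push, Pull and Wait); some details such as the worker's construction differ between ps-lite versions, and the gradient computation is left as a placeholder, so treat it as an illustration rather than a complete program:

#include <vector>
#include "ps/ps.h"

// One worker step: pull the recent weights for the keys this worker touches,
// compute the gradient on a minibatch (placeholder), and push it back.
void WorkerStep(ps::KVWorker<float>* kv) {
  std::vector<ps::Key> keys = {0, 1, 2};        // keys of the weights this worker needs
  std::vector<float> w;
  kv->Wait(kv->Pull(keys, &w));                 // pull the recent weights from the servers

  std::vector<float> grad(keys.size(), 0.0f);
  // ... read a minibatch (X, Y) and fill grad using the pulled weights ...

  kv->Wait(kv->Push(keys, grad));               // push the gradients to the servers
}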

Note that asynchronous SGD is semantically different from the single-machine version: since there is no communication between workers, the weights may be updated while a worker is still computing its gradients. In other words, each worker may use delayed (stale) weights. The figure in the original documentation shows the resulting communication with 2 server nodes and 3 worker nodes.

1.1.2. Synchronized SGD

Different from the asynchronous version, we now consider a synchronized version, which is semantically identical to the single-machine algorithm. We use the scheduler to manage the data synchronization:

for (t = 0; t < num_iteration; ++t) {
  for (i = 0; i < num_worker; ++i) {
     IssueComputeGrad(i, t);      // ask worker i to compute its gradient at iteration t
  }
  for (i = 0; i < num_server; ++i) {
     IssueUpdateWeight(i, t);     // ask server i to apply the aggregated update
  }
  WaitAllFinished();              // barrier: wait for all issued commands to finish
}

where IssueComputeGrad and IssueUpdateWeight issue commands to the workers and servers, while WaitAllFinished waits until all issued commands are finished.
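One way such control commands could be sent in ps-lite is through its SimpleApp interface, which exchanges small request/response messages between node groups. The sketch below is hypothetical: the command codes are made up, and the constructor and exact waiting semantics depend on the ps-lite version:

#include <string>
#include "ps/ps.h"

// Hypothetical command codes for the two control messages.
enum Command { kComputeGrad = 1, kUpdateWeight = 2 };

// Scheduler loop: broadcast "compute gradient" to the worker group and
// "update weight" to the server group each iteration, waiting for responses.
void RunScheduler(ps::SimpleApp* app, int num_iterations) {
  for (int t = 0; t < num_iterations; ++t) {
    int ts = app->Request(kComputeGrad, std::to_string(t), ps::kWorkerGroup);
    app->Wait(ts);               // wait until every worker has responded
    ts = app->Request(kUpdateWeight, std::to_string(t), ps::kServerGroup);
    app->Wait(ts);               // wait until every server has responded
  }
}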

When a worker receives a command, it executes the following function:

ExecComputeGrad(i, t) {
   Read(&X, &Y);  // read minibatch with b / num_workers examples
   Pull(&w);      // pull the recent weight from the servers
   ComputeGrad(X, Y, w, &grad);  // compute the gradient
   Push(grad);    // push the gradients to the servers
}

which is almost identical to asynchronous SGD, except that only b/num_workers examples are processed each time.

A server node, on the other hand, has an additional aggregation step compared to asynchronous SGD:

ExecUpdateWeight(i, t) {
   aggregated_grad = 0;
   for (j = 0; j < num_workers; ++j) {
      Receive(&grad);                 // receive a gradient from one worker
      aggregated_grad += grad;        // sum the gradients over all workers
   }
   w_i -= eta(t) * aggregated_grad;   // apply one synchronized update
}

1.1.3. Which one to use?

Compared to a single-machine algorithm, the distributed algorithms have two additional costs. One is the data communication cost, namely the cost of sending data over the network; the other is the synchronization cost due to imperfect load balance and performance variance across machines. These costs may dominate the performance for large-scale applications with hundreds of machines and terabytes of data.

Assume the following notation:

  • f: convex function
  • n: number of examples
  • m: number of workers
  • b: minibatch size
  • τ: maximal delay
  • T_comm: data communication overhead of one minibatch
  • T_sync: synchronization overhead

The trade-offs are summarized by:

  SGD             slowdown of convergence    additional overhead
  synchronized    √b                         (n/b)(T_comm + T_sync)
  asynchronous    √(bτ)                      (n/(mb)) T_comm

What we can see is:

  • the minibatch size trades off convergence against communication cost;
  • the maximal allowed delay trades off convergence against synchronization cost. In synchronized SGD we have τ = 0, so it suffers a large synchronization cost, while asynchronous SGD uses an infinite τ to eliminate this cost. In practice an infinite τ is unlikely to be acceptable, so one usually places a finite upper bound on τ to guarantee convergence, at the price of some synchronization cost.
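As a purely illustrative calculation (the numbers below are made up, not from the original document), plugging concrete values into the overhead column shows how differently the two costs can scale:

% Illustrative values: n = 10^9 examples, b = 10^4, m = 100 workers,
% T_comm = 0.1 s and T_sync = 0.5 s per minibatch.
\[
\text{synchronized:}\quad \frac{n}{b}\,(T_{\text{comm}} + T_{\text{sync}})
  = 10^{5} \times 0.6\,\text{s} = 6 \times 10^{4}\,\text{s},
\]
\[
\text{asynchronous:}\quad \frac{n}{mb}\,T_{\text{comm}}
  = 10^{3} \times 0.1\,\text{s} = 100\,\text{s}.
\]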

1.2. Further Reading

Distributed optimization algorithms have been an active research topic in recent years; see the original ps-lite documentation (linked below) for a list of references.


Source: http://ps-lite.readthedocs.io/en/latest/overview.html