Preface
In zinx V0.7 we already separated reads and writes: each client connection is served by three goroutines, a reader, a writer, and a DoMsgHandler. If the server has 100,000 client connections, it ends up with 100,000 reader goroutines, 100,000 writer goroutines, and 100,000 DoMsgHandler goroutines. The reader and writer goroutines spend most of their time blocked on I/O and do not compete for CPU, but the scheduler still has to switch back and forth among the 100,000 DoMsgHandler goroutines, and that switching cost is significant. What we want instead is a fixed number of goroutines handling the DoMsgHandler business logic.

So the next step is to add a message queue and a multi-task Worker mechanism to Zinx. By fixing the number of workers, we bound the number of goroutines that execute business logic instead of spawning goroutines without limit. Go's scheduler is already very good, but a huge number of goroutines still incurs unnecessary context-switching overhead, a cost a server ought to avoid. The message queue buffers the data that the workers consume.
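The fixed-worker idea can be sketched independently of Zinx: a small, fixed set of goroutines drains a task channel, so the number of goroutines running business logic stays bounded no matter how many requests arrive. This is a minimal illustration, not Zinx code; the `runPool` name and the doubling "business logic" are invented for the example.

```go
package main

import (
	"fmt"
	"sync"
)

// runPool distributes nTasks across a fixed number of workers and
// returns the sum of the processed results.
func runPool(workerCount, nTasks int) int {
	tasks := make(chan int, nTasks)
	results := make(chan int, nTasks)

	var wg sync.WaitGroup
	// A fixed set of goroutines drains the queue: the number of
	// goroutines doing work is bounded by workerCount, no matter
	// how many tasks (client requests) are fed in.
	for w := 0; w < workerCount; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for t := range tasks {
				results <- t * 2 // stand-in for DoMsgHandler
			}
		}()
	}

	// Produce the tasks, then close the channel so workers exit.
	for i := 1; i <= nTasks; i++ {
		tasks <- i
	}
	close(tasks)
	wg.Wait()
	close(results)

	sum := 0
	for r := range results {
		sum += r
	}
	return sum
}

func main() {
	fmt.Println(runPool(3, 5)) // 2+4+6+8+10 = 30
}
```

Note that in this toy version the workers exit when the task channel is closed; Zinx's workers, as we will see below, run forever.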
I. Implementation Approach
II. Creating the Message Queue
1 - Add a message queue and worker pool to MsgHandle
type MsgHandle struct {
	Apis           map[uint32]ziface.IRouter // msgID -> handler router
	TaskQueue      []chan ziface.IRequest    // one task queue per worker
	WorkerPoolSize uint32                    // number of workers in the pool
}

func NewMsgHandle() *MsgHandle {
	return &MsgHandle{
		Apis:           make(map[uint32]ziface.IRouter),
		WorkerPoolSize: utils.GlobalObject.WorkerPoolSize,
		// One queue per worker, so the slice length must be WorkerPoolSize.
		// (Each queue's capacity, MaxWorkerTaskLen, is set later in StartWorkerPool.)
		TaskQueue: make([]chan ziface.IRequest, utils.GlobalObject.WorkerPoolSize),
	}
}
2 - Make the message queue and worker count configurable
package utils

import (
	"encoding/json"
	"io/ioutil"
	"zinx/ziface"
)

type GlobalObj struct {
	TcpServer ziface.IServer // the global Server object
	Host      string         // IP the server listens on
	TcpPort   int            // port the server listens on
	Name      string         // server name
	Version   string         // Zinx version

	MaxConn          int    // maximum number of allowed connections
	MaxPackageSize   uint32 // maximum packet size
	WorkerPoolSize   uint32 // number of workers in the pool
	MaxWorkerTaskLen uint32 // capacity of each worker's task queue
}

var GlobalObject *GlobalObj

// Reload loads user-defined settings from conf/zinx.json,
// overriding the defaults set in init()
func (g *GlobalObj) Reload() {
	data, err := ioutil.ReadFile("conf/zinx.json")
	if err != nil {
		panic(err)
	}
	err = json.Unmarshal(data, &GlobalObject)
	if err != nil {
		panic(err)
	}
}

func init() {
	// default values, used when a field is absent from conf/zinx.json
	GlobalObject = &GlobalObj{
		Name:             "ZinxServerApp",
		Version:          "V0.8",
		TcpPort:          8999,
		Host:             "0.0.0.0",
		MaxConn:          1000,
		MaxPackageSize:   4096,
		WorkerPoolSize:   10,
		MaxWorkerTaskLen: 1024,
	}
	GlobalObject.Reload()
}
III. Implementing the Worker Pool
1 - Define the interface for starting the worker pool
zinx/ziface/imsgHandler.go
package ziface

type IMsgHandle interface {
	DoMsgHandler(request IRequest)          // dispatch a message to its registered router
	AddRouter(msgID uint32, router IRouter) // register a router for a message ID
	StartWorkerPool()                       // start the worker pool
	SendMsgToTaskQueue(request IRequest)    // hand a message to a worker's TaskQueue
}
2 - Implement the worker pool
zinx/znet/msgHandler.go
StartWorkerPool() starts the Worker pool: it launches as many workers as the user configured in WorkerPoolSize, assigns each worker its own TaskQueue, and runs each worker's business loop in a dedicated goroutine.

StartOneWorker() is the business loop of a single worker. Workers never exit (for now there is no mechanism to stop a worker); each one blocks forever on its TaskQueue, waiting for messages and processing them.

SendMsgToTaskQueue() is the entry point into the worker pool. Every connection calls this entry point, so we must decide which worker handles a given connection's requests; here a simple round-robin scheme based on a modulo operation is used, matching the remainder of the connection ID against a worker ID.
// StartWorkerPool starts the worker pool
func (mh *MsgHandle) StartWorkerPool() {
	for i := 0; i < int(mh.WorkerPoolSize); i++ {
		// allocate this worker's queue with capacity MaxWorkerTaskLen
		mh.TaskQueue[i] = make(chan ziface.IRequest, utils.GlobalObject.MaxWorkerTaskLen)
		// start the worker, blocking on its own queue
		go mh.StartOneWorker(i, mh.TaskQueue[i])
	}
}

// StartOneWorker is the business loop of a single worker; it never exits
func (mh *MsgHandle) StartOneWorker(workerID int, taskQueue chan ziface.IRequest) {
	fmt.Println("Worker ID = ", workerID, " is started ...")
	for {
		select {
		// block until a message arrives on this worker's queue
		case request := <-taskQueue:
			mh.DoMsgHandler(request)
		}
	}
}

// SendMsgToTaskQueue dispatches a request to a worker's TaskQueue by round-robin
func (mh *MsgHandle) SendMsgToTaskQueue(request ziface.IRequest) {
	// assign the worker as ConnID % WorkerPoolSize, so all requests
	// from one connection always go to the same worker
	workerID := request.GetConnection().GetConnID() % mh.WorkerPoolSize
	fmt.Println("Add ConnID = ", request.GetConnection().GetConnID(),
		" request MsgID = ", request.GetMsgID(),
		" to WorkerID = ", workerID)
	mh.TaskQueue[workerID] <- request
}
3 - Hand messages off to the message queue
zinx/znet/connection.go
The multi-task Worker mechanism is not forced on the user. Instead, the configured WorkerPoolSize is checked: if it is greater than 0, the worker mechanism handles the connection's request messages; if it is 0 or less, we keep the previous behavior of spawning a temporary goroutine for each client request.
func (c *Connection) StartReader() {
	...
		// wrap the incoming message in a Request
		req := Request{
			conn: c,
			msg:  msg,
		}
		if utils.GlobalObject.WorkerPoolSize > 0 {
			// worker pool enabled: hand the request to a worker's TaskQueue
			c.MsgHandler.SendMsgToTaskQueue(&req)
		} else {
			// no worker pool: handle the request in a temporary goroutine, as before
			go c.MsgHandler.DoMsgHandler(&req)
		}
	}
}
4 - Start the worker pool
zinx/znet/server.go
func (s *Server) Start() {
	fmt.Printf("[Zinx] Server Name : %s, listener at IP : %s, Port:%d is starting\n",
		utils.GlobalObject.Name, utils.GlobalObject.Host, utils.GlobalObject.TcpPort)
	fmt.Printf("[Zinx] Version %s, MaxConn:%d, MaxPacketSize:%d\n",
		utils.GlobalObject.Version,
		utils.GlobalObject.MaxConn,
		utils.GlobalObject.MaxPackageSize)

	go func() {
		// start the worker pool before accepting any connections
		s.MsgHandler.StartWorkerPool()
		...
		}
	}()
}
Test screenshot
IV. Project Directory Structure
V. Complete Source Code
Click to download zinxV0.8