Multithreading in ZLMediaKit
- In the main function, the following statement sets the number of threads in the thread pool:
EventPollerPool::setPoolSize(threads);
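As a hedged aside: a common convention for such a setter (and, I believe, ZLMediaKit's behavior, though this helper is purely illustrative) is that a size of 0 means "use the CPU core count". A minimal sketch of that resolution logic:

```cpp
#include <algorithm>
#include <thread>

// Hypothetical helper, not ZLMediaKit code: resolve a configured
// pool size, falling back to the hardware core count (at least 1)
// when the caller passes 0.
unsigned resolvePoolSize(unsigned configured) {
    if (configured != 0) {
        return configured;
    }
    // hardware_concurrency() may return 0 if unknown, so clamp to 1.
    unsigned hw = std::thread::hardware_concurrency();
    return std::max(hw, 1u);
}
```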
- main then creates a number of TcpServer instances. Their constructor, shown below, initializes the `_poller` member of the TcpServer object. `EventPollerPool::Instance()` returns a static EventPollerPool singleton. In the subsequent call `EventPollerPool::Instance().getPoller()`, the thread constructing the TcpServer is not one of the pool's threads, so `auto poller = EventPoller::getCurrentPoller();` yields null, and the function falls through to `return dynamic_pointer_cast<EventPoller>(getExecutor());`, which picks the least-loaded thread from the pool.
EventPoller::Ptr EventPollerPool::getPoller() {
    auto poller = EventPoller::getCurrentPoller();
    if (_preferCurrentThread && poller) {
        return poller;
    }
    return dynamic_pointer_cast<EventPoller>(getExecutor());
}
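The reason `getCurrentPoller()` returns null outside the pool is that each poller registers itself in a `thread_local` slot from its own thread. The sketch below illustrates that mechanism with made-up names (`Poller`, `runInThread`, `getCurrentPoller`); it is an assumption about the technique, not ZLMediaKit's actual source:

```cpp
#include <thread>

struct Poller;

// One slot per thread: only a thread that has run a poller's loop
// sees a non-null value here.
static thread_local Poller *s_current_poller = nullptr;

struct Poller {
    // Called on the poller's own thread when its event loop starts.
    void runInThread() {
        s_current_poller = this;  // register self in this thread's slot
        // ... the event loop would run here ...
    }
};

// Any thread that never registered (e.g. the main thread) gets nullptr.
Poller *getCurrentPoller() {
    return s_current_poller;
}
```

This is why the TcpServer constructor, running on the main thread, always takes the `getExecutor()` branch.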
TcpServer(const EventPoller::Ptr &poller = nullptr) {
    setOnCreateSocket(nullptr);
    _poller = poller ? poller : EventPollerPool::Instance().getPoller();
    _socket = createSocket();
    _socket->setOnAccept(bind(&TcpServer::onAcceptConnection_l, this, placeholders::_1));
    _socket->setOnBeforeAccept(bind(&TcpServer::onBeforeAcceptConnection_l, this, std::placeholders::_1));
}
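The "least-loaded thread" selection that `getExecutor()` performs can be sketched as a scan over the pool for the executor with the smallest pending-task count. The types and names below (`Executor`, `load`, `pickLeastLoaded`) are illustrative stand-ins, not ZLMediaKit's implementation:

```cpp
#include <atomic>
#include <memory>
#include <vector>

// Illustrative executor: tracks how many tasks are queued on it.
struct Executor {
    std::atomic<int> load{0};
};

using ExecutorPtr = std::shared_ptr<Executor>;

// Return the executor with the smallest load; a linear scan is
// enough for a handful of poller threads.
ExecutorPtr pickLeastLoaded(const std::vector<ExecutorPtr> &executors) {
    ExecutorPtr best;
    for (const auto &e : executors) {
        if (!best || e->load < best->load) {
            best = e;
        }
    }
    return best;
}
```

Each new TcpServer (and each new Socket constructed without an explicit poller) therefore lands on whichever pool thread is currently the idlest.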
- Next comes `_socket = createSocket()`, which performs some initialization of the socket object; after that, two callbacks are registered on the socket.
Socket::Ptr Socket::createSocket(const EventPoller::Ptr &poller, bool enable_mutex) {
    return Socket::Ptr(new Socket(poller, enable_mutex));
}

Socket::Socket(const EventPoller::Ptr &poller, bool enable_mutex) :
    _mtx_sock_fd(enable_mutex), _mtx_event(enable_mutex),
    _mtx_send_buf_waiting(enable_mutex), _mtx_send_buf_sending(enable_mutex) {
    _poller = poller;
    if (!_poller) {
        _poller = EventPollerPool::Instance().getPoller();
    }
    setOnRead(nullptr);
    setOnErr(nullptr);
    setOnAccept(nullptr);
    setOnFlush(nullptr);
    setOnBeforeAccept(nullptr);
}
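Note the `enable_mutex` flag passed to the `_mtx_*` members: the idea is a mutex wrapper whose lock/unlock become no-ops when disabled, so single-threaded sockets skip locking overhead. A minimal sketch of that pattern (an `OptionalMutex` of my own naming, mirroring the idea rather than ZLMediaKit's exact wrapper; the `lockCount()` accessor exists only for inspection):

```cpp
#include <mutex>

// Mutex that can be switched off at construction time.
// Satisfies BasicLockable, so it works with std::lock_guard.
class OptionalMutex {
public:
    explicit OptionalMutex(bool enable) : _enable(enable) {}

    void lock() {
        if (_enable) {
            _mtx.lock();
            ++_lock_count;  // count real acquisitions, for illustration
        }
    }

    void unlock() {
        if (_enable) {
            _mtx.unlock();
        }
    }

    int lockCount() const { return _lock_count; }

private:
    bool _enable;
    int _lock_count = 0;
    std::mutex _mtx;
};
```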
- With the constructor covered, let's look at the call to `start`. Inside start, the socket begins listening and is added to the poller's (epoll-based) watched events; the listening socket is then also registered with the other pool threads so that they monitor it as well.
template <typename SessionType>
void start_l(uint16_t port, const std::string &host = "0.0.0.0", uint32_t backlog = 1024) {
    _session_alloc = [](const TcpServer::Ptr &server, const Socket::Ptr &sock) {
        auto session = std::make_shared<SessionType>(sock);
        session->setOnCreateSocket(server->_on_create_socket);
        return std::make_shared<TcpSessionHelper>(server, session);
    };
    if (!_socket->listen(port, host.c_str(), backlog)) {
        string err = (StrPrinter << "listen on " << host << ":" << port << " failed:" << get_uv_errmsg(true));
        throw std::runtime_error(err);
    }
    weak_ptr<TcpServer> weak_self = shared_from_this();
    _timer = std::make_shared<Timer>(2.0f, [weak_self]() -> bool {
        auto strong_self = weak_self.lock();
        if (!strong_self) {
            return false;
        }
        strong_self->onManagerSession();
        return true;
    }, _poller);
    InfoL << "TCP Server listening on " << host << ":" << port;
}
template <typename SessionType>
void start(uint16_t port, const std::string &host = "0.0.0.0", uint32_t backlog = 1024) {
    start_l<SessionType>(port, host, backlog);
    EventPollerPool::Instance().for_each([&](const TaskExecutor::Ptr &executor) {
        EventPoller::Ptr poller = dynamic_pointer_cast<EventPoller>(executor);
        if (poller == _poller || !poller) {
            return;
        }
        auto &serverRef = _cloned_server[poller.get()];
        if (!serverRef) {
            serverRef = onCreatServer(poller);
        }
        if (serverRef) {
            serverRef->cloneFrom(*this);
        }
    });
}
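The `for_each` loop above builds one cloned server per pool thread, keyed by the raw poller pointer in `_cloned_server`, while skipping the poller the main server already runs on. That bookkeeping can be sketched in isolation with simplified stand-in types (`Poller`, `Server` below are not ZLMediaKit's classes):

```cpp
#include <map>
#include <memory>
#include <vector>

// Simplified stand-ins for EventPoller and TcpServer.
struct Poller {};
struct Server {
    Poller *poller;
    explicit Server(Poller *p) : poller(p) {}
};

// Create one clone per pool thread, except for the poller that
// the main server is already bound to (mirrors the `poller == _poller`
// early-return in start()).
std::map<Poller *, std::shared_ptr<Server>>
cloneToAllPollers(Poller *main_poller, const std::vector<Poller *> &pool) {
    std::map<Poller *, std::shared_ptr<Server>> clones;
    for (auto *p : pool) {
        if (p == main_poller) {
            continue;  // the original server already covers this thread
        }
        clones[p] = std::make_shared<Server>(p);  // one server per poller
    }
    return clones;
}
```

The net effect of `start()` is therefore one server object per event-loop thread, all accepting for the same port, so incoming connections are spread across the whole pool.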