This article follows up on my previous post: muduo核心组件分析_小猪快快跑的博客-CSDN博客 (an analysis of muduo's core components).
If anything here is incorrect, corrections from readers are welcome.
Connect to the server with the telnet command (for example, telnet 127.0.0.1 2007, assuming the echo server listens on port 2007).
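For reference, the walkthrough below assumes an echo-style server roughly like the following. This is only a minimal sketch based on muduo's echo example; the port number and the thread count are assumptions, not something stated in this article:
// main.cc (sketch; based on muduo's echo example, port and thread count are assumptions)
#include <muduo/base/Timestamp.h>
#include <muduo/net/Buffer.h>
#include <muduo/net/EventLoop.h>
#include <muduo/net/InetAddress.h>
#include <muduo/net/TcpServer.h>
#include <functional>

class EchoServer
{
 public:
  EchoServer(muduo::net::EventLoop* loop, const muduo::net::InetAddress& listenAddr)
    : server_(loop, listenAddr, "EchoServer")
  {
    server_.setConnectionCallback(
        std::bind(&EchoServer::onConnection, this, std::placeholders::_1));
    server_.setMessageCallback(
        std::bind(&EchoServer::onMessage, this, std::placeholders::_1,
                  std::placeholders::_2, std::placeholders::_3));
    server_.setThreadNum(3);   // assumption: 3 sub (I/O) threads, so the walkthrough has a thread pool
  }
  void start() { server_.start(); }

 private:
  void onConnection(const muduo::net::TcpConnectionPtr&) {}   // empty, as in this article
  void onMessage(const muduo::net::TcpConnectionPtr& conn,
                 muduo::net::Buffer* buf, muduo::Timestamp)
  {
    conn->send(buf->retrieveAllAsString());   // echo the data back
  }
  muduo::net::TcpServer server_;
};

int main()
{
  muduo::net::EventLoop loop;                 // the main thread's loop
  muduo::net::InetAddress listenAddr(2007);   // assumption: listen on port 2007
  EchoServer server(&loop, listenAddr);
  server.start();
  loop.loop();                                // blocks in EPollPoller::poll below
}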
Before any connection arrives, the main thread blocks in the function below (epoll_wait can be given a timeout, in which case it does not block indefinitely):
// EPollPoller.cc
Timestamp EPollPoller::poll(int timeoutMs, ChannelList* activeChannels)
{
int numEvents = ::epoll_wait(epollfd_,
&*events_.begin(),
static_cast<int>(events_.size()),
timeoutMs);
int savedErrno = errno;
Timestamp now(Timestamp::now());
if (numEvents > 0)
{
fillActiveChannels(numEvents, activeChannels);
if (implicit_cast<size_t>(numEvents) == events_.size())
{
events_.resize(events_.size() * 2);
}
}
else if (numEvents == 0)
{}
else
{
if (savedErrno != EINTR)
{
errno = savedErrno;
}
}
return now;
}
Now a client connects, so epoll_wait returns and execution moves on to fillActiveChannels.
Jump to fillActiveChannels:
// EPollPoller.cc
void EPollPoller::fillActiveChannels(int numEvents,
ChannelList* activeChannels) const
{
for (int i = 0; i < numEvents; ++i)
{
Channel* channel = static_cast<Channel*>(events_[i].data.ptr);
channel->set_revents(events_[i].events);
activeChannels->push_back(channel);
}
}
For each ready fd, this function recovers the Channel that was stored in events_[i].data.ptr when the fd was registered (here it is the Acceptor's acceptChannel), calls Channel::set_revents to record which events occurred, and appends the Channel to EPollPoller's activeChannels list.
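The reason the static_cast above works is that the Channel pointer was stashed in data.ptr when the fd was registered with epoll. A simplified sketch of that registration path (muduo's EPollPoller::update, with error handling and logging removed):
// EPollPoller.cc (simplified sketch of EPollPoller::update; error handling removed)
void EPollPoller::update(int operation, Channel* channel)
{
  struct epoll_event event;
  memZero(&event, sizeof event);       // muduo helper, equivalent to bzero
  event.events = channel->events();    // which events this channel cares about
  event.data.ptr = channel;            // this pointer is what epoll_wait hands back later
  ::epoll_ctl(epollfd_, operation, channel->fd(), &event);
}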
Control then returns to EPollPoller::poll. Since EPollPoller::poll was called by EventLoop::loop, we return from EPollPoller::poll into the while loop of EventLoop::loop:
// EventLoop.cc
void EventLoop::loop()
{
assert(!looping_);
assertInLoopThread();
looping_ = true;
quit_ = false;
while (!quit_)
{
activeChannels_.clear();
pollReturnTime_ = poller_->poll(kPollTimeMs, &activeChannels_);
++iteration_;
eventHandling_ = true;
for (Channel* channel : activeChannels_)
{
currentActiveChannel_ = channel;
currentActiveChannel_->handleEvent(pollReturnTime_);
}
currentActiveChannel_ = NULL;
eventHandling_ = false;
doPendingFunctors();
}
looping_ = false;
}
The for loop then calls Channel::handleEvent:
// Channel.cc
void Channel::handleEvent(Timestamp receiveTime)
{
std::shared_ptr<void> guard;
if (tied_)
{
guard = tie_.lock();
if (guard)
{
handleEventWithGuard(receiveTime);
}
}
else
{
handleEventWithGuard(receiveTime);
}
}
Since the acceptChannel has not been tied to anything, the else branch runs handleEventWithGuard:
// Channel.cc
void Channel::handleEventWithGuard(Timestamp receiveTime)
{
eventHandling_ = true;
if ((revents_ & POLLHUP) && !(revents_ & POLLIN))
{
if (closeCallback_) closeCallback_();
}
if (revents_ & POLLNVAL)
{
}
if (revents_ & (POLLERR | POLLNVAL))
{
if (errorCallback_) errorCallback_();
}
if (revents_ & (POLLIN | POLLPRI | POLLRDHUP))
{
if (readCallback_) readCallback_(receiveTime);
}
if (revents_ & POLLOUT)
{
if (writeCallback_) writeCallback_();
}
eventHandling_ = false;
}
Because the event detected is a new client connection, readCallback_ is executed.
This readCallback_ was bound when the Acceptor inside TcpServer was constructed; it belongs to the acceptChannel, not to the wakeupChannel's readCallback_ mentioned earlier.
The callback is Acceptor::handleRead. In other words, when a new connection arrives, control flows from EventLoop::loop into this function.
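For reference, this is roughly where that binding happens, in the Acceptor constructor (a simplified sketch; several members and listen-socket options are omitted):
// Acceptor.cc (simplified sketch of the constructor; some members omitted)
Acceptor::Acceptor(EventLoop* loop, const InetAddress& listenAddr, bool reuseport)
  : loop_(loop),
    acceptSocket_(sockets::createNonblockingOrDie(listenAddr.family())),
    acceptChannel_(loop, acceptSocket_.fd())
{
  acceptSocket_.setReuseAddr(true);
  acceptSocket_.bindAddress(listenAddr);
  acceptChannel_.setReadCallback(
      std::bind(&Acceptor::handleRead, this));   // this is the readCallback_ that just fired
}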
Jump to Acceptor::handleRead:
// Acceptor.cc
void Acceptor::handleRead()
{
InetAddress peerAddr;
//FIXME loop until no more
int connfd = acceptSocket_.accept(&peerAddr);
if (connfd >= 0)
{
if (newConnectionCallback_)
{
newConnectionCallback_(connfd, peerAddr);
}
else
{
sockets::close(connfd);
}
}
else
{
}
}
connfd is the socket that will be used to communicate with the client.
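Under the hood, acceptSocket_.accept(&peerAddr) ends up in an accept4 call along these lines (a rough sketch; error handling, and the idle-fd trick muduo's Acceptor uses to survive EMFILE, are omitted):
// SocketsOps.cc (rough sketch; error handling omitted)
int sockets::accept(int sockfd, struct sockaddr_in6* addr)
{
  socklen_t addrlen = static_cast<socklen_t>(sizeof *addr);
  int connfd = ::accept4(sockfd, sockaddr_cast(addr), &addrlen,
                         SOCK_NONBLOCK | SOCK_CLOEXEC);   // the new socket is non-blocking from the start
  return connfd;
}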
Next, the newConnectionCallback_ above is executed. This is again a callback, namely TcpServer::newConnection; it was set in the body of the TcpServer constructor, i.e. in step three of the previous article:
// TcpServer.cc
{
acceptor_->setNewConnectionCallback(
std::bind(&TcpServer::newConnection, this, _1, _2));
}
As you can see, newConnectionCallback_ is TcpServer::newConnection:
// TcpServer.cc
void TcpServer::newConnection(int sockfd, const InetAddress& peerAddr)
{
EventLoop* ioLoop = threadPool_->getNextLoop();
char buf[64];
snprintf(buf, sizeof buf, "-%s#%d", ipPort_.c_str(), nextConnId_);
++nextConnId_;
string connName = name_ + buf;
InetAddress localAddr(sockets::getLocalAddr(sockfd));
TcpConnectionPtr conn(new TcpConnection(ioLoop,
connName,
sockfd,
localAddr,
peerAddr));
connections_[connName] = conn;
conn->setConnectionCallback(connectionCallback_);
conn->setMessageCallback(messageCallback_);
conn->setWriteCompleteCallback(writeCompleteCallback_);
conn->setCloseCallback(
std::bind(&TcpServer::removeConnection, this, _1)); // FIXME: unsafe
ioLoop->runInLoop(std::bind(&TcpConnection::connectEstablished, conn));
}
The line EventLoop* ioLoop = threadPool_->getNextLoop() takes us to EventLoopThreadPool::getNextLoop():
// EventLoopThreadPool.cc
EventLoop* EventLoopThreadPool::getNextLoop()
{
EventLoop* loop = baseLoop_;
if (!loops_.empty())
{
// round-robin
loop = loops_[next_];
++next_;
if (implicit_cast<size_t>(next_) >= loops_.size())
{
next_ = 0;
}
}
return loop;
}
This function picks a sub thread (I/O thread) by round-robin; ioLoop is the loop of the chosen sub thread. loops_ was filled with push_back when EventLoopThreadPool::start() ran.
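As a reminder of how loops_ gets populated, here is a trimmed sketch of EventLoopThreadPool::start() (names follow muduo; thread naming and assertions removed):
// EventLoopThreadPool.cc (trimmed sketch of start(); thread naming and assertions removed)
void EventLoopThreadPool::start(const ThreadInitCallback& cb)
{
  started_ = true;
  for (int i = 0; i < numThreads_; ++i)
  {
    EventLoopThread* t = new EventLoopThread(cb);
    threads_.push_back(std::unique_ptr<EventLoopThread>(t));
    loops_.push_back(t->startLoop());   // startLoop() returns once the new thread's EventLoop exists
  }
  if (numThreads_ == 0 && cb)
  {
    cb(baseLoop_);                      // no sub threads: everything stays on the base loop
  }
}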
Back in TcpServer::newConnection, a TcpConnection is constructed. Jump to the TcpConnection constructor:
// TcpConnection.cc
TcpConnection::TcpConnection(EventLoop* loop,
const string& nameArg,
int sockfd,
const InetAddress& localAddr,
const InetAddress& peerAddr)
: loop_(CHECK_NOTNULL(loop)),
name_(nameArg),
state_(kConnecting),
reading_(true),
socket_(new Socket(sockfd)),
channel_(new Channel(loop, sockfd)),
localAddr_(localAddr),
peerAddr_(peerAddr),
highWaterMark_(64 * 1024 * 1024)
{
channel_->setReadCallback(
std::bind(&TcpConnection::handleRead, this, _1));
channel_->setWriteCallback(
std::bind(&TcpConnection::handleWrite, this));
channel_->setCloseCallback(
std::bind(&TcpConnection::handleClose, this));
channel_->setErrorCallback(
std::bind(&TcpConnection::handleError, this));
socket_->setKeepAlive(true);
}
Pay particular attention to the callbacks set above:
- Channel::readCallback_ <= TcpConnection::handleRead
- Channel::writeCallback_ <= TcpConnection::handleWrite
- Channel::closeCallback_ <= TcpConnection::handleClose
- Channel::errorCallback_ <= TcpConnection::handleError
Back again in TcpServer::newConnection:
// TcpServer.cc
conn->setConnectionCallback(connectionCallback_);
conn->setMessageCallback(messageCallback_);
conn->setWriteCompleteCallback(writeCompleteCallback_);
conn->setCloseCallback(
std::bind(&TcpServer::removeConnection, this, _1)); // FIXME: unsafe
Several of TcpConnection's callbacks are set to TcpServer's members, with the following mapping:
- TcpConnection::connectionCallback_ <= TcpServer::connectionCallback_
- TcpConnection::messageCallback_ <= TcpServer::messageCallback_
- TcpConnection::writeCompleteCallback_ <= TcpServer::writeCompleteCallback_
- TcpConnection::closeCallback_ <= TcpServer::removeConnection
Note that TcpConnection::handleRead invokes TcpConnection::messageCallback_ when data arrives, while TcpConnection::connectionCallback_ is invoked from connectEstablished once the connection is set up.
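A trimmed sketch of TcpConnection::handleRead makes that chain concrete (logging and assertions removed):
// TcpConnection.cc (trimmed sketch of handleRead; logging and assertions removed)
void TcpConnection::handleRead(Timestamp receiveTime)
{
  int savedErrno = 0;
  ssize_t n = inputBuffer_.readFd(channel_->fd(), &savedErrno);
  if (n > 0)
  {
    messageCallback_(shared_from_this(), &inputBuffer_, receiveTime);   // -> onMessage in main.cc
  }
  else if (n == 0)
  {
    handleClose();        // peer closed the connection
  }
  else
  {
    errno = savedErrno;
    handleError();
  }
}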
The first two callbacks are set in main.cc:
// main.cc
EchoServer(muduo::net::EventLoop* loop,
const muduo::net::InetAddress& listenAddr)
: server_(loop, listenAddr, "EchoServer")
{
server_.setConnectionCallback(
std::bind(&EchoServer::onConnection, this, _1));
server_.setMessageCallback(
std::bind(&EchoServer::onMessage, this, _1, _2, _3));
}
- TcpConnection::connectionCallback_ <= TcpServer::connectionCallback_ <= onConnection in main.cc
- TcpConnection::messageCallback_ <= TcpServer::messageCallback_ <= onMessage in main.cc
Chaining these together: the acceptChannel's readCallback_ eventually leads to onConnection (via connectEstablished, shown below), and the connection channel's readCallback_ (TcpConnection::handleRead) leads to onMessage.
Back in TcpServer::newConnection once more, the following line executes:
ioLoop->runInLoop(std::bind(&TcpConnection::connectEstablished, conn));
which takes us to:
// EventLoop.cc
void EventLoop::runInLoop(Functor cb)
{
if (isInLoopThread())
{
cb();
}
else
{
queueInLoop(std::move(cb));
}
}
Here execution goes into the else branch:
// EventLoop.cc
void EventLoop::queueInLoop(Functor cb)
{
{
MutexLockGuard lock(mutex_);
pendingFunctors_.push_back(std::move(cb));
}
if (!isInLoopThread() || callingPendingFunctors_)
{
wakeup();
}
}
First, why does execution end up in queueInLoop()? Because in this line of TcpServer::newConnection:
ioLoop->runInLoop(std::bind(&TcpConnection::connectEstablished, conn));
ioLoop is the loop belonging to a sub thread, while the code is currently running in the main thread. It was obtained like this:
// TcpServer.cc, in newConnection()
EventLoop* ioLoop = threadPool_->getNextLoop();
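The check that sends us into the else branch is just a thread-id comparison; a sketch matching muduo's one-line implementation:
// EventLoop.h (sketch)
bool isInLoopThread() const { return threadId_ == CurrentThread::tid(); }
// threadId_ was recorded by the sub thread when it constructed ioLoop,
// but newConnection runs in the main thread, so the comparison fails and queueInLoop is used.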
Next comes wakeup():
// EventLoop.cc
void EventLoop::wakeup()
{
uint64_t one = 1;
ssize_t n = sockets::write(wakeupFd_, &one, sizeof one);
}
This wakeup() and its wakeupFd_ belong to the sub thread's EventLoop, but the call is made from the main thread: the main thread writes to the sub thread's wakeupFd_ to wake it up.
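For context, each EventLoop creates this wakeupFd_ for itself as an eventfd and registers it with its own poller; a trimmed sketch of the relevant lines from the EventLoop constructor (other members and error handling omitted):
// EventLoop.cc (trimmed sketch; other constructor members and error handling omitted)
#include <sys/eventfd.h>

static int createEventfd()
{
  return ::eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);   // a lightweight fd used only for cross-thread wakeups
}

EventLoop::EventLoop()
  : wakeupFd_(createEventfd()),
    wakeupChannel_(new Channel(this, wakeupFd_))
{
  wakeupChannel_->setReadCallback(
      std::bind(&EventLoop::handleRead, this));      // drained in the sub thread, see below
  wakeupChannel_->enableReading();                   // so the 8-byte write in wakeup() wakes up poll()
}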
Back in the main thread's EventLoop::loop:
// EventLoop.cc
void EventLoop::loop()
{
assert(!looping_);
assertInLoopThread();
looping_ = true;
quit_ = false;
while (!quit_)
{
activeChannels_.clear();
pollReturnTime_ = poller_->poll(kPollTimeMs, &activeChannels_);
++iteration_;
eventHandling_ = true;
for (Channel* channel : activeChannels_)
{
currentActiveChannel_ = channel;
currentActiveChannel_->handleEvent(pollReturnTime_);
}
currentActiveChannel_ = NULL;
eventHandling_ = false;
doPendingFunctors();
}
looping_ = false;
}
Up to this point the main thread has been inside:
currentActiveChannel_->handleEvent(pollReturnTime_);
To summarize what this one line did: it accepted the connection, constructed a TcpConnection, chose a sub thread by round-robin, and woke that sub thread up.
This time the main loop's doPendingFunctors() has nothing to run, because connectEstablished was queued on the sub thread's loop, not on this one.
The main thread then goes around the while loop again.
From here on we are in the sub thread (the original post switches text color at this point).
While the main thread has not yet sent it a message, the sub thread blocks in the function below (again, epoll_wait could be given a timeout instead of blocking indefinitely):
// EPollPoller.cc
Timestamp EPollPoller::poll(int timeoutMs, ChannelList* activeChannels)
{
int numEvents = ::epoll_wait(epollfd_,
&*events_.begin(),
static_cast<int>(events_.size()),
timeoutMs);
int savedErrno = errno;
Timestamp now(Timestamp::now());
if (numEvents > 0)
{
fillActiveChannels(numEvents, activeChannels);
if (implicit_cast<size_t>(numEvents) == events_.size())
{
events_.resize(events_.size() * 2);
}
}
else if (numEvents == 0)
{}
else
{
if (savedErrno != EINTR)
{
errno = savedErrno;
}
}
return now;
}
Now that a client has connected and the main thread has written to the sub thread's wakeupFd_, the sub thread's epoll_wait returns and execution reaches fillActiveChannels.
Jump to fillActiveChannels:
// EPollPoller.cc
void EPollPoller::fillActiveChannels(int numEvents,
ChannelList* activeChannels) const
{
for (int i = 0; i < numEvents; ++i)
{
Channel* channel = static_cast<Channel*>(events_[i].data.ptr);
channel->set_revents(events_[i].events);
activeChannels->push_back(channel);
}
}
Control returns to EPollPoller::poll. Since EPollPoller::poll was called by EventLoop::loop, we return from EPollPoller::poll into the while loop of EventLoop::loop:
// EventLoop.cc
void EventLoop::loop()
{
assert(!looping_);
assertInLoopThread();
looping_ = true;
quit_ = false;
while (!quit_)
{
activeChannels_.clear();
pollReturnTime_ = poller_->poll(kPollTimeMs, &activeChannels_);
++iteration_;
eventHandling_ = true;
for (Channel* channel : activeChannels_)
{
currentActiveChannel_ = channel;
currentActiveChannel_->handleEvent(pollReturnTime_);
}
currentActiveChannel_ = NULL;
eventHandling_ = false;
doPendingFunctors();
}
looping_ = false;
}
The for loop then calls Channel::handleEvent:
// Channel.cc
void Channel::handleEvent(Timestamp receiveTime)
{
std::shared_ptr<void> guard;
if (tied_)
{
guard = tie_.lock();
if (guard)
{
handleEventWithGuard(receiveTime);
}
}
else
{
handleEventWithGuard(receiveTime);
}
}
The wakeupChannel is not tied either, so again the else branch runs handleEventWithGuard:
// Channel.cc
void Channel::handleEventWithGuard(Timestamp receiveTime)
{
eventHandling_ = true;
if ((revents_ & POLLHUP) && !(revents_ & POLLIN))
{
if (closeCallback_) closeCallback_();
}
if (revents_ & POLLNVAL)
{
}
if (revents_ & (POLLERR | POLLNVAL))
{
if (errorCallback_) errorCallback_();
}
if (revents_ & (POLLIN | POLLPRI | POLLRDHUP))
{
if (readCallback_) readCallback_(receiveTime);
}
if (revents_ & POLLOUT)
{
if (writeCallback_) writeCallback_();
}
eventHandling_ = false;
}
readCallback_ is executed. When was this readCallback_ set? In step four of the previous article, server.start(): when the thread pool was started, each sub thread created its own EventLoop and its wakeupFd_, and at that point the wakeup channel's read callback was bound. That readCallback_ is EventLoop::handleRead:
// EventLoop.cc
void EventLoop::handleRead()
{
uint64_t one = 1;
ssize_t n = sockets::read(wakeupFd_, &one, sizeof one);
}
This is the first time this function is called; it reads the 8 bytes the main thread wrote to wakeupFd_. Its only purpose is to drain the wakeup notification and confirm the loop was woken up (the logging has been removed from the code shown above).
Back in EventLoop::loop, doPendingFunctors() now runs:
// EventLoop.cc
void EventLoop::doPendingFunctors()
{
std::vector<Functor> functors;
callingPendingFunctors_ = true;
{
MutexLockGuard lock(mutex_);
functors.swap(pendingFunctors_);
}
for (const Functor& functor : functors)
{
functor();
}
callingPendingFunctors_ = false;
}
The callbacks in pendingFunctors_ were added by the main thread in queueInLoop():
pendingFunctors_.push_back(std::move(cb));
(Note that doPendingFunctors first swaps the queued functors into a local vector, so the mutex is held only briefly and a functor can itself call queueInLoop without deadlocking.)
queueInLoop(cb) was called by runInLoop(cb), which in turn was called from TcpServer::newConnection in the main thread:
// TcpServer.cc
void TcpServer::newConnection(int sockfd, const InetAddress& peerAddr)
{
EventLoop* ioLoop = threadPool_->getNextLoop();
char buf[64];
snprintf(buf, sizeof buf, "-%s#%d", ipPort_.c_str(), nextConnId_);
++nextConnId_;
string connName = name_ + buf;
InetAddress localAddr(sockets::getLocalAddr(sockfd));
TcpConnectionPtr conn(new TcpConnection(ioLoop,
connName,
sockfd,
localAddr,
peerAddr));
connections_[connName] = conn;
conn->setConnectionCallback(connectionCallback_);
conn->setMessageCallback(messageCallback_);
conn->setWriteCompleteCallback(writeCompleteCallback_);
conn->setCloseCallback(
std::bind(&TcpServer::removeConnection, this, _1)); // FIXME: unsafe
ioLoop->runInLoop(std::bind(&TcpConnection::connectEstablished, conn));
}
So the callback sitting in pendingFunctors_ is TcpConnection::connectEstablished.
Jump to TcpConnection::connectEstablished:
// TcpConnection.cc
void TcpConnection::connectEstablished()
{
setState(kConnected);
channel_->tie(shared_from_this());
channel_->enableReading();
connectionCallback_(shared_from_this());
}
First it enters Channel::tie, which stores a weak_ptr to this TcpConnection in the channel; later, Channel::handleEvent promotes it to a shared_ptr (the guard seen earlier) so the TcpConnection stays alive while its callbacks run:
// Channel.cc
void Channel::tie(const std::shared_ptr<void>& obj)
{
tie_ = obj;
tied_ = true;
}
Then it executes:
channel_->enableReading();
This again triggers a chain of update calls (Channel::update -> EventLoop::updateChannel -> EPollPoller::updateChannel), ending in epoll_ctl, which registers the connection's fd for read events in this sub thread's epoll.
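The chain looks roughly like this (a condensed sketch pulling one-liners from several files; assertions removed):
// Condensed sketch of the update chain; assertions removed.
void Channel::enableReading() { events_ |= kReadEvent; update(); }   // Channel.h
void Channel::update() { loop_->updateChannel(this); }               // Channel.cc
void EventLoop::updateChannel(Channel* channel)                      // EventLoop.cc
{
  poller_->updateChannel(channel);
}
void EPollPoller::updateChannel(Channel* channel)                    // EPollPoller.cc
{
  update(EPOLL_CTL_ADD, channel);   // a brand-new channel is added; existing ones use EPOLL_CTL_MOD
}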
Finally it executes:
connectionCallback_(shared_from_this());
As noted while discussing the main thread:
TcpConnection::connectionCallback_ <= TcpServer::connectionCallback_ <= onConnection in main.cc
So this lands in:
// main.cc
void onConnection(const muduo::net::TcpConnectionPtr& conn)
{
}
Nothing is actually done here; the body is empty in this example.
At this point EventLoop::doPendingFunctors() is finished.
To summarize what doPendingFunctors() accomplished: it ran TcpConnection::connectEstablished, which registered the new connection's fd for reading in the sub thread's epoll and then invoked the onConnection() written in main.cc for the newly connected client.
The sub thread then goes back to the top of the while loop in EventLoop::loop.