The previous posts covered some of muduo's core classes, which are enough to keep the whole library running. Today we trace what happens when a TCP connection request arrives at the server.

A note up front: when a `TcpServer` is constructed it creates an `Acceptor` object, which owns an `acceptSocket_` and the corresponding `acceptChannel_`.

After creating a `TcpServer` object, the user calls `TcpServer::start()`:
```cpp
void TcpServer::start()
{
  if (started_.getAndSet(1) == 0)
  {
    threadPool_->start(threadInitCallback_);

    assert(!acceptor_->listening());
    loop_->runInLoop(
        std::bind(&Acceptor::listen, get_pointer(acceptor_)));
  }
}
```
This first starts the thread pool (`EventLoopThreadPool`), then calls `EventLoop::runInLoop()` on the main loop, binding `Acceptor::listen()` to the `acceptor_`'s raw pointer. Since `runInLoop()` is invoked here from the loop's own thread, `Acceptor::listen()` executes immediately:
```cpp
void Acceptor::listen()
{
  loop_->assertInLoopThread();
  listening_ = true;
  acceptSocket_.listen();
  acceptChannel_.enableReading();
}
```
Once `acceptSocket_.listen()` has put the socket into the listening state, `enableReading()` is called on `acceptChannel_`, the `Channel` that wraps the listening file descriptor:

```cpp
void enableReading() { events_ |= kReadEvent; update(); }
```

This marks the channel as interested in readable events and calls `Channel::update()`:
```cpp
void Channel::update()
{
  addedToLoop_ = true;
  loop_->updateChannel(this);
}
```
This calls `EventLoop::updateChannel()` on the owning IO thread's loop, passing the current `Channel`'s own pointer:
```cpp
void EventLoop::updateChannel(Channel* channel)
{
  assert(channel->ownerLoop() == this);
  assertInLoopThread();
  poller_->updateChannel(channel);
}
```
which in turn calls `Poller::updateChannel()` with that same `Channel` pointer, registering the channel with the poller:
```cpp
void PollPoller::updateChannel(Channel* channel)
{
  Poller::assertInLoopThread();
  LOG_TRACE << "fd = " << channel->fd() << " events = " << channel->events();
  if (channel->index() < 0)
  {
    // a new one, add to pollfds_
    assert(channels_.find(channel->fd()) == channels_.end());
    struct pollfd pfd;
    pfd.fd = channel->fd();
    pfd.events = static_cast<short>(channel->events());
    pfd.revents = 0;
    pollfds_.push_back(pfd);
    int idx = static_cast<int>(pollfds_.size())-1;
    channel->set_index(idx);
    channels_[pfd.fd] = channel;
  }
  else
  {
    // update existing one
    assert(channels_.find(channel->fd()) != channels_.end());
    assert(channels_[channel->fd()] == channel);
    int idx = channel->index();
    assert(0 <= idx && idx < static_cast<int>(pollfds_.size()));
    struct pollfd& pfd = pollfds_[idx];
    assert(pfd.fd == channel->fd() || pfd.fd == -channel->fd()-1);
    pfd.fd = channel->fd();
    pfd.events = static_cast<short>(channel->events());
    pfd.revents = 0;
    if (channel->isNoneEvent())
    {
      // ignore this pollfd
      pfd.fd = -channel->fd()-1;
    }
  }
}
```
At this point the poller is watching events on the listening file descriptor. The user then calls `EventLoop::loop()`:
```cpp
void EventLoop::loop()
{
  assert(!looping_);
  assertInLoopThread();
  looping_ = true;
  quit_ = false;  // FIXME: what if someone calls quit() before loop() ?
  LOG_TRACE << "EventLoop " << this << " start looping";

  while (!quit_)
  {
    activeChannels_.clear();
    pollReturnTime_ = poller_->poll(kPollTimeMs, &activeChannels_);
    ++iteration_;
    if (Logger::logLevel() <= Logger::TRACE)
    {
      printActiveChannels();
    }
    // TODO sort channel by priority
    eventHandling_ = true;
    for (Channel* channel : activeChannels_)
    {
      currentActiveChannel_ = channel;
      currentActiveChannel_->handleEvent(pollReturnTime_);
    }
    currentActiveChannel_ = NULL;
    eventHandling_ = false;
    doPendingFunctors();
  }

  LOG_TRACE << "EventLoop " << this << " stop looping";
  looping_ = false;
}
```
`Poller::poll()` returns the list of active channels, which are then handled one by one. For the active file descriptor (here, the listening fd with a pending connection request), `Channel::handleEvent()` is called:
```cpp
void Channel::handleEvent(Timestamp receiveTime)
{
  std::shared_ptr<void> guard;
  if (tied_)
  {
    guard = tie_.lock();
    if (guard)
    {
      handleEventWithGuard(receiveTime);
    }
  }
  else
  {
    handleEventWithGuard(receiveTime);
  }
}
```
If the channel has been tied to an owner, this first checks whether that owner still exists (it may already have been destroyed); if it does (or if the channel was never tied, which is the case for `acceptChannel_`), `Channel::handleEventWithGuard()` is called:
```cpp
void Channel::handleEventWithGuard(Timestamp receiveTime)
{
  eventHandling_ = true;
  LOG_TRACE << reventsToString();
  if ((revents_ & POLLHUP) && !(revents_ & POLLIN))
  {
    if (logHup_)
    {
      LOG_WARN << "fd = " << fd_ << " Channel::handle_event() POLLHUP";
    }
    if (closeCallback_) closeCallback_();
  }

  if (revents_ & POLLNVAL)
  {
    LOG_WARN << "fd = " << fd_ << " Channel::handle_event() POLLNVAL";
  }

  if (revents_ & (POLLERR | POLLNVAL))
  {
    if (errorCallback_) errorCallback_();
  }
  if (revents_ & (POLLIN | POLLPRI | POLLRDHUP))
  {
    if (readCallback_) readCallback_(receiveTime);
  }
  if (revents_ & POLLOUT)
  {
    if (writeCallback_) writeCallback_();
  }
  eventHandling_ = false;
}
```
Since the event that occurred is a readable one, `Channel::readCallback_` is ultimately invoked. It runs on the main thread here, so let's see who initialized `readCallback_` there:
```cpp
Acceptor::Acceptor(EventLoop* loop, const InetAddress& listenAddr, bool reuseport)
  : loop_(loop),
    acceptSocket_(sockets::createNonblockingOrDie(listenAddr.family())),
    acceptChannel_(loop, acceptSocket_.fd()),
    listening_(false),
    idleFd_(::open("/dev/null", O_RDONLY | O_CLOEXEC))
{
  assert(idleFd_ >= 0);
  acceptSocket_.setReuseAddr(true);
  acceptSocket_.setReusePort(reuseport);
  acceptSocket_.bindAddress(listenAddr);
  // here:
  acceptChannel_.setReadCallback(
      std::bind(&Acceptor::handleRead, this));
}
```
So the `Acceptor` initializes `Channel::readCallback_` in its own constructor, binding it to `Acceptor::handleRead()` together with the `Acceptor`'s `this` pointer. Let's look at what `Acceptor::handleRead()` does:
```cpp
void Acceptor::handleRead()
{
  loop_->assertInLoopThread();
  InetAddress peerAddr;
  //FIXME loop until no more
  int connfd = acceptSocket_.accept(&peerAddr);
  if (connfd >= 0)
  {
    // string hostport = peerAddr.toIpPort();
    // LOG_TRACE << "Accepts of " << hostport;
    // hand the new connection to newConnectionCallback_
    if (newConnectionCallback_)
    {
      newConnectionCallback_(connfd, peerAddr);
    }
    else
    {
      sockets::close(connfd);
    }
  }
  else
  {
    LOG_SYSERR << "in Acceptor::handleRead";
    // Read the section named "The special problem of
    // accept()ing when you can't" in libev's doc.
    // By Marc Lehmann, author of libev.
    // handle running out of file descriptors
    if (errno == EMFILE)
    {
      ::close(idleFd_);
      idleFd_ = ::accept(acceptSocket_.fd(), NULL, NULL);
      ::close(idleFd_);
      idleFd_ = ::open("/dev/null", O_RDONLY | O_CLOEXEC);
    }
  }
}
```
`Acceptor::handleRead()` accepts the connection on a new file descriptor; on success it calls `newConnectionCallback_(connfd, peerAddr)`. So where does `newConnectionCallback_` come from?
```cpp
TcpServer::TcpServer(EventLoop* loop,
                     const InetAddress& listenAddr,
                     const string& nameArg,
                     Option option)
  : loop_(CHECK_NOTNULL(loop)),
    ipPort_(listenAddr.toIpPort()),
    name_(nameArg),
    acceptor_(new Acceptor(loop, listenAddr, option == kReusePort)),
    threadPool_(new EventLoopThreadPool(loop, name_)),
    connectionCallback_(defaultConnectionCallback),
    messageCallback_(defaultMessageCallback),
    nextConnId_(1)
{
  acceptor_->setNewConnectionCallback(
      std::bind(&TcpServer::newConnection, this, _1, _2));
}
```
It is the `TcpServer` constructor that creates the `acceptor_` object and binds `TcpServer::newConnection()` to it, with the placeholders `_1` and `_2` standing for the callback's two arguments. Following the thread, here is what `TcpServer::newConnection()` does:
```cpp
void TcpServer::newConnection(int sockfd, const InetAddress& peerAddr)
{
  loop_->assertInLoopThread();
  EventLoop* ioLoop = threadPool_->getNextLoop();
  char buf[64];
  snprintf(buf, sizeof buf, "-%s#%d", ipPort_.c_str(), nextConnId_);
  ++nextConnId_;
  string connName = name_ + buf;

  LOG_INFO << "TcpServer::newConnection [" << name_
           << "] - new connection [" << connName
           << "] from " << peerAddr.toIpPort();
  InetAddress localAddr(sockets::getLocalAddr(sockfd));
  // FIXME poll with zero timeout to double confirm the new connection
  // FIXME use make_shared if necessary
  TcpConnectionPtr conn(new TcpConnection(ioLoop,
                                          connName,
                                          sockfd,
                                          localAddr,
                                          peerAddr));
  connections_[connName] = conn;
  conn->setConnectionCallback(connectionCallback_);
  conn->setMessageCallback(messageCallback_);
  conn->setWriteCompleteCallback(writeCompleteCallback_);
  conn->setCloseCallback(
      std::bind(&TcpServer::removeConnection, this, _1)); // FIXME: unsafe
  ioLoop->runInLoop(std::bind(&TcpConnection::connectEstablished, conn));
}
```
So `newConnection()` picks an `ioLoop` from the thread pool, creates a `TcpConnection` object (`conn`) owned by that loop, and installs a series of callbacks on it. It then calls `runInLoop()` on the IO loop that owns `conn`:
```cpp
void EventLoop::runInLoop(Functor cb)
{
  if (isInLoopThread())
  {
    cb();
  }
  else
  {
    queueInLoop(std::move(cb));
  }
}
```
If the caller is already on that loop's IO thread the functor runs immediately; otherwise `EventLoop::queueInLoop()` queues it and it eventually runs on the IO thread. Now, what does `ioLoop->runInLoop(std::bind(&TcpConnection::connectEstablished, conn))` actually bind?
```cpp
void TcpConnection::connectEstablished()
{
  loop_->assertInLoopThread();
  assert(state_ == kConnecting);
  setState(kConnected);
  channel_->tie(shared_from_this());
  channel_->enableReading();

  connectionCallback_(shared_from_this());
}
```
This calls `tie()` and `enableReading()` on the connection's `channel_`. First, what `Channel::tie()` does:
```cpp
void Channel::tie(const std::shared_ptr<void>& obj)
{
  tie_ = obj;
  tied_ = true;
}
```
This stores in `channel_` a (weak) reference to the `TcpConnection` and sets `tied_` to `true`. Next, `Channel::enableReading()`:

```cpp
void enableReading() { events_ |= kReadEvent; update(); }
```

Just one line again: register interest in readable events and call `Channel::update()`:
```cpp
void Channel::update()
{
  addedToLoop_ = true;
  loop_->updateChannel(this);
}
```
which calls `EventLoop::updateChannel()` on the connection's IO loop:
```cpp
void EventLoop::updateChannel(Channel* channel)
{
  assert(channel->ownerLoop() == this);
  assertInLoopThread();
  poller_->updateChannel(channel);
}
```
which in turn calls `Poller::updateChannel()`; for `PollPoller` this is the same `PollPoller::updateChannel()` shown earlier, which registers the new channel's fd in `pollfds_` and `channels_` and enables interest in its events. At this point the poller is watching the connection's file descriptor, and the TCP connection is fully established.