How muduo runs after a client sends a message

This article follows on from my previous two:

muduo核心组件分析_小猪快快跑的博客-CSDN博客 (an analysis of muduo's core components)

有客户端连接后muduo是怎么运行的_小猪快快跑的博客-CSDN博客 (how muduo runs after a client connects)

If anything here is wrong, corrections from readers are welcome.

We send the server a message with the telnet command (e.g. telnet 127.0.0.1 2007, then type a line; port 2007 is the one main.cc listens on).

Unless noted otherwise, the steps below run in the sub-thread, i.e. the I/O thread that owns this connection; hand-offs to the main thread are called out explicitly.

Here is main.cc:

// main.cc
class EchoServer
{
public:
    EchoServer(muduo::net::EventLoop* loop,
             const muduo::net::InetAddress& listenAddr)
      : server_(loop, listenAddr, "EchoServer")
    {
      server_.setConnectionCallback(
          std::bind(&EchoServer::onConnection, this, _1));
      server_.setMessageCallback(
          std::bind(&EchoServer::onMessage, this, _1, _2, _3));
    }
 
    void start() 
    {
        server_.start();
    }
private:
    void onConnection(const muduo::net::TcpConnectionPtr& conn)
    {
    }
 
    void onMessage(const muduo::net::TcpConnectionPtr& conn,
                 muduo::net::Buffer* buf,
                 muduo::Timestamp time)
    {
        muduo::string msg(buf->retrieveAllAsString());
        conn->send(msg);
        conn->shutdown();
    }
 
    muduo::net::TcpServer server_;
};
 
 
int main()
{
    muduo::net::EventLoop loop;
    muduo::net::InetAddress listenAddr(2007);
    EchoServer server(&loop, listenAddr);
    server.start();
    loop.loop();
}
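
For the snippet above to compile, it needs the usual includes and placeholder usings; a minimal preamble (assumed here, in the style of muduo's echo example) looks like:

// main.cc (assumed preamble)
#include <muduo/net/TcpServer.h>
#include <muduo/net/EventLoop.h>
#include <muduo/net/InetAddress.h>
#include <functional>

using std::placeholders::_1;
using std::placeholders::_2;
using std::placeholders::_3;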

While the client has not sent anything, the sub-thread blocks in the function below (epoll_wait takes a timeout argument, so the call never blocks indefinitely; muduo passes kPollTimeMs, 10 seconds):

// EPollPoller.cc
Timestamp EPollPoller::poll(int timeoutMs, ChannelList* activeChannels)
{
    int numEvents = ::epoll_wait(epollfd_,
        &*events_.begin(),
        static_cast<int>(events_.size()),
        timeoutMs);
    int savedErrno = errno;
    Timestamp now(Timestamp::now());
    if (numEvents > 0)
    {
        fillActiveChannels(numEvents, activeChannels);
        if (implicit_cast<size_t>(numEvents) == events_.size())
        {
            // the event array came back full: double it for next time
            events_.resize(events_.size() * 2);
        }
    }
    else if (numEvents == 0)
    {
        // timed out; nothing happened
    }
    else
    {
        // error, unless we were merely interrupted by a signal
        if (savedErrno != EINTR)
        {
            errno = savedErrno;
        }
    }
    return now;
}

Now the client sends a message: epoll_wait returns, and execution reaches fillActiveChannels.

Jumping into fillActiveChannels:

// EPollPoller.cc
void EPollPoller::fillActiveChannels(int numEvents,
    ChannelList* activeChannels) const
{
    for (int i = 0; i < numEvents; ++i)
    {
        // data.ptr was set to the Channel* when the fd was registered with epoll_ctl
        Channel* channel = static_cast<Channel*>(events_[i].data.ptr);
        channel->set_revents(events_[i].events);
        activeChannels->push_back(channel);
    }
}

fillActiveChannels returns to EPollPoller::poll, which in turn was called by EventLoop::loop, so we land back in the while loop of EventLoop::loop:

// EventLoop.cc
void EventLoop::loop()
{
    assert(!looping_);
    assertInLoopThread();
    looping_ = true;
    quit_ = false;  
    while (!quit_)
    {
        activeChannels_.clear();
        pollReturnTime_ = poller_->poll(kPollTimeMs, &activeChannels_);
        ++iteration_;
        eventHandling_ = true;
        for (Channel* channel : activeChannels_)
        {
            currentActiveChannel_ = channel;
            currentActiveChannel_->handleEvent(pollReturnTime_);
        }
        currentActiveChannel_ = NULL;
        eventHandling_ = false;
        doPendingFunctors();
    }
 
    looping_ = false;
}

The for loop calls Channel::handleEvent:

// Channel.cc
void Channel::handleEvent(Timestamp receiveTime)
{
    std::shared_ptr<void> guard;
    if (tied_)
    {
        guard = tie_.lock();
        if (guard)
        {
            handleEventWithGuard(receiveTime);
        }
    }
    else
    {
        handleEventWithGuard(receiveTime);
    }
}
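
tied_ is true here because TcpConnection::connectEstablished tied the channel to its connection when the connection was set up; tie_.lock() promotes that weak_ptr to a shared_ptr, so the TcpConnection cannot be destroyed in the middle of the callback. For reference, lightly trimmed from muduo:

// Channel.cc
void Channel::tie(const std::shared_ptr<void>& obj)
{
    tie_ = obj;
    tied_ = true;
}

// TcpConnection.cc -- ran in the sub-thread when the connection was established
void TcpConnection::connectEstablished()
{
    loop_->assertInLoopThread();
    setState(kConnected);
    channel_->tie(shared_from_this());
    channel_->enableReading();   // this is what registered the fd with epoll
    connectionCallback_(shared_from_this());
}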

Since the guard holds, we take the handleEventWithGuard(receiveTime) inside the second if:

// Channel.cc
void Channel::handleEventWithGuard(Timestamp receiveTime)
{
    eventHandling_ = true;
    // peer hung up and there is nothing left to read
    if ((revents_ & POLLHUP) && !(revents_ & POLLIN))
    {
        if (closeCallback_) closeCallback_();
    }
 
    if (revents_ & POLLNVAL)
    {
        // fd not open; muduo logs a warning here
    }
 
    if (revents_ & (POLLERR | POLLNVAL))
    {
        if (errorCallback_) errorCallback_();
    }
    if (revents_ & (POLLIN | POLLPRI | POLLRDHUP))
    {
        if (readCallback_) readCallback_(receiveTime);
    }
    if (revents_ & POLLOUT)
    {
        if (writeCallback_) writeCallback_();
    }
    eventHandling_ = false;
}

A read event arrived from the client, so readCallback_ runs.

Channel::readCallback_ was set in the main thread in the previous article; to recap:

TcpServer::newConnection constructs the TcpConnection. Here is the TcpConnection constructor:

// TcpConnection.cc
TcpConnection::TcpConnection(EventLoop* loop,
    const string& nameArg,
    int sockfd,
    const InetAddress& localAddr,
    const InetAddress& peerAddr)
    : loop_(CHECK_NOTNULL(loop)),
    name_(nameArg),
    state_(kConnecting),
    reading_(true),
    socket_(new Socket(sockfd)),
    channel_(new Channel(loop, sockfd)),
    localAddr_(localAddr),
    peerAddr_(peerAddr),
    highWaterMark_(64 * 1024 * 1024)
{
    channel_->setReadCallback(
        std::bind(&TcpConnection::handleRead, this, _1));
    channel_->setWriteCallback(
        std::bind(&TcpConnection::handleWrite, this));
    channel_->setCloseCallback(
        std::bind(&TcpConnection::handleClose, this));
    channel_->setErrorCallback(
        std::bind(&TcpConnection::handleError, this));
    socket_->setKeepAlive(true);
}

So we land in TcpConnection::handleRead:

// TcpConnection.cc
void TcpConnection::handleRead(Timestamp receiveTime)
{
    loop_->assertInLoopThread();
    int savedErrno = 0;
    ssize_t n = inputBuffer_.readFd(channel_->fd(), &savedErrno);
    if (n > 0)
    {
        messageCallback_(shared_from_this(), &inputBuffer_, receiveTime);
    }
    else if (n == 0)
    {
        handleClose();
    }
    else
    {
        errno = savedErrno;
        handleError();
    }
}

This line reads the data:

ssize_t n = inputBuffer_.readFd(channel_->fd(), &savedErrno);
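
Buffer::readFd is worth a look: it calls readv with a 64 KiB scratch buffer on the stack as a second iovec, so a single system call can pick up more data than the buffer's current free space. A lightly trimmed version of muduo's implementation:

// Buffer.cc (lightly trimmed)
ssize_t Buffer::readFd(int fd, int* savedErrno)
{
    char extrabuf[65536];                        // scratch space on the stack
    struct iovec vec[2];
    const size_t writable = writableBytes();
    vec[0].iov_base = begin() + writerIndex_;    // the buffer's free space comes first
    vec[0].iov_len = writable;
    vec[1].iov_base = extrabuf;                  // overflow lands on the stack
    vec[1].iov_len = sizeof extrabuf;
    // if the buffer already has plenty of room, skip extrabuf entirely
    const int iovcnt = (writable < sizeof extrabuf) ? 2 : 1;
    const ssize_t n = sockets::readv(fd, vec, iovcnt);
    if (n < 0)
    {
        *savedErrno = errno;
    }
    else if (implicit_cast<size_t>(n) <= writable)
    {
        writerIndex_ += n;                       // everything fit in the buffer
    }
    else
    {
        writerIndex_ = buffer_.size();
        append(extrabuf, n - writable);          // copy the overflow in, growing the buffer
    }
    return n;
}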

Then the messageCallback_ callback runs, which is onMessage() in main.cc.

It was set in TcpServer::newConnection in the previous article:

// TcpServer.cc
void TcpServer::newConnection(int sockfd, const InetAddress& peerAddr)
{
    EventLoop* ioLoop = threadPool_->getNextLoop();   // round-robin pick of an I/O thread
    char buf[64];
    snprintf(buf, sizeof buf, "-%s#%d", ipPort_.c_str(), nextConnId_);
    ++nextConnId_;
    string connName = name_ + buf;
 
    InetAddress localAddr(sockets::getLocalAddr(sockfd));
    TcpConnectionPtr conn(new TcpConnection(ioLoop,
        connName,
        sockfd,
        localAddr,
        peerAddr));
    connections_[connName] = conn;
    conn->setConnectionCallback(connectionCallback_);
    conn->setMessageCallback(messageCallback_);
    conn->setWriteCompleteCallback(writeCompleteCallback_);
    conn->setCloseCallback(
        std::bind(&TcpServer::removeConnection, this, _1)); // FIXME: unsafe
    ioLoop->runInLoop(std::bind(&TcpConnection::connectEstablished, conn));
}

Now over to onMessage() in main.cc:

// main.cc
void onMessage(const muduo::net::TcpConnectionPtr& conn,
    muduo::net::Buffer* buf,
    muduo::Timestamp time)
{
    muduo::string msg(buf->retrieveAllAsString());
    conn->send(msg);
    conn->shutdown();
}

First this line executes:

conn->send(msg);

This goes to TcpConnection::send. (Strictly speaking, passing a muduo::string selects the send(const StringPiece&) overload rather than the Buffer* overload shown below, but the two follow the same pattern.)

// TcpConnection.cc
void TcpConnection::send(Buffer* buf)
{
    if (state_ == kConnected)
    {
        if (loop_->isInLoopThread())
        {
            sendInLoop(buf->peek(), buf->readableBytes());
            buf->retrieveAll();
        }
        else
        {
            void (TcpConnection:: * fp)(const StringPiece & message) = &TcpConnection::sendInLoop;
            loop_->runInLoop(
                std::bind(fp,
                    this,     // FIXME
                    buf->retrieveAllAsString()));
            //std::forward<string>(message)));
        }
    }
}
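
For reference, here is the overload that a string argument actually selects, as it appears in muduo; in our case isInLoopThread() is true, so sendInLoop runs directly:

// TcpConnection.cc
void TcpConnection::send(const StringPiece& message)
{
    if (state_ == kConnected)
    {
        if (loop_->isInLoopThread())
        {
            sendInLoop(message);
        }
        else
        {
            void (TcpConnection::*fp)(const StringPiece& message) = &TcpConnection::sendInLoop;
            loop_->runInLoop(
                std::bind(fp,
                    this,     // FIXME
                    message.as_string()));
        }
    }
}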

Since onMessage is running in this connection's own I/O thread, isInLoopThread() is true and sendInLoop executes immediately. Back in onMessage(), the next line runs:

conn->shutdown();

This goes to TcpConnection::shutdown:

// TcpConnection.cc
void TcpConnection::shutdown()
{
    // FIXME: use compare and swap
    if (state_ == kConnected)
    {
        setState(kDisconnecting);
        // FIXME: shared_from_this()?
        loop_->runInLoop(std::bind(&TcpConnection::shutdownInLoop, this));
    }
}

loop_->runInLoop(std::bind(&TcpConnection::shutdownInLoop, this));

This call jumps to:

// EventLoop.cc
void EventLoop::runInLoop(Functor cb)
{
    if (isInLoopThread())
    {
        cb();                        // already in the owning thread: run immediately
    }
    else
    {
        queueInLoop(std::move(cb));  // otherwise hand it to the owning thread
    }
}

We are in the loop thread, so cb() runs directly; cb is the bound TcpConnection::shutdownInLoop:

// TcpConnection.cc
void TcpConnection::shutdownInLoop()
{
    loop_->assertInLoopThread();
    if (!channel_->isWriting())
    {
        // we are not writing
        socket_->shutdownWrite();
    }
}

if (!channel_->isWriting()) asks whether the channel is still watching POLLOUT, which muduo enables only while outputBuffer_ holds unsent data, so in effect it checks whether the output buffer has drained. If data were still pending, the shutdown would be deferred: handleWrite issues it once the buffer empties. A lightly trimmed sketch of that path in muduo:
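
// TcpConnection.cc (lightly trimmed)
void TcpConnection::handleWrite()
{
    loop_->assertInLoopThread();
    if (channel_->isWriting())
    {
        ssize_t n = sockets::write(channel_->fd(),
                                   outputBuffer_.peek(),
                                   outputBuffer_.readableBytes());
        if (n > 0)
        {
            outputBuffer_.retrieve(n);
            if (outputBuffer_.readableBytes() == 0)
            {
                channel_->disableWriting();   // stop watching POLLOUT
                if (writeCompleteCallback_)
                {
                    loop_->queueInLoop(std::bind(writeCompleteCallback_, shared_from_this()));
                }
                if (state_ == kDisconnecting)
                {
                    shutdownInLoop();         // the deferred shutdown fires here
                }
            }
        }
    }
}

In our trace the output buffer is already empty, so shutdownWrite runs right away, in Socket.cc: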

// Socket.cc
void Socket::shutdownWrite()
{
  sockets::shutdownWrite(sockfd_);
}

And then to:

// sockets.cc
void sockets::shutdownWrite(int sockfd)
{
    if (::shutdown(sockfd, SHUT_WR) < 0)
    {
        // muduo logs a system error here
    }
}

shutdown(SHUT_WR) sends a FIN but leaves the read side open (a half-close): the client still receives the echoed data, and we can still read anything the client has in flight. Back in EventLoop::loop:

// EventLoop.cc
void EventLoop::loop()
{
    assert(!looping_);
    assertInLoopThread();
    looping_ = true;
    quit_ = false;  
    while (!quit_)
    {
        activeChannels_.clear();
        pollReturnTime_ = poller_->poll(kPollTimeMs, &activeChannels_);
        ++iteration_;
        eventHandling_ = true;
        for (Channel* channel : activeChannels_)
        {
            currentActiveChannel_ = channel;
            currentActiveChannel_->handleEvent(pollReturnTime_);
        }
        currentActiveChannel_ = NULL;
        eventHandling_ = false;
        doPendingFunctors();
    }
 
    looping_ = false;
}

Everything so far happened inside:

currentActiveChannel_->handleEvent(pollReturnTime_);

Next comes:

doPendingFunctors();

Over to EventLoop::doPendingFunctors():

// EventLoop.cc
void EventLoop::doPendingFunctors()
{
    std::vector<Functor> functors;
    callingPendingFunctors_ = true;

    {
        // swap under the lock: keeps the critical section tiny, and the
        // functors can safely call queueInLoop themselves while we run them
        MutexLockGuard lock(mutex_);
        functors.swap(pendingFunctors_);
    }

    for (const Functor& functor : functors)
    {
        functor();
    }
    callingPendingFunctors_ = false;
}

functors is empty this time, so the loop body does nothing.

So far the sub-thread has executed a pending functor only once: right after the client connected, when TcpConnection::connectEstablished performed the epoll_ctl registration.

Back at the top of the while loop in EventLoop::loop(), we enter EPollPoller::poll():

// EPollPoller.cc
Timestamp EPollPoller::poll(int timeoutMs, ChannelList* activeChannels)
{
    int numEvents = ::epoll_wait(epollfd_,
        &*events_.begin(),
        static_cast<int>(events_.size()),
        timeoutMs);
    int savedErrno = errno;
    Timestamp now(Timestamp::now());
    if (numEvents > 0)
    {
        fillActiveChannels(numEvents, activeChannels);
        if (implicit_cast<size_t>(numEvents) == events_.size())
        {
            events_.resize(events_.size() * 2);
        }
    }
    else if (numEvents == 0)
    {}
    else
    {
        if (savedErrno != EINTR)
        {
            errno = savedErrno;
        }
    }
    return now;
}

By now the client (telnet) has seen our FIN and closed its own side, so epoll_wait reports an event on the connection fd without blocking. fillActiveChannels(numEvents, activeChannels) executes:

// EPollPoller.cc
void EPollPoller::fillActiveChannels(int numEvents,
    ChannelList* activeChannels) const
{
    for (int i = 0; i < numEvents; ++i)
    {
        Channel* channel = static_cast<Channel*>(events_[i].data.ptr);
        channel->set_revents(events_[i].events);
        activeChannels->push_back(channel);
    }
}

Back in EventLoop::loop, currentActiveChannel_->handleEvent(pollReturnTime_) runs:

// Channel.cc
void Channel::handleEvent(Timestamp receiveTime)
{
    std::shared_ptr<void> guard;
    if (tied_)
    {
        guard = tie_.lock();
        if (guard)
        {
            handleEventWithGuard(receiveTime);
        }
    }
    else
    {
        handleEventWithGuard(receiveTime);
    }
}

As before, the tied guard holds and handleEventWithGuard runs:

// Channel.cc
void Channel::handleEventWithGuard(Timestamp receiveTime)
{
    eventHandling_ = true;
    if ((revents_ & POLLHUP) && !(revents_ & POLLIN))
    {
        if (closeCallback_) closeCallback_();
    }
 
    if (revents_ & POLLNVAL)
    {
    }
 
    if (revents_ & (POLLERR | POLLNVAL))
    {
        if (errorCallback_) errorCallback_();
    }
    if (revents_ & (POLLIN | POLLPRI | POLLRDHUP))
    {
        if (readCallback_) readCallback_(receiveTime);
    }
    if (revents_ & POLLOUT)
    {
        if (writeCallback_) writeCallback_();
    }
    eventHandling_ = false;
}

readCallback_ runs; it is still TcpConnection::handleRead, set on this channel in the previous article. Over to TcpConnection::handleRead:

// TcpConnection.cc
void TcpConnection::handleRead(Timestamp receiveTime)
{
    loop_->assertInLoopThread();
    int savedErrno = 0;
    ssize_t n = inputBuffer_.readFd(channel_->fd(), &savedErrno);
    if (n > 0)
    {
        messageCallback_(shared_from_this(), &inputBuffer_, receiveTime);
    }
    else if (n == 0)
    {
        handleClose();
    }
    else
    {
        errno = savedErrno;
        handleError();
    }
}

This time the read returns 0, meaning the client closed the connection, so we enter handleClose():

// TcpConnection.cc
void TcpConnection::handleClose()
{
    loop_->assertInLoopThread();
    // we don't close fd, leave it to dtor, so we can find leaks easily.
    setState(kDisconnected);
    channel_->disableAll();

    TcpConnectionPtr guardThis(shared_from_this());
    connectionCallback_(guardThis);
    // must be the last line
    closeCallback_(guardThis);
}

channel_->disableAll() clears all watched events and pushes the change down through another chain of update calls, ending in an epoll_ctl. A condensed sketch of that chain in muduo (asserts and logging omitted):
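
// Channel.h / Channel.cc (condensed)
void Channel::disableAll() { events_ = kNoneEvent; update(); }
void Channel::update() { addedToLoop_ = true; loop_->updateChannel(this); }

// EPollPoller.cc (condensed) -- reached via EventLoop::updateChannel
void EPollPoller::updateChannel(Channel* channel)
{
    const int index = channel->index();
    if (index == kNew || index == kDeleted)
    {
        if (index == kNew) channels_[channel->fd()] = channel;
        channel->set_index(kAdded);
        update(EPOLL_CTL_ADD, channel);
    }
    else // index == kAdded
    {
        if (channel->isNoneEvent())
        {
            update(EPOLL_CTL_DEL, channel);   // our case: disableAll ends up here
            channel->set_index(kDeleted);
        }
        else
        {
            update(EPOLL_CTL_MOD, channel);
        }
    }
}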

Next connectionCallback_ runs; that is onConnection in main.cc, set in TcpServer::newConnection. Our onConnection body is empty, but conn->connected() distinguishes a new connection from a teardown; a minimal sketch of a fuller version (hypothetical, not part of the original program):
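
// main.cc (a hypothetical, fuller onConnection; the original one is empty)
void onConnection(const muduo::net::TcpConnectionPtr& conn)
{
    if (conn->connected())
    {
        printf("new connection from %s\n",
               conn->peerAddress().toIpPort().c_str());
    }
    else
    {
        printf("connection %s is down\n", conn->name().c_str());
    }
}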

Finally the closeCallback_ runs, i.e. TcpServer::removeConnection, also set in TcpServer::newConnection:

// TcpServer.cc::newConnection
conn->setCloseCallback(
    std::bind(&TcpServer::removeConnection, this, _1));

Which takes us to:

// TcpServer.cc
void TcpServer::removeConnection(const TcpConnectionPtr& conn)
{
    loop_->runInLoop(std::bind(&TcpServer::removeConnectionInLoop, this, conn));
}

This calls EventLoop::runInLoop:

// EventLoop.cc
void EventLoop::runInLoop(Functor cb)
{
    if (isInLoopThread())
    {
        cb();
    }
    else
    {
        queueInLoop(std::move(cb));
    }
}

Here loop_ is the main thread's loop (the TcpServer's own), while we are still running in the sub-thread, so isInLoopThread() is false and queueInLoop is taken; the bookkeeping for the closed connection has to happen in the main thread:

// EventLoop.cc
void EventLoop::queueInLoop(Functor cb)
{
    {
        MutexLockGuard lock(mutex_);
        pendingFunctors_.push_back(std::move(cb));
    }

    if (!isInLoopThread() || callingPendingFunctors_)
    {
        wakeup();   // make sure the owning thread leaves epoll_wait promptly
    }
}

The cb above is TcpServer::removeConnectionInLoop.

Then wakeup() runs: the sub-thread pokes the main thread through the main loop's wakeupFd (an eventfd). It simply writes one 8-byte integer (lightly trimmed from muduo):
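
// EventLoop.cc (lightly trimmed)
void EventLoop::wakeup()
{
    uint64_t one = 1;
    // wakeupFd_ is an eventfd: writing to it makes it readable,
    // which forces the owning thread's epoll_wait to return
    ssize_t n = sockets::write(wakeupFd_, &one, sizeof one);
    if (n != sizeof one)
    {
        // muduo logs an error here
    }
}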

The sub-thread's call stack now unwinds back into EventLoop::loop:

// EventLoop.cc
void EventLoop::loop()
{
    assert(!looping_);
    assertInLoopThread();
    looping_ = true;
    quit_ = false;  
    while (!quit_)
    {
        activeChannels_.clear();
        pollReturnTime_ = poller_->poll(kPollTimeMs, &activeChannels_);
        ++iteration_;
        eventHandling_ = true;
        for (Channel* channel : activeChannels_)
        {
            currentActiveChannel_ = channel;
            currentActiveChannel_->handleEvent(pollReturnTime_);
        }
        currentActiveChannel_ = NULL;
        eventHandling_ = false;
        doPendingFunctors();
    }
 
    looping_ = false;
}

All of the above was the handling of currentActiveChannel_->handleEvent(pollReturnTime_).

Next the sub-thread runs doPendingFunctors(); there is nothing for it to do this time, since the update work was already done during handleEvent.

Since the sub-thread issued the wakeup, it is now the main thread's turn.

In the main thread, epoll_wait inside EPollPoller::poll reports the event on the wakeup fd, and EPollPoller::fillActiveChannels runs once more.

Then, back in EventLoop::loop, channel->handleEvent(pollReturnTime_) executes:

// Channel.cc
void Channel::handleEvent(Timestamp receiveTime)
{
    std::shared_ptr<void> guard;
    if (tied_)
    {
        guard = tie_.lock();
        if (guard)
        {
            handleEventWithGuard(receiveTime);
        }
    }
    else
    {
        handleEventWithGuard(receiveTime);
    }
}

This time the else branch's handleEventWithGuard(receiveTime) runs (the wakeup channel is never tied to an object):

// Channel.cc
void Channel::handleEventWithGuard(Timestamp receiveTime)
{
    eventHandling_ = true;
    if ((revents_ & POLLHUP) && !(revents_ & POLLIN))
    {
        if (closeCallback_) closeCallback_();
    }
 
    if (revents_ & POLLNVAL)
    {
    }
 
    if (revents_ & (POLLERR | POLLNVAL))
    {
        if (errorCallback_) errorCallback_();
    }
    if (revents_ & (POLLIN | POLLPRI | POLLRDHUP))
    {
        if (readCallback_) readCallback_(receiveTime);
    }
    if (revents_ & POLLOUT)
    {
        if (writeCallback_) writeCallback_();
    }
    eventHandling_ = false;
}

The wakeup fd is readable: the main thread reads the one the sub-thread wrote, so readCallback_ runs. For this channel readCallback_ is EventLoop::handleRead, set back in step four (server.start()) of the first article, when the thread pool started and each loop created its wakeupFd:

// EventLoop.cc
void EventLoop::handleRead()
{
    uint64_t one = 1;
    // drain the eventfd so it stops being readable; muduo logs if n != sizeof one
    ssize_t n = sockets::read(wakeupFd_, &one, sizeof one);
}
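
That registration happens in the EventLoop constructor; the relevant lines in muduo are:

// EventLoop.cc (constructor excerpt)
wakeupChannel_->setReadCallback(
    std::bind(&EventLoop::handleRead, this));
// we are always reading the wakeupfd
wakeupChannel_->enableReading();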

Back in EventLoop::loop, doPendingFunctors() runs:

// EventLoop.cc
void EventLoop::doPendingFunctors()
{
    std::vector<Functor> functors;
    callingPendingFunctors_ = true;
 
    {
        MutexLockGuard lock(mutex_);
        functors.swap(pendingFunctors_);
    }
    for (const Functor& functor : functors)
    {
        functor();
    }
    callingPendingFunctors_ = false;
}

pendingFunctors_ was filled by the sub-thread's queueInLoop; the callback to run is TcpServer::removeConnectionInLoop.

// TcpServer.cc
void TcpServer::removeConnectionInLoop(const TcpConnectionPtr& conn)
{
    size_t n = connections_.erase(conn->name());   // drop the server's reference in the main thread
    EventLoop* ioLoop = conn->getLoop();
    ioLoop->queueInLoop(
        std::bind(&TcpConnection::connectDestroyed, conn));
}

queueInLoop(cb) is called yet again, this time on the connection's own ioLoop, with cb = TcpConnection::connectDestroyed; inside queueInLoop the main thread wakes the sub-thread.

The sub-thread is back in the while loop of its loop().

Note that the main thread performed no update here, so no EPOLL_CTL_DEL came from it.

The main thread has messaged the sub-thread, so control passes to the sub-thread again.

It reaches doPendingFunctors in its loop once more.

The pending callback this time is TcpConnection::connectDestroyed:

// TcpConnection.cc
void TcpConnection::connectDestroyed()
{
    if (state_ == kConnected)
    {
        setState(kDisconnected);
        channel_->disableAll();

        connectionCallback_(shared_from_this());
    }
    channel_->remove();
}

handleClose() already set the state to kDisconnected, so the if branch above is skipped: connectionCallback_ (the empty onConnection from main.cc) is not invoked again, and only channel_->remove() actually executes.

So channel_->remove() runs:

// Channel.cc
void Channel::remove()
{
    addedToLoop_ = false;
    loop_->removeChannel(this);
}

Then:

// EventLoop.cc
void EventLoop::removeChannel(Channel* channel)
{
    poller_->removeChannel(channel);
}

And then:

// EPollPoller.cc
void EPollPoller::removeChannel(Channel* channel)
{
    int index = channel->index();
    if (index == kAdded)
    {
        update(EPOLL_CTL_DEL, channel);
    }
    channel->set_index(kNew);
}

In this trace disableAll() already triggered EPOLL_CTL_DEL and left the index at kDeleted, so the if above is skipped and the index is simply reset to kNew. For completeness, update itself is a thin wrapper around epoll_ctl (lightly trimmed from muduo):
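
// EPollPoller.cc (lightly trimmed)
void EPollPoller::update(int operation, Channel* channel)
{
    struct epoll_event event;
    memZero(&event, sizeof event);
    event.events = channel->events();
    event.data.ptr = channel;   // the pointer fillActiveChannels later recovers
    int fd = channel->fd();
    if (::epoll_ctl(epollfd_, operation, fd, &event) < 0)
    {
        // muduo logs a system error here (fatal for ADD/MOD)
    }
}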

After that comes destruction: the functor queued in doPendingFunctors held the last shared_ptr to the TcpConnection, so once it finishes, the TcpConnection, together with its Socket and Channel, is destroyed, and the Socket destructor closes the fd.
