acceptor FTW

Several ways to use the io_service class in Boost.Asio

The io_service class

You have probably noticed that most code written with Boost.Asio uses one or more instances of io_service. io_service is the most important class in the library: it talks to the operating system, waits for all asynchronous operations to complete, and then calls the completion handler of each finished operation.

If you choose to build your application synchronously, you do not need to worry about what this section shows. There are several different ways to use io_service. In the examples below we have three asynchronous operations: two socket connects and one timer wait.

A single-threaded example with one io_service instance and one handler thread:
io_service service; // all the socket operations are handled by service
ip::tcp::socket sock1(service);
ip::tcp::socket sock2(service);

sock1.async_connect( ep, connect_handler);
sock2.async_connect( ep, connect_handler);
deadline_timer t(service, boost::posix_time::seconds(5));
t.async_wait(timeout_handler);
service.run();
A multithreaded example with one io_service instance and several handler threads:
io_service service;
ip::tcp::socket sock1(service);
ip::tcp::socket sock2(service);
sock1.async_connect( ep, connect_handler);
sock2.async_connect( ep, connect_handler);
deadline_timer t(service, boost::posix_time::seconds(5));
t.async_wait(timeout_handler);
for ( int i = 0; i < 5; ++i)
    boost::thread( run_service);

void run_service()
{
    service.run();
}
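
Note that boost::thread( run_service) above constructs unnamed temporaries whose threads are never joined. A slightly safer variant (a sketch, assuming <boost/thread.hpp> is included) keeps the threads in a boost::thread_group and joins them before shutdown:

boost::thread_group threads;
for ( int i = 0; i < 5; ++i)
    threads.create_thread(run_service); // each thread runs service.run()
threads.join_all(); // block until run() returns in every thread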
A multithreaded example with several io_service instances and several handler threads:
io_service service[2];
ip::tcp::socket sock1(service[0]);
ip::tcp::socket sock2(service[1]);
sock1.async_connect( ep, connect_handler);
sock2.async_connect( ep, connect_handler);
deadline_timer t(service[0], boost::posix_time::seconds(5));
t.async_wait(timeout_handler);
for ( int i = 0; i < 2; ++i)
    boost::thread( boost::bind(run_service, i));

void run_service(int idx)
{
    service[idx].run();
}
First, note that it makes no sense to have several io_service instances and only one thread. The following snippet is pointless:

for ( int i = 0; i < 2; ++i)
    service[i].run();

It is pointless because service[1].run() would require service[0].run() to finish first. Thus, every asynchronous operation handled by service[1] would have to wait, which is clearly not a good idea.
In each of the three scenarios above we are waiting for three asynchronous operations to complete. To explain the differences, assume that operation 1 completes after a while, followed shortly by operation 2, and that each completion handler takes one second to run.

In the first example we wait for all three operations in a single thread. As soon as operation 1 completes, we call its completion handler. Even though operation 2 completes right afterwards, its handler is only called one second later, once the handler for operation 1 has finished.

In the second example we wait for the three operations in two threads. When operation 1 completes, its handler runs in the first thread. When operation 2 completes right afterwards, its handler runs in the second thread (while thread 1 is busy with the handler for operation 1, thread 2 is idle and can respond to any newly completed operation).

In the third example, if operation 1 is the connect on sock1 and operation 2 is the connect on sock2, the application behaves like the second example: thread 1 runs the completion handler of sock1's connect and thread 2 runs the completion handler of sock2's connect. However, if operation 1 is the connect on sock1 and operation 2 is the timeout of deadline_timer t, thread 1 ends up running both handlers, since both are dispatched by service[0]. The timeout handler of t therefore has to wait one second for the completion handler of sock1's connect to finish.

Here is what you should take away from the examples above:

The first case is for very basic applications. Because everything is serialized, you will usually hit a bottleneck whenever several handlers need to run at the same time: if one handler takes a long time to execute, all subsequent handlers have to wait.

The second case fits most applications. It is very robust: if several handlers finish at the same time (which is possible), each one runs in its own thread. Its only bottleneck is when all handler threads are busy while new handlers are ready to run, but that has a quick fix: increase the number of handler threads.

The third case is the most complex and the hardest to understand. Use it only when the second case is not enough, which generally means you have tens of thousands of concurrent (socket) connections. You can think of each handler thread (a thread running io_service::run()) as having its own select/epoll loop: it waits on a set of sockets for a read or write operation and executes it as soon as it becomes ready. In most cases there is nothing to worry about; it only starts to matter when the number of monitored sockets grows very large (say, more than a thousand). In that case, having several select/epoll loops improves the application's responsiveness.

If you think your application may need to switch to the third model, make sure the code that waits for events (the code calling io_service::run()) is isolated from the rest of the application, so that you can change it easily.
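
A minimal sketch of such isolation, assuming <boost/asio.hpp>, <boost/thread.hpp>, <boost/bind.hpp>, <boost/make_shared.hpp> and <vector> are included; io_service_pool is a hypothetical helper name, not part of Boost.Asio:

class io_service_pool {
public:
    explicit io_service_pool(std::size_t n) : next_(0) {
        for ( std::size_t i = 0; i < n; ++i)
            services_.push_back(boost::make_shared<boost::asio::io_service>());
    }
    // round-robin: hand an io_service to each new socket or timer
    boost::asio::io_service& get() { return *services_[next_++ % services_.size()]; }
    // one handler thread per io_service; blocks until all loops exit
    void run() {
        boost::thread_group threads;
        for ( std::size_t i = 0; i < services_.size(); ++i)
            threads.create_thread(boost::bind(&io_service_pool::run_one, services_[i]));
        threads.join_all();
    }
private:
    static void run_one(boost::shared_ptr<boost::asio::io_service> s) { s->run(); }
    std::vector<boost::shared_ptr<boost::asio::io_service> > services_;
    std::size_t next_;
};

Sockets and timers take pool.get() as their io_service, and main() simply calls pool.run(); moving between one event loop and several then becomes a constructor argument.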

Tags: boost, asio, io_service

====================================================================================================================================================================================================================================================================================================================================================================

acceptor
acceptor(io_service);
acceptor(io_service, protocol_type);
acceptor(io_service, endpoint_type, reuse_address = true);
acceptor(io_service, protocol_type, native_handle_type);
acceptor(basic_socket_acceptor &&other);
 
types
1.0 broadcast
   Socket option to permit sending of broadcast messages.
example
   udp::socket socket(io_service);
   socket_base::broadcast option(true);
   socket.set_option(option);
   ...
   boost::asio::socket_base::broadcast option;
   socket.get_option(option);
   bool is_set = option.value();
   
1.1 bytes_readable
   IO control command to get the amount of data that can be read without blocking.
example
   boost::asio::socket_base::bytes_readable command(true);
   socket.io_control(command);
   std::size_t bytes_readable = command.get();
 
1.2 debug
   Socket option to enable socket-level debugging.
example
   Setting the option:
   tcp::socket socket(io_service);
   ...
   boost::asio::socket_base::debug option(true);
   socket.set_option(option);
   
   Getting the current option value:
   tcp::socket socket(io_service);
   ...
   boost::asio::socket_base::debug option;
   socket.get_option(option);
   bool is_set = option.value();
   
1.3 do_not_route
   Socket option to prevent routing, use local interfaces only.
example
   Setting the option:
   tcp::socket socket(io_service);
   ...
   boost::asio::socket_base::do_not_route option(true);
   socket.set_option(option);
   
   Getting the current option value:
   tcp::socket socket(io_service);
   ...
   boost::asio::socket_base::do_not_route option;
   socket.get_option(option);
   bool is_set = option.value();
      
1.4 enable_connection_aborted
   Socket option to report aborted connections on accept.
   Implements a custom socket option that determines whether or not
   an accept operation is permitted to fail with boost::asio::error::connection_aborted.
   By default the option is false.
example
   Setting the option:
   tcp::acceptor acceptor(io_service);
   ...
   boost::asio::socket_base::enable_connection_aborted option(true);
   acceptor.set_option(option);
   
   Getting the current option value:
   tcp::acceptor acceptor(io_service);
   ...
   boost::asio::socket_base::enable_connection_aborted option;
   acceptor.get_option(option);
   bool is_set = option.value();
1.5 endpoint_type
1.6 linger
   Socket option to specify whether the socket lingers on close if unsent data is present.
example
   Setting the option:
   boost::asio::ip::tcp::socket socket(io_service);
   ...
   boost::asio::socket_base::linger option(true, 30);
   socket.set_option(option);
   Getting the current option value:
 
   boost::asio::ip::tcp::socket socket(io_service);
   ...
   boost::asio::socket_base::linger option;
   socket.get_option(option);
   bool is_set = option.enabled();
   unsigned short timeout = option.timeout();
   
   Snippet for closing a socket quickly in Boost:
   after creating the socket, set its linger option with a zero timeout,
   boost::asio::ip::tcp::socket socket(io_service);
   boost::asio::socket_base::linger option(true, 0);
   socket.set_option(option);
   then close the socket directly:
   socket.close();
   At this point the socket sends an RST packet to the peer, closes immediately, and releases the resources it holds.
1.7 message_flags
   Bitmask type for flags that can be passed to send and receive operations.
1.8 native_handle
   The native representation of an acceptor.
1.9 non_blocking_io
   (Deprecated: Use non_blocking().) IO control command to set the blocking mode of the socket.
example
   boost::asio::ip::tcp::socket socket(io_service);
   ...
   boost::asio::socket_base::non_blocking_io command(true);
   socket.io_control(command);  
1.10 protocol_type
1.11 receive_buffer_size
   Socket option for the receive buffer size of a socket.
example
   Setting the option:
   boost::asio::ip::tcp::socket socket(io_service);
   ...
   boost::asio::socket_base::receive_buffer_size option(8192);
   socket.set_option(option);
   Getting the current option value:
 
   boost::asio::ip::tcp::socket socket(io_service);
   ...
   boost::asio::socket_base::receive_buffer_size option;
   socket.get_option(option);
   int size = option.value();
1.12 receive_low_watermark
   Socket option for the receive low watermark.
example
   Setting the option:
   boost::asio::ip::tcp::socket socket(io_service);
   ...
   boost::asio::socket_base::receive_low_watermark option(1024);
   socket.set_option(option);
   Getting the current option value:
 
   boost::asio::ip::tcp::socket socket(io_service);
   ...
   boost::asio::socket_base::receive_low_watermark option;
   socket.get_option(option);
   int size = option.value();
1.13 reuse_address
   Socket option to allow the socket to be bound to an address that is already in use.
example
   Setting the option:
   boost::asio::ip::tcp::acceptor acceptor(io_service);
   ...
   boost::asio::socket_base::reuse_address option(true);
   acceptor.set_option(option);
   Getting the current option value:
 
   boost::asio::ip::tcp::acceptor acceptor(io_service);
   ...
   boost::asio::socket_base::reuse_address option;
   acceptor.get_option(option);
   bool is_set = option.value();  
1.14 send_buffer_size
1.15 send_low_watermark
1.16 shutdown_type
   Different ways a socket may be shutdown.
   shutdown_receive
   Shutdown the receive side of the socket.
 
   shutdown_send
   Shutdown the send side of the socket.
 
   shutdown_both
   Shutdown both send and receive on the socket.
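example (a sketch added here: shutting down the send side while still being able to read)
   boost::asio::ip::tcp::socket socket(io_service);
   ...
   boost::system::error_code ec;
   // sends a FIN; the peer sees EOF, but this side can still receive
   socket.shutdown(boost::asio::ip::tcp::socket::shutdown_send, ec);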
   
2 member functions
2.0 accept
    Accept a new connection and obtain the endpoint of the peer.
    accept(socket);
    accept(socket, error_code);
    accept(socket, endpoint);
    accept(socket, endpoint, error_code);
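example (a minimal blocking-accept sketch; the port number is an arbitrary choice)
    boost::asio::ip::tcp::acceptor acceptor(io_service,
        boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), 12345));
    boost::asio::ip::tcp::socket socket(io_service);
    boost::asio::ip::tcp::endpoint peer;
    boost::system::error_code ec;
    // blocks until a client connects; peer receives the client's address
    acceptor.accept(socket, peer, ec);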
2.1 assign
    Assigns an existing native acceptor to the acceptor.
    assign(protocol_type, native_handle_type);
    assign(protocol_type, native_handle_type, error_code);
2.2 async_accept
    Start an asynchronous accept.
    async_accept(socket, handler);
    async_accept(socket, endpoint, handler);
    void handler(error_code);
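example (as in the Boost documentation; the free function accept_handler is the completion handler)
    void accept_handler(const boost::system::error_code& error)
    {
      if (!error)
      {
        // accept succeeded; the socket is now connected
      }
    }
    ...
    boost::asio::ip::tcp::socket socket(io_service);
    // returns immediately; accept_handler runs once a client connects
    acceptor.async_accept(socket, accept_handler);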
2.3 bind
    Bind the acceptor to the given local endpoint.
    bind(endpoint_type);
    bind(endpoint_type, error_code);
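example (open, then bind; the port number is an arbitrary choice)
    boost::asio::ip::tcp::acceptor acceptor(io_service);
    boost::asio::ip::tcp::endpoint endpoint(boost::asio::ip::tcp::v4(), 12345);
    acceptor.open(endpoint.protocol());
    // associate the acceptor with the given local endpoint
    acceptor.bind(endpoint);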
2.4 cancel
    Cancel all asynchronous operations associated with the acceptor.
    This function causes all outstanding asynchronous accept operations to finish immediately, and the handlers for cancelled operations will be passed the boost::asio::error::operation_aborted error.
    cancel();
    cancel(error_code);
2.5 close
    This function is used to close the acceptor. Any asynchronous accept operations will be cancelled immediately.
    A subsequent call to open() is required before the acceptor can again be used to again perform socket accept operations.
    close();
    close(error_code);
2.6 get_io_service
    This function may be used to obtain the io_service object that the I/O object uses to dispatch handlers for asynchronous operations.
    io_service& get_io_service();
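example (one-line sketch)
    boost::asio::io_service& ios = acceptor.get_io_service();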
2.7 get_option
    This function is used to get the current value of an option on the acceptor.
    get_option(TOption& option);
    get_option(TOption& option, error_code&);
example
    boost::asio::ip::tcp::acceptor acceptor(io_service);
    ...
    boost::asio::ip::tcp::acceptor::reuse_address option;
    boost::system::error_code ec;
    acceptor.get_option(option, ec);
    if (ec)
    {
      // An error occurred.
    }
    bool is_set = option.value();
2.8 io_control
    This function is used to execute an IO control command on the acceptor.
    io_control(TIoControlCommand&);
    io_control(TIoControlCommand&, error_code&);
example
    Putting the acceptor into non-blocking mode:
 
    boost::asio::ip::tcp::acceptor acceptor(io_service);
    ...
    boost::asio::ip::tcp::acceptor::non_blocking_io command(true);
    boost::system::error_code ec;
    acceptor.io_control(command, ec);
    if (ec)
    {
      // An error occurred.
    }
2.9 is_open
    Determine whether the acceptor is open.
2.10 listen
    Place the acceptor into the state where it will listen for new connections.
    This function puts the socket acceptor into the state where it may accept new connections.
    listen(int back_log = socket_base::max_connections);
    back_log: The maximum length of the queue of pending connections.
    listen(int back_log, error_code);
example
    boost::asio::ip::tcp::acceptor acceptor(io_service);
    ...
    boost::system::error_code ec;
    acceptor.listen(boost::asio::socket_base::max_connections, ec);
    if (ec)
    {
      // An error occurred.
    }
2.11 local_endpoint
    Get the local endpoint of the acceptor.
    This function is used to obtain the locally bound endpoint of the acceptor.
    endpoint_type local_endpoint();
    endpoint_type local_endpoint(boost::system::error_code & ec);
example
    boost::asio::ip::tcp::acceptor acceptor(io_service);
    ...
    boost::system::error_code ec;
    boost::asio::ip::tcp::endpoint endpoint = acceptor.local_endpoint(ec);
    if (ec)
    {
      // An error occurred.
    }   
2.12 native
    Get the native acceptor representation.
    This function may be used to obtain the underlying representation of the acceptor. This is intended to allow access to native acceptor functionality that is not otherwise provided.
    native_type native();
2.13 native_handle
    native_handle_type native_handle();
    This function may be used to obtain the underlying representation of the acceptor. This is intended to allow access to native acceptor functionality that is not otherwise provided.
2.14 open
    Open the acceptor using the specified protocol.
    open(const protocol_type & protocol);
    boost::system::error_code open(const protocol_type & protocol,boost::system::error_code & ec);
    This function opens the socket acceptor so that it will use the specified protocol.
example
    boost::asio::ip::tcp::acceptor acceptor(io_service);
    boost::system::error_code ec;
    acceptor.open(boost::asio::ip::tcp::v4(), ec);
    if (ec)
    {
      // An error occurred.
    }
2.15 set_option
    Set an option on the acceptor.
    set_option(option);
    template<typename SettableSocketOption>
    boost::system::error_code set_option(const SettableSocketOption & option, boost::system::error_code & ec);
    This function is used to set an option on the acceptor.
example
    boost::asio::ip::tcp::acceptor acceptor(io_service);
    ...
    boost::asio::ip::tcp::acceptor::reuse_address option(true);
    boost::system::error_code ec;
    acceptor.set_option(option, ec);
    if (ec)
    {
      // An error occurred.
    }

 

====================================================================================================================================================================================================================================================================================================================================================================

 

TCP networking classes: Acceptor, TcpServer, TcpConnection

Acceptor class: accepts new TCP connections. It is an internal class used by TcpServer, and its lifetime is controlled by TcpServer.

Class members:


class Acceptor : boost::noncopyable
{
 public:
  typedef boost::function<void (int sockfd,
                                const InetAddress&)> NewConnectionCallback;

  Acceptor(EventLoop* loop, const InetAddress& listenAddr, bool reuseport);
  ~Acceptor();

  void setNewConnectionCallback(const NewConnectionCallback& cb)
  { newConnectionCallback_ = cb; }

  bool listenning() const { return listenning_; }
  void listen();

 private:
  // calls accept() to take a new connection, then invokes the user callback
  void handleRead();
  // the EventLoop that acceptChannel_ belongs to
  EventLoop* loop_;
  // this is the listening socket
  Socket acceptSocket_;
  // this channel watches the listening socket for readable events; Channel::handleEvent() calls handleRead(), which calls accept() to take the new connection
  Channel acceptChannel_;
  NewConnectionCallback newConnectionCallback_;
  bool listenning_;
  int idleFd_;
};


 


//the constructor calls socket() and bind(), the traditional first steps of a TCP server
//if any of socket()/bind()/listen() fails the program aborts, so there is no error handling
//sockets::createNonblockingOrDie creates a non-blocking socket
Acceptor::Acceptor(EventLoop* loop, const InetAddress& listenAddr, bool reuseport)
  : loop_(loop),
    acceptSocket_(sockets::createNonblockingOrDie(listenAddr.family())),
    acceptChannel_(loop, acceptSocket_.fd()),
    listenning_(false),
    idleFd_(::open("/dev/null", O_RDONLY | O_CLOEXEC))
{
  assert(idleFd_ >= 0);
  acceptSocket_.setReuseAddr(true);
  acceptSocket_.setReusePort(reuseport);
  acceptSocket_.bindAddress(listenAddr);
  acceptChannel_.setReadCallback(
      boost::bind(&Acceptor::handleRead, this));
}
//calls listen() on the listening fd; when a new connection arrives, acceptChannel_ handles it
void Acceptor::listen()
{
  loop_->assertInLoopThread();
  listenning_ = true;
  acceptSocket_.listen();
  acceptChannel_.enableReading();
}
//accept strategy
//see the paper "accept()able strategies for improving web server performance"
//callback invoked from acceptChannel_'s handleEvent(); it accepts the client connection
void Acceptor::handleRead()
{
  loop_->assertInLoopThread();
  InetAddress peerAddr;
  //FIXME loop until no more
  int connfd = acceptSocket_.accept(&peerAddr);
  if (connfd >= 0)
  {
    // string hostport = peerAddr.toIpPort();
    // LOG_TRACE << "Accepts of " << hostport;
    //once handleRead has accepted a new client connection, this callback handles the connection's business logic; it is set in TcpServer's constructor
   if (newConnectionCallback_)
    {
      newConnectionCallback_(connfd, peerAddr);
    }
    else
    {
      sockets::close(connfd);
    }
  }
  else
  {
    LOG_SYSERR << "in Acceptor::handleRead";
    // Read the section named "The special problem of
    // accept()ing when you can't" in libev's doc.
    // By Marc Lehmann, author of libev.
    // The process has run out of file descriptors: with no fd to represent the new connection we can neither accept nor close it, and with level-triggered epoll every epoll_wait call would return immediately because the connection is still pending.
    // The trick: keep one idle fd in reserve. On EMFILE, close the idle fd, accept() the pending connection, close it immediately, then reopen /dev/null to restore the reserve fd.
    if (errno == EMFILE)
    {
      ::close(idleFd_);
      idleFd_ = ::accept(acceptSocket_.fd(), NULL, NULL);
      ::close(idleFd_);
      idleFd_ = ::open("/dev/null", O_RDONLY | O_CLOEXEC);
    }
  }
}


TcpServer class: manages the TCP connections obtained via accept. TcpServer is used directly by the user, and its lifetime is controlled by the user.


///TcpServer uses an Acceptor internally to obtain the fd of each new connection. It stores the user-supplied
///ConnectionCallback and MessageCallback and hands them, unchanged, to every new TcpConnection. TcpServer keeps a
///shared_ptr (typedef'd as TcpConnectionPtr) to each currently live TcpConnection.
///When a new connection arrives, Acceptor calls back newConnection(), which creates a TcpConnection object conn, adds it to the ConnectionMap, sets up its callbacks, and then calls conn->connectEstablished(), which invokes the user-supplied ConnectionCallback.
class TcpServer : boost::noncopyable
{
 public:
  /// Starts the server if it's not listening.
  ///
  /// It's harmless to call it multiple times.
  /// Thread safe.
  void start();

  /// Set connection callback.
  /// Not thread safe.
  void setConnectionCallback(const ConnectionCallback& cb)
  { connectionCallback_ = cb; }

  /// Set message callback.
  /// Not thread safe.
  void setMessageCallback(const MessageCallback& cb)
  { messageCallback_ = cb; }

  /// Set write complete callback.
  /// Not thread safe.
  void setWriteCompleteCallback(const WriteCompleteCallback& cb)
  { writeCompleteCallback_ = cb; }

 private:
  /// Not thread safe, but in loop
  void newConnection(int sockfd, const InetAddress& peerAddr);
  /// Thread safe.
  void removeConnection(const TcpConnectionPtr& conn);
  /// Not thread safe, but in loop
  void removeConnectionInLoop(const TcpConnectionPtr& conn);
  //the key is the name of the TcpConnection object
  typedef std::map<string, TcpConnectionPtr> ConnectionMap;

  EventLoop* loop_;  // the acceptor loop
  const string ipPort_;
  const string name_;
  boost::scoped_ptr<Acceptor> acceptor_; // avoid revealing Acceptor
  boost::shared_ptr<EventLoopThreadPool> threadPool_;
  ConnectionCallback connectionCallback_;
  MessageCallback messageCallback_;
  WriteCompleteCallback writeCompleteCallback_;
  ThreadInitCallback threadInitCallback_;
  AtomicInt32 started_;
  // always in loop thread
  int nextConnId_;
  ConnectionMap connections_;
};


 


//called after a new client connection is established; sockfd is the new connection's fd, peerAddr the client's address
//it creates a TcpConnection object conn, maps the connection name to the object, sets the callbacks on conn, and finally calls TcpConnection::connectEstablished
void TcpServer::newConnection(int sockfd, const InetAddress& peerAddr)
{
    loop_->assertInLoopThread();
    EventLoop* ioLoop = threadPool_->getNextLoop();
    char buf[64];
    snprintf(buf, sizeof buf, "-%s#%d", ipPort_.c_str(), nextConnId_);
    ++nextConnId_;
    string connName = name_ + buf;

    LOG_INFO << "TcpServer::newConnection [" << name_
             << "] - new connection [" << connName
             << "] from " << peerAddr.toIpPort();
    InetAddress localAddr(sockets::getLocalAddr(sockfd));
    // FIXME poll with zero timeout to double confirm the new connection
    // FIXME use make_shared if necessary
    TcpConnectionPtr conn(new TcpConnection(ioLoop,
        connName,
        sockfd,
        localAddr,
        peerAddr));
    connections_[connName] = conn;
    conn->setConnectionCallback(connectionCallback_);
    conn->setMessageCallback(messageCallback_);
    conn->setWriteCompleteCallback(writeCompleteCallback_);
    conn->setCloseCallback(
        boost::bind(&TcpServer::removeConnection, this, _1)); // FIXME: unsafe
    ioLoop->runInLoop(boost::bind(&TcpConnection::connectEstablished, conn));
}


muduo tries to keep its dependencies one-way: TcpServer uses Acceptor, but Acceptor does not know that TcpServer exists; TcpServer creates TcpConnection, but TcpConnection does not know that TcpServer exists.

TcpConnection class:

 


//provides the newly established connection conn with a Channel object to manage it; TcpConnection uses the Channel to receive IO events on its socket
void TcpConnection::connectEstablished()
{
    loop_->assertInLoopThread();
    //the current state must be kConnecting (not yet connected)
    assert(state_ == kConnecting);
    //mark the connection as established
    setState(kConnected);
    channel_->tie(shared_from_this());
    channel_->enableReading();

    connectionCallback_(shared_from_this());
}

//how TcpConnection tears a connection down
//handleRead checks the return value of read and dispatches to messageCallback_, handleClose, or handleError accordingly
void TcpConnection::handleRead(Timestamp receiveTime)
{
    loop_->assertInLoopThread();
    int savedErrno = 0;
    // read the data into inputBuffer_
    ssize_t n = inputBuffer_.readFd(channel_->fd(), &savedErrno);
    if (n > 0)
    {
        messageCallback_(shared_from_this(), &inputBuffer_, receiveTime);
    }
    else if (n == 0)
    {
        handleClose();
    }
    else
    {
        errno = savedErrno;
        LOG_SYSERR << "TcpConnection::handleRead";
        handleError();
    }
}
//closeCallback_ is registered by TcpServer in newConnection and points to TcpServer::removeConnection, which removes this TcpConnection from the ConnectionMap and then calls TcpConnection::connectDestroyed
//TcpConnection::connectDestroyed() makes the connection's channel stop watching all events, then removes the channel
void TcpConnection::handleClose()
{
    loop_->assertInLoopThread();
    LOG_TRACE << "fd = " << channel_->fd() << " state = " << stateToString();
    assert(state_ == kConnected || state_ == kDisconnecting);
    // we don't close fd, leave it to dtor, so we can find leaks easily.
    setState(kDisconnected);
    channel_->disableAll();

    TcpConnectionPtr guardThis(shared_from_this());
    connectionCallback_(guardThis);
    // must be the last line
    closeCallback_(guardThis);
}
//make the connection's channel stop watching all events, then remove the channel
void TcpConnection::connectDestroyed()
{
    loop_->assertInLoopThread();
    if (state_ == kConnected)
    {
        setState(kDisconnected);
        channel_->disableAll();

        connectionCallback_(shared_from_this());
    }
    channel_->remove();
}


http://www.ccvita.com/515.html

Using the Linux epoll model in level-triggered mode: while a socket stays writable, the writable event keeps firing. How should this be handled?

The first, most common approach:
Add the socket to epoll only when there is data to write, and wait for the writable event.
When the writable event arrives, call write or send to push the data out.
Once all the data has been written, remove the socket from epoll.

The drawback of this approach: even to send a small amount of data, the socket has to be added to epoll and removed again afterwards, which has some cost.

An improved approach (sketched below):
Do not add the socket to epoll up front; when there is data to write, call write or send directly. If the call returns EAGAIN, add the socket to epoll and let epoll drive the writing; once all the data has been sent, remove the socket from epoll again.

The advantage of this approach: when there is little data, the epoll event handling is avoided entirely, which improves efficiency.
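
A minimal sketch of the improved strategy against the raw epoll API (assumes <sys/epoll.h>, <unistd.h> and <cerrno> are included; fd, epfd, buf and len are assumed to exist, and error handling is trimmed):

ssize_t n = ::write(fd, buf, len);            // try to send directly first
if (n == (ssize_t)len) {
    // everything went out in one call: epoll never gets involved
} else if (n >= 0 || errno == EAGAIN || errno == EWOULDBLOCK) {
    // partial write or would-block: keep the rest in a buffer and let epoll drive it
    struct epoll_event ev = {};
    ev.events = EPOLLOUT;                     // level-triggered writable event
    ev.data.fd = fd;
    ::epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
    // on each EPOLLOUT: write more from the buffer; once it is empty:
    // ::epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
}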

 

muduo uses level trigger, so we only watch the writable event when we actually need to; otherwise we would end up in a busy loop.

Sending data with TcpConnection:

Two tricky points: when to start watching the writable event, and what happens when we produce data faster than the peer consumes it, so data piles up in local memory.

The solution to the second point: a highWaterMarkCallback that fires when the output buffer grows past a user-specified size.


//sendInLoop first tries to send the data directly. If everything goes out in one call, the writable event is never enabled;
//if only part is sent, the remainder is appended to outputBuffer_, the writable event is enabled, and handleWrite() sends the rest later
void TcpConnection::sendInLoop(const void* data, size_t len)
{
    loop_->assertInLoopThread();
    ssize_t nwrote = 0;
    size_t remaining = len;
    bool faultError = false;
    if (state_ == kDisconnected)
    {
        LOG_WARN << "disconnected, give up writing";
        return;
    }
    // if nothing is in the output queue, try writing directly
    if (!channel_->isWriting() && outputBuffer_.readableBytes() == 0)
    {
        nwrote = sockets::write(channel_->fd(), data, len);
        if (nwrote >= 0)
        {
            remaining = len - nwrote;
            if (remaining == 0 && writeCompleteCallback_)
            {
                loop_->queueInLoop(boost::bind(writeCompleteCallback_, shared_from_this()));
            }
        }
        else // nwrote < 0
        {
            nwrote = 0;
            if (errno != EWOULDBLOCK)
            {
                LOG_SYSERR << "TcpConnection::sendInLoop";
                if (errno == EPIPE || errno == ECONNRESET) // FIXME: any others?
                {
                    faultError = true;
                }
            }
        }
    }

    assert(remaining <= len);
    if (!faultError && remaining > 0)
    {
        size_t oldLen = outputBuffer_.readableBytes();
        if (oldLen + remaining >= highWaterMark_
            && oldLen < highWaterMark_
            && highWaterMarkCallback_)
        {
            loop_->queueInLoop(boost::bind(highWaterMarkCallback_, shared_from_this(), oldLen + remaining));
        }
        outputBuffer_.append(static_cast<const char*>(data) + nwrote, remaining);
        if (!channel_->isWriting())
        {
            channel_->enableWriting();
        }
    }
}

//when the socket becomes writable, send the data in outputBuffer_; as soon as it is all sent, stop watching the writable event to avoid a busy loop
void TcpConnection::handleWrite()
{
    loop_->assertInLoopThread();
    if (channel_->isWriting())
    {
        ssize_t n = sockets::write(channel_->fd(),
            outputBuffer_.peek(),
            outputBuffer_.readableBytes());
        if (n > 0)
        {
            outputBuffer_.retrieve(n);
            if (outputBuffer_.readableBytes() == 0)
            {
                channel_->disableWriting();
                if (writeCompleteCallback_)
                {
                    loop_->queueInLoop(boost::bind(writeCompleteCallback_, shared_from_this()));
                }
                if (state_ == kDisconnecting)
                {
                    shutdownInLoop();
                }
            }
        }
        else
        {
            LOG_SYSERR << "TcpConnection::handleWrite";
            // if (state_ == kDisconnecting)
            // {
            //   shutdownInLoop();
            // }
        }
    }
    else
    {
        LOG_TRACE << "Connection fd = " << channel_->fd()
                  << " is down, no more writing";
    }
}


 http://blog.csdn.net/luojiaoqq/article/details/12780051

 

====================================================================================================================================================================================================================================================================================================================================================================

 

Boost.Asio C++ Network Programming, Part 10: a TCP-based asynchronous server

Source: 灿哥哥, https://blog.csdn.net/caoshangpa/article/details/79412096 (original post dated 2018-03-01)

A TCP-based asynchronous server

1. Flow chart

(The flow chart is an image in the original post and is not reproduced here.) It is fairly complex: out of Boost.Asio you can see four arrows pointing to on_accept, on_read, on_write, and on_check_ping. This means you never know which asynchronous call will complete next, but you can be sure that it is one of these four operations.

2. Implementation

 

#ifdef WIN32
#define _WIN32_WINNT 0x0501
#include <stdio.h>
#endif
#include <iostream>
#include <sstream>   // std::istringstream (used in on_login)
#include <string>
#include <vector>
#include <algorithm> // std::find, std::copy
#include <boost/bind.hpp>
#include <boost/asio.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/enable_shared_from_this.hpp>
using namespace boost::asio;
using namespace boost::posix_time;
io_service service;
 
class talk_to_client;
typedef boost::shared_ptr<talk_to_client> client_ptr;
typedef std::vector<client_ptr> array;
array clients;
 
#define MEM_FN(x)       boost::bind(&self_type::x, shared_from_this())
#define MEM_FN1(x,y)    boost::bind(&self_type::x, shared_from_this(),y)
#define MEM_FN2(x,y,z)  boost::bind(&self_type::x, shared_from_this(),y,z)
 
void update_clients_changed();
 
/** simple connection to server:
- logs in just with username (no password)
- all connections are initiated by the client: client asks, server answers
- server disconnects any client that hasn't pinged for 5 seconds
Possible client requests:
- gets a list of all connected clients
- ping: the server answers either with "ping ok" or "ping client_list_changed"
*/
class talk_to_client : public boost::enable_shared_from_this<talk_to_client>
    , boost::noncopyable {
    typedef talk_to_client self_type;
    talk_to_client() : sock_(service), started_(false),
        timer_(service), clients_changed_(false) {
    }
public:
    typedef boost::system::error_code error_code;
    typedef boost::shared_ptr<talk_to_client> ptr;
 
    void start() {
        started_ = true;
        clients.push_back(shared_from_this());
        last_ping = boost::posix_time::microsec_clock::local_time();
        do_read();
    }
    static ptr new_() {
        ptr new_(new talk_to_client);
        return new_;
    }
    void stop() {
        if (!started_) return;
        started_ = false;
        sock_.close();
 
        ptr self = shared_from_this();
        array::iterator it = std::find(clients.begin(), clients.end(), self);
        clients.erase(it);
        update_clients_changed();
    }
    bool started() const { return started_; }
    ip::tcp::socket & sock() { return sock_; }
    std::string username() const { return username_; }
    void set_clients_changed() { clients_changed_ = true; }
private:
    void on_read(const error_code & err, size_t bytes) {
        if (err) stop();
        if (!started()) return;
        // process the msg
        std::string msg(read_buffer_, bytes);
        if (msg.find("login ") == 0) on_login(msg);
        else if (msg.find("ping") == 0) on_ping();
        else if (msg.find("ask_clients") == 0) on_clients();
        else std::cerr << "invalid msg " << msg << std::endl;
    }
 
    void on_login(const std::string & msg) {
        std::istringstream in(msg);
        in >> username_ >> username_;
        std::cout << username_ << " logged in" << std::endl;
        do_write("login ok\n");
        update_clients_changed();
    }
    void on_ping() {
        do_write(clients_changed_ ? "ping client_list_changed\n" : "ping ok\n");
        clients_changed_ = false;
    }
    void on_clients() {
        std::string msg;
        for (array::const_iterator b = clients.begin(), e = clients.end(); b != e; ++b)
            msg += (*b)->username() + " ";
        do_write("clients " + msg + "\n");
    }
 
    void do_ping() {
        do_write("ping\n");
    }
    void do_ask_clients() {
        do_write("ask_clients\n");
    }
 
    void on_check_ping() {
        boost::posix_time::ptime now = boost::posix_time::microsec_clock::local_time();
        if ((now - last_ping).total_milliseconds() > 5000) {
            std::cout << "stopping " << username_ << " - no ping in time" << std::endl;
            stop();
        }
        last_ping = boost::posix_time::microsec_clock::local_time();
    }
    void post_check_ping() {
        timer_.expires_from_now(boost::posix_time::millisec(5000));
        timer_.async_wait(MEM_FN(on_check_ping));
    }
 
    void on_write(const error_code & err, size_t bytes) {
        do_read();
    }
    void do_read() {
        async_read(sock_, buffer(read_buffer_),
            MEM_FN2(read_complete, _1, _2), MEM_FN2(on_read, _1, _2));
        post_check_ping();
    }
    void do_write(const std::string & msg) {
        if (!started()) return;
        std::copy(msg.begin(), msg.end(), write_buffer_);
        sock_.async_write_some(buffer(write_buffer_, msg.size()),
            MEM_FN2(on_write, _1, _2));
    }
    size_t read_complete(const boost::system::error_code & err, size_t bytes) {
        if (err) return 0;
        bool found = std::find(read_buffer_, read_buffer_ + bytes, '\n') < read_buffer_ + bytes;
        return found ? 0 : 1;
    }
private:
    ip::tcp::socket sock_;
    enum { max_msg = 1024 };
    char read_buffer_[max_msg];
    char write_buffer_[max_msg];
    bool started_;
    std::string username_;
    deadline_timer timer_;
    boost::posix_time::ptime last_ping;
    bool clients_changed_;
};
 
void update_clients_changed() {
    for (array::iterator b = clients.begin(), e = clients.end(); b != e; ++b)
        (*b)->set_clients_changed();
}
 
ip::tcp::acceptor acceptor(service, ip::tcp::endpoint(ip::tcp::v4(), 8001));
 
void handle_accept(talk_to_client::ptr client, const boost::system::error_code & err) {
    if (err) return; // the acceptor was closed or the accept failed
    client->start();
    talk_to_client::ptr new_client = talk_to_client::new_();
    acceptor.async_accept(new_client->sock(), boost::bind(handle_accept, new_client, _1));
}
 
int main(int argc, char* argv[]) {
    talk_to_client::ptr client = talk_to_client::new_();
    acceptor.async_accept(client->sock(), boost::bind(handle_accept, client, _1));
    service.run();
}

       We have now learned how to write some basic client/server applications while avoiding low-level mistakes such as memory leaks and deadlocks. All the code is skeletal, so you can extend it to fit your own needs.

 

Reference link: http://download.csdn.net/download/caoshangpa/10229882

 

====================================================================================================================================================================================================================================================================================================================================================================

 
