Pitfalls of boost.asio's async_write_some

boost.asio's async_write_some looks convenient, but using it correctly is tricky, mainly in two respects:

  1. async_write_some is often called from a different thread than the one running io_context. Only one thread may call async_write_some on a given socket at a time, so a lock is needed, and the call also has to be synchronized with the other asynchronous operations on the io_context thread, e.g. the moment at which the socket is closed (see the strand sketch after this list).
  2. If the peer never reads from its receive buffer, our socket's send buffer eventually fills up. How should that case be handled?
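
For the first pitfall, a common remedy is to funnel every operation on a socket through a strand, so that the write call, its completion handler, and close() can never run concurrently. Below is a minimal sketch under that assumption; the Session class, its Send method, and the shared_ptr buffer are illustrative names, not part of the test code later in this post.

#include <boost/asio.hpp>
#include <memory>
#include <string>

class Session : public std::enable_shared_from_this<Session>
{
public:
	explicit Session(boost::asio::io_context &io)
		: strand_(io), sock_(io) {}

	boost::asio::ip::tcp::socket &Socket() { return sock_; }

	// Send may be called from any thread: the async_write_some call and its
	// completion handler both run on the strand, serialized with everything
	// else that touches sock_ (including close()).
	void Send(std::string data)
	{
		auto self = shared_from_this();
		auto buf = std::make_shared<std::string>(std::move(data)); // kept alive by the captures
		boost::asio::post(strand_, [this, self, buf] {
			sock_.async_write_some(boost::asio::buffer(*buf),
				boost::asio::bind_executor(strand_,
					[this, self, buf](const boost::system::error_code &ec, std::size_t) {
						if (ec)
							sock_.close();
					}));
		});
	}

private:
	boost::asio::io_context::strand strand_;
	boost::asio::ip::tcp::socket sock_;
};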

The first case just requires adding whatever locking the business logic demands. Here we only discuss the case where the peer never reads its buffer. The test:
After server and client establish a connection, the client sends 1,000,000 chunks of 1024 bytes to the server, but the server never reads any of them.
Result:
The callbacks stop firing, and memory grows very quickly.

On the client side, no callback ever arrives after the 2687th async_write_some callback, yet async_write_some keeps being called until all 1,000,000 writes have been issued, and memory keeps growing. Presumably at that point the local send buffer and the peer's receive buffer (together roughly 2687 × 1024 bytes ≈ 2.7 MB) are full, so no further write can complete. Used this way, async_write_some can easily exhaust memory: each pre-allocated buffer is freed in its callback, but those callbacks are never invoked.

If this kind of failure cannot be detected directly, a timer has to be added to check the connection's state, which is awkward to use; a sketch of such a watchdog follows.
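
A watchdog of that kind can be sketched as follows; the 5-second timeout, the StartWatchdog name, and passing the progress counter by reference are all assumptions for illustration, and the captured references must outlive the timer.

#include <boost/asio.hpp>
#include <chrono>
#include <cstdint>
#include <iostream>

// Re-arms itself periodically and compares a progress counter (e.g. the
// client's cnt_ below) against the snapshot taken on the previous tick. If
// no write completed during the window, the peer is assumed stalled and the
// socket is closed, which makes the pending callbacks fire with an error.
void StartWatchdog(boost::asio::steady_timer &timer,
                   boost::asio::ip::tcp::socket &sock,
                   const uint64_t &completed, uint64_t snapshot)
{
	timer.expires_after(std::chrono::seconds(5)); // assumed timeout
	timer.async_wait([&timer, &sock, &completed, snapshot](const boost::system::error_code &ec) {
		if (ec)
			return; // timer cancelled, e.g. on shutdown
		if (completed == snapshot) {
			std::cout << "write stalled, closing socket" << std::endl;
			sock.close();
			return;
		}
		StartWatchdog(timer, sock, completed, completed);
	});
}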
The test code is as follows:

#include <boost/asio.hpp>
#include <cstdint>
#include <cstdlib>
#include <cstring>
#include <iostream>
#include <thread>

using namespace std;

// LIBTEST_BOOST_DLL is the project's export macro, defined elsewhere.
class LIBTEST_BOOST_DLL TcpServer
{
public:
	typedef  boost::asio::ip::tcp::socket tcp_socket_t;
	typedef  boost::asio::ip::tcp::acceptor tcp_acceptor_t;
	typedef  boost::asio::ip::tcp::endpoint tcp_endpoint_t;
	typedef  boost::asio::io_context io_context_t;
	TcpServer(io_context_t &io, const tcp_endpoint_t& endpoint);
	void Accept();
	void ReadHandler(tcp_socket_t *sock);
private:
	tcp_acceptor_t acceptor_;
	io_context_t *io_;
	uint64_t cnt_ = 0;
};
class LIBTEST_BOOST_DLL TcpClient
{
public:
	typedef  boost::asio::ip::tcp::socket tcp_socket_t;
	typedef  boost::asio::ip::tcp::endpoint tcp_endpoint_t;
	typedef  boost::asio::io_context io_context_t;
	TcpClient(io_context_t &io);
	void Connect(const tcp_endpoint_t& endpoint);
	void Write();
private:
	tcp_socket_t sock_;
	io_context_t  *io_;
	uint64_t cnt_ = 0;
};
TcpServer::TcpServer(io_context_t &io, const tcp_endpoint_t& endpoint)
	: acceptor_(io, endpoint), io_(&io)
{
}
void TcpServer::ReadHandler(tcp_socket_t *sock)
{
	char *buf = new char[1024];
	sock->async_read_some(boost::asio::buffer(buf, 1024), [sock, this, buf](const boost::system::error_code &ec, std::size_t sz) {
		delete[] buf; // array form: buf was allocated with new[]
		if (ec) {
			sock->close();
			cout << "[threadId=" << this_thread::get_id() << "] " << "async_read_some callback error,msg=" << ec.message() << endl;
			delete sock;
			return;
		}
		cnt_++;
		cout << "[threadId=" << this_thread::get_id() << "] " << "async_read_some callback cnt=" << cnt_ << ",size=" << sz << endl;
		ReadHandler(sock); // re-arm the read
	});
}
void TcpServer::Accept()
{
	//auto sock = make_shared<tcp_socket_t>(*io_); // a shared_ptr would be safer
	auto sock = new tcp_socket_t(*io_);
	auto acceptHandler = [this, sock](const boost::system::error_code &ec) {
		if (ec) {
			delete sock;
			cout << "[threadId=" << this_thread::get_id() << "] " << "async_accept callback error,msg=" << ec.message() << endl;
			return;
		}
		cout << "[threadId=" << this_thread::get_id() << "]";
		cout << " async_accept callback client from: ";
		cout << sock->remote_endpoint().address() << ":" << sock->remote_endpoint().port() << endl;
		//ReadHandler(sock); // deliberately never reading, so the client's send buffer fills up
		Accept();
	};
	acceptor_.async_accept(*sock, acceptHandler);
}

TcpClient::TcpClient(io_context_t &io)
	:sock_(io),io_(&io)
{
}
void TcpClient::Write()
{
	auto buf = new char[1024];
	memset(buf, 'q', 1024);
	sock_.async_write_some(boost::asio::buffer(buf, 1024), [this, buf](const boost::system::error_code &ec, std::size_t sz) {
		delete[] buf; // never runs once callbacks stop firing: this is the leak
		if (ec) {
			cout << "[threadId=" << this_thread::get_id() << "] " << "async_write_some callback error,msg=" << ec.message() << endl;
			sock_.close();
			return;
		}
		cnt_++;
		cout << "[threadId=" << this_thread::get_id() << "] " << "async_write_some callback cnt=" << cnt_ << ",size=" << sz << endl;
	});
}
void TcpClient::Connect(const tcp_endpoint_t& endpoint)
{
	auto connHandler = [this, endpoint](const boost::system::error_code &ec) {
		if (ec) {
			cout << "[threadId=" << this_thread::get_id() << "] " << "async_connect callback error,msg=" << ec.message() << endl;
			//Connect(endpoint); // optional: retry
			return;
		}
		cout << "[threadId=" << this_thread::get_id() << "] ";
		cout << "async_connect callback to ";
		cout << this->sock_.remote_endpoint().address();
		cout << " success" << endl;
		// Issue the writes from a separate thread, exercising pitfall 1 as well.
		std::thread t([this] {
			for (int i = 0; i < 1000000; i++)
				Write();
		});
		t.detach();
	};
	sock_.async_connect(endpoint, connHandler);
}
int main(int argc, char *argv[])
{
	boost::asio::io_context io;
	boost::asio::ip::tcp::endpoint endpoint(boost::asio::ip::tcp::v4(), 1234);
	TcpServer tcp(io, endpoint);
	TcpClient client(io);
	tcp.Accept();
	client.Connect(boost::asio::ip::tcp::endpoint(boost::asio::ip::address::from_string("127.0.0.1"), 1234));
	boost::asio::io_context::work worker(io); // keeps io.run() from returning when idle
	io.run();
#ifdef WIN32
	system("pause");
#endif
	return 0;
}
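
The usual way to avoid both pitfalls at once is not to issue one async_write_some per chunk, but to keep an application-level send queue with exactly one outstanding write at a time: the queue length then tells you directly that the peer has stopped reading, and it can be capped to apply back-pressure. A minimal sketch under those assumptions follows; the Sender class and the kMaxQueue limit are illustrative, not from the test code above.

#include <boost/asio.hpp>
#include <cstddef>
#include <deque>
#include <iostream>
#include <string>

class Sender
{
public:
	explicit Sender(boost::asio::ip::tcp::socket &sock) : sock_(sock) {}

	// Must be called on the io_context thread (or through a strand).
	// Returns false when the peer is not keeping up and the queue is full.
	bool Enqueue(std::string data)
	{
		if (queue_.size() >= kMaxQueue)
			return false; // back-pressure instead of unbounded memory growth
		queue_.push_back(std::move(data));
		if (queue_.size() == 1)
			DoWrite(); // no write in flight yet: start one
		return true;
	}

private:
	void DoWrite()
	{
		// async_write (unlike async_write_some) completes only after the
		// whole front buffer has been sent, and the deque keeps it valid.
		boost::asio::async_write(sock_, boost::asio::buffer(queue_.front()),
			[this](const boost::system::error_code &ec, std::size_t) {
				if (ec) {
					std::cout << "write error: " << ec.message() << std::endl;
					sock_.close();
					return;
				}
				queue_.pop_front();
				if (!queue_.empty())
					DoWrite(); // keep exactly one write outstanding
			});
	}

	static constexpr std::size_t kMaxQueue = 1024; // assumed cap
	boost::asio::ip::tcp::socket &sock_;
	std::deque<std::string> queue_;
};

With this pattern, the stall in the test above shows up immediately as Enqueue returning false once kMaxQueue chunks are pending, instead of as silently growing memory.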