Obtaining the Client IP Address
There are three ways for the server to obtain the client IP:
1) The Thrift client talks to nginx, and an upstream plugin in nginx parses each Thrift request and appends an extra parameter (with tagid=32767) carrying the client's IP before forwarding the request. Since the plugin adds a parameter, nginx and the backend server must share a Thrift file that includes the extra IP parameter, while the client and nginx use a Thrift file without it.
2) The approach described in the blog post http://blog.csdn.net/hbuxiaoshe/article/details/38942869 also works, though it is somewhat awkward;
3) Modify the Thrift implementation itself. This approach suits the case where the Thrift server and client are directly connected: subclass TServerEventHandler, whose callbacks the server triggers when a client connects:
// TServerEventHandler subclass: its callbacks give the server a chance
// to capture the client IP when a connection opens and on each call.
class ClientIPHandler : virtual public TServerEventHandler {
public:
    ClientIPHandler() {
    }
    virtual ~ClientIPHandler() {
    }
    std::string GetThriftClientIp() {
        // lock::MutexLock g(&mutex);
        // return thrift_client_ip[pthread_self()];
        return "";
    }
    // Called once, before the server starts serving.
    virtual void preServe() {
        std::cout << "call preServe" << std::endl;
    }
    // Called when a client connects; the return value is passed back as
    // serverContext in the other callbacks.
    virtual void* createContext(boost::shared_ptr<TProtocol> input,
                                boost::shared_ptr<TProtocol> output) {
        std::cout << "call createContext" << std::endl;
        // TNonblockingServer wraps each connection in a TFramedTransport,
        // whose underlying transport is the TSocket holding the peer address.
        TFramedTransport* tbuf = dynamic_cast<TFramedTransport*>(input->getTransport().get());
        if (!tbuf) {
            std::cout << "tbuf == null" << std::endl;
        } else {
            TSocket* sock = dynamic_cast<TSocket*>(tbuf->getUnderlyingTransport().get());
            if (sock) {
                std::cout << "ip=" << sock->getPeerAddress() << std::endl;
            }
        }
        // Record the IP when the connection opens, e.g.:
        // lock::MutexLock g(&mutex);
        // thrift_client_ip[pthread_self()] = sock->getPeerAddress();
        return NULL;
    }
    // Called when a connection is closed.
    virtual void deleteContext(void* serverContext,
                               boost::shared_ptr<TProtocol> input,
                               boost::shared_ptr<TProtocol> output) {
        std::cout << "call deleteContext" << std::endl;
        // Erase the recorded IP when the connection closes, e.g.:
        // lock::MutexLock g(&mutex);
        // thrift_client_ip.erase(pthread_self());
    }
    // Called before each call is processed; transport is the connection's TSocket.
    virtual void processContext(void* serverContext, boost::shared_ptr<TTransport> transport) {
        std::cout << "call processContext" << std::endl;
        TSocket* tsocket = static_cast<TSocket*>(transport.get());
        if (tsocket) {
            struct sockaddr* addrPtr;
            socklen_t addrLen;
            addrPtr = tsocket->getCachedAddress(&addrLen);
            if (addrPtr) {
                // Resolve into a local buffer: serverContext is NULL here,
                // since createContext() returned NULL.
                char host[NI_MAXHOST] = {0};
                getnameinfo(addrPtr, addrLen, host, sizeof(host), NULL, 0, NI_NUMERICHOST);
                std::cout << "getnameinfo=" << host << std::endl;
            }
            std::cout << "getPeerAddress=" << tsocket->getPeerAddress() << std::endl;
            std::cout << "getSocketInfo=" << tsocket->getSocketInfo() << std::endl;
        }
    }
private:
};
void tst_transfer_server_entry() {
    OutputDbgInfo tmpOut("tst_transfer_server_entry begin", "tst_transfer_server_entry end");
    int srv_port = 9090;
    // Register ClientIPHandler as the server event handler.
    /*common::*/CThriftServerHelper<PhotoHandler, PhotoProcessor> thrift_server_agent((new ClientIPHandler), false);
    thrift_server_agent.serve(srv_port);
}
1. Foreword
The motivation for analyzing Thrift's structure is to let the server obtain the client's IP, which requires some understanding of Thrift's structure and call flow. Note that this article targets TNonblockingServer only; it does not cover TThreadPoolServer, TThreadedServer, or TSimpleServer.
2. Example Service
service EchoService
{
    void hello();
}
class EchoHandler : public EchoServiceIf {
private:
    virtual void hello();
};
3. Network class diagram
Thrift's thread model consists of several IO threads (TNonblockingIOThread, responsible for reading and writing data on TCP connections) plus a main thread (responsible for listening on and accepting TCP connections).
The "main thread" is not necessarily the process's main thread: whichever thread calls TServer::run() or TServer::serve() is the main thread in this article's sense. As of the latest version of Thrift (0.9.2), either TServer::run() or TServer::serve() may be called, because TServer::run() does nothing except unconditionally call TServer::serve(). The call to TServer::serve() is in fact a call to serve() on TServer's implementation class, TNonblockingServer.
In short, TNonblockingIOThread handles data transfer, while TNonblockingServer accepts connection requests.
Note that the thread (or process) calling TServer::run() or TServer::serve() blocks: it enters libevent's event loop, which on Linux loops around epoll_wait().
4.1. Startup preparation
Preparation includes:
1) Start listening for connections
2) Start the data send/receive threads
3) Initialize the runtime environment
Here the first callback to TServerEventHandler occurs (preServe()):
4.2. Accepting connections
The accept sequence shows that before the connection's TConnection receives any data, TServerEventHandler::createContext() is called. This is one opportunity to obtain the client IP, but the current implementation does not pass the relevant information as arguments to TServerEventHandler::createContext().
4.3. Receiving and sending data: executing the call
During this process TServerEventHandler::processContext(connectionContext_, getTSocket()) is called back, passing the TSocket.
6. TProtocol
TProtocol provides serialization and deserialization: it defines how messages are encoded and decoded. Its implementations include:
1) TBinaryProtocol: binary encoding
2) TDebugProtocol: human-readable text encoding for debugging
3) TJSONProtocol: JSON-based encoding
4) TCompactProtocol: compact binary encoding
To add a new data type to Thrift, TProtocol must be modified to implement serialization and deserialization for that type.
7. TTransport
TTransport is responsible for sending and receiving data. It can simply wrap a socket, but non-socket transports such as pipes are also supported. TSocket is the transport used by TServerSocket.
8. TProtocol & TTransport
TNonblockingServer by default uses TMemoryBuffer as the TTransport for both input and output.
TProtocol itself has no buffer; it only serializes and deserializes, relying on a TTransport to actually send the data. Take TBinaryProtocol as an example:
template <class Transport_, class ByteOrder_>
uint32_t TBinaryProtocolT<Transport_, ByteOrder_>::writeI16(const int16_t i16) {
  int16_t net = (int16_t)ByteOrder_::toWire16(i16);
  this->trans_->write((uint8_t*)&net, 2);
  return 2;
}
Compare this with the implementation of TTransport::write:
void TSocket::write(const uint8_t* buf, uint32_t len) {
  uint32_t sent = 0;
  while (sent < len) {
    uint32_t b = write_partial(buf + sent, len - sent); // write_partial() calls send() internally
    if (b == 0) {
      // This should only happen if the timeout set with SO_SNDTIMEO expired.
      // Raise an exception.
      throw TTransportException(TTransportException::TIMED_OUT, "send timeout expired");
    }
    sent += b;
  }
}
uint32_t TSocket::write_partial(const uint8_t* buf, uint32_t len) {
  if (socket_ == THRIFT_INVALID_SOCKET) {
    throw TTransportException(TTransportException::NOT_OPEN, "Called write on non-open socket");
  }
  uint32_t sent = 0;
  int flags = 0;
#ifdef MSG_NOSIGNAL
  // Note the use of MSG_NOSIGNAL to suppress SIGPIPE errors, instead we
  // check for the THRIFT_EPIPE return condition and close the socket in that case
  flags |= MSG_NOSIGNAL;
#endif // ifdef MSG_NOSIGNAL
  int b = static_cast<int>(send(socket_, const_cast_sockopt(buf + sent), len - sent, flags));
  if (b < 0) {
    if (THRIFT_GET_SOCKET_ERROR == THRIFT_EWOULDBLOCK || THRIFT_GET_SOCKET_ERROR == THRIFT_EAGAIN) {
      return 0;
    }
    // Fail on a send error
    int errno_copy = THRIFT_GET_SOCKET_ERROR;
    GlobalOutput.perror("TSocket::write_partial() send() " + getSocketInfo(), errno_copy);
    if (errno_copy == THRIFT_EPIPE || errno_copy == THRIFT_ECONNRESET
        || errno_copy == THRIFT_ENOTCONN) {
      close();
      throw TTransportException(TTransportException::NOT_OPEN, "write() send()", errno_copy);
    }
    throw TTransportException(TTransportException::UNKNOWN, "write() send()", errno_copy);
  }
  // Fail on blocked send
  if (b == 0) {
    throw TTransportException(TTransportException::NOT_OPEN, "Socket send returned 0.");
  }
  return b;
}
9. Data flow
The client sends data; when the server receives it, a libevent event fires and the Transport is invoked to read the data. Once a complete message has been received, the Protocol deserializes it, and then the server-side handler code is called.
The first half happens in an IO thread, the second half in a worker thread.