"Service" is hardly an unfamiliar word. In our field it usually comes as a pair: a client and a server, where the client issues a request and the server processes it and replies. Between ROS nodes, if one process needs information from another, building a request/reply on top of topics takes two round trips to confirm the request and the response (a bit like a TCP/IP handshake), whereas a ROS service call completes in a single exchange. That is why services are widely used for command invocation across the layers of robot software: front end to back end, network layer down to embedded systems.
So, following the same approach as our analysis of topic communication, let's read through the implementation carefully, starting from the definitions of the client and the server.
I. The Server
Its implementation looks a lot like a topic subscriber's: it takes a callback function and declares a service name. The function template is defined as:
template<class T, class MReq, class MRes>
ServiceServer advertiseService(const std::string& service, bool(T::*srv_func)(MReq &, MRes &), T *obj)
{
  AdvertiseServiceOptions ops;
  ops.template init<MReq, MRes>(service, boost::bind(srv_func, obj, _1, _2));
  return advertiseService(ops);
}
template<class MReq, class MRes>
void init(const std::string& _service, const boost::function<bool(MReq&, MRes&)>& _callback)
{
  namespace st = service_traits;
  namespace mt = message_traits;
  if (st::md5sum<MReq>() != st::md5sum<MRes>())
  {
    ROS_FATAL("the request and response parameters to the server "
              "callback function must be autogenerated from the same "
              "server definition file (.srv). your advertise_service "
              "call for %s appeared to use request/response types "
              "from different .srv files.", service.c_str());
    ROS_BREAK();
  }
  service = _service;
  md5sum = st::md5sum<MReq>();
  datatype = st::datatype<MReq>();
  req_datatype = mt::datatype<MReq>();
  res_datatype = mt::datatype<MRes>();
  helper = boost::make_shared<ServiceCallbackHelperT<ServiceSpec<MReq, MRes> > >(_callback);
}
The thing to note in ops.init is helper: it is a shared pointer built around the service's declared callback, much like a topic subscriber's helper. Now let's jump to the real definition in node_handle.cpp. A server also carries a callback_queue, and if topic subscribers and services in a process share a NodeHandle, they share the same callback queue.
ServiceServer NodeHandle::advertiseService(AdvertiseServiceOptions& ops)
{
  ops.service = resolveName(ops.service);
  if (ops.callback_queue == 0)
  {
    if (callback_queue_)
    {
      ops.callback_queue = callback_queue_;
    }
    else
    {
      ops.callback_queue = getGlobalCallbackQueue();
    }
  }
  if (ServiceManager::instance()->advertiseService(ops))
  {
    ServiceServer srv(ops.service, *this);
    {
      boost::mutex::scoped_lock lock(collection_->mutex_);
      collection_->srvs_.push_back(srv.impl_); // record the topics and services declared under this handle, so each can be torn down at shutdown
    }
    return srv;
  }
  return ServiceServer();
}
Now look at the ServiceManager: once a service is advertised here, it is recorded both locally and at the master.
bool ServiceManager::advertiseService(const AdvertiseServiceOptions& ops)
{
  boost::recursive_mutex::scoped_lock shutdown_lock(shutting_down_mutex_);
  if (shutting_down_)
  {
    return false;
  }
  {
    boost::mutex::scoped_lock lock(service_publications_mutex_);
    if (isServiceAdvertised(ops.service)) // look the name up in service_publications_
    {
      ROS_ERROR("Tried to advertise a service that is already advertised in this node [%s]", ops.service.c_str());
      return false;
    }
    ServicePublicationPtr pub(boost::make_shared<ServicePublication>(ops.service, ops.md5sum, ops.datatype, ops.req_datatype, ops.res_datatype, ops.helper, ops.callback_queue, ops.tracked_object));
    service_publications_.push_back(pub); // record the service locally, but this is only bookkeeping
  }
  // send the node name, service name, and host:port to rosmaster
  XmlRpcValue args, result, payload;
  args[0] = this_node::getName();
  args[1] = ops.service;
  char uri_buf[1024];
  snprintf(uri_buf, sizeof(uri_buf), "rosrpc://%s:%d",
           network::getHost().c_str(), connection_manager_->getTCPPort());
  args[2] = string(uri_buf);
  args[3] = xmlrpc_manager_->getServerURI();
  master::execute("registerService", args, result, payload, true);
  return true;
}
We are already familiar with master::execute: it is the XMLRPC client calling the server (rosmaster). In other words, when a server publishes a service it is also telling rosmaster to store the node's registration, so we can reasonably guess that when a client calls a service it also turns to rosmaster first. Also, anyone who has written such code knows that if the server's process never runs ros::spin(), the service never communicates; that observation is a big hint for the rest of this source analysis.
II. The Client
1. Defining a client
Naming a client works much like naming a publisher: the service name is declared through a NodeHandle. Its function template is:
template<class MReq, class MRes>
ServiceClient serviceClient(const std::string& service_name, bool persistent = false,
                            const M_string& header_values = M_string())
{
  ServiceClientOptions ops;
  ops.template init<MReq, MRes>(service_name, persistent, header_values);
  return serviceClient(ops);
}

ServiceClient NodeHandle::serviceClient(ServiceClientOptions& ops)
{
  ops.service = resolveName(ops.service);
  ServiceClient client(ops.service, ops.persistent, ops.header, ops.md5sum);
  if (client)
  {
    boost::mutex::scoped_lock lock(collection_->mutex_);
    collection_->srv_cs_.push_back(client.impl_);
  }
  return client;
}
Note that ServiceClient wraps an Impl class that stores all of this state. Also, persistent defaults to false; passing true is meant for an always-on client that creates its server link up front and keeps the connection open for repeated calls.
ServiceClient::ServiceClient(const std::string& service_name, bool persistent, const M_string& header_values, const std::string& service_md5sum)
: impl_(new Impl)
{
  impl_->name_ = service_name;
  impl_->persistent_ = persistent;
  impl_->header_values_ = header_values;
  impl_->service_md5sum_ = service_md5sum;
  if (persistent)
  {
    impl_->server_link_ = ServiceManager::instance()->createServiceServerLink(impl_->name_, impl_->persistent_, impl_->service_md5sum_, impl_->service_md5sum_, impl_->header_values_);
  }
}
2. How the client calls the service
Following the ros wiki demo, using a client boils down to the call function. Analogous to publish, it splits into two parts: creating a link to the server, then invoking through it.
bool ServiceClient::call(const SerializedMessage& req, SerializedMessage& resp, const std::string& service_md5sum)
{
  if (service_md5sum != impl_->service_md5sum_)
  {
    ROS_ERROR("Call to service [%s] with md5sum [%s] does not match md5sum when the handle was created ([%s])", impl_->name_.c_str(), service_md5sum.c_str(), impl_->service_md5sum_.c_str());
    return false;
  }
  ServiceServerLinkPtr link;
  if (impl_->persistent_)
  {
    if (!impl_->server_link_)
    {
      impl_->server_link_ = ServiceManager::instance()->createServiceServerLink(impl_->name_, impl_->persistent_, service_md5sum, service_md5sum, impl_->header_values_);
      if (!impl_->server_link_)
      {
        return false;
      }
    }
    link = impl_->server_link_;
  }
  else
  {
    link = ServiceManager::instance()->createServiceServerLink(impl_->name_, impl_->persistent_, service_md5sum, service_md5sum, impl_->header_values_);
    if (!link)
    {
      return false;
    }
  }
  bool ret = link->call(req, resp);
  link.reset();
  // If we're shutting down but the node haven't finished yet, wait until we do
  while (ros::isShuttingDown() && ros::ok())
  {
    ros::WallDuration(0.001).sleep();
  }
  return ret;
}
2.1. Creating a ServiceServerLink
ServiceServerLinkPtr ServiceManager::createServiceServerLink(const std::string& service, bool persistent,
                                                             const std::string& request_md5sum, const std::string& response_md5sum,
                                                             const M_string& header_values)
{
  boost::recursive_mutex::scoped_lock shutdown_lock(shutting_down_mutex_);
  if (shutting_down_)
  {
    return ServiceServerLinkPtr();
  }
  uint32_t serv_port;
  std::string serv_host;
  if (!lookupService(service, serv_host, serv_port)) // ask the master where the service lives
  {
    return ServiceServerLinkPtr();
  }
  TransportTCPPtr transport(boost::make_shared<TransportTCP>(&poll_manager_->getPollSet()));
  // Make sure to initialize the connection *before* transport->connect()
  // is called, otherwise we might miss a connect error (see #434).
  ConnectionPtr connection(boost::make_shared<Connection>());
  connection_manager_->addConnection(connection);
  connection->initialize(transport, false, HeaderReceivedFunc()); // create a connection through the connection manager; buffers are written directly on it
  if (transport->connect(serv_host, serv_port)) // TCP handshake
  {
    ServiceServerLinkPtr client(boost::make_shared<ServiceServerLink>(service, persistent, request_md5sum, response_md5sum, header_values));
    {
      boost::mutex::scoped_lock lock(service_server_links_mutex_);
      service_server_links_.push_back(client);
    }
    client->initialize(connection);
    return client;
  }
  ROSCPP_LOG_DEBUG("Failed to connect to service [%s] (mapped=[%s]) at [%s:%d]", service.c_str(), service.c_str(), serv_host.c_str(), serv_port);
  return ServiceServerLinkPtr();
}
The lookup, as its name suggests, asks the ROS master: the call status is returned via result, while the URI the server registered comes back in payload.
bool ServiceManager::lookupService(const string &name, string &serv_host, uint32_t &serv_port)
{
  XmlRpcValue args, result, payload;
  args[0] = this_node::getName();
  args[1] = name;
  if (!master::execute("lookupService", args, result, payload, false))
    return false;
  string serv_uri(payload);
  if (!serv_uri.length()) // shouldn't happen. but let's be sure.
  {
    ROS_ERROR("lookupService: Empty server URI returned from master");
    return false;
  }
  if (!network::splitURI(serv_uri, serv_host, serv_port))
  {
    ROS_ERROR("lookupService: Bad service uri [%s]", serv_uri.c_str());
    return false;
  }
  return true;
}
2.2. Calling through the newly created ServiceServerLink
The second part is the call itself. See ServiceServerLink::call.
bool ServiceServerLink::call(const SerializedMessage& req, SerializedMessage& resp)
{
  CallInfoPtr info(boost::make_shared<CallInfo>());
  info->req_ = req;
  info->resp_ = &resp;
  info->success_ = false;
  info->finished_ = false;
  info->call_finished_ = false;
  info->caller_thread_id_ = boost::this_thread::get_id();
  //ros::WallDuration(0.1).sleep();
  bool immediate = false;
  {
    boost::mutex::scoped_lock lock(call_queue_mutex_);
    if (connection_->isDropped())
    {
      ROSCPP_LOG_DEBUG("ServiceServerLink::call called on dropped connection for service [%s]", service_name_.c_str());
      info->call_finished_ = true;
      return false;
    }
    // As with topic transport, the first stream from the server requires a
    // header exchange for validation. Service transport is usually one-shot,
    // but this queue design supports the other mode of use: a persistent
    // link firing off repeated calls.
    if (call_queue_.empty() && header_written_ && header_read_)
    {
      immediate = true;
    }
    call_queue_.push(info); // enqueue this call
  }
  if (immediate)
  {
    processNextCall();
  }
  {
    boost::mutex::scoped_lock lock(info->finished_mutex_);
    while (!info->finished_)
    {
      info->finished_condition_.wait(lock);
    }
  }
  info->call_finished_ = true;
  if (info->exception_string_.length() > 0)
  {
    ROS_ERROR("Service call failed: service [%s] responded with an error: %s", service_name_.c_str(), info->exception_string_.c_str());
  }
  return info->success_;
}
So in normal (non-persistent) use, the first time processNextCall runs, empty is naturally false because the queue holds our request, and the request is written to the connection; on the next invocation, with the queue drained, the connection is dropped and the exchange is over.
void ServiceServerLink::processNextCall()
{
  bool empty = false;
  {
    boost::mutex::scoped_lock lock(call_queue_mutex_);
    if (current_call_)
    {
      return;
    }
    if (!call_queue_.empty())
    {
      ROS_DEBUG_NAMED("superdebug", "[%s] Client to service [%s] processing next service call", persistent_ ? "persistent" : "non-persistent", service_name_.c_str());
      current_call_ = call_queue_.front();
      call_queue_.pop();
    }
    else
    {
      empty = true;
    }
  }
  if (empty)
  {
    if (!persistent_)
    {
      ROS_DEBUG_NAMED("superdebug", "Dropping non-persistent client to service [%s]", service_name_.c_str());
      connection_->drop(Connection::Destructing);
    }
    else
    {
      ROS_DEBUG_NAMED("superdebug", "Keeping persistent client to service [%s]", service_name_.c_str());
    }
  }
  else
  {
    SerializedMessage request;
    {
      boost::mutex::scoped_lock lock(call_queue_mutex_);
      request = current_call_->req_;
    }
    connection_->write(request.buf, request.num_bytes, boost::bind(&ServiceServerLink::onRequestWritten, this, _1));
  }
}
The buffer writes above are wrapped by Connection: ServiceServerLink::onRequestWritten is registered as the write callback, and once the write completes, onResponseOkAndLength and then onResponse are registered as read callbacks, until the bytes are finally deserialized into the response message.
void ServiceServerLink::onRequestWritten(const ConnectionPtr& conn)
{
  (void)conn;
  //ros::WallDuration(0.1).sleep();
  connection_->read(5, boost::bind(&ServiceServerLink::onResponseOkAndLength, this, _1, _2, _3, _4));
}

void ServiceServerLink::onResponseOkAndLength(const ConnectionPtr& conn, const boost::shared_array<uint8_t>& buffer, uint32_t size, bool success)
{
  (void)size;
  ROS_ASSERT(conn == connection_);
  ROS_ASSERT(size == 5);
  if (!success)
    return;
  uint8_t ok = buffer[0];
  uint32_t len = *((uint32_t*)(buffer.get() + 1));
  if (len > 1000000000)
  {
    ROS_ERROR("a message of over a gigabyte was " \
              "predicted in tcpros. that seems highly " \
              "unlikely, so I'll assume protocol " \
              "synchronization is lost.");
    conn->drop(Connection::Destructing);
    return;
  }
  {
    boost::mutex::scoped_lock lock(call_queue_mutex_);
    if ( ok != 0 ) {
      current_call_->success_ = true;
    } else {
      current_call_->success_ = false;
    }
  }
  if (len > 0)
  {
    connection_->read(len, boost::bind(&ServiceServerLink::onResponse, this, _1, _2, _3, _4));
  }
  else
  {
    onResponse(conn, boost::shared_array<uint8_t>(), 0, true);
  }
}
void ServiceServerLink::onResponse(const ConnectionPtr& conn, const boost::shared_array<uint8_t>& buffer, uint32_t size, bool success)
{
  (void)conn;
  ROS_ASSERT(conn == connection_);
  if (!success)
    return;
  {
    boost::mutex::scoped_lock queue_lock(call_queue_mutex_);
    if (current_call_->success_)
    {
      *current_call_->resp_ = SerializedMessage(buffer, size);
    }
    else
    {
      current_call_->exception_string_ = std::string(reinterpret_cast<char*>(buffer.get()), size);
    }
  }
  callFinished();
}
These functions hook into one another like links in a chain, and reading them takes care. What is clear, though, is that unlike a topic, which simply dumps data into a buffer and moves on, Connection couples reads and writes tightly: every write must be paired with a read before the server's reply can be received.
Besides calling through a client object, there is also an explicit function form, ros::service::call, which is quicker than defining a client. Its implementation is just a thin wrapper around a client's call; I actually recommend this form in practice, since it saves boilerplate.
template<class MReq, class MRes>
bool call(const std::string& service_name, MReq& req, MRes& res)
{
  namespace st = service_traits;
  NodeHandle nh;
  ServiceClientOptions ops(ros::names::resolve(service_name), st::md5sum(req), false, M_string());
  ServiceClient client = nh.serviceClient(ops);
  return client.call(req, res);
}
III. Driven by ros::spin
The last puzzle is how the server actually replies. The answer is, once again, ros::spin repeatedly flushing the callbacks in the process. From the earlier detailed analysis of ros::spin we know the process holds a queue of CallbackInterface objects. That is a base class, subclassed by SubscriptionQueue for topic transport, ServiceCallback for service transport, and the TimerQueueCallback we will cover later. ros::spin does not care which kind of callback sits in the queue; the actual work happens in each one's own call function.
virtual CallResult call()
{
  if (link_->getConnection()->isDropped())
  {
    return Invalid;
  }
  VoidConstPtr tracker;
  if (has_tracked_object_)
  {
    tracker = tracked_object_.lock();
    if (!tracker)
    {
      SerializedMessage res = serialization::serializeServiceResponse<uint32_t>(false, 0);
      link_->processResponse(false, res);
      return Invalid;
    }
  }
  ServiceCallbackHelperCallParams params;
  params.request = SerializedMessage(buffer_, num_bytes_);
  params.connection_header = link_->getConnection()->getHeader().getValues();
  try
  {
    bool ok = helper_->call(params); // run the server's registered callback
    if (ok != 0)
    {
      link_->processResponse(true, params.response);
    }
    else
    {
      SerializedMessage res = serialization::serializeServiceResponse<uint32_t>(false, 0);
      link_->processResponse(false, res); // write the response into the buffer and send it
    }
  }
  catch (std::exception& e)
  {
    ROS_ERROR("Exception thrown while processing service call: %s", e.what());
    std_msgs::String error_string;
    error_string.data = e.what();
    SerializedMessage res = serialization::serializeServiceResponse(false, error_string);
    link_->processResponse(false, res);
    return Invalid;
  }
  return Success;
}
When the server's registered callback gets its turn, it deserializes the request from the earlier buffer, runs the server's callback, then writes the buffer once more; only then does the request blocked on the client side receive its reply and service::call return!
To wrap up briefly: having already studied topic communication, services are much easier to understand. Their internals are extremely similar, both built around rosmaster as the hub and relay. You can think of a service response as publishing a topic and then immediately running one spin: it looks synchronous but is asynchronous underneath, entirely driven by ros::spin. Because of its request-then-reply nature, its design is subtler than topics (studying the layout carefully is genuinely good for your own software skills). One caveat worth stating: service calls can time out and fail, so avoid doing long or heavy work inside service callbacks!