Preface
This article covers the service mechanism in ROS2, which involves two parts: the server and the client. The service example consists of two source files: ros2/examples/rclcpp/services/minimal_service/main.cpp and ros2/examples/rclcpp/services/minimal_client/main.cpp. This part covers a lot of ground and touches on more C++11 features; corrections are welcome if anything is wrong.
Service Creation
Let's start with the server code. The structure of main is very similar to the subscriber covered earlier:
int main(int argc, char **argv)
{
  rclcpp::init(argc, argv);
  g_node = rclcpp::Node::make_shared("minimal_service");
  auto server = g_node->create_service<AddTwoInts>("add_two_ints", handle_service);
  rclcpp::spin(g_node);
  rclcpp::shutdown();
  g_node = nullptr;
  return 0;
}
First the node g_node is initialized, then create_service is called on g_node to create the service, and finally the node is spun.
The interface of interest here is create_service:
template <typename ServiceT, typename CallbackT>
typename rclcpp::Service<ServiceT>::SharedPtr
create_service(const std::string &service_name,
               CallbackT &&callback,
               const rmw_qos_profile_t &qos_profile = rmw_qos_profile_services_default,
               rclcpp::CallbackGroup::SharedPtr group = nullptr);
create_service is similar to advertiseService in ROS1: the first parameter is the service name and the second is the callback function.
// ROS1
template<class T, class MReq, class MRes>
ServiceServer advertiseService(const std::string& service, bool(T::*srv_func)(MReq &, MRes &), T *obj)
{
  AdvertiseServiceOptions ops;
  ops.template init<MReq, MRes>(service, boost::bind(srv_func, obj, _1, _2));
  return advertiseService(ops);
}
Unlike ROS1, in ROS2 the service callback is passed as a whole through a template parameter, and at first glance there seems to be no constraint on its parameters. The example code provides the following callback:
void handle_service(const std::shared_ptr<rmw_request_id_t> request_header,
                    const std::shared_ptr<AddTwoInts::Request> request,
                    const std::shared_ptr<AddTwoInts::Response> response)
{
  (void)request_header;
  RCLCPP_INFO(g_node->get_logger(), "request: %" PRId64 " + %" PRId64, request->a, request->b);
  response->sum = request->a + request->b;
}
Notice that, compared with earlier callbacks, this one takes an extra first parameter of type rmw_request_id_t, although the body never uses it. Let's modify the callback as follows:
int handle_service(const std::shared_ptr<AddTwoInts::Request> request,
                   const std::shared_ptr<AddTwoInts::Response> response)
{
  //(void)request_header; // the first rmw_request_id_t parameter is removed
  std::cout << "without request_header!!!" << std::endl;
  RCLCPP_INFO(g_node->get_logger(), "request: %" PRId64 " + %" PRId64, request->a, request->b);
  response->sum = request->a + request->b;
  return 1; // the return type is changed to int
}
The code compiles successfully. After starting minimal_service and minimal_client, the server prints:
wenhui@ubuntu:~$ ros2 run examples_rclcpp_minimal_service service_main
test!>>>>>>>>>>>options.use_default_callbacks
rcl_subscription_event_init, event_type: 2
UnsupportedEventTypeException catch
[INFO] [1612233941.822528620] [minimal_service]: without request_header!!!
[INFO] [1612233941.822655897] [minimal_service]: request: 41 + 1
A preliminary conclusion: the callback may return various types, and the rmw_request_id_t parameter is optional.
Tracing create_service further into ros2/rclcpp/rclcpp/include/rclcpp/create_service.hpp, let's analyze the service creation process:
template <typename ServiceT, typename CallbackT>
typename rclcpp::Service<ServiceT>::SharedPtr
create_service(std::shared_ptr<node_interfaces::NodeBaseInterface> node_base,
               std::shared_ptr<node_interfaces::NodeServicesInterface> node_services,
               const std::string &service_name,
               CallbackT &&callback,
               const rmw_qos_profile_t &qos_profile,
               rclcpp::CallbackGroup::SharedPtr group)
{
  rclcpp::AnyServiceCallback<ServiceT> any_service_callback;
  any_service_callback.set(std::forward<CallbackT>(callback));
  rcl_service_options_t service_options = rcl_service_get_default_options();
  service_options.qos = qos_profile;
  auto serv = Service<ServiceT>::make_shared(node_base->get_shared_rcl_node_handle(),
                                             service_name,
                                             any_service_callback,
                                             service_options);
  auto serv_base_ptr = std::dynamic_pointer_cast<ServiceBase>(serv);
  node_services->add_service(serv_base_ptr, group);
  return serv;
}
AnyServiceCallback is a wrapper class for the service callback. It defines two callback types, one without the header and one with it; the header-less one is what we just compiled and tested:
using SharedPtrCallback = std::function<void(const std::shared_ptr<typename ServiceT::Request>,
                                             std::shared_ptr<typename ServiceT::Response>)>;
using SharedPtrWithRequestHeaderCallback = std::function<void(const std::shared_ptr<rmw_request_id_t>,
                                                              const std::shared_ptr<typename ServiceT::Request>,
                                                              std::shared_ptr<typename ServiceT::Response>)>;
create_service calls AnyServiceCallback's set interface to install the callback. There should be two set overloads, one per callback type; the question is how the right one gets chosen:
template <typename CallbackT,
          typename std::enable_if<
            rclcpp::function_traits::same_arguments<CallbackT, SharedPtrCallback>::value>::type * = nullptr>
void set(CallbackT callback)
{
  shared_ptr_callback_ = callback;
}
template <typename CallbackT,
          typename std::enable_if<
            rclcpp::function_traits::same_arguments<CallbackT, SharedPtrWithRequestHeaderCallback>::value>::type * = nullptr>
void set(CallbackT callback)
{
  shared_ptr_with_request_header_callback_ = callback;
}
The two implementations of set use std::enable_if for template selection. The condition checks whether CallbackT matches one of the two predefined types, SharedPtrCallback or SharedPtrWithRequestHeaderCallback. If it matches neither, compilation fails with an error like:
~/ros2_foxy/src/ros2/examples/rclcpp/services/minimal_service/main.cpp:37:82: required from here
~/ros2_foxy/install/rclcpp/include/rclcpp/create_service.hpp:42:5: error: no matching function for call to ‘rclcpp::AnyServiceCallback<example_interfaces::srv::AddTwoInts>::set(int (&)(std::shared_ptr<example_interfaces::srv::AddTwoInts>, std::shared_ptr<example_interfaces::srv::AddTwoInts_Response_<std::allocator<void> > >))’
42 | any_service_callback.set(std::forward<CallbackT>(callback));
| ^~~~~~~~~~~~~~~~~~~~
This enable_if spelling is quite a pain to read, a long-winded mouthful; I may write a separate summary of enable_if usage later when time permits. With C++20 this could be rewritten using concepts and requires, which reads much cleaner:
template<typename T>
concept callback_with_header = std::is_convertible_v<T, SharedPtrWithRequestHeaderCallback>;
template<typename T>
concept callback_without_header = std::is_convertible_v<T, SharedPtrCallback>;
...
template <typename CallbackT>
requires callback_without_header<CallbackT>
void set(CallbackT callback)
{
  shared_ptr_callback_ = callback;
}
template <typename CallbackT>
requires callback_with_header<CallbackT>
void set(CallbackT callback)
{
  shared_ptr_with_request_header_callback_ = callback;
}
After set installs the callback object, a Service object is constructed:
Service(std::shared_ptr<rcl_node_t> node_handle,
        const std::string &service_name,
        AnyServiceCallback<ServiceT> any_callback,
        rcl_service_options_t &service_options)
  : ServiceBase(node_handle), any_callback_(any_callback)
{
  using rosidl_typesupport_cpp::get_service_type_support_handle;
  auto service_type_support_handle = get_service_type_support_handle<ServiceT>();
  std::weak_ptr<rcl_node_t> weak_node_handle(node_handle_);
  // rcl does the static memory allocation here
  service_handle_ = std::shared_ptr<rcl_service_t>(
    new rcl_service_t, [weak_node_handle](rcl_service_t *service) {
      ...
    });
  *service_handle_.get() = rcl_get_zero_initialized_service();
  rcl_ret_t ret = rcl_service_init(service_handle_.get(),
                                   node_handle.get(),
                                   service_type_support_handle,
                                   service_name.c_str(),
                                   &service_options);
  if (ret != RCL_RET_OK)
  {
    ...
  }
}
As with subscription, an rcl_service_t is allocated first, then rcl_service_init(...) creates the rmw_service_t handle at the rmw layer.
Once created, the Executor is notified so it can listen on the service.
Service Callback
We saw earlier that the Executor keeps executing ready AnyExecutable items in its spin loop. Once a service has been created, whenever a request arrives it runs execute_service(any_exec.service):
void Executor::execute_service(rclcpp::ServiceBase::SharedPtr service)
{
  std::cout << "execute_service ..." << std::endl;
  auto request_header = service->create_request_header(); // allocate the header
  std::shared_ptr<void> request = service->create_request(); // allocate the request
  take_and_do_error_handling(
    "taking a service server request from service",
    service->get_service_name(),
    [&]() { return service->take_type_erased_request(request.get(), *request_header); }, // take the request data
    [&]() { service->handle_request(request_header, request); }); // handle the request
}
take_and_do_error_handling first takes, then handles, with the usual error handling around it:
static void take_and_do_error_handling(const char *action_description,
                                       const char *topic_or_service_name,
                                       std::function<bool()> take_action,
                                       std::function<void()> handle_action)
{
  bool taken = false;
  try
  {
    taken = take_action();
  }
  catch (const rclcpp::exceptions::RCLError &rcl_error)
  {
    ...
  }
  if (taken)
  {
    handle_action();
  }
  else
  {
    ...
  }
}
The take step first goes through the rcl layer: rcl_take_request_with_info internally calls rmw_take_request to fetch the data from the middleware. The code is similar to the subscription case, so it is omitted here.
The handle part is also similar to subscription: the request object is first cast to its concrete type and then dispatched to the AnyServiceCallback callback. The difference is that, since response data must now be returned, memory for the response is allocated first.
void handle_request(std::shared_ptr<rmw_request_id_t> request_header,
                    std::shared_ptr<void> request) override
{
  auto typed_request = std::static_pointer_cast<typename ServiceT::Request>(request); // cast
  auto response = std::make_shared<typename ServiceT::Response>(); // allocate the response
  any_callback_.dispatch(request_header, typed_request, response);
  send_response(*request_header, *response);
}
send_response then passes the response down through rcl to rmw for the middleware to send.
Service Client
The client side does not require the user to set a callback. The main function is:
int main(int argc, char *argv[])
{
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("minimal_client");
  auto client = node->create_client<AddTwoInts>("add_two_ints");
  while (!client->wait_for_service(std::chrono::seconds(1)))
  {
    if (!rclcpp::ok())
    {
      RCLCPP_ERROR(node->get_logger(), "client interrupted while waiting for service to appear.");
      return 1;
    }
    RCLCPP_INFO(node->get_logger(), "waiting for service to appear...");
  }
  auto request = std::make_shared<AddTwoInts::Request>();
  request->a = 41;
  request->b = 1;
  auto result_future = client->async_send_request(request);
  if (rclcpp::spin_until_future_complete(node, result_future) !=
      rclcpp::FutureReturnCode::SUCCESS)
  {
    RCLCPP_ERROR(node->get_logger(), "service call failed :(");
    return 1;
  }
  auto result = result_future.get();
  RCLCPP_INFO(node->get_logger(), "result of %" PRId64 " + %" PRId64 " = %" PRId64, request->a, request->b, result->sum);
  rclcpp::shutdown();
  return 0;
}
The points of interest are client creation (create_client), waiting for the service (wait_for_service), sending the request (async_send_request), and waiting for the response (spin_until_future_complete).
create_client again creates handles at both the rcl and rmw layers, so it is not repeated here.
wait_for_service takes two steps. First, the ROS graph is queried at the rcl layer:
rcl_ret_t rcl_service_server_is_available(const rcl_node_t *node,
                                          const rcl_client_t *client,
                                          bool *is_available);
Then the middleware's service-discovery interface is called through the rmw handle:
rmw_ret_t rmw_service_server_is_available(const rmw_node_t *node,
                                          const rmw_client_t *client,
                                          bool *is_available);
Once the service is confirmed to exist, async_send_request issues the asynchronous request:
SharedFuture async_send_request(SharedRequest request)
{
  return async_send_request(request, [](SharedFuture) {});
}
This binds an empty callback taking a std::shared_future; the inner overload does the real work:
template <
  typename CallbackT,
  typename std::enable_if<
    rclcpp::function_traits::same_arguments<
      CallbackT,
      CallbackType>::value>::type * = nullptr>
SharedFuture async_send_request(SharedRequest request, CallbackT &&cb)
{
  std::lock_guard<std::mutex> lock(pending_requests_mutex_);
  int64_t sequence_number;
  rcl_ret_t ret = rcl_send_request(get_client_handle().get(), request.get(), &sequence_number);
  if (RCL_RET_OK != ret)
  {
    rclcpp::exceptions::throw_from_rcl_error(ret, "failed to send request");
  }
  SharedPromise call_promise = std::make_shared<Promise>();
  SharedFuture f(call_promise->get_future());
  pending_requests_[sequence_number] =
    std::make_tuple(call_promise, std::forward<CallbackType>(cb), f);
  return f;
}
Here rcl_send_request passes the request through the rcl layer to rmw. After the middleware sends the data out, it returns a sequence_number so that rcl can identify the request. The client stores this sequence_number, together with the promised future result, in pending_requests_, a std::map member:
std::map<int64_t, std::tuple<SharedPromise, CallbackType, SharedFuture>> pending_requests_;
The map key is the sequence_number returned by rmw; the value is a std::tuple holding a std::promise, a std::function, and the std::future tied to that promise.
The next step is waiting for this future to "become reality". The example uses spin_until_future_complete to wait for one specific future:
template<typename FutureT, typename TimeRepT = int64_t, typename TimeT = std::milli>
rclcpp::FutureReturnCode
spin_until_future_complete(
  rclcpp::node_interfaces::NodeBaseInterface::SharedPtr node_ptr,
  const std::shared_future<FutureT> & future,
  std::chrono::duration<TimeRepT, TimeT> timeout = std::chrono::duration<TimeRepT, TimeT>(-1))
{
  rclcpp::executors::SingleThreadedExecutor executor;
  return executors::spin_node_until_future_complete<FutureT>(executor, node_ptr, future, timeout);
}
spin_until_future_complete resembles the spin covered in the earlier topic article: a temporary executor is created, the node is added, and the executor's interface is then invoked:
template<typename ResponseT, typename TimeRepT = int64_t, typename TimeT = std::milli>
rclcpp::FutureReturnCode
spin_node_until_future_complete(
  rclcpp::Executor & executor,
  rclcpp::node_interfaces::NodeBaseInterface::SharedPtr node_ptr,
  const std::shared_future<ResponseT> & future,
  std::chrono::duration<TimeRepT, TimeT> timeout = std::chrono::duration<TimeRepT, TimeT>(-1))
{
  executor.add_node(node_ptr);
  auto retcode = executor.spin_until_future_complete(future, timeout);
  executor.remove_node(node_ptr);
  return retcode;
}
The real future handling comes next:
template<typename ResponseT, typename TimeRepT = int64_t, typename TimeT = std::milli>
FutureReturnCode spin_until_future_complete(
  const std::shared_future<ResponseT> & future,
  std::chrono::duration<TimeRepT, TimeT> timeout = std::chrono::duration<TimeRepT, TimeT>(-1))
{
  // Check the future once up front; if it is already complete, the while loop is skipped.
  std::future_status status = future.wait_for(std::chrono::seconds(0));
  if (status == std::future_status::ready) {
    return FutureReturnCode::SUCCESS;
  }
  ...
  while (rclcpp::ok(this->context_) && spinning.load()) {
    // Do one item of work.
    spin_once_impl(timeout_left); // runs execute_client
    // Check the future.
    status = future.wait_for(std::chrono::seconds(0));
    if (status == std::future_status::ready) {
      return FutureReturnCode::SUCCESS;
    }
    ...
    auto now = std::chrono::steady_clock::now();
    if (now >= end_time) {
      return FutureReturnCode::TIMEOUT;
    }
    ...
  }
  return FutureReturnCode::INTERRUPTED;
}
Within the while loop, spin_once_impl runs execute_client, which again consists of two steps, take and handle:
void Executor::execute_client(
  rclcpp::ClientBase::SharedPtr client)
{
  auto request_header = client->create_request_header();
  std::shared_ptr<void> response = client->create_response();
  take_and_do_error_handling(
    "taking a service client response from service",
    client->get_service_name(),
    [&]() { return client->take_type_erased_response(response.get(), *request_header); },
    [&]() { client->handle_response(request_header, response); });
}
The take part is the familiar rcl->rmw path; let's focus on the handle part:
void handle_response(std::shared_ptr<rmw_request_id_t> request_header,
                     std::shared_ptr<void> response) override
{
  std::unique_lock<std::mutex> lock(pending_requests_mutex_);
  auto typed_response = std::static_pointer_cast<typename ServiceT::Response>(response);
  int64_t sequence_number = request_header->sequence_number; // get the sequence_number
  // TODO(esteve) this should throw instead since it is not expected to happen in the first place
  if (this->pending_requests_.count(sequence_number) == 0)
  {
    RCUTILS_LOG_ERROR_NAMED("rclcpp", "Received invalid sequence number. Ignoring...");
    return;
  }
  auto tuple = this->pending_requests_[sequence_number];
  auto call_promise = std::get<0>(tuple); // look up the async tuple in the map by sequence_number
  auto callback = std::get<1>(tuple);
  auto future = std::get<2>(tuple);
  this->pending_requests_.erase(sequence_number);
  // Unlock here to allow the service to be called recursively from one of its callbacks.
  lock.unlock();
  call_promise->set_value(typed_response); // fulfil the promise; the value is a pointer to the Response
  callback(future); // the user callback; in this example it is [](SharedFuture) {}, so it can be ignored
}
Finally, at the end of main, the future's get interface retrieves the returned data.
The asynchronous flow of a client service call can be summarized as:
- Create the client
- Send the request to the middleware, which returns a sequence_number
- The client stores the corresponding promise and future under the sequence_number key to hold the response data
- The executor spins in a single thread and runs execute_client() when a response arrives
- The promise is located in the client via the sequence_number in the response header
- The response is cast to its concrete type and its pointer is assigned to the promise
- The data held by the promise is returned to the node
Summary
On the server side, service adds response sending on top of the subscription machinery; the client uses asynchronous callbacks, maintaining a map keyed by the rmw sequence number to retrieve the server's response data.