Adding RPC services and starting the server

As usual, let's start with the demo code:

int main(int argc, char* argv[]) {
    // Parse command-line flags. We recommend using gflags as well.
    //--port=8888 --idle_timeout_s=30 --echo_attachment=false
    gflags::ParseCommandLineFlags(&argc, &argv, true);

    // Generally you only need one Server.
    brpc::Server server;

    // Instance of your service.
    example::EchoServiceImpl echo_service_impl;

    // Add the service into the server. Notice the second parameter: because
    // the service is created on the stack, we don't want the server to
    // delete it. Use brpc::SERVER_OWNS_SERVICE otherwise.
    if (server.AddService(&echo_service_impl, 
                          brpc::SERVER_DOESNT_OWN_SERVICE) != 0) {
        LOG(ERROR) << "Fail to add service";
        return -1;
    }

    // Start the server.
    brpc::ServerOptions options;
    options.idle_timeout_sec = FLAGS_idle_timeout_s;
    if (server.Start(FLAGS_port, &options) != 0) {
        LOG(ERROR) << "Fail to start EchoServer";
        return -1;
    }

    // Wait until Ctrl-C is pressed, then Stop() and Join() the server.
    server.RunUntilAskedToQuit();
    return 0;
}

Building an RPC server boils down to: create a Server object, set its options, add your Services, and start it. A Server may contain multiple Services, and each Service may contain multiple methods; any given request targets one specific method.

Server::Server(ProfilerLinker)
    : _session_local_data_pool(NULL)
    , _status(UNINITIALIZED)
    , _builtin_service_count(0)
    , _virtual_service_count(0)
    , _failed_to_set_max_concurrency_of_method(false)
    , _am(NULL)
    , _internal_am(NULL)
    , _first_service(NULL)
    , _tab_info_list(NULL)
    , _global_restful_map(NULL)
    , _last_start_time(0)
    , _derivative_thread(INVALID_BTHREAD)
    , _keytable_pool(NULL)
    , _concurrency(0) {
    BAIDU_CASSERT(offsetof(Server, _concurrency) % 64 == 0, 
                  Server_concurrency_must_be_aligned_by_cacheline);
}

There is not much to note in the Server constructor, apart from the static assertion that `_concurrency` is cacheline-aligned (64 bytes, to avoid false sharing on this frequently updated counter). The ProfilerLinker argument is a small linker trick:

struct ProfilerLinker {
    // [ Must be inlined ]
    // This function is included by user's compilation unit to force
    // linking of ProfilerStart()/ProfilerStop()
    // etc when corresponding macros are defined.
    inline ProfilerLinker() {
        
#if defined(BRPC_ENABLE_CPU_PROFILER) || defined(BAIDU_RPC_ENABLE_CPU_PROFILER)
        cpu_profiler_enabled = true;
        // compiler has no way to tell if PROFILER_LINKER_DUMMY is 0 or not,
        // so it has to link the function inside the branch.
        if (PROFILER_LINKER_DUMMY != 0/*must be false*/) {
            ProfilerStart("this_function_should_never_run");
        }
#endif
    }
};
1.Adding a Service (the business code) to the Server

Server::AddService has three overloads:

int Server::AddService(google::protobuf::Service* service,
                       ServiceOwnership ownership) {
    ServiceOptions options;
    // Set the ownership field of options.
    options.ownership = ownership;
    return AddServiceInternal(service, false, options);
}

int Server::AddService(google::protobuf::Service* service,
                       ServiceOwnership ownership,
                       const butil::StringPiece& restful_mappings) {
    ServiceOptions options;
    // Set the ownership field of options.
    options.ownership = ownership;
    // Convert restful_mappings to a string and store it in options.
    options.restful_mappings = restful_mappings.as_string();
    return AddServiceInternal(service, false, options);
}

int Server::AddService(google::protobuf::Service* service,
                       const ServiceOptions& options) {
    return AddServiceInternal(service, false, options);
}

All three overloads end up calling AddServiceInternal(service, false, options); they differ only in how options is filled in.
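For instance, the third parameter of the second overload maps an HTTP path onto a method. A sketch based on the "PATH => METHOD" mapping syntax described in brpc's docs (the service names are placeholders from the echo example):

```cpp
// Sketch: expose EchoService.Echo at /v1/echo in addition to its default
// path. "/v1/echo => Echo" follows brpc's restful-mapping syntax;
// EchoServiceImpl stands for your own implementation.
brpc::Server server;
example::EchoServiceImpl echo_service_impl;
if (server.AddService(&echo_service_impl,
                      brpc::SERVER_DOESNT_OWN_SERVICE,
                      "/v1/echo => Echo") != 0) {
    LOG(ERROR) << "Fail to add service with restful mapping";
}
```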

int Server::AddServiceInternal(google::protobuf::Service* service,
                               bool is_builtin_service,
                               const ServiceOptions& svc_opt)

It takes three parameters: a protobuf Service, a flag indicating whether this is a builtin service (brpc ships builtin services such as monitoring), and the options for the service being added.

google::protobuf::Service is a common base class. A Service defined in a .proto file is compiled into a class derived from this base, and the user subclasses that generated class to write the actual business code (implementing the virtual function for each method); the framework handles everything uniformly through the google::protobuf::Service base. AddServiceInternal takes the methods out of the Service and maps them to their handlers. It first calls InitializeOnce() to initialize the server (only once: pthread_once is used to invoke GlobalInitializeOrDieImpl, guaranteeing a single execution).
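The user-side subclassing looks roughly like this (a sketch modeled on brpc's echo example; `example::EchoService` is the class protoc generates from the .proto file):

```cpp
// The framework only ever sees this class through its
// google::protobuf::Service base.
class EchoServiceImpl : public example::EchoService {
public:
    void Echo(google::protobuf::RpcController* cntl_base,
              const example::EchoRequest* request,
              example::EchoResponse* response,
              google::protobuf::Closure* done) override {
        // Make sure done->Run() is called however we exit this method.
        brpc::ClosureGuard done_guard(done);
        response->set_message(request->message());
    }
};
```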

    const google::protobuf::ServiceDescriptor* sd = service->GetDescriptor(); // returns _sd
    if (sd->method_count() == 0) {
        LOG(ERROR) << "service=" << sd->full_name()
                   << " does not have any method.";
        return -1;
    }

    if (InitializeOnce() != 0) {
        LOG(ERROR) << "Fail to initialize Server[" << version() << ']';
        return -1;
    }

InitializeOnce is the Server's one-time initialization function:

pthread_once(&register_extensions_once,
                     GlobalInitializeOrDieImpl);
_fullname_service_map.init(INITIAL_SERVICE_CAP) // initialize the map from service name to service pointer
_service_map.init(INITIAL_SERVICE_CAP)
_method_map.init(INITIAL_SERVICE_CAP * 2)
_ssl_ctx_map.init(INITIAL_CERT_MAP)

brpc supports multiple protocols on the same port, using a protocol-detection mechanism that keeps parsing reasonably efficient. Many protocols are built in, and users can add their own protocol support fairly easily. The most important thing GlobalInitializeOrDieImpl does is register support for all these protocols, including HTTP. In short, for each protocol it specifies the message-parsing function, the request-processing function used on the server side, and the response-processing function used on the client side.
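The registration can be pictured roughly like this (a hedged sketch: the Protocol fields and policy function names are from my reading of brpc/protocol.h and brpc/policy/, and may differ between versions):

```cpp
// Rough shape of what GlobalInitializeOrDieImpl does per protocol: fill a
// Protocol struct with the parse / server-side / client-side callbacks,
// then register it under its ProtocolType.
brpc::Protocol http_protocol;
http_protocol.parse = brpc::policy::ParseHttpMessage;               // wire -> message
http_protocol.process_request = brpc::policy::ProcessHttpRequest;   // server side
http_protocol.process_response = brpc::policy::ProcessHttpResponse; // client side
http_protocol.name = "http";
if (brpc::RegisterProtocol(brpc::PROTOCOL_HTTP, http_protocol) != 0) {
    LOG(ERROR) << "Fail to register http protocol";
}
```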

const bool is_idl_support = sd->file()->options().GetExtension(idl_support);

    Tabbed* tabbed = dynamic_cast<Tabbed*>(service);
    // Iterate over all methods in the service and build the method-to-service mapping.
    for (int i = 0; i < sd->method_count(); ++i) {
        const google::protobuf::MethodDescriptor* md = sd->method(i);
        MethodProperty mp;
        mp.is_builtin_service = is_builtin_service;
        mp.own_method_status = true;
        mp.params.is_tabbed = !!tabbed;
        mp.params.allow_http_body_to_pb = svc_opt.allow_http_body_to_pb;
        mp.params.pb_bytes_to_base64 = svc_opt.pb_bytes_to_base64;
        mp.service = service;
        mp.method = md;
        mp.status = new MethodStatus;
        _method_map[md->full_name()] = mp;
        if (is_idl_support && sd->name() != sd->full_name()/*has ns*/) {
            MethodProperty mp2 = mp;
            mp2.own_method_status = false;
            // have to map service_name + method_name as well because ubrpc
            // does not send the namespace before service_name.
            std::string full_name_wo_ns;
            full_name_wo_ns.reserve(sd->name().size() + 1 + md->name().size());
            full_name_wo_ns.append(sd->name());
            full_name_wo_ns.push_back('.');
            full_name_wo_ns.append(md->name());
            if (_method_map.seek(full_name_wo_ns) == NULL) {
                _method_map[full_name_wo_ns] = mp2;
            } else {
                LOG(ERROR) << '`' << full_name_wo_ns << "' already exists";
                RemoveMethodsOf(service);
                return -1;
            }
        }
    }

This iterates over the methods in the service descriptor (sd), creates a MethodProperty object for each method, and stores it in the _method_map.

const ServiceProperty ss = {
        is_builtin_service, svc_opt.ownership, service, NULL };
    _fullname_service_map[sd->full_name()] = ss;
    _service_map[sd->name()] = ss;
    if (is_builtin_service) {
        ++_builtin_service_count;
    } else {
        if (_first_service == NULL) {
            _first_service = service;
        }
    }

    butil::StringPiece restful_mappings = svc_opt.restful_mappings;
    restful_mappings.trim_spaces();
    if (!restful_mappings.empty()) {
        // (elided here) parse `restful_mappings' into a vector `mappings'
        // of RestfulMapping entries; on parse failure the service is
        // removed and -1 returned.
        for (size_t i = 0; i < mappings.size(); ++i) {
            const std::string full_method_name =
                sd->full_name() + "." + mappings[i].method_name;
            MethodProperty* mp = _method_map.seek(full_method_name);
            if (mp == NULL) {
                LOG(ERROR) << "Unknown method=`" << full_method_name << '\'';
                RemoveService(service);
                return -1;
            }
            
            const std::string& svc_name = mappings[i].path.service_name;
            if (svc_name.empty()) {
                if (_global_restful_map == NULL) {
                    _global_restful_map = new RestfulMap("");
                }
                MethodProperty::OpaqueParams params;
                params.is_tabbed = !!tabbed;
                params.allow_http_body_to_pb = svc_opt.allow_http_body_to_pb;
                params.pb_bytes_to_base64 = svc_opt.pb_bytes_to_base64;
                if (!_global_restful_map->AddMethod(
                        mappings[i].path, service, params,
                        mappings[i].method_name, mp->status)) {
                    LOG(ERROR) << "Fail to map `" << mappings[i].path
                               << "' to `" << full_method_name << '\'';
                    RemoveService(service);
                    return -1;
                }
                if (mp->http_url == NULL) {
                    mp->http_url = new std::string(mappings[i].path.to_string());
                } else {
                    if (!mp->http_url->empty()) {
                        mp->http_url->append(" @");
                    }
                    mp->http_url->append(mappings[i].path.to_string());
                }
                continue;
            }
            ServiceProperty* sp = _fullname_service_map.seek(svc_name);
            ServiceProperty* sp2 = _service_map.seek(svc_name);
            if (((!!sp) != (!!sp2)) ||
                (sp != NULL && sp->service != sp2->service)) {
                LOG(ERROR) << "Impossible: _fullname_service and _service_map are"
                        " inconsistent before inserting " << svc_name;
                RemoveService(service);
                return -1;
            }
            RestfulMap* m = NULL;
            if (sp == NULL) {
                m = new RestfulMap(mappings[i].path.service_name);
            } else {
                m = sp->restful_map;
            }
            MethodProperty::OpaqueParams params;
            params.is_tabbed = !!tabbed;
            params.allow_http_body_to_pb = svc_opt.allow_http_body_to_pb;
            params.pb_bytes_to_base64 = svc_opt.pb_bytes_to_base64;
            if (!m->AddMethod(mappings[i].path, service, params,
                              mappings[i].method_name, mp->status)) {
                LOG(ERROR) << "Fail to map `" << mappings[i].path << "' to `"
                           << sd->full_name() << '.' << mappings[i].method_name
                           << '\'';
                if (sp == NULL) {
                    delete m;
                }
                RemoveService(service);
                return -1;
            }
            if (mp->http_url == NULL) {
                mp->http_url = new std::string(mappings[i].path.to_string());
            } else {
                if (!mp->http_url->empty()) {
                    mp->http_url->append(" @");
                }
                mp->http_url->append(mappings[i].path.to_string());
            }
            if (sp == NULL) {
                ServiceProperty ss =
                    { false, SERVER_DOESNT_OWN_SERVICE, NULL, m };
                _fullname_service_map[svc_name] = ss;
                _service_map[svc_name] = ss;
                ++_virtual_service_count;
            }
        }
    }
  1. Initialize the method map:
    • Iterate over all methods in the service and create a MethodProperty for each, holding everything related to that method.
    • Add these MethodProperty objects to _method_map for later fast lookup by method name.
    • If the service has a namespace and IDL (Interface Definition Language) support is enabled, also create a MethodProperty keyed without the namespace and add it to the map.
  2. Add the service to the maps:
    • Create a ServiceProperty holding the service's information and add it to both _fullname_service_map and _service_map.
    • Update _builtin_service_count according to is_builtin_service.
    • If this is the first service added, update the _first_service pointer.
  3. Handle RESTful mappings:
    • If restful_mappings in svc_opt is non-empty, parse the mappings and associate them with the corresponding methods.
    • This allows the service to be invoked via RESTful-style URLs.
    • A RestfulMap stores the mappings so the right method can be found quickly when a request arrives.
  4. Handle tabbed services:
    • If the service is Tabbed (it exposes tabs on the builtin status pages), collect its tab info into _tab_info_list.
    • Validate the newly added tab info.

2.Setting server options

brpc offers a rich set of options, including maximum concurrency, whether to enable SSL, the idle-connection timeout, and so on; see the official documentation for the full list. Options are specified either through a ServerOptions object or by calling the set_xxx accessors directly.
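A few commonly used options, as a sketch (field names are from ServerOptions in brpc's server.h; values are made up):

```cpp
brpc::ServerOptions options;
options.idle_timeout_sec = 30;   // close connections idle for 30s; -1 disables
options.num_threads = 8;         // worker bthread concurrency
options.max_concurrency = 1000;  // server-level max concurrent requests
// A per-method limit can also be set after AddService, e.g.:
//   server.MaxConcurrencyOf("example.EchoService.Echo") = 100;
```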

3.Starting the server

The server is started by calling Start. Like AddService, Start has several overloads, and all of them end up calling StartInternal:

int Server::Start(const char* ip_str, PortRange port_range,
                  const ServerOptions *opt) {
    butil::ip_t ip;
    // Convert ip_str to an IP address stored in `ip'.
    if (butil::str2ip(ip_str, &ip) != 0 &&
        // If that fails, try resolving ip_str as a hostname.
        butil::hostname2ip(ip_str, &ip) != 0) {
        // Both conversions failed: log an error and return -1.
        LOG(ERROR) << "Invalid address=`" << ip_str << '\'';
        return -1;
    }
    // Call StartInternal with the converted IP, the port range and the options.
    return StartInternal(ip, port_range, opt);
}
int Server::Start(const butil::EndPoint& endpoint, const ServerOptions* opt) {
    // A single port: the range's min and max are both endpoint.port.
    return StartInternal(
        endpoint.ip, 
        PortRange(endpoint.port, endpoint.port), 
        opt);
}

StartInternal begins with preparation work driven by the options: SSL setup, whether to create TLS (thread-local storage) data, pre-starting the needed bthreads, and so on. It then starts listening on the given IP and port range (trying ports in the range until one succeeds); a Server listens on only one port.
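Starting on the first free port in a range looks like this (a sketch; the range is made up):

```cpp
brpc::ServerOptions options;
// Try ports 8000..8010 in order; the first port that binds wins.
if (server.Start("0.0.0.0", brpc::PortRange(8000, 8010), &options) != 0) {
    LOG(ERROR) << "Fail to start server in port range [8000, 8010]";
}
```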

Configuration parsing and initialization
int Server::StartInternal(const butil::ip_t& ip,
                          const PortRange& port_range,
                          const ServerOptions *opt) {
    std::unique_ptr<Server, RevertServerStatus> revert_server(this);
    if (_failed_to_set_max_concurrency_of_method) {
        _failed_to_set_max_concurrency_of_method = false;
        LOG(ERROR) << "previous call to MaxConcurrencyOf() was failed, "
            "fix it before starting server";
        return -1;
    }
    if (InitializeOnce() != 0) {
        LOG(ERROR) << "Fail to initialize Server[" << version() << ']';
        return -1;
    }
    const Status st = status();
    if (st != READY) {
        if (st == RUNNING) {
            LOG(ERROR) << "Server[" << version() << "] is already running on "
                       << _listen_addr; 
        } else {
            LOG(ERROR) << "Can't start Server[" << version()
                       << "] which is " << status_str(status());
        }
        return -1;
    }
    if (opt) {
        _options = *opt;
    } else {
        // Always reset to default options explicitly since `_options'
        // may be the options for the last run or even bad options
        _options = ServerOptions();
    }

    if (!_options.h2_settings.IsValid(true/*log_error*/)) {
        LOG(ERROR) << "Invalid h2_settings";
        return -1;
    }

    if (_options.http_master_service) {
        // Check requirements for http_master_service:
        //  has "default_method" & request/response have no fields
        const google::protobuf::ServiceDescriptor* sd =
            _options.http_master_service->GetDescriptor();
        const google::protobuf::MethodDescriptor* md =
            sd->FindMethodByName("default_method");
        if (md == NULL) {
            LOG(ERROR) << "http_master_service must have a method named `default_method'";
            return -1;
        }
        if (md->input_type()->field_count() != 0) {
            LOG(ERROR) << "The request type of http_master_service must have "
                "no fields, actually " << md->input_type()->field_count();
            return -1;
        }
        if (md->output_type()->field_count() != 0) {
            LOG(ERROR) << "The response type of http_master_service must have "
                "no fields, actually " << md->output_type()->field_count();
            return -1;
        }
    }

    // CAUTION:
    //   Following code may run multiple times if this server is started and
    //   stopped more than once. Reuse or delete previous resources!

    if (_options.session_local_data_factory) {
        if (_session_local_data_pool == NULL) {
            _session_local_data_pool =
                new (std::nothrow) SimpleDataPool(_options.session_local_data_factory);
            if (NULL == _session_local_data_pool) {
                LOG(ERROR) << "Fail to new SimpleDataPool";
                return -1;
            }
        } else {
            _session_local_data_pool->Reset(_options.session_local_data_factory);
        }
        _session_local_data_pool->Reserve(_options.reserved_session_local_data);
    }
    _keytable_pool = new bthread_keytable_pool_t;
    if (bthread_keytable_pool_init(_keytable_pool) != 0) {
        LOG(ERROR) << "Fail to init _keytable_pool";
        delete _keytable_pool;
        _keytable_pool = NULL;
        return -1;
    }
    if (_options.thread_local_data_factory) {
        _tl_options.thread_local_data_factory = _options.thread_local_data_factory;
        if (bthread_key_create2(&_tl_options.tls_key, DestroyServerTLS,
                                _options.thread_local_data_factory) != 0) {
            LOG(ERROR) << "Fail to create thread-local key";
            return -1;
        }
        if (_options.reserved_thread_local_data) {
            bthread_keytable_pool_reserve(_keytable_pool,
                                          _options.reserved_thread_local_data,
                                          _tl_options.tls_key,
                                          CreateServerTLS,
                                          _options.thread_local_data_factory);
        }
    } else {
        _tl_options = ThreadLocalOptions();
    }
    if (_options.bthread_init_count != 0 &&
        _options.bthread_init_fn != NULL) {
        // Create some special bthreads to call the init functions. The
        // bthreads will not quit until all bthreads finish the init function.
        BthreadInitArgs* init_args
            = new BthreadInitArgs[_options.bthread_init_count];
        size_t ncreated = 0;
        for (size_t i = 0; i < _options.bthread_init_count; ++i, ++ncreated) {
            init_args[i].bthread_init_fn = _options.bthread_init_fn;
            init_args[i].bthread_init_args = _options.bthread_init_args;
            init_args[i].result = false;
            init_args[i].done = false;
            init_args[i].stop = false;
            bthread_attr_t tmp = BTHREAD_ATTR_NORMAL;
            tmp.keytable_pool = _keytable_pool;
            if (bthread_start_background(
                    &init_args[i].th, &tmp, BthreadInitEntry, &init_args[i]) != 0) {
                break;
            }
        }
        // Wait until all created bthreads finish the init function.
        for (size_t i = 0; i < ncreated; ++i) {
            while (!init_args[i].done) {
                bthread_usleep(1000);
            }
        }
        // Stop and join created bthreads.
        for (size_t i = 0; i < ncreated; ++i) {
            init_args[i].stop = true;
        }
        for (size_t i = 0; i < ncreated; ++i) {
            bthread_join(init_args[i].th, NULL);
        }
        size_t num_failed_result = 0;
        for (size_t i = 0; i < ncreated; ++i) {
            if (!init_args[i].result) {
                ++num_failed_result;
            }
        }
        delete [] init_args;
        if (ncreated != _options.bthread_init_count) {
            LOG(ERROR) << "Fail to create "
                       << _options.bthread_init_count - ncreated << " bthreads";
            return -1;
        }
        if (num_failed_result != 0) {
            LOG(ERROR) << num_failed_result << " bthread_init_fn failed";
            return -1;
        }
    }

This function is admittedly quite long; let's go through it piece by piece.

Check whether the Server member _failed_to_set_max_concurrency_of_method is set (a previous attempt to set a method's max concurrency failed).

Run InitializeOnce. The GlobalInitializeOrDieImpl part already executed when adding a Service is guarded by pthread_once and will not run again, but the other part, such as the _service_map.init calls, executes once more.

Check the server's status; if it is not READY, log an error and return -1.

Check whether the opt parameter (a ServerOptions object) was passed; if not, default-construct the member options.

Check that the configured HTTP/2 settings are valid: _options.h2_settings.IsValid.

_options.http_master_service is for HTTP proxying; NULL by default.

_options.session_local_data_factory drives initialization and configuration of the session-local data pool (_session_local_data_pool).

Prepare the bthreads: after filling in the init args, bthread_start_background creates the bthread tasks; while an init task has not finished (!init_args[i].done), the waiter usleeps to yield the CPU.

FreeSSLContexts();
    if (_options.has_ssl_options()) {
        CertInfo& default_cert = _options.mutable_ssl_options()->default_cert;
        if (default_cert.certificate.empty()) {
            LOG(ERROR) << "default_cert is empty";
            return -1;
        }
        if (AddCertificate(default_cert) != 0) {
            return -1;
        }
        _default_ssl_ctx = _ssl_ctx_map.begin()->second.ctx;

        const std::vector<CertInfo>& certs = _options.mutable_ssl_options()->certs;
        for (size_t i = 0; i < certs.size(); ++i) {
            if (AddCertificate(certs[i]) != 0) {
                return -1;
            }
        }
    }
    _concurrency = 0;
    
    if (_options.has_builtin_services &&
        _builtin_service_count <= 0 &&
        AddBuiltinServices() != 0) {
        LOG(ERROR) << "Fail to add builtin services";
        return -1;
    }
    // If the server is started/stopped multiple times and one of the runs set
    // has_builtin_services to true, builtin services stay enabled for later
    // restarts. Check this case and report it to the user.
    if (!_options.has_builtin_services && _builtin_service_count > 0) {
        LOG(ERROR) << "A server started/stopped for multiple times must be "
            "consistent on ServerOptions.has_builtin_services";
        return -1;
    }

SSL-related configuration initialization.

AddBuiltinServices adds the builtin services.
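SSL is enabled through ServerOptions before Start; a sketch (the paths are placeholders, and the field names are from CertInfo in brpc's server.h):

```cpp
brpc::ServerOptions options;
// default_cert is used when no SNI-matched certificate is found.
brpc::CertInfo& cert = options.mutable_ssl_options()->default_cert;
cert.certificate = "server.crt";  // placeholder: file path or PEM content
cert.private_key = "server.key";  // placeholder: file path or PEM content
```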

for (ServiceMap::const_iterator it = _fullname_service_map.begin();
         it != _fullname_service_map.end(); ++it) {
        if (it->second.restful_map) {
            it->second.restful_map->PrepareForFinding();
        }
    }
    if (_global_restful_map) {
        _global_restful_map->PrepareForFinding();
    }

    if (_options.num_threads > 0) {
        if (FLAGS_usercode_in_pthread) {
            _options.num_threads += FLAGS_usercode_backup_threads;
        }
        if (_options.num_threads < BTHREAD_MIN_CONCURRENCY) {
            _options.num_threads = BTHREAD_MIN_CONCURRENCY;
        }
        bthread_setconcurrency(_options.num_threads);
    }

    for (MethodMap::iterator it = _method_map.begin();
        it != _method_map.end(); ++it) {
        if (it->second.is_builtin_service) {
            it->second.status->SetConcurrencyLimiter(NULL);
        } else {
            const AdaptiveMaxConcurrency* amc = &it->second.max_concurrency;
            if (amc->type() == AdaptiveMaxConcurrency::UNLIMITED()) {
                amc = &_options.method_max_concurrency;
            }
            ConcurrencyLimiter* cl = NULL;
            if (!CreateConcurrencyLimiter(*amc, &cl)) {
                LOG(ERROR) << "Fail to create ConcurrencyLimiter for method";
                return -1;
            }
            it->second.status->SetConcurrencyLimiter(cl);
        }
    }

Prepare the RESTful maps for lookup.

Set the number of worker threads (bthread concurrency).

Set a ConcurrencyLimiter for each method.

listen
if (port_range.min_port > port_range.max_port) {
        LOG(ERROR) << "Invalid port_range=[" << port_range.min_port << '-'
                   << port_range.max_port << ']';
        return -1;
    }
    _listen_addr.ip = ip;
    for (int port = port_range.min_port; port <= port_range.max_port; ++port) {
        _listen_addr.port = port;
        butil::fd_guard sockfd(tcp_listen(_listen_addr, FLAGS_reuse_addr));
        if (sockfd < 0) {
            if (port != port_range.max_port) { // not the last port, try next
                continue;
            }
            if (port_range.min_port != port_range.max_port) {
                LOG(ERROR) << "Fail to listen " << ip
                           << ":[" << port_range.min_port << '-'
                           << port_range.max_port << ']';
            } else {
                LOG(ERROR) << "Fail to listen " << _listen_addr;
            }
            return -1;
        }
        if (_listen_addr.port == 0) {
            // port=0 makes kernel dynamically select a port from
            // https://en.wikipedia.org/wiki/Ephemeral_port
            _listen_addr.port = get_port_from_fd(sockfd);
            if (_listen_addr.port <= 0) {
                LOG(ERROR) << "Fail to get port from fd=" << sockfd;
                return -1;
            }
        }
        if (_am == NULL) {
            _am = BuildAcceptor();
            if (NULL == _am) {
                LOG(ERROR) << "Fail to build acceptor";
                return -1;
            }
        }
        // Set `_status' to RUNNING before accepting connections
        // to prevent requests being rejected as ELOGOFF
        _status = RUNNING;
        time(&_last_start_time);
        GenerateVersionIfNeeded();
        g_running_server_count.fetch_add(1, butil::memory_order_relaxed);

        // Pass ownership of `sockfd' to `_am'
        if (_am->StartAccept(sockfd, _options.idle_timeout_sec,
                             _default_ssl_ctx) != 0) {
            LOG(ERROR) << "Fail to start acceptor";
            return -1;
        }
        sockfd.release();
        break; // stop trying
    }
    if (_options.internal_port >= 0 && _options.has_builtin_services) {
        if (_options.internal_port  == _listen_addr.port) {
            LOG(ERROR) << "ServerOptions.internal_port=" << _options.internal_port
                       << " is same with port=" << _listen_addr.port << " to Start()";
            return -1;
        }
        if (_options.internal_port == 0) {
            LOG(ERROR) << "ServerOptions.internal_port cannot be 0, which"
                " allocates a dynamic and probabaly unfiltered port,"
                " against the purpose of \"being internal\".";
            return -1;
        }
        butil::EndPoint internal_point = _listen_addr;
        internal_point.port = _options.internal_port;
        butil::fd_guard sockfd(tcp_listen(internal_point, FLAGS_reuse_addr));
        if (sockfd < 0) {
            LOG(ERROR) << "Fail to listen " << internal_point << " (internal)";
            return -1;
        }
        if (NULL == _internal_am) {
            _internal_am = BuildAcceptor();
            if (NULL == _internal_am) {
                LOG(ERROR) << "Fail to build internal acceptor";
                return -1;
            }
        }
        // Pass ownership of `sockfd' to `_internal_am'
        if (_internal_am->StartAccept(sockfd, _options.idle_timeout_sec,
                                      _default_ssl_ctx) != 0) {
            LOG(ERROR) << "Fail to start internal_acceptor";
            return -1;
        }
        sockfd.release();
    }
    PutPidFileIfNeeded();

    // Launch _derivative_thread.
    CHECK_EQ(INVALID_BTHREAD, _derivative_thread);
    if (bthread_start_background(&_derivative_thread, NULL,
                                 UpdateDerivedVars, this) != 0) {
        LOG(ERROR) << "Fail to create _derivative_thread";
        return -1;
    }

    // Print tips to server launcher.
    int http_port = _listen_addr.port;
    std::ostringstream server_info;
    server_info << "Server[" << version() << "] is serving on port="
                << _listen_addr.port;
    if (_options.internal_port >= 0 && _options.has_builtin_services) {
        http_port = _options.internal_port;
        server_info << " and internal_port=" << _options.internal_port;
    }
    LOG(INFO) << server_info.str() << '.';

    if (_options.has_builtin_services) {
        LOG(INFO) << "Check out http://" << butil::my_hostname() << ':'
                  << http_port << " in web browser.";
    } else {
        LOG(WARNING) << "Builtin services are disabled according to "
            "ServerOptions.has_builtin_services";
    }
    // For trackme reporting
    SetTrackMeAddress(butil::EndPoint(butil::my_ip(), http_port));
    revert_server.release();
    return 0;
}

Set up the listening port; tcp_listen wraps the socket API.

Change the status to RUNNING and increment the g_running_server_count counter.

GenerateVersionIfNeeded builds a version string from the service types the server supports (user services, nshead services, thrift services and rtmp services), with the types separated by '+'; types that are not set do not appear in the string.

PutPidFileIfNeeded creates the directory structure required by the configured PID file path (_options.pid_file).

When _am == NULL, BuildAcceptor is called to build the acceptor:

Acceptor* Server::BuildAcceptor() {
    std::set<std::string> whitelist;
    for (butil::StringSplitter sp(_options.enabled_protocols.c_str(), ' ');
         sp; ++sp) {
        std::string protocol(sp.field(), sp.length());
        whitelist.insert(protocol);
    }
    const bool has_whitelist = !whitelist.empty();
    Acceptor* acceptor = new (std::nothrow) Acceptor(_keytable_pool);
    if (NULL == acceptor) {
        LOG(ERROR) << "Fail to new Acceptor";
        return NULL;
    }
    InputMessageHandler handler;
    std::vector<Protocol> protocols;
    ListProtocols(&protocols);
    for (size_t i = 0; i < protocols.size(); ++i) {
        if (protocols[i].process_request == NULL) {
            // The protocol does not support server-side.
            continue;
        }
        if (has_whitelist &&
            !is_http_protocol(protocols[i].name) &&
            !whitelist.erase(protocols[i].name)) {
            // the protocol is not allowed to serve.
            RPC_VLOG << "Skip protocol=" << protocols[i].name;
            continue;
        }
        // `process_request' is required at server side
        handler.parse = protocols[i].parse;
        handler.process = protocols[i].process_request;
        handler.verify = protocols[i].verify;
        handler.arg = this;
        handler.name = protocols[i].name;
        if (acceptor->AddHandler(handler) != 0) {
            LOG(ERROR) << "Fail to add handler into Acceptor("
                       << acceptor << ')';
            delete acceptor;
            return NULL;
        }
    }
    if (!whitelist.empty()) {
        std::ostringstream err;
        err << "ServerOptions.enabled_protocols has unknown protocols=`";
        for (std::set<std::string>::const_iterator it = whitelist.begin();
             it != whitelist.end(); ++it) {
            err << *it << ' ';
        }
        err << '\'';
        delete acceptor;
        LOG(ERROR) << err.str();
        return NULL;
    }
    return acceptor;
}

BuildAcceptor first obtains all registered protocols via ListProtocols, then iterates over them and adds each via AddHandler. A handler processes incoming messages; note that handler.process is set to protocols[i].process_request, i.e. the function the server side uses to process received requests (the client side would use process_response instead).
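The whitelist comes from ServerOptions.enabled_protocols; for example (a sketch, with made-up protocol names from brpc's built-in set):

```cpp
brpc::ServerOptions options;
// Space-separated whitelist of protocols this server will accept.
// Note the is_http_protocol() exemption in BuildAcceptor above:
// http-style protocols are served regardless of the whitelist.
options.enabled_protocols = "baidu_std hulu_pbrpc";
```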

int Acceptor::StartAccept(int listened_fd, int idle_timeout_sec,
                          const std::shared_ptr<SocketSSLContext>& ssl_ctx) {
    // Check that the listening fd is valid.
    if (listened_fd < 0) {
        LOG(FATAL) << "Invalid listened_fd=" << listened_fd;
        return -1;
    }

    // Lock the mutex.
    BAIDU_SCOPED_LOCK(_map_mutex);

    // Initialize if still uninitialized.
    if (_status == UNINITIALIZED) {
        if (Initialize() != 0) {
            LOG(FATAL) << "Fail to initialize Acceptor";
            return -1;
        }
        _status = READY;
    }

    // If the status is not READY at this point, raise a fatal error.
    if (_status != READY) {
        LOG(FATAL) << "Acceptor hasn't stopped yet: status=" << status();
        return -1;
    }

    // If the idle timeout is positive, start a background bthread
    // that closes idle connections.
    if (idle_timeout_sec > 0) {
        if (bthread_start_background(&_close_idle_tid, NULL,
                                     CloseIdleConnections, this) != 0) {
            LOG(FATAL) << "Fail to start bthread";
            return -1;
        }
    }

    // Record the idle timeout and the SSL context.
    _idle_timeout_sec = idle_timeout_sec;
    _ssl_ctx = ssl_ctx;

    // Creation of _acception_id is inside lock so that OnNewConnections
    // (which may run immediately) should see sane fields set below.
    SocketOptions options;
    options.fd = listened_fd;
    options.user = this;
    options.on_edge_triggered_events = OnNewConnections;
    if (Socket::Create(options, &_acception_id) != 0) {
        // Close-idle-socket thread will be stopped inside destructor
        LOG(FATAL) << "Fail to create _acception_id";
        return -1;
    }

    // Record the listening fd and switch the status to RUNNING.
    _listened_fd = listened_fd;
    _status = RUNNING;
    return 0;
}

brpc handles events with epoll in edge-triggered mode. options.on_edge_triggered_events is the callback invoked when an edge-triggered event arrives; here it is OnNewConnections, which, as the name suggests, handles newly arrived connections.

About the Socket type: it wraps an fd and related resources for safe use in multithreaded environments. The official introduction puts it this way: all fd-related data lives in Socket, one of the most complex structures in the rpc framework; its distinctive trait is referring to a Socket object by a 64-bit SocketId, which makes using the fd across threads convenient.

Socket::Create builds a new socket from options and stores its id into the second argument. Its most important internal action is the epoll add using the function referenced by options.on_edge_triggered_events; in this server-start scenario, that means registering OnNewConnections on the listening fd as the epoll handler for incoming connections. At this point startup is complete, and the server waits for epoll events to process.

The first stage of epoll event handling is done by EventDispatcher, the module that dispatches epoll events, delivering edge-triggered events on fds to their consumers (the actual handler functions). There can be several dispatchers, each running in its own bthread, with the count controlled by a flag. Each dispatcher simply loops on epoll_wait and hands events to the appropriate function: for an EPOLLIN event it calls Socket::StartInputEvent, for EPOLLOUT it calls Socket::HandleEpollOut.

void EventDispatcher::Run() {
    while (!_stop) {
        epoll_event e[32];
#ifdef BRPC_ADDITIONAL_EPOLL
        // Performance downgrades in examples.
        int n = epoll_wait(_epfd, e, ARRAY_SIZE(e), 0);
        if (n == 0) {
            n = epoll_wait(_epfd, e, ARRAY_SIZE(e), -1);
        }
#else
        const int n = epoll_wait(_epfd, e, ARRAY_SIZE(e), -1);
#endif
        if (_stop) {
            // epoll_ctl/epoll_wait should have some sort of memory fencing
            // guaranteeing that we(after epoll_wait) see _stop set before
            // epoll_ctl.
            break;
        }
        if (n < 0) {
            if (EINTR == errno) {
                // We've checked _stop, no wake-up will be missed.
                continue;
            }
            PLOG(FATAL) << "Fail to epoll_wait epfd=" << _epfd;
            break;
        }
        for (int i = 0; i < n; ++i) {
            if (e[i].events & (EPOLLIN | EPOLLERR | EPOLLHUP)
                || (e[i].events & has_epollrdhup)
                ) {
                // We don't care about the return value.
                Socket::StartInputEvent(e[i].data.u64, e[i].events,
                                        _consumer_thread_attr);
            }
        }
        for (int i = 0; i < n; ++i) {
            if (e[i].events & (EPOLLOUT | EPOLLERR | EPOLLHUP)) {
                // We don't care about the return value.
                Socket::HandleEpollOut(e[i].data.u64);
            }
        }
    }
}

After StartAccept, the server is essentially up. Back in the opening example, the last call is server.RunUntilAskedToQuit().

RegisterQuitSignalOrDie mainly registers quit signals via signal(); once a quit signal arrives, s_signal_quit becomes true, breaking the loop and stopping the server.

That covers the basics of brpc's startup. The Socket part is still not entirely clear to me; I'll dig into it later.
