Chromium Startup Flow (Part 2): UtilityProcess Startup

From Chromium Startup Flow (Part 1): browser process startup flow, we know that the browser process starts through the RunBrowser function, while all other process types start through RunOtherNamedProcessTypeMain. In this post we use the launch of the NetworkService process to analyze how these other processes start, which also shows how service processes run.
content/app/content_main_runner_impl.cc

 715 int NO_STACK_PROTECTOR
 716 RunOtherNamedProcessTypeMain(const std::string& process_type,
 717                              MainFunctionParams main_function_params,
      ......
 726   static const MainFunction kMainFunctions[] = {
 727 #if BUILDFLAG(ENABLE_PPAPI)
 728     {switches::kPpapiPluginProcess, PpapiPluginMain},
 729 #endif  // BUILDFLAG(ENABLE_PPAPI)
 730     {switches::kUtilityProcess, UtilityMain},
 731     {switches::kRendererProcess, RendererMain},
 732     {switches::kGpuProcess, GpuMain},
 733   };
 734 
      ......
 762 
 763   for (size_t i = 0; i < std::size(kMainFunctions); ++i) {
 764     if (process_type == kMainFunctions[i].name) {
 765       auto exit_code =
 766           delegate->RunProcess(process_type, std::move(main_function_params));
 767       if (absl::holds_alternative<int>(exit_code)) {
 768         DCHECK_GE(absl::get<int>(exit_code), 0);
 769         return absl::get<int>(exit_code);
 770       }
 771       return kMainFunctions[i].function(
 772           std::move(absl::get<MainFunctionParams>(exit_code)));
 773     }
 774   }
 775 
 776 #if BUILDFLAG(USE_ZYGOTE)
 777   // Zygote startup is special -- see RunZygote comments above
 778   // for why we don't use ZygoteMain directly.
 779   if (process_type == switches::kZygoteProcess)
 780     return RunZygote(delegate);
 781 #endif  // BUILDFLAG(USE_ZYGOTE)
 782 
 783   // If it's a process we don't know about, the embedder should know.
 784   auto exit_code =
 785       delegate->RunProcess(process_type, std::move(main_function_params));
 786   DCHECK(absl::holds_alternative<int>(exit_code));
 787   DCHECK_GE(absl::get<int>(exit_code), 0);
 788   return absl::get<int>(exit_code);
 789 }

Lines 763-774 pick the main function to run based on the process_type launch argument. Note that before the table function runs, lines 765-768 give the embedder's delegate a first chance via delegate->RunProcess: if it returns an int exit code, the embedder handled the process itself; otherwise the returned MainFunctionParams are forwarded to the matching main function (lines 771-772).
There are five known process types. Four come from the kMainFunctions table at lines 726-733: kPpapiPluginProcess, kUtilityProcess, kRendererProcess, and kGpuProcess. The fifth, kZygoteProcess, is handled at lines 776-780. For any other process_type, lines 784-788 fall back to delegate->RunProcess, since only the embedder can know about it.
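To make process_type concrete: the browser launches every child with a --type switch on its command line (for a network-service utility process this is typically --type=utility together with --utility-sub-type=network.mojom.NetworkService, though exact switches can vary across versions). A minimal, illustrative sketch of reading it, not the actual Chromium call site:

#include <string>

#include "base/command_line.h"
#include "content/public/common/content_switches.h"

// Returns the child process type, e.g. "utility", "renderer", "gpu".
std::string GetProcessTypeForIllustration() {
  const base::CommandLine& command_line =
      *base::CommandLine::ForCurrentProcess();
  return command_line.GetSwitchValueASCII(switches::kProcessType);
}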

NetworkService runs in a kUtilityProcess-type process, so we analyze the UtilityMain function.
content/utility/utility_main.cc

198 // Mainline routine for running as the utility process.
199 int UtilityMain(MainFunctionParams parameters) {
200   base::MessagePumpType message_pump_type =
201       parameters.command_line->HasSwitch(switches::kMessageLoopTypeUi)
202           ? base::MessagePumpType::UI
203           : base::MessagePumpType::DEFAULT;
204 
      ......
240   // The main task executor of the utility process.
241   base::SingleThreadTaskExecutor main_thread_task_executor(message_pump_type);
242   const std::string utility_sub_type =
243       parameters.command_line->GetSwitchValueASCII(switches::kUtilitySubType);
244   SetUtilityThreadName(utility_sub_type);
245 
      ......
361   ChildProcess utility_process(base::ThreadType::kDefault);
362   GetContentClient()->utility()->PostIOThreadCreated(
363       utility_process.io_task_runner());
364   base::RunLoop run_loop;
365   utility_process.set_main_thread(
366       new UtilityThreadImpl(run_loop.QuitClosure()));
367 
      ......
435   run_loop.Run();
436 
437 #if defined(LEAK_SANITIZER)
438   // Invoke LeakSanitizer before shutting down the utility thread, to avoid
439   // reporting shutdown-only leaks.
440   __lsan_do_leak_check();
441 #endif
442 
443   return 0;
444 }

Lines 240-244 create the main thread's task executor and set the utility thread's name from the utility sub-type.
Lines 361-363 create the ChildProcess object, which is used for service binding.
Line 364 creates the run_loop, lines 365-366 install a UtilityThreadImpl as the main thread object, and line 435 enters the loop.
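The executor-plus-RunLoop pair is the standard way a Chromium process main function keeps its main thread alive. A minimal sketch of the pattern (standalone; FakeUtilityMain is a made-up name, not Chromium source):

#include "base/message_loop/message_pump_type.h"
#include "base/run_loop.h"
#include "base/task/single_thread_task_executor.h"

int FakeUtilityMain() {
  // Gives the current thread a task queue and a message pump.
  base::SingleThreadTaskExecutor main_task_executor(
      base::MessagePumpType::DEFAULT);
  base::RunLoop run_loop;
  // Whoever holds run_loop.QuitClosure() (UtilityThreadImpl in the real
  // code) runs it to make Run() return; until then this thread pumps tasks.
  run_loop.Run();
  return 0;
}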

Let's look at the ChildProcess constructor.
content/child/child_process.cc

 64 ChildProcess::ChildProcess(base::ThreadType io_thread_type,
 65                            std::unique_ptr<base::ThreadPoolInstance::InitParams>
 66                                thread_pool_init_params)
 67     : resetter_(&child_process, this, nullptr),
 68       io_thread_(std::make_unique<ChildIOThread>()) {
     ......
 87 
 88   // Start ThreadPoolInstance if not already done. A ThreadPoolInstance
 89   // should already exist, and may already be running when ChildProcess is
 90   // instantiated in the browser process or in a test process.
 91   //
 92   // There are 3 possibilities:
 93   //
 94   // 1. ChildProcess is actually being constructed on a thread in the browser
 95   //    process (eg. for single-process mode). The ThreadPool was already
 96   //    started on the main thread, but this happened before the ChildProcess
 97   //    thread was created, which creates a happens-before relationship. So
 98   //    it's safe to check WasStartedUnsafe().
 99   // 2. ChildProcess is being constructed in a test. The ThreadPool was
100   //    already started by TaskEnvironment on the main thread. Depending on
101   //    the test, ChildProcess might be constructed on the main thread or
102   //    another thread that was created after the test start. Either way, it's
103   //    safe to check WasStartedUnsafe().
104   // 3. ChildProcess is being constructed in a subprocess from ContentMain, on
105   //    the main thread. This is the same thread that created the ThreadPool
106   //    so it's safe to check WasStartedUnsafe().
107   //
108   // Note that the only case we expect WasStartedUnsafe() to return true
109   // should be running on the main thread. So if there's a logic error and a
110   // stale read causes WasStartedUnsafe() to return false after the
111   // ThreadPool was started, Start() will correctly DCHECK as it's called on the
112   // wrong thread. (The result never flips from true to false so a stale read
113   // should never return true.)
114   auto* thread_pool = base::ThreadPoolInstance::Get();
115   DCHECK(thread_pool);
116   if (!thread_pool->WasStartedUnsafe()) {
117     if (thread_pool_init_params)
118       thread_pool->Start(*thread_pool_init_params.get());
119     else
120       thread_pool->StartWithDefaultParams();
121     initialized_thread_pool_ = true;
122   }
123 
     .....
143   CHECK(io_thread_->StartWithOptions(std::move(thread_options)));
144 }

ChildProcess creates and starts an IO thread, and also starts the thread pool if one is not already running; these threads are there for the services to use.
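For reference, here is a minimal sketch of the ThreadPoolInstance start-and-use pattern the constructor relies on (standalone and illustrative; the label string is made up):

#include "base/functional/bind.h"
#include "base/task/thread_pool.h"
#include "base/task/thread_pool/thread_pool_instance.h"

void StartPoolAndPostWork() {
  // Roughly what ChildProcess does when no pool has been started yet.
  base::ThreadPoolInstance::Create("IllustrativeProcess");
  base::ThreadPoolInstance::Get()->StartWithDefaultParams();
  // Once started, any thread may post work to the pool's worker threads.
  base::ThreadPool::PostTask(FROM_HERE, base::BindOnce([] {
                               // Runs on a thread-pool worker thread.
                             }));
}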

Next, let's look at the creation of UtilityThreadImpl, which represents the main thread.
content/utility/utility_thread_impl.cc


UtilityThreadImpl::UtilityThreadImpl(base::RepeatingClosure quit_closure)
    : ChildThreadImpl(std::move(quit_closure),
                      ChildThreadImpl::Options::Builder()
                          .WithLegacyIPCChannel(false)
                          .ServiceBinder(GetServiceBinder())
                          .ExposesInterfacesToBrowser()
                          .Build()) {
  Init();
}

UtilityThreadImpl's parent class is ChildThreadImpl; the constructor above creates the ChildThreadImpl and then runs UtilityThreadImpl's Init for initialization. The Options are assembled with the Builder pattern, and the key parameter here is ServiceBinder, which is used to bind services; we analyze it later. First, the ChildThreadImpl constructor.
content/child/child_thread_impl.cc

ChildThreadImpl::ChildThreadImpl(base::RepeatingClosure quit_closure,
                                 const Options& options)
    : resetter_(&child_thread_impl, this),
......

  Init(options);
}

The constructor is simple; it directly calls the Init method.
content/child/child_thread_impl.cc

599 void ChildThreadImpl::Init(const Options& options) {
600   TRACE_EVENT0("startup", "ChildThreadImpl::Init");
601   on_channel_error_called_ = false;
602   main_thread_runner_ = base::SingleThreadTaskRunner::GetCurrentDefault();
      ......
624 
625   mojo::ScopedMessagePipeHandle child_process_pipe_for_receiver;
626   mojo::ScopedMessagePipeHandle child_process_host_pipe_for_remote;
627   mojo::ScopedMessagePipeHandle legacy_ipc_bootstrap_pipe;
628   if (!IsInBrowserProcess()) {
           .......
655   } else {
656     child_process_pipe_for_receiver =
657         options.mojo_invitation->ExtractMessagePipe(
658             kChildProcessReceiverAttachmentName);
659     child_process_host_pipe_for_remote =
660         options.mojo_invitation->ExtractMessagePipe(
661             kChildProcessHostRemoteAttachmentName);
662     if (options.with_legacy_ipc_channel) {
663       legacy_ipc_bootstrap_pipe = options.mojo_invitation->ExtractMessagePipe(
664           kLegacyIpcBootstrapAttachmentName);
665     }
666   }
667 
668   // Now that we've recovered the message pipe for the ChildProcessHost, build
669   // our |child_process_host_| with it.
670   mojo::PendingRemote<mojom::ChildProcessHost> remote_host(
671       std::move(child_process_host_pipe_for_remote), /*version=*/0u);
672   child_process_host_ = mojo::SharedRemote<mojom::ChildProcessHost>(
673       std::move(remote_host), GetIOTaskRunner());
674 
      ......
      
736   ChildThreadImpl::GetIOTaskRunner()->PostTask(
737       FROM_HERE,
738       base::BindOnce(&IOThreadState::BindChildProcessReceiver, io_thread_state_,
739                      mojo::PendingReceiver<mojom::ChildProcess>(
740                          std::move(child_process_pipe_for_receiver))));
741 
742   int connection_timeout = kConnectionTimeoutS;
743   std::string connection_override =
744       base::CommandLine::ForCurrentProcess()->GetSwitchValueASCII(
745           switches::kIPCConnectionTimeout);
746   if (!connection_override.empty()) {
747     int temp;
748     if (base::StringToInt(connection_override, &temp))
749       connection_timeout = temp;
750   }
751 
752   if (!options.with_legacy_ipc_channel) {
753     child_process_host_->Ping(
754         base::BindOnce(&ChildThreadImpl::OnChannelConnected,
755                        base::Unretained(this), /*unused=*/0));
756   }
757   main_thread_runner_->PostDelayedTask(
758       FROM_HERE,
759       base::BindOnce(&ChildThreadImpl::EnsureConnected,
760                      channel_connected_factory_->GetWeakPtr()),
761       base::Seconds(connection_timeout));
762 
      .......
775 }

Lines 625-666 recover the bootstrap message pipes: the pipe for the mojom::ChildProcess receiver and the pipe for the mojom::ChildProcessHost remote are extracted from the mojo invitation. (Lines 656-665 show the branch where the child thread lives inside the browser process, e.g. single-process mode; the elided branch handles a real child process accepting the invitation sent by the browser.)
Lines 668-673 set up the sending end: a SharedRemote<mojom::ChildProcessHost> bound on the IO task runner.
Lines 736-740 set up the receiving end for mojom::ChildProcess.
This ChildProcess/ChildProcessHost pair keeps the browser process and the child process connected, tracks the child's state, and performs service binding.
Lines 752-756 ping the peer once, to confirm the browser process is alive.
Lines 757-761 post a delayed EnsureConnected task that detects a connection timeout.
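The pipe extraction above is the receiving half of mojo's invitation mechanism. A rough, self-contained sketch of both halves (the attachment name "my_pipe" is made up; the real code uses attachment names like kChildProcessReceiverAttachmentName):

#include "base/process/process_handle.h"
#include "mojo/public/cpp/platform/platform_channel_endpoint.h"
#include "mojo/public/cpp/system/invitation.h"

// Browser side: attach a named pipe, then send the invitation to the child.
mojo::ScopedMessagePipeHandle SendSide(base::ProcessHandle child,
                                       mojo::PlatformChannelEndpoint endpoint) {
  mojo::OutgoingInvitation invitation;
  mojo::ScopedMessagePipeHandle browser_end =
      invitation.AttachMessagePipe("my_pipe");
  mojo::OutgoingInvitation::Send(std::move(invitation), child,
                                 std::move(endpoint));
  return browser_end;  // Can now back a mojo::Remote.
}

// Child side: accept the invitation and recover the same pipe by name.
mojo::ScopedMessagePipeHandle ReceiveSide(
    mojo::PlatformChannelEndpoint endpoint) {
  mojo::IncomingInvitation invitation =
      mojo::IncomingInvitation::Accept(std::move(endpoint));
  return invitation.ExtractMessagePipe("my_pipe");  // Can back a Receiver.
}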

Let's look at how the ChildProcess receiving end is established, i.e. the implementation of the mojom::ChildProcess service. Lines 736-740 call the BindChildProcessReceiver method of ChildThreadImpl::IOThreadState.
content/child/child_thread_impl.cc

class ChildThreadImpl::IOThreadState
    : public base::RefCountedThreadSafe<IOThreadState>,
      public mojom::ChildProcess 

IOThreadState inherits from mojom::ChildProcess, which tells us it is the implementation of the ChildProcess interface.
content/child/child_thread_impl.cc

  void BindChildProcessReceiver(
      mojo::PendingReceiver<mojom::ChildProcess> receiver) {
    receiver_.Bind(std::move(receiver));
  }

BindChildProcessReceiver binds receiver_ to the PendingReceiver, which means the process starts listening for messages on the pipe.
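This is the general mojo Receiver pattern: once a Receiver is bound to a pipe endpoint, incoming messages are deserialized and dispatched to the implementation on the binding sequence. A minimal sketch with a made-up interface (sample::mojom::Logger is hypothetical):

#include "base/logging.h"
#include "mojo/public/cpp/bindings/pending_receiver.h"
#include "mojo/public/cpp/bindings/receiver.h"

class LoggerImpl : public sample::mojom::Logger {
 public:
  // Binding in the constructor starts message dispatch to this object.
  explicit LoggerImpl(mojo::PendingReceiver<sample::mojom::Logger> pending)
      : receiver_(this, std::move(pending)) {}

  // sample::mojom::Logger:
  void Log(const std::string& message) override { LOG(INFO) << message; }

 private:
  mojo::Receiver<sample::mojom::Logger> receiver_;
};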

We know that after launching the utility process, the browser process calls BindServiceInterface to bind the service. Recall how services are launched in the browser process:
content/browser/service_process_host_impl.cc

// TODO(crbug.com/977637): Once UtilityProcessHost is used only by service
// processes, its logic can be inlined here.
void LaunchServiceProcess(mojo::GenericPendingReceiver receiver,
                          ServiceProcessHost::Options options,
                          sandbox::mojom::Sandbox sandbox) {
  UtilityProcessHost* host =
      new UtilityProcessHost(std::make_unique<UtilityProcessClient>(
          *receiver.interface_name(), options.site,
          std::move(options.process_callback)));
    ......
  host->Start();
  host->GetChildProcess()->BindServiceInterface(std::move(receiver));
}

So we can go straight to the service-side binding, i.e. how the concrete service interface is launched, taking NetworkService as the example. The BindServiceInterface override below is implemented by ChildThreadImpl::IOThreadState:
content/child/child_thread_impl.cc

362   void BindServiceInterface(mojo::GenericPendingReceiver receiver) override {
363     if (service_binder_)
364       service_binder_.Run(&receiver);
365 
366     if (receiver) {
367       main_thread_task_runner_->PostTask(
368           FROM_HERE, base::BindOnce(&ChildThreadImpl::BindServiceInterface,
369                                     weak_main_thread_, std::move(receiver)));
370     }
371   }

Here the receiver parameter contains the channel for communicating with the client. Line 364 calls service_binder_.Run, binding this communication channel to the concrete service. If the binder does not consume the receiver, lines 366-370 forward it to ChildThreadImpl::BindServiceInterface on the main thread.
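A GenericPendingReceiver is essentially an interface name plus an untyped message pipe. A binder inspects the name and, when it matches, converts it into a typed PendingReceiver; a sketch of that conversion, using mojo::GenericPendingReceiver::As<T>() as I read the API:

#include "mojo/public/cpp/bindings/generic_pending_receiver.h"
#include "services/network/public/mojom/network_service.mojom.h"

void MaybeBindNetworkService(mojo::GenericPendingReceiver* receiver) {
  // As<T>() consumes *receiver and returns a valid typed receiver only when
  // the carried interface name matches network.mojom.NetworkService.
  if (auto typed = receiver->As<network::mojom::NetworkService>()) {
    // Hand |typed| to a NetworkService implementation to bind it.
  }
}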
Now let's go back and look at how the ServiceBinder is obtained.
content/utility/utility_thread_impl.cc

ChildThreadImpl::Options::ServiceBinder GetServiceBinder() {
  auto& storage = ServiceBinderImpl::GetInstanceStorage();
  // NOTE: This may already be initialized from a previous call if we're in
  // single-process mode.
  if (!storage)
    storage.emplace(base::SingleThreadTaskRunner::GetCurrentDefault());
  return base::BindRepeating(&ServiceBinderImpl::BindServiceInterface,
                             base::Unretained(&storage.value()));
}

So the ServiceBinder is actually bound to the ServiceBinderImpl::BindServiceInterface method.

 54   void BindServiceInterface(mojo::GenericPendingReceiver* receiver) {
 55     // Set a crash key so utility process crash reports indicate which service
 56     // was running in the process.
        ......
 60     const std::string& service_name = receiver->interface_name().value();
 61     base::debug::SetCrashKeyString(service_name_crash_key, service_name);
 62 
 63     // Traces should also indicate the service name.
 64     if (base::CurrentProcess::GetInstance().IsProcessNameEmpty()) {
 65       base::CurrentProcess::GetInstance().SetProcessType(
 66           GetCurrentProcessType(service_name));
 67     }
 68 
 69     // Ensure the ServiceFactory is (lazily) initialized.
 70     if (!io_thread_services_) {
 71       io_thread_services_ = std::make_unique<mojo::ServiceFactory>();
 72       RegisterIOThreadServices(*io_thread_services_);
 73     }
 74 
 75     // Note that this is balanced by `termination_callback` below, which is
 76     // always eventually run as long as the process does not begin shutting
 77     // down beforehand.
 78     ++num_service_instances_;
 79 
 80     auto termination_callback =
 81         base::BindOnce(&ServiceBinderImpl::OnServiceTerminated,
 82                        weak_ptr_factory_.GetWeakPtr());
 83     if (io_thread_services_->CanRunService(*receiver)) {
 84       io_thread_services_->RunService(std::move(*receiver),
 85                                       std::move(termination_callback));
 86       return;
 87     }
 88 
 89     termination_callback =
 90         base::BindOnce(base::IgnoreResult(&base::SequencedTaskRunner::PostTask),
 91                        base::SingleThreadTaskRunner::GetCurrentDefault(),
 92                        FROM_HERE, std::move(termination_callback));
 93     main_thread_task_runner_->PostTask(
 94         FROM_HERE,
 95         base::BindOnce(&ServiceBinderImpl::TryRunMainThreadService,
 96                        std::move(*receiver), std::move(termination_callback)));
 97   }

Based on the service's name, the function decides whether the service can run on the IO thread: if io_thread_services_->CanRunService says yes, it calls io_thread_services_->RunService there; otherwise the service can only run on the main thread, so TryRunMainThreadService is posted to the main thread.

mojo/public/cpp/bindings/service_factory.cc

bool ServiceFactory::CanRunService(
    const GenericPendingReceiver& receiver) const {
  DCHECK(receiver.is_valid());
  return base::Contains(constructors_, *receiver.interface_name());
}

CanRunService simply checks whether the service name is registered in constructors_, a map keyed by interface name:

std::map<std::string, Constructor> constructors_

  template <typename Func>
  void Add(Func func) {
    using Interface = typename internal::ServiceFactoryTraits<Func>::Interface;
    if (internal::GetRuntimeFeature_IsEnabled<Interface>()) {
      constructors_[Interface::Name_] =
          base::BindRepeating(&RunConstructor<Func>, func);
    }
  }

Calling the Add method registers a service's launch function in constructors_.
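Putting Add and the constructors_ map together, the registration pattern looks roughly like this (FooServiceImpl, foo::mojom::FooService, and RunFooService are illustrative names):

#include <memory>

#include "mojo/public/cpp/bindings/service_factory.h"

// The runner's parameter type determines the map key (Interface::Name_).
std::unique_ptr<FooServiceImpl> RunFooService(
    mojo::PendingReceiver<foo::mojom::FooService> receiver) {
  return std::make_unique<FooServiceImpl>(std::move(receiver));
}

void RegisterMyServices(mojo::ServiceFactory& services) {
  // Registered under foo::mojom::FooService::Name_; when RunService() later
  // sees a GenericPendingReceiver with that name, it calls RunFooService and
  // keeps the returned instance alive until its pipe disconnects.
  services.Add(RunFooService);
}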

Right after the ServiceFactory is created, RegisterIOThreadServices is called to register the services that must run on the IO thread.
content/utility/services.cc

void RegisterIOThreadServices(mojo::ServiceFactory& services) {
  // The network service runs on the IO thread because it needs a message
  // loop of type IO that can get notified when pipes have data.
  services.Add(RunNetworkService);
......
}

This registers the RunNetworkService function, which launches the NetworkService.

Next, let's look at how io_thread_services_->RunService launches the NetworkService.
mojo/public/cpp/bindings/service_factory.cc

bool ServiceFactory::RunService(GenericPendingReceiver receiver,
                                base::OnceClosure termination_callback) {
  .......
  auto it = constructors_.find(*receiver.interface_name());
  if (it == constructors_.end())
    return false;

  auto instance = it->second.Run(std::move(receiver));
  ......
  return true;
}

The function is straightforward: it finds the registered launch function for the service and runs it. Now let's analyze the NetworkService launch function, RunNetworkService.
content/utility/services.cc

auto RunNetworkService(
    mojo::PendingReceiver<network::mojom::NetworkService> receiver) {
 ......
  return std::make_unique<network::NetworkService>(
      std::move(binders), std::move(receiver),
      /*delay_initialization_until_set_client=*/true);
}

The function creates the NetworkService instance.
services/network/network_service.h

class COMPONENT_EXPORT(NETWORK_SERVICE) NetworkService
    : public mojom::NetworkService 

NetworkService implements mojom::NetworkService.
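For completeness, the browser-side end of this pipe is a typed remote; every call on it is serialized across the pipe and handled by this implementation in the utility process. A sketch (illustrative; SetParams is, to my reading, the mojom call that delivers NetworkServiceParams and triggers the deferred initialization discussed below):

#include "mojo/public/cpp/bindings/pending_remote.h"
#include "mojo/public/cpp/bindings/remote.h"
#include "services/network/public/mojom/network_service.mojom.h"

void BindAndConfigure(
    mojo::PendingRemote<network::mojom::NetworkService> pending_remote) {
  mojo::Remote<network::mojom::NetworkService> network_service(
      std::move(pending_remote));
  // This message is handled by NetworkService in the utility process.
  network_service->SetParams(network::mojom::NetworkServiceParams::New());
}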

services/network/network_service.cc

 353 NetworkService::NetworkService(
 354     std::unique_ptr<service_manager::BinderRegistry> registry,
 355     mojo::PendingReceiver<mojom::NetworkService> receiver,
 356     bool delay_initialization_until_set_client)
 357     : net_log_(net::NetLog::Get()), registry_(std::move(registry)) {
 358   DCHECK(!g_network_service);
 359   g_network_service = this;
       ......
 372 
 373   if (receiver.is_valid()) {
 374     Bind(std::move(receiver));
 375   }
 376 
 377   if (!delay_initialization_until_set_client) {
 378     Initialize(mojom::NetworkServiceParams::New());
 379   }
 380 }

Lines 373-375 bind the message pipe and start listening for messages.
Lines 377-379 would run Initialize immediately, but in our path delay_initialization_until_set_client is true, so initialization is deferred until the client later supplies the parameters.

Let's continue with the initialization function.

 382 void NetworkService::Initialize(mojom::NetworkServiceParamsPtr params,
 383                                 bool mock_network_change_notifier) {
 384   if (initialized_) {
 385     return;
 386   }
 387 
 388   initialized_ = true;
 389 
       ......

 475   network_service_proxy_allow_list_ =
 476       std::make_unique<NetworkServiceProxyAllowList>(
 477           params->ip_protection_proxy_bypass_policy);
 478 
 479   network_service_resource_block_list_ =
 480       std::make_unique<NetworkServiceResourceBlockList>();
 481 
 482 #if BUILDFLAG(IS_CT_SUPPORTED)
 483   constexpr size_t kMaxSCTAuditingCacheEntries = 1024;
 484   sct_auditing_cache_ =
 485       std::make_unique<SCTAuditingCache>(kMaxSCTAuditingCacheEntries);
 486 #endif
 487 
 488   if (base::FeatureList::IsEnabled(features::kGetCookiesStringUma)) {
 489     metrics_updater_ = std::make_unique<RestrictedCookieManagerMetrics>();
 490   }
 491 }

The function is straightforward: it marks the service initialized and constructs a few subcomponents (the proxy allow list, the resource block list, the SCT auditing cache, and so on); nothing here needs special attention.

At this point we have essentially finished analyzing the startup of the UtilityProcess. Some remaining details will be covered later, when we encounter them.
