Reading the ClickHouse Source Code (0000 0010): Analyzing the ClickHouse Server Startup and Shutdown Process from Its Logs

Starting ClickHouse Server from CLion on macOS produces the following startup and shutdown logs:

/Users/***/ClionProjects/Beautiful1205/ClickHouse/cmake-build-debug/dbms/programs/clickhouse server --config-file /Users/***/ClionProjects/Beautiful1205/ClickHouse/dbms/programs/server/config.xml
Include not found: clickhouse_remote_servers
Include not found: clickhouse_compression
Logging trace to console
2019.12.01 15:11:15.005981 [ 1 ] {} <Information> : Starting ClickHouse 19.8.4.1 with revision 54420
2019.12.01 15:11:15.032576 [ 1 ] {} <Information> Application: starting up
2019.12.01 15:11:15.038040 [ 1 ] {} <Information> StatusFile: Status file ./status already exists - unclean restart. Contents:
PID: 85729
Started at: 2019-12-01 15:11:04
Revision: 54420

2019.12.01 15:11:15.038272 [ 1 ] {} <Warning> Application: Cannot set max number of file descriptors to 4294967295. Try to specify max_open_files according to your system limits. error: Invalid argument
2019.12.01 15:11:15.038294 [ 1 ] {} <Debug> Application: Initializing DateLUT.
2019.12.01 15:11:15.038304 [ 1 ] {} <Trace> Application: Initialized DateLUT with time zone `Asia/Shanghai'.
2019.12.01 15:11:15.039489 [ 1 ] {} <Debug> ConfigReloader: Loading config `/Users/***/ClionProjects/Beautiful1205/ClickHouse/dbms/programs/server/users.xml'
Include not found: networks
2019.12.01 15:11:15.051880 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 4.00 GiB because the system has low amount of memory
2019.12.01 15:11:15.052469 [ 1 ] {} <Information> Application: Mark cache size was lowered to 4.00 GiB because the system has low amount of memory
2019.12.01 15:11:15.052556 [ 1 ] {} <Information> Application: Loading metadata from ./
2019.12.01 15:11:15.053303 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 2 tables.
2019.12.01 15:11:15.089614 [ 1 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2019.12.01 15:11:15.090809 [ 1 ] {} <Debug> system.query_log: Loading data parts
2019.12.01 15:11:15.133242 [ 1 ] {} <Debug> system.query_log: Loaded data parts (4 items)
2019.12.01 15:11:15.141778 [ 1 ] {} <Debug> system.query_thread_log: Loading data parts
2019.12.01 15:11:15.167779 [ 1 ] {} <Debug> system.query_thread_log: Loaded data parts (4 items)
2019.12.01 15:11:15.168053 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
2019.12.01 15:11:15.171765 [ 1 ] {} <Information> DatabaseOrdinary (test): Total 5 tables.
2019.12.01 15:11:15.176008 [ 23 ] {} <Debug> test.mergetree: Loading data parts
2019.12.01 15:11:15.176392 [ 21 ] {} <Debug> test.***: Loading data parts
2019.12.01 15:11:15.177977 [ 20 ] {} <Debug> test.***: Loading data parts
2019.12.01 15:11:15.180604 [ 21 ] {} <Debug> test.***: Loaded data parts (2 items)
2019.12.01 15:11:15.182018 [ 22 ] {} <Debug> test.***: Loading data parts
2019.12.01 15:11:15.182151 [ 22 ] {} <Debug> test.***: Loaded data parts (0 items)
2019.12.01 15:11:15.182197 [ 23 ] {} <Debug> test.mergetree: Loaded data parts (3 items)
2019.12.01 15:11:15.192478 [ 21 ] {} <Debug> test.minmax_idx: Loading data parts
2019.12.01 15:11:15.195303 [ 21 ] {} <Debug> test.minmax_idx: Loaded data parts (1 items)
2019.12.01 15:11:15.195805 [ 20 ] {} <Debug> test.***: Loaded data parts (4 items)
2019.12.01 15:11:15.196098 [ 1 ] {} <Information> DatabaseOrdinary (test): Starting up tables.
2019.12.01 15:11:15.196483 [ 1 ] {} <Debug> Application: Loaded metadata.
2019.12.01 15:11:15.196575 [ 1 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
2019.12.01 15:11:15.197119 [ 1 ] {} <Information> Application: TaskStats is not implemented for this OS. IO accounting will be disabled.
2019.12.01 15:11:15.197530 [ 1 ] {} <Information> Application: Listening http://[::1]:8123
2019.12.01 15:11:15.197599 [ 1 ] {} <Information> Application: Listening for connections with native protocol (tcp): [::1]:9000
2019.12.01 15:11:15.197664 [ 1 ] {} <Information> Application: Listening for replica communication (interserver) http://[::1]:9009
2019.12.01 15:11:15.197724 [ 1 ] {} <Information> Application: Listening http://127.0.0.1:8123
2019.12.01 15:11:15.197775 [ 1 ] {} <Information> Application: Listening for connections with native protocol (tcp): 127.0.0.1:9000
2019.12.01 15:11:15.197828 [ 1 ] {} <Information> Application: Listening for replica communication (interserver) http://127.0.0.1:9009
2019.12.01 15:11:15.198023 [ 1 ] {} <Information> Application: Available RAM: 8.00 GiB; physical cores: 4; logical cores: 8.
2019.12.01 15:11:15.198036 [ 1 ] {} <Information> Application: Ready for connections.
2019.12.01 15:11:17.206592 [ 37 ] {} <Debug> ConfigReloader: Loading config `/Users/***/ClionProjects/Beautiful1205/ClickHouse/dbms/programs/server/config.xml'
Include not found: clickhouse_remote_servers
Include not found: clickhouse_compression


2019.12.01 15:11:20.708417 [ 41 ] {} <Information> Application: Received termination signal (Terminated: 15)
2019.12.01 15:11:20.708745 [ 1 ] {} <Debug> Application: Received termination signal.
2019.12.01 15:11:20.708787 [ 1 ] {} <Debug> Application: Waiting for current connections to close.

Process finished with exit code 9

1. Overview

ClickHouse appears to be built on top of POCO, a C++ class library, and uses the Poco Application framework to structure its applications.

Official documentation: https://pocoproject.org/documentation.html

These applications include server, client, local, benchmark, performance-test, extract-from-config, compressor, format, copier, obfuscator, and others.

The Poco Application framework provides an int run(int argc, char** argv) method for executing the application. run() calls the class's void initialize(), then int main(), then void uninitialize(), in that order. For background on the Poco Application framework, see the article "Poco Application 框架学习(1)".
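
To make this lifecycle concrete, here is a minimal, self-contained sketch of a Poco::Util::ServerApplication subclass. It is illustrative code only, not taken from ClickHouse; the class name MyServer and the printed messages are made up for the example. Calling run(argc, argv) drives initialize() -> main() -> uninitialize() in exactly the order described above:

    #include <iostream>
    #include <string>
    #include <vector>
    #include <Poco/Util/ServerApplication.h>

    /// Illustrative only: a tiny application built on the Poco Application framework.
    class MyServer : public Poco::Util::ServerApplication
    {
    protected:
        void initialize(Application & self) override
        {
            loadConfiguration();                   /// read configuration files, if any are found
            ServerApplication::initialize(self);
            std::cout << "initialize()" << std::endl;
        }

        int main(const std::vector<std::string> & /*args*/) override
        {
            std::cout << "main(): the real work happens here" << std::endl;
            return Application::EXIT_OK;
        }

        void uninitialize() override
        {
            std::cout << "uninitialize()" << std::endl;
            ServerApplication::uninitialize();
        }
    };

    int main(int argc, char ** argv)
    {
        MyServer app;
        return app.run(argc, argv);                /// run() calls initialize(), main(), uninitialize() in order
    }

In ClickHouse, the Server application plays the role of MyServer here, since it is ultimately built on this same Poco Application framework.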

That is as far as my understanding of the framework goes for now. The concrete implementation of this part in ClickHouse is shown in the figure below:

2. Analysis of the Server Startup Process

With a rough understanding of the Poco Application framework, we can follow the flow: initialize() runs first, and then control reaches the main() method declared in Application.cpp. main() is a virtual method that each application overrides, so every application supplies its own implementation, as shown in the figure below:

Next we arrive at the main() method in Server.cpp, where ClickHouse Server starts up. The main stages are: parse the arguments and configuration, initialize the server, and then start listening on the service ports. Overall, most of this work amounts to initializing the global context (global_context), which holds everything needed to execute SQL (settings, available functions, data types, aggregate functions, databases, ...).

The key steps are:

1) Register the relevant functions;

2) Set the max_open_files limit;

3) Initialize the time zone (DateLUT);

4) Create the tmp directory;

5) Create the flags directory;

6) Create the user_files directory;

7) Load the users.xml and config.xml configuration files;

8) Set the size of the mark cache;

9) Load the metadata; the corresponding code is shown below:

        LOG_INFO(log, "Loading metadata from " + path);
        try {
            loadMetadataSystem(*global_context); /// load the system database and its tables
            /// After attaching system databases we can initialize system log.
            global_context->initializeSystemLogs(); /// initialize the system logs
            /// After the system database is created, attach virtual system tables (in addition to query_log and part_log)
            attachSystemTablesServer(*global_context->getDatabase("system"), has_zookeeper); /// attach the virtual tables of the system database
            /// Then, load remaining databases
            loadMetadata(*global_context); /// load the remaining databases and their tables
        }
        catch (...) {
            tryLogCurrentException(log, "Caught exception while loading metadata");
            throw;
        }
        LOG_DEBUG(log, "Loaded metadata.");

10) Bind the listening ports and start listening on http_port, https_port, tcp_port, tcp_port_secure, interserver_http_port, interserver_https_port and mysql_port (a minimal Poco listening sketch is given after this list);

11) Load the users.xml and config.xml configurations again (this happens a second time, presumably to pick up any configuration changes made by the earlier code; the exact reason is not entirely clear to me);

12) The server side is ready and prints "Ready for connections.";

13) After that, the server tries to load dictionaries right away and starts periodic calculation of some metrics; their exact purpose remains to be clarified;

14) Two SCOPE_EXIT({ ... }) blocks define the cleanup actions to perform when the service terminates (see the scope-guard sketch after this list);

15) Wait for a termination signal, then stop the server process.
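
For step 10, the sketch below shows the general Poco pattern for binding a port and serving HTTP, loosely mirroring the "Listening http://...:8123" lines in the startup log. It is illustrative code under my own assumptions, not ClickHouse's actual handler hierarchy; PingHandler, PingHandlerFactory and the hard-coded port 8123 are made up for the example:

    #include <ostream>
    #include <Poco/Net/HTTPRequestHandler.h>
    #include <Poco/Net/HTTPRequestHandlerFactory.h>
    #include <Poco/Net/HTTPServer.h>
    #include <Poco/Net/HTTPServerParams.h>
    #include <Poco/Net/HTTPServerRequest.h>
    #include <Poco/Net/HTTPServerResponse.h>
    #include <Poco/Net/ServerSocket.h>

    /// Illustrative handler: replies to every HTTP request with "Ok.".
    class PingHandler : public Poco::Net::HTTPRequestHandler
    {
    public:
        void handleRequest(Poco::Net::HTTPServerRequest &, Poco::Net::HTTPServerResponse & response) override
        {
            response.send() << "Ok.\n";
        }
    };

    class PingHandlerFactory : public Poco::Net::HTTPRequestHandlerFactory
    {
    public:
        Poco::Net::HTTPRequestHandler * createRequestHandler(const Poco::Net::HTTPServerRequest &) override
        {
            return new PingHandler;
        }
    };

    int main()
    {
        Poco::Net::ServerSocket socket(8123);    /// bind and listen, like the "Listening http://..." log lines
        Poco::Net::HTTPServer server(new PingHandlerFactory, socket, new Poco::Net::HTTPServerParams);
        server.start();                          /// accept connections on background threads
        /// ... block here until a termination request arrives ...
        server.stop();
        return 0;
    }

For steps 14 and 15, here is a minimal sketch of the scope-guard idea behind SCOPE_EXIT({ ... }): the cleanup code runs when the guard object goes out of scope, whether main() returns normally or throws. The ScopeGuard class below is a simplified stand-in for ClickHouse's actual macro, and the printed message only stands in for the real shutdown work:

    #include <iostream>
    #include <utility>

    /// Simplified stand-in for SCOPE_EXIT: runs the stored callback in its destructor,
    /// i.e. when control leaves the enclosing scope (normal return or exception).
    template <typename F>
    class ScopeGuard
    {
    public:
        explicit ScopeGuard(F f) : func(std::move(f)) {}
        ~ScopeGuard() { func(); }
        ScopeGuard(const ScopeGuard &) = delete;
        ScopeGuard & operator=(const ScopeGuard &) = delete;
    private:
        F func;
    };

    int main()
    {
        /// Registered early, executed on shutdown, like the SCOPE_EXIT blocks described in step 14.
        ScopeGuard on_exit([] { std::cout << "shutdown: stop listeners, wait for connections, release resources" << std::endl; });

        std::cout << "Ready for connections." << std::endl;

        /// In the real server, Poco's ServerApplication::waitForTerminationRequest() blocks here
        /// until SIGINT/SIGTERM arrives (step 15); when main() then returns, the guard above fires.
        return 0;
    }

Running this second sketch prints "Ready for connections." followed by the shutdown message, which matches the ordering visible in the startup and termination log above.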

3. Analysis of the Server Shutdown Process

This part is relatively simple. The server cancels all running SQL queries, resets settings, and so on; as the log above shows, it also waits for current connections to close before the process exits.

 

 
