Summary:
ClickHouse aggregation: spilling temporary data to disk (analysis)
Settings:
max_memory_usage set to 8 GB:
set max_memory_usage=8589934592;
max_bytes_before_external_group_by set to 1 GB:
set max_bytes_before_external_group_by=1073741824;
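With these two settings, a GROUP BY whose aggregation state grows past 1 GB should start spilling to disk while staying under the 8 GB hard limit. One way to confirm a query actually spilled is to look at the external-aggregation profile events in query_log after running it (a sketch; event names and the Map-typed ProfileEvents column are from recent ClickHouse versions, older versions expose ProfileEvents.Names/Values arrays instead):

```sql
-- Did the most recent query spill? Check its external-aggregation counters.
SELECT
    query_duration_ms,
    ProfileEvents['ExternalAggregationWritePart']       AS spill_parts,
    ProfileEvents['ExternalAggregationCompressedBytes'] AS spill_bytes
FROM system.query_log
WHERE type = 'QueryFinish'
ORDER BY event_time DESC
LIMIT 1;
```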
Test data:
- TPC-H at the 10 GB scale factor
- Q18 with its subquery removed:
select
c_name,
c_custkey,
o_orderkey,
o_orderdate,
o_totalprice,
sum(l_quantity)
from
customer,
orders,
lineitem
where
c_custkey = o_custkey
and o_orderkey = l_orderkey
group by
c_name,
c_custkey,
o_orderkey,
o_orderdate,
o_totalprice
order by
o_totalprice desc,
o_orderdate
limit 10;
Core flow:
AggregatingTransform::initGenerate
(gdb) bt
#0 DB::AggregatingTransform::initGenerate (this=0x7f4046671a18) at ../src/Processors/Transforms/AggregatingTransform.cpp:540
#1 0x000000002d48a4cf in DB::AggregatingTransform::work (this=0x7f4046671a18) at ../src/Processors/Transforms/AggregatingTransform.cpp:489
#2 0x000000002d077a03 in DB::executeJob (node=0x7f4046752900, read_progress_callback=0x7f40466e1b80) at ../src/Processors/Executors/ExecutionThreadContext.cpp:47
#3 0x000000002d077719 in DB::ExecutionThreadContext::executeTask (this=0x7f40a976c380) at ../src/Processors/Executors/ExecutionThreadContext.cpp:92
#4 0x000000002d053861 in DB::PipelineExecutor::executeStepImpl (this=0x7f4046697018, thread_num=15, yield_flag=0x0) at ../src/Processors/Executors/PipelineExecutor.cpp:229
#5 0x000000002d053b97 in DB::PipelineExecutor::executeSingleThread (this=0x7f4046697018, thread_num=15) at ../src/Processors/Executors/PipelineExecutor.cpp:195
#6 0x000000002d055416 in DB::PipelineExecutor::spawnThreads()::$_0::operator()() const (this=0x7f4052544088) at ../src/Processors/Executors/PipelineExecutor.cpp:320
#7 0x000000002d055375 in std::__1::__invoke[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#8 0x000000002d055321 in std::__1::__apply_tuple_impl[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&, std::__1::__tuple_indices<>) (__f=..., __t=...) at ../contrib/llvm-project/libcxx/include/tuple:1789
#9 0x000000002d055232 in std::__1::apply[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&) (
__f=..., __t=...) at ../contrib/llvm-project/libcxx/include/tuple:1798
#10 0x000000002d05511a in ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}::operator()() (this=0x7f40b430a720) at ../src/Common/ThreadPool.h:210
#11 0x000000002d055055 in std::__1::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#12 0x000000002d05501d in std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&>(ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&) (__args=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:479
#13 0x000000002d054ff5 in std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}, void ()>::operator()[abi:v15000]() (this=0x7f40b430a720) at ../contrib/llvm-project/libcxx/include/__functional/function.h:235
#14 0x000000002d054fc0 in std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}, void ()> >(std::__1::__function::__policy_storage const*) (__buf=0x7f4052544348)
at ../contrib/llvm-project/libcxx/include/__functional/function.h:716
#15 0x000000001a3bf0a6 in std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const (this=0x7f4052544348)
at ../contrib/llvm-project/libcxx/include/__functional/function.h:848
#16 0x000000001a3ba9d5 in std::__1::function<void ()>::operator()() const (this=0x7f4052544348) at ../contrib/llvm-project/libcxx/include/__functional/function.h:1187
#17 0x000000001a4c7b6e in ThreadPoolImpl<std::__1::thread>::worker (this=0x7f417a54b280, thread_it=...) at ../src/Common/ThreadPool.cpp:315
#18 0x000000001a4cf3e4 in ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}::operator()() const
(this=0x7f40b2fe9aa8) at ../src/Common/ThreadPool.cpp:145
#19 0x000000001a4cf375 in std::__1::__invoke[abi:v15000]<ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#20 0x000000001a4cf2a5 in std::__1::__thread_execute[abi:v15000]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}>&, std::__1::__tuple_indices<>) (__t=...) at ../contrib/llvm-project/libcxx/include/thread:284
#21 0x000000001a4cec02 in std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}> >(void*) (__vp=0x7f40b2fe9aa0)
at ../contrib/llvm-project/libcxx/include/thread:295
#22 0x00007f417b4f9802 in start_thread () from /lib64/libc.so.6
#23 0x00007f417b499450 in clone3 () from /lib64/libc.so.6
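initGenerate runs once consume() has seen all input. Conceptually it checks whether anything was spilled during consumption: if not, results are generated directly from the in-memory hash tables; if temporary files exist, the in-memory remainder is flushed as well and a merge over all spilled streams is set up instead. A minimal toy of that branch (names hypothetical, not the ClickHouse API):

```cpp
#include <string>
#include <vector>

struct TempFile { std::string path; };

// Hypothetical stand-in for the decision AggregatingTransform::initGenerate
// makes after all input is consumed: emit from memory, or merge from disk.
std::string chooseGeneratePath(const std::vector<TempFile> & temp_files)
{
    if (temp_files.empty())
        return "memory";         // convert in-memory aggregation state to blocks
    // Otherwise the remaining in-memory data is spilled as one more stream
    // and a merging source is built over all temporary files.
    return "external-merge";
}
```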
Aggregator::writeToTemporaryFileImpl
(gdb) bt
#0 DB::Aggregator::writeToTemporaryFileImpl<DB::AggregationMethodSerialized<TwoLevelHashMapTable<StringRef, HashMapCellWithSavedHash<StringRef, char*, DefaultHash<StringRef>, HashTableNoState>, DefaultHash<StringRef>, TwoLevelHashTableGrower<8ul>, Allocator<true, true>, HashMapTable> > > (this=0x7f34322d9210, data_variants=..., method=..., out=...)
at ../src/Interpreters/Aggregator.cpp:1706
#1 0x000000002aa0562e in DB::Aggregator::writeToTemporaryFile (this=0x7f34322d9210, data_variants=..., max_temp_file_size=3512367045) at ../src/Interpreters/Aggregator.cpp:1612
#2 0x000000002aa049cc in DB::Aggregator::executeOnBlock (this=0x7f34322d9210, columns=..., row_begin=0, row_end=900335, result=..., key_columns=..., aggregate_columns=...,
no_more_keys=@0x7f343236c930: false) at ../src/Interpreters/Aggregator.cpp:1585
#3 0x000000002d48c813 in DB::AggregatingTransform::consume (this=0x7f343236c818, chunk=...) at ../src/Processors/Transforms/AggregatingTransform.cpp:533
#4 0x000000002d48a501 in DB::AggregatingTransform::work (this=0x7f343236c818) at ../src/Processors/Transforms/AggregatingTransform.cpp:492
#5 0x000000002d077a03 in DB::executeJob (node=0x7f34323b0700, read_progress_callback=0x7f3432244060) at ../src/Processors/Executors/ExecutionThreadContext.cpp:47
#6 0x000000002d077719 in DB::ExecutionThreadContext::executeTask (this=0x7f3494903b60) at ../src/Processors/Executors/ExecutionThreadContext.cpp:92
#7 0x000000002d053861 in DB::PipelineExecutor::executeStepImpl (this=0x7f3432291018, thread_num=4, yield_flag=0x0) at ../src/Processors/Executors/PipelineExecutor.cpp:229
#8 0x000000002d053b97 in DB::PipelineExecutor::executeSingleThread (this=0x7f3432291018, thread_num=4) at ../src/Processors/Executors/PipelineExecutor.cpp:195
#9 0x000000002d055416 in DB::PipelineExecutor::spawnThreads()::$_0::operator()() const (this=0x7f343e15c088) at ../src/Processors/Executors/PipelineExecutor.cpp:320
#10 0x000000002d055375 in std::__1::__invoke[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#11 0x000000002d055321 in std::__1::__apply_tuple_impl[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&, std::__1::__tuple_indices<>) (__f=..., __t=...) at ../contrib/llvm-project/libcxx/include/tuple:1789
#12 0x000000002d055232 in std::__1::apply[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&) (
__f=..., __t=...) at ../contrib/llvm-project/libcxx/include/tuple:1798
#13 0x000000002d05511a in ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}::operator()() (this=0x7f34948e6730) at ../src/Common/ThreadPool.h:210
#14 0x000000002d055055 in std::__1::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#15 0x000000002d05501d in std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&>(ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&) (__args=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:479
#16 0x000000002d054ff5 in std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}, void ()>::operator()[abi:v15000]() (this=0x7f34948e6730) at ../contrib/llvm-project/libcxx/include/__functional/function.h:235
#17 0x000000002d054fc0 in std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}, void ()> >(std::__1::__function::__policy_storage const*) (__buf=0x7f343e15c348)
at ../contrib/llvm-project/libcxx/include/__functional/function.h:716
#18 0x000000001a3bf0a6 in std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const (this=0x7f343e15c348)
at ../contrib/llvm-project/libcxx/include/__functional/function.h:848
#19 0x000000001a3ba9d5 in std::__1::function<void ()>::operator()() const (this=0x7f343e15c348) at ../contrib/llvm-project/libcxx/include/__functional/function.h:1187
#20 0x000000001a4c7b6e in ThreadPoolImpl<std::__1::thread>::worker (this=0x7f356534b280, thread_it=...) at ../src/Common/ThreadPool.cpp:315
#21 0x000000001a4cf3e4 in ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}::operator()() const
(this=0x7f34a25c4008) at ../src/Common/ThreadPool.cpp:145
#22 0x000000001a4cf375 in std::__1::__invoke[abi:v15000]<ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#23 0x000000001a4cf2a5 in std::__1::__thread_execute[abi:v15000]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}>&, std::__1::__tuple_indices<>) (__t=...) at ../contrib/llvm-project/libcxx/include/thread:284
#24 0x000000001a4cec02 in std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}> >(void*) (__vp=0x7f34a25c4000)
at ../contrib/llvm-project/libcxx/include/thread:295
#25 0x00007f356620e802 in start_thread () from /lib64/libc.so.6
#26 0x00007f35661ae450 in clone3 () from /lib64/libc.so.6
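The template argument in frame #0 shows the table is a TwoLevelHashMapTable: 256 independent sub-tables ("buckets"). writeToTemporaryFileImpl walks the buckets one at a time, converting each to a block and handing it to the writer, so only one bucket's worth of serialization state is live at once. A self-contained toy of that loop, assuming plain std::map buckets in place of the real hash table:

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <vector>

using Bucket = std::map<std::string, long>;   // key -> aggregate state
using TwoLevelTable = std::vector<Bucket>;    // the real table has 256 buckets

// Stand-in for "convert bucket to a block, hand it to the writer";
// records the row count of each written bucket.
std::size_t writeBucket(const Bucket & bucket, std::vector<std::size_t> & written)
{
    written.push_back(bucket.size());
    return bucket.size();
}

// Spill bucket by bucket, clearing each one as soon as it is "on disk".
std::size_t spillTable(TwoLevelTable & table, std::vector<std::size_t> & written)
{
    std::size_t total_rows = 0;
    for (auto & bucket : table)
    {
        total_rows += writeBucket(bucket, written);
        bucket.clear();
    }
    return total_rows;
}
```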
NativeWriter::write
#0 DB::NativeWriter::write (this=0x7f59040960a0, block=...) at ../src/Formats/NativeWriter.cpp:69
#1 0x000000002ba862da in DB::TemporaryFileStream::OutputWriter::write (this=0x7f5904096000, block=...) at ../src/Interpreters/TemporaryDataOnDisk.cpp:135
#2 0x000000002ba84711 in DB::TemporaryFileStream::write (this=0x7f5972f69100, block=...) at ../src/Interpreters/TemporaryDataOnDisk.cpp:239
#3 0x000000002aa93513 in DB::Aggregator::writeToTemporaryFileImpl<DB::AggregationMethodSerialized<TwoLevelHashMapTable<StringRef, HashMapCellWithSavedHash<StringRef, char*, DefaultHash<StringRef>, HashTableNoState>, DefaultHash<StringRef>, TwoLevelHashTableGrower<8ul>, Allocator<true, true>, HashMapTable> > > (this=0x7f5905a6ae10, data_variants=..., method=..., out=...)
at ../src/Interpreters/Aggregator.cpp:1722
#4 0x000000002aa0562e in DB::Aggregator::writeToTemporaryFile (this=0x7f5905a6ae10, data_variants=..., max_temp_file_size=3515327323) at ../src/Interpreters/Aggregator.cpp:1612
#5 0x000000002aa049cc in DB::Aggregator::executeOnBlock (this=0x7f5905a6ae10, columns=..., row_begin=0, row_end=898603, result=..., key_columns=..., aggregate_columns=...,
no_more_keys=@0x7f5905aa5b30: false) at ../src/Interpreters/Aggregator.cpp:1585
#6 0x000000002d48c813 in DB::AggregatingTransform::consume (this=0x7f5905aa5a18, chunk=...) at ../src/Processors/Transforms/AggregatingTransform.cpp:533
#7 0x000000002d48a501 in DB::AggregatingTransform::work (this=0x7f5905aa5a18) at ../src/Processors/Transforms/AggregatingTransform.cpp:492
#8 0x000000002d077a03 in DB::executeJob (node=0x7f5905b14a00, read_progress_callback=0x7f5905af4900) at ../src/Processors/Executors/ExecutionThreadContext.cpp:47
#9 0x000000002d077719 in DB::ExecutionThreadContext::executeTask (this=0x7f596e8f5140) at ../src/Processors/Executors/ExecutionThreadContext.cpp:92
#10 0x000000002d053861 in DB::PipelineExecutor::executeStepImpl (this=0x7f5905ab5618, thread_num=12, yield_flag=0x0) at ../src/Processors/Executors/PipelineExecutor.cpp:229
#11 0x000000002d053b97 in DB::PipelineExecutor::executeSingleThread (this=0x7f5905ab5618, thread_num=12) at ../src/Processors/Executors/PipelineExecutor.cpp:195
#12 0x000000002d055416 in DB::PipelineExecutor::spawnThreads()::$_0::operator()() const (this=0x7f590e817088) at ../src/Processors/Executors/PipelineExecutor.cpp:320
#13 0x000000002d055375 in std::__1::__invoke[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#14 0x000000002d055321 in std::__1::__apply_tuple_impl[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&, std::__1::__tuple_indices<>) (__f=..., __t=...) at ../contrib/llvm-project/libcxx/include/tuple:1789
#15 0x000000002d055232 in std::__1::apply[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&) (
__f=..., __t=...) at ../contrib/llvm-project/libcxx/include/tuple:1798
#16 0x000000002d05511a in ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}::operator()() (this=0x7f596e8c9130) at ../src/Common/ThreadPool.h:210
#17 0x000000002d055055 in std::__1::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#18 0x000000002d05501d in std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&>(ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&) (__args=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:479
#19 0x000000002d054ff5 in std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}, void ()>::operator()[abi:v15000]() (this=0x7f596e8c9130) at ../contrib/llvm-project/libcxx/include/__functional/function.h:235
#20 0x000000002d054fc0 in std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}, void ()> >(std::__1::__function::__policy_storage const*) (__buf=0x7f590e817348)
at ../contrib/llvm-project/libcxx/include/__functional/function.h:716
#21 0x000000001a3bf0a6 in std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const (this=0x7f590e817348)
at ../contrib/llvm-project/libcxx/include/__functional/function.h:848
#22 0x000000001a3ba9d5 in std::__1::function<void ()>::operator()() const (this=0x7f590e817348) at ../contrib/llvm-project/libcxx/include/__functional/function.h:1187
#23 0x000000001a4c7b6e in ThreadPoolImpl<std::__1::thread>::worker (this=0x7f5a3e34b280, thread_it=...) at ../src/Common/ThreadPool.cpp:315
#24 0x000000001a4cf3e4 in ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}::operator()() const
(this=0x7f597027f328) at ../src/Common/ThreadPool.cpp:145
#25 0x000000001a4cf375 in std::__1::__invoke[abi:v15000]<ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#26 0x000000001a4cf2a5 in std::__1::__thread_execute[abi:v15000]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}>&, std::__1::__tuple_indices<>) (__t=...) at ../contrib/llvm-project/libcxx/include/thread:284
#27 0x000000001a4cec02 in std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}> >(void*) (__vp=0x7f597027f320)
at ../contrib/llvm-project/libcxx/include/thread:295
#28 0x00007f5a3f1ae802 in start_thread () from /lib64/libc.so.6
#29 0x00007f5a3f14e450 in clone3 () from /lib64/libc.so.6
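NativeWriter serializes each block column-wise: roughly a header (column count, row count), then for each column its name, type and data in full. A toy serializer in the same spirit (the layout here is illustrative, written as text for readability; the real Native format is binary and more involved):

```cpp
#include <cstddef>
#include <sstream>
#include <string>
#include <vector>

struct Column { std::string name, type; std::vector<std::string> data; };

// Columnar layout: header first, then each column written contiguously.
std::string writeBlock(const std::vector<Column> & block, std::size_t rows)
{
    std::ostringstream out;
    out << block.size() << ' ' << rows << '\n';
    for (const auto & col : block)
    {
        out << col.name << ' ' << col.type << '\n';
        for (const auto & value : col.data)
            out << value << '\n';
    }
    return out.str();
}
```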
MergingAggregatedBucketTransform::transform
#0 DB::MergingAggregatedBucketTransform::transform (this=0x7ff2bb134718, chunk=...) at ../src/Processors/Transforms/MergingAggregatedMemoryEfficientTransform.cpp:318
#1 0x0000000026b6e4a2 in DB::ISimpleTransform::transform (this=0x7ff2bb134718, input_chunk=..., output_chunk=...) at ../src/Processors/ISimpleTransform.h:32
#2 0x000000002d037c01 in DB::ISimpleTransform::work (this=0x7ff2bb134718) at ../src/Processors/ISimpleTransform.cpp:89
#3 0x000000002d07ca03 in DB::executeJob (node=0x7ff250b11800, read_progress_callback=0x7ff24d0388a0) at ../src/Processors/Executors/ExecutionThreadContext.cpp:47
#4 0x000000002d07c719 in DB::ExecutionThreadContext::executeTask (this=0x7ff2bb0dece0) at ../src/Processors/Executors/ExecutionThreadContext.cpp:92
#5 0x000000002d058861 in DB::PipelineExecutor::executeStepImpl (this=0x7ff24cf4fc18, thread_num=10, yield_flag=0x0) at ../src/Processors/Executors/PipelineExecutor.cpp:229
#6 0x000000002d058b97 in DB::PipelineExecutor::executeSingleThread (this=0x7ff24cf4fc18, thread_num=10) at ../src/Processors/Executors/PipelineExecutor.cpp:195
#7 0x000000002d05a416 in DB::PipelineExecutor::spawnThreads()::$_0::operator()() const (this=0x7ff260757088) at ../src/Processors/Executors/PipelineExecutor.cpp:320
#8 0x000000002d05a375 in std::__1::__invoke[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#9 0x000000002d05a321 in std::__1::__apply_tuple_impl[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&, std::__1::__tuple_indices<>) (__f=..., __t=...) at ../contrib/llvm-project/libcxx/include/tuple:1789
#10 0x000000002d05a232 in std::__1::apply[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&) (
__f=..., __t=...) at ../contrib/llvm-project/libcxx/include/tuple:1798
#11 0x000000002d05a11a in ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}::operator()() (this=0x7ff2bb0c3640) at ../src/Common/ThreadPool.h:210
#12 0x000000002d05a055 in std::__1::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#13 0x000000002d05a01d in std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&>(ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&) (__args=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:479
#14 0x000000002d059ff5 in std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}, void ()>::operator()[abi:v15000]() (this=0x7ff2bb0c3640) at ../contrib/llvm-project/libcxx/include/__functional/function.h:235
#15 0x000000002d059fc0 in std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}, void ()> >(std::__1::__function::__policy_storage const*) (__buf=0x7ff260757348)
at ../contrib/llvm-project/libcxx/include/__functional/function.h:716
#16 0x000000001a3c40a6 in std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const (this=0x7ff260757348)
at ../contrib/llvm-project/libcxx/include/__functional/function.h:848
#17 0x000000001a3bf9d5 in std::__1::function<void ()>::operator()() const (this=0x7ff260757348) at ../contrib/llvm-project/libcxx/include/__functional/function.h:1187
#18 0x000000001a4ccb6e in ThreadPoolImpl<std::__1::thread>::worker (this=0x7ff38534b280, thread_it=...) at ../src/Common/ThreadPool.cpp:315
#19 0x000000001a4d43e4 in ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}::operator()() const (this=0x7ff2b7f96ea8) at ../src/Common/ThreadPool.cpp:145
#20 0x000000001a4d4375 in std::__1::__invoke[abi:v15000]<ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#21 0x000000001a4d42a5 in std::__1::__thread_execute[abi:v15000]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}>&, std::__1::__tuple_indices<>) (__t=...) at ../contrib/llvm-project/libcxx/include/thread:284
#22 0x000000001a4d3c02 in std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}> >(void*) (__vp=0x7ff2b7f96ea0)
at ../contrib/llvm-project/libcxx/include/thread:295
#23 0x00007ff3862b1802 in start_thread () from /lib64/libc.so.6
#24 0x00007ff386251450 in clone3 () from /lib64/libc.so.6
GroupingAggregatedTransform::prepare
#0 DB::GroupingAggregatedTransform::prepare (this=0x7f4bb874cc18) at ../src/Processors/Transforms/MergingAggregatedMemoryEfficientTransform.cpp:125
#1 0x000000001a6e18fe in DB::IProcessor::prepare (this=0x7f4bb874cc18) at ../src/Processors/IProcessor.h:190
#2 0x000000002d068264 in DB::ExecutingGraph::updateNode (this=0x7f4bc9b2f580, pid=276, queue=..., async_queue=...) at ../src/Processors/Executors/ExecutingGraph.cpp:276
#3 0x000000002d05891f in DB::PipelineExecutor::executeStepImpl (this=0x7f4b5283fc18, thread_num=0, yield_flag=0x0) at ../src/Processors/Executors/PipelineExecutor.cpp:248
#4 0x000000002d058b97 in DB::PipelineExecutor::executeSingleThread (this=0x7f4b5283fc18, thread_num=0) at ../src/Processors/Executors/PipelineExecutor.cpp:195
#5 0x000000002d05a416 in DB::PipelineExecutor::spawnThreads()::$_0::operator()() const (this=0x7f4b5e1b5088) at ../src/Processors/Executors/PipelineExecutor.cpp:320
#6 0x000000002d05a375 in std::__1::__invoke[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#7 0x000000002d05a321 in std::__1::__apply_tuple_impl[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&, std::__1::__tuple_indices<>) (__f=..., __t=...) at ../contrib/llvm-project/libcxx/include/tuple:1789
#8 0x000000002d05a232 in std::__1::apply[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&) (
__f=..., __t=...) at ../contrib/llvm-project/libcxx/include/tuple:1798
#9 0x000000002d05a11a in ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}::operator()() (this=0x7f4bc26c6f40) at ../src/Common/ThreadPool.h:210
#10 0x000000002d05a055 in std::__1::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#11 0x000000002d05a01d in std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&>(ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&) (__args=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:479
#12 0x000000002d059ff5 in std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}, void ()>::operator()[abi:v15000]() (this=0x7f4bc26c6f40) at ../contrib/llvm-project/libcxx/include/__functional/function.h:235
#13 0x000000002d059fc0 in std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}, void ()> >(std::__1::__function::__policy_storage const*) (__buf=0x7f4b5e1b5348)
at ../contrib/llvm-project/libcxx/include/__functional/function.h:716
#14 0x000000001a3c40a6 in std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const (this=0x7f4b5e1b5348)
at ../contrib/llvm-project/libcxx/include/__functional/function.h:848
#15 0x000000001a3bf9d5 in std::__1::function<void ()>::operator()() const (this=0x7f4b5e1b5348) at ../contrib/llvm-project/libcxx/include/__functional/function.h:1187
#16 0x000000001a4ccb6e in ThreadPoolImpl<std::__1::thread>::worker (this=0x7f4c8c34b280, thread_it=...) at ../src/Common/ThreadPool.cpp:315
#17 0x000000001a4d43e4 in ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}::operator()() const (this=0x7f4bc666afc8) at ../src/Common/ThreadPool.cpp:145
#18 0x000000001a4d4375 in std::__1::__invoke[abi:v15000]<ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#19 0x000000001a4d42a5 in std::__1::__thread_execute[abi:v15000]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}>&, std::__1::__tuple_indices<>) (__t=...) at ../contrib/llvm-project/libcxx/include/thread:284
#20 0x000000001a4d3c02 in std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}> >(void*) (__vp=0x7f4bc666afc0)
at ../contrib/llvm-project/libcxx/include/thread:295
#21 0x00007f4c8d237802 in start_thread () from /lib64/libc.so.6
#22 0x00007f4c8d1d7450 in clone3 () from /lib64/libc.so.6
GroupingAggregatedTransform::addChunk
#0 DB::GroupingAggregatedTransform::addChunk (this=0x7f4bb874cc18, chunk=..., input=45) at ../src/Processors/Transforms/MergingAggregatedMemoryEfficientTransform.cpp:270
#1 0x000000002d4f0fb9 in DB::GroupingAggregatedTransform::readFromAllInputs (this=0x7f4bb874cc18) at ../src/Processors/Transforms/MergingAggregatedMemoryEfficientTransform.cpp:50
#2 0x000000002d4f24ba in DB::GroupingAggregatedTransform::prepare (this=0x7f4bb874cc18) at ../src/Processors/Transforms/MergingAggregatedMemoryEfficientTransform.cpp:140
#3 0x000000001a6e18fe in DB::IProcessor::prepare (this=0x7f4bb874cc18) at ../src/Processors/IProcessor.h:190
#4 0x000000002d068264 in DB::ExecutingGraph::updateNode (this=0x7f4bc9b2f580, pid=276, queue=..., async_queue=...) at ../src/Processors/Executors/ExecutingGraph.cpp:276
#5 0x000000002d05891f in DB::PipelineExecutor::executeStepImpl (this=0x7f4b5283fc18, thread_num=3, yield_flag=0x0) at ../src/Processors/Executors/PipelineExecutor.cpp:248
#6 0x000000002d058b97 in DB::PipelineExecutor::executeSingleThread (this=0x7f4b5283fc18, thread_num=3) at ../src/Processors/Executors/PipelineExecutor.cpp:195
#7 0x000000002d05a416 in DB::PipelineExecutor::spawnThreads()::$_0::operator()() const (this=0x7f4b5a9ae088) at ../src/Processors/Executors/PipelineExecutor.cpp:320
#8 0x000000002d05a375 in std::__1::__invoke[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#9 0x000000002d05a321 in std::__1::__apply_tuple_impl[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&, std::__1::__tuple_indices<>) (__f=..., __t=...) at ../contrib/llvm-project/libcxx/include/tuple:1789
#10 0x000000002d05a232 in std::__1::apply[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&) (
__f=..., __t=...) at ../contrib/llvm-project/libcxx/include/tuple:1798
#11 0x000000002d05a11a in ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}::operator()() (this=0x7f4bc26c7530) at ../src/Common/ThreadPool.h:210
#12 0x000000002d05a055 in std::__1::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#13 0x000000002d05a01d in std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&>(ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&) (__args=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:479
#14 0x000000002d059ff5 in std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}, void ()>::operator()[abi:v15000]() (this=0x7f4bc26c7530) at ../contrib/llvm-project/libcxx/include/__functional/function.h:235
#15 0x000000002d059fc0 in std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}, void ()> >(std::__1::__function::__policy_storage const*) (__buf=0x7f4b5a9ae348)
at ../contrib/llvm-project/libcxx/include/__functional/function.h:716
#16 0x000000001a3c40a6 in std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const (this=0x7f4b5a9ae348)
at ../contrib/llvm-project/libcxx/include/__functional/function.h:848
#17 0x000000001a3bf9d5 in std::__1::function<void ()>::operator()() const (this=0x7f4b5a9ae348) at ../contrib/llvm-project/libcxx/include/__functional/function.h:1187
#18 0x000000001a4ccb6e in ThreadPoolImpl<std::__1::thread>::worker (this=0x7f4c8c34b280, thread_it=...) at ../src/Common/ThreadPool.cpp:315
#19 0x000000001a4d43e4 in ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}::operator()() const (this=0x7f4bc9ac5fe8) at ../src/Common/ThreadPool.cpp:145
#20 0x000000001a4d4375 in std::__1::__invoke[abi:v15000]<ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#21 0x000000001a4d42a5 in std::__1::__thread_execute[abi:v15000]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}>&, std::__1::__tuple_indices<>) (__t=...) at ../contrib/llvm-project/libcxx/include/thread:284
#22 0x000000001a4d3c02 in std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}> >(void*) (__vp=0x7f4bc9ac5fe0)
at ../contrib/llvm-project/libcxx/include/thread:295
#23 0x00007f4c8d237802 in start_thread () from /lib64/libc.so.6
#24 0x00007f4c8d1d7450 in clone3 () from /lib64/libc.so.6
Aggregator::executeImplBatch
#0 DB::Aggregator::executeImplBatch<false, false, false, DB::AggregationMethodSerialized<TwoLevelHashMapTable<StringRef, HashMapCellWithSavedHash<StringRef, char*, DefaultHash<StringRef>, HashTableNoState>, DefaultHash<StringRef>, TwoLevelHashTableGrower<8ul>, Allocator<true, true>, HashMapTable> > > (this=0x7f6991a1c210, method=..., state=...,
aggregates_pool=0x7f69fcc98118, row_begin=0, row_end=716105, aggregate_instructions=0x7f6acbb07fa0, overflow_row=0x0) at ../src/Interpreters/Aggregator.cpp:1073
#1 0x000000002aa8f140 in DB::Aggregator::executeImpl<DB::AggregationMethodSerialized<TwoLevelHashMapTable<StringRef, HashMapCellWithSavedHash<StringRef, char*, DefaultHash<StringRef>, HashTableNoState>, DefaultHash<StringRef>, TwoLevelHashTableGrower<8ul>, Allocator<true, true>, HashMapTable> > > (this=0x7f6991a1c210, method=..., aggregates_pool=0x7f69fcc98118,
row_begin=0, row_end=716105, key_columns=..., aggregate_instructions=0x7f6acbb07fa0, no_more_keys=false, overflow_row=0x0) at ../src/Interpreters/Aggregator.cpp:1050
#2 0x000000002aa05710 in DB::Aggregator::executeImpl (this=0x7f6991a1c210, result=..., row_begin=0, row_end=716105, key_columns=..., aggregate_instructions=0x7f6acbb07fa0,
no_more_keys=false, overflow_row=0x0) at ../src/Interpreters/Aggregator.cpp:1006
#3 0x000000002aa09730 in DB::Aggregator::executeOnBlock (this=0x7f6991a1c210, columns=..., row_begin=0, row_end=716105, result=..., key_columns=..., aggregate_columns=...,
no_more_keys=@0x7f6991a2cb30: false) at ../src/Interpreters/Aggregator.cpp:1551
#4 0x000000002d491813 in DB::AggregatingTransform::consume (this=0x7f6991a2ca18, chunk=...) at ../src/Processors/Transforms/AggregatingTransform.cpp:533
#5 0x000000002d48f501 in DB::AggregatingTransform::work (this=0x7f6991a2ca18) at ../src/Processors/Transforms/AggregatingTransform.cpp:492
#6 0x000000002d07ca03 in DB::executeJob (node=0x7f6991a6f600, read_progress_callback=0x7f69f77800e0) at ../src/Processors/Executors/ExecutionThreadContext.cpp:47
#7 0x000000002d07c719 in DB::ExecutionThreadContext::executeTask (this=0x7f6a093f3a60) at ../src/Processors/Executors/ExecutionThreadContext.cpp:92
#8 0x000000002d058861 in DB::PipelineExecutor::executeStepImpl (this=0x7f69f774e818, thread_num=13, yield_flag=0x0) at ../src/Processors/Executors/PipelineExecutor.cpp:229
#9 0x000000002d058b97 in DB::PipelineExecutor::executeSingleThread (this=0x7f69f774e818, thread_num=13) at ../src/Processors/Executors/PipelineExecutor.cpp:195
#10 0x000000002d05a416 in DB::PipelineExecutor::spawnThreads()::$_0::operator()() const (this=0x7f69a55ef088) at ../src/Processors/Executors/PipelineExecutor.cpp:320
#11 0x000000002d05a375 in std::__1::__invoke[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#12 0x000000002d05a321 in std::__1::__apply_tuple_impl[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&, std::__1::__tuple_indices<>) (__f=..., __t=...) at ../contrib/llvm-project/libcxx/include/tuple:1789
#13 0x000000002d05a232 in std::__1::apply[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&) (
__f=..., __t=...) at ../contrib/llvm-project/libcxx/include/tuple:1798
#14 0x000000002d05a11a in ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}::operator()() (this=0x7f6995d91160) at ../src/Common/ThreadPool.h:210
#15 0x000000002d05a055 in std::__1::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#16 0x000000002d05a01d in std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&>(ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}&) (__args=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:479
#17 0x000000002d059ff5 in std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}, void ()>::operator()[abi:v15000]() (this=0x7f6995d91160) at ../contrib/llvm-project/libcxx/include/__functional/function.h:235
#18 0x000000002d059fc0 in std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::{lambda()#1}, void ()> >(std::__1::__function::__policy_storage const*) (__buf=0x7f69a55ef348)
at ../contrib/llvm-project/libcxx/include/__functional/function.h:716
#19 0x000000001a3c40a6 in std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const (this=0x7f69a55ef348)
at ../contrib/llvm-project/libcxx/include/__functional/function.h:848
#20 0x000000001a3bf9d5 in std::__1::function<void ()>::operator()() const (this=0x7f69a55ef348) at ../contrib/llvm-project/libcxx/include/__functional/function.h:1187
#21 0x000000001a4ccb6e in ThreadPoolImpl<std::__1::thread>::worker (this=0x7f6acbb4b280, thread_it=...) at ../src/Common/ThreadPool.cpp:315
#22 0x000000001a4d43e4 in ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}::operator()() const (this=0x7f69f59b15c8) at ../src/Common/ThreadPool.cpp:145
#23 0x000000001a4d4375 in std::__1::__invoke[abi:v15000]<ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
#24 0x000000001a4d42a5 in std::__1::__thread_execute[abi:v15000]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}>&, std::__1::__tuple_indices<>) (__t=...) at ../contrib/llvm-project/libcxx/include/thread:284
#25 0x000000001a4d3c02 in std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}> >(void*) (__vp=0x7f69f59b15c0)
at ../contrib/llvm-project/libcxx/include/thread:295
#26 0x00007f6accb2a802 in start_thread () from /lib64/libc.so.6
#27 0x00007f6accaca450 in clone3 () from /lib64/libc.so.6
Core functions:
AggregatingTransform::initGenerate
void AggregatingTransform::initGenerate()
{
    if (is_generate_initialized)
        return;

    is_generate_initialized = true;

    /// If there was no data, and we aggregate without keys, and we must return single row with the result of empty aggregation.
    /// To do this, we pass a block with zero rows to aggregate.
    if (variants.empty() && params->params.keys_size == 0 && !params->params.empty_result_for_aggregation_by_empty_set)
    {
        if (params->params.only_merge)
            params->aggregator.mergeOnBlock(getInputs().front().getHeader(), variants, no_more_keys);
        else
            params->aggregator.executeOnBlock(getInputs().front().getHeader(), variants, key_columns, aggregate_columns, no_more_keys);
    }

    double elapsed_seconds = watch.elapsedSeconds();
    size_t rows = variants.sizeWithoutOverflowRow();

    LOG_DEBUG(log, "Aggregated. {} to {} rows (from {}) in {} sec. ({:.3f} rows/sec., {}/sec.)",
        src_rows, rows, ReadableSize(src_bytes),
        elapsed_seconds, src_rows / elapsed_seconds,
        ReadableSize(src_bytes / elapsed_seconds));

    if (params->aggregator.hasTemporaryData())
    {
        if (variants.isConvertibleToTwoLevel())
            variants.convertToTwoLevel();

        /// Flush data in the RAM to disk also. It's easier than merging on-disk and RAM data.
        if (!variants.empty())
            params->aggregator.writeToTemporaryFile(variants);
    }

    if (many_data->num_finished.fetch_add(1) + 1 < many_data->variants.size())
        return;

    if (!params->aggregator.hasTemporaryData())
    {
        auto prepared_data = params->aggregator.prepareVariantsToMerge(many_data->variants);
        auto prepared_data_ptr = std::make_shared<ManyAggregatedDataVariants>(std::move(prepared_data));
        processors.emplace_back(std::make_shared<ConvertingAggregatedToChunksTransform>(params, std::move(prepared_data_ptr), max_threads));
    }
    else
    {
        /// If there are temporary files with partially-aggregated data on the disk,
        /// then read and merge them, spending the minimum amount of memory.

        ProfileEvents::increment(ProfileEvents::ExternalAggregationMerge);

        if (many_data->variants.size() > 1)
        {
            /// It may happen that some data has not yet been flushed,
            /// because at the time thread has finished, no data has been flushed to disk, and then some were.
            for (auto & cur_variants : many_data->variants)
            {
                if (cur_variants->isConvertibleToTwoLevel())
                    cur_variants->convertToTwoLevel();

                if (!cur_variants->empty())
                    params->aggregator.writeToTemporaryFile(*cur_variants);
            }
        }

        const auto & tmp_data = params->aggregator.getTemporaryData();

        Pipe pipe;
        {
            Pipes pipes;

            for (auto * tmp_stream : tmp_data.getStreams())
                pipes.emplace_back(Pipe(std::make_unique<SourceFromNativeStream>(tmp_stream)));

            pipe = Pipe::unitePipes(std::move(pipes));
        }

        size_t num_streams = tmp_data.getStreams().size();
        size_t compressed_size = tmp_data.getStat().compressed_size;
        size_t uncompressed_size = tmp_data.getStat().uncompressed_size;
        LOG_DEBUG(
            log,
            "Will merge {} temporary files of size {} compressed, {} uncompressed.",
            num_streams,
            ReadableSize(compressed_size),
            ReadableSize(uncompressed_size));

        addMergingAggregatedMemoryEfficientTransform(pipe, params, temporary_data_merge_threads);

        processors = Pipe::detachProcessors(std::move(pipe));
    }
}
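The `num_finished.fetch_add(1) + 1 < many_data->variants.size()` check is what guarantees that only the last AggregatingTransform to finish builds the merge pipeline, while all earlier ones simply return. A minimal sketch of this handoff pattern (names are illustrative, not ClickHouse's):

```cpp
#include <atomic>
#include <cstddef>

/// Sketch of the "last thread wins" handoff in initGenerate(): every worker
/// calls this exactly once when its own aggregation is done; only the final
/// caller gets `true` and proceeds to set up the merging stage.
inline bool isLastToFinish(std::atomic<size_t> & num_finished, size_t num_workers)
{
    /// fetch_add returns the previous value, so +1 is this thread's finish rank.
    return num_finished.fetch_add(1) + 1 == num_workers;
}
```

Because `fetch_add` is an atomic read-modify-write, every thread observes a distinct rank, so exactly one of them sees the final count even when all finish concurrently.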
addMergingAggregatedMemoryEfficientTransform
void addMergingAggregatedMemoryEfficientTransform(
    Pipe & pipe,
    AggregatingTransformParamsPtr params,
    size_t num_merging_processors)
{
    pipe.addTransform(std::make_shared<GroupingAggregatedTransform>(pipe.getHeader(), pipe.numOutputPorts(), params));

    if (num_merging_processors <= 1)
    {
        /// --> GroupingAggregated --> MergingAggregatedBucket -->
        pipe.addTransform(std::make_shared<MergingAggregatedBucketTransform>(params));
        return;
    }

    /// -->                                           --> MergingAggregatedBucket -->
    /// --> GroupingAggregated --> ResizeProcessor    --> MergingAggregatedBucket --> SortingAggregated -->
    /// -->                                           --> MergingAggregatedBucket -->

    pipe.resize(num_merging_processors);

    pipe.addSimpleTransform([params](const Block &)
    {
        return std::make_shared<MergingAggregatedBucketTransform>(params);
    });

    pipe.addTransform(std::make_shared<SortingAggregatedTransform>(num_merging_processors, params));
}
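GroupingAggregatedTransform relies on each spilled stream emitting its two-level buckets in ascending order (the spill loop in writeToTemporaryFileImpl writes buckets 0..NUM_BUCKETS-1 sequentially), so the merger only ever needs to hold one bucket's worth of chunks from every input at a time. A toy model of that invariant, with bucket ids as plain ints instead of chunks:

```cpp
#include <algorithm>
#include <climits>
#include <cstddef>
#include <vector>

/// Toy model of memory-efficient merging: each stream yields bucket ids in
/// ascending order, so we can walk the streams in lockstep, consuming the
/// smallest outstanding bucket from every stream that has it. At any moment
/// only one bucket id is "live", which bounds memory during the merge.
std::vector<int> mergeBucketOrder(const std::vector<std::vector<int>> & streams)
{
    std::vector<size_t> pos(streams.size(), 0);
    std::vector<int> order;
    while (true)
    {
        int next = INT_MAX;
        for (size_t i = 0; i < streams.size(); ++i)
            if (pos[i] < streams[i].size())
                next = std::min(next, streams[i][pos[i]]);
        if (next == INT_MAX)
            break; /// all streams exhausted
        for (size_t i = 0; i < streams.size(); ++i)
            if (pos[i] < streams[i].size() && streams[i][pos[i]] == next)
                ++pos[i]; /// consume this bucket from every stream that has it
        order.push_back(next); /// all chunks of bucket `next` get merged here
    }
    return order;
}
```

This is only a model of the ordering invariant; the real transform additionally handles single-level chunks and overflow blocks.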
Aggregator::writeToTemporaryFile
void Aggregator::writeToTemporaryFile(AggregatedDataVariants & data_variants, size_t max_temp_file_size) const
{
    if (!tmp_data)
        throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot write to temporary file because temporary file is not initialized");

    Stopwatch watch;
    size_t rows = data_variants.size();

    auto & out_stream = tmp_data->createStream(getHeader(false), max_temp_file_size);
    ProfileEvents::increment(ProfileEvents::ExternalAggregationWritePart);

    LOG_DEBUG(log, "Writing part of aggregation data into temporary file {}", out_stream.getPath());

    /// Flush only two-level data and possibly overflow data.

#define M(NAME) \
    else if (data_variants.type == AggregatedDataVariants::Type::NAME) \
        writeToTemporaryFileImpl(data_variants, *data_variants.NAME, out_stream);

    if (false) {} // NOLINT
    APPLY_FOR_VARIANTS_TWO_LEVEL(M)
#undef M
    else
        throw Exception(ErrorCodes::UNKNOWN_AGGREGATED_DATA_VARIANT, "Unknown aggregated data variant");

    /// NOTE Instead of freeing up memory and creating new hash tables and arenas, you can re-use the old ones.
    data_variants.init(data_variants.type);
    data_variants.aggregates_pools = Arenas(1, std::make_shared<Arena>());
    data_variants.aggregates_pool = data_variants.aggregates_pools.back().get();

    if (params.overflow_row || data_variants.type == AggregatedDataVariants::Type::without_key)
    {
        AggregateDataPtr place = data_variants.aggregates_pool->alignedAlloc(total_size_of_aggregate_states, align_aggregate_states);
        createAggregateStates(place);
        data_variants.without_key = place;
    }

    auto stat = out_stream.finishWriting();

    ProfileEvents::increment(ProfileEvents::ExternalAggregationCompressedBytes, stat.compressed_size);
    ProfileEvents::increment(ProfileEvents::ExternalAggregationUncompressedBytes, stat.uncompressed_size);
    ProfileEvents::increment(ProfileEvents::ExternalProcessingCompressedBytesTotal, stat.compressed_size);
    ProfileEvents::increment(ProfileEvents::ExternalProcessingUncompressedBytesTotal, stat.uncompressed_size);

    double elapsed_seconds = watch.elapsedSeconds();
    double compressed_size = stat.compressed_size;
    double uncompressed_size = stat.uncompressed_size;
    LOG_DEBUG(log,
        "Written part in {:.3f} sec., {} rows, {} uncompressed, {} compressed,"
        " {:.3f} uncompressed bytes per row, {:.3f} compressed bytes per row, compression rate: {:.3f}"
        " ({:.3f} rows/sec., {}/sec. uncompressed, {}/sec. compressed)",
        elapsed_seconds,
        rows,
        ReadableSize(uncompressed_size),
        ReadableSize(compressed_size),
        static_cast<double>(uncompressed_size) / rows,
        static_cast<double>(compressed_size) / rows,
        static_cast<double>(uncompressed_size) / compressed_size,
        static_cast<double>(rows) / elapsed_seconds,
        ReadableSize(static_cast<double>(uncompressed_size) / elapsed_seconds),
        ReadableSize(static_cast<double>(compressed_size) / elapsed_seconds));
}
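writeToTemporaryFile is reached from Aggregator::executeOnBlock once the in-memory aggregation state outgrows max_bytes_before_external_group_by (1GB in this test, well under the 8GB max_memory_usage). A simplified sketch of that decision; the real condition in Aggregator also requires the data to be convertible to two-level and consults min_free_disk_space, so this is illustrative only:

```cpp
#include <cstddef>

/// Simplified sketch of the spill decision: external aggregation is disabled
/// when the threshold is 0; otherwise spill once current usage exceeds it.
/// (The real check in Aggregator::executeOnBlock is more involved.)
inline bool shouldSpill(size_t current_memory_bytes, size_t max_bytes_before_external_group_by)
{
    return max_bytes_before_external_group_by != 0
        && current_memory_bytes > max_bytes_before_external_group_by;
}
```

With the settings from the header of this article, a thread whose hash tables pass roughly 1GB spills and resets, which is why one query can produce many temporary files per thread.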
Aggregator::writeToTemporaryFileImpl
template <typename Method>
void Aggregator::writeToTemporaryFileImpl(
    AggregatedDataVariants & data_variants,
    Method & method,
    TemporaryFileStream & out) const
{
    size_t max_temporary_block_size_rows = 0;
    size_t max_temporary_block_size_bytes = 0;

    auto update_max_sizes = [&](const Block & block)
    {
        size_t block_size_rows = block.rows();
        size_t block_size_bytes = block.bytes();

        if (block_size_rows > max_temporary_block_size_rows)
            max_temporary_block_size_rows = block_size_rows;
        if (block_size_bytes > max_temporary_block_size_bytes)
            max_temporary_block_size_bytes = block_size_bytes;
    };

    for (UInt32 bucket = 0; bucket < Method::Data::NUM_BUCKETS; ++bucket)
    {
        Block block = convertOneBucketToBlock(data_variants, method, data_variants.aggregates_pool, false, bucket);
        out.write(block);
        update_max_sizes(block);
    }

    if (params.overflow_row)
    {
        Block block = prepareBlockAndFillWithoutKey(data_variants, false, true);
        out.write(block);
        update_max_sizes(block);
    }

    /// Pass ownership of the aggregate functions states:
    /// `data_variants` will not destroy them in the destructor, they are now owned by ColumnAggregateFunction objects.
    data_variants.aggregator = nullptr;

    LOG_DEBUG(log, "Max size of temporary block: {} rows, {}.", max_temporary_block_size_rows, ReadableSize(max_temporary_block_size_bytes));
}
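The per-bucket loop above works because the two-level hash table pre-partitions keys into NUM_BUCKETS sub-tables by their hash, so each sub-table can be converted to a block and flushed independently. A hypothetical illustration of that routing; ClickHouse derives the bucket from hash bits, but the exact bit selection here is illustrative, not the real implementation:

```cpp
#include <cstddef>
#include <cstdint>

/// Illustrative two-level routing: 256 buckets selected from the top 8 bits
/// of the key's hash. Keys in different buckets never interact, which is
/// what lets writeToTemporaryFileImpl spill one bucket at a time and lets
/// the later merge process matching buckets from all files together.
constexpr size_t NUM_BUCKETS = 256;

inline size_t getBucketFromHash(uint64_t hash)
{
    return hash >> (64 - 8); /// top 8 bits -> bucket in [0, 256)
}
```

Using high bits (rather than low bits, which the hash table's own modulo typically consumes) keeps the bucket choice independent of the in-bucket slot choice.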
executeJob
static void executeJob(ExecutingGraph::Node * node, ReadProgressCallback * read_progress_callback)
{
    try
    {
        node->processor->work();

        /// Update read progress only for source nodes.
        bool is_source = node->back_edges.empty();

        if (is_source && read_progress_callback)
        {
            if (auto read_progress = node->processor->getReadProgress())
            {
                if (read_progress->counters.total_rows_approx)
                    read_progress_callback->addTotalRowsApprox(read_progress->counters.total_rows_approx);

                if (!read_progress_callback->onProgress(read_progress->counters.read_rows, read_progress->counters.read_bytes, read_progress->limits))
                    node->processor->cancel();
            }
        }
    }
    catch (Exception & exception)
    {
        if (checkCanAddAdditionalInfoToException(exception))
            exception.addMessage("While executing " + node->processor->getName());
        throw;
    }
}
AggregatingStep::transformPipeline
void AggregatingStep::transformPipeline(QueryPipelineBuilder & pipeline, const BuildQueryPipelineSettings & settings)
{
    QueryPipelineProcessorsCollector collector(pipeline, this);

    /// Forget about current totals and extremes. They will be calculated again after aggregation if needed.
    pipeline.dropTotalsAndExtremes();

    bool allow_to_use_two_level_group_by = pipeline.getNumStreams() > 1 || params.max_bytes_before_external_group_by != 0;

    /// optimize_aggregation_in_order
    if (!sort_description_for_merging.empty())
    {
        /// two-level aggregation is not supported anyway for in order aggregation.
        allow_to_use_two_level_group_by = false;

        /// It is incorrect for in order aggregation.
        params.stats_collecting_params.disable();
    }

    if (!allow_to_use_two_level_group_by)
    {
        params.group_by_two_level_threshold = 0;
        params.group_by_two_level_threshold_bytes = 0;
    }

    /** Two-level aggregation is useful in two cases:
      * 1. Parallel aggregation is done, and the results should be merged in parallel.
      * 2. An aggregation is done with store of temporary data on the disk, and they need to be merged in a memory efficient way.
      */
    const auto src_header = pipeline.getHeader();
    auto transform_params = std::make_shared<AggregatingTransformParams>(src_header, std::move(params), final);

    if (!grouping_sets_params.empty())
    {
        const size_t grouping_sets_size = grouping_sets_params.size();
        const size_t streams = pipeline.getNumStreams();

        auto input_header = pipeline.getHeader();
        pipeline.transform([&](OutputPortRawPtrs ports)
        {
            Processors copiers;
            copiers.reserve(ports.size());

            for (auto * port : ports)
            {
                auto copier = std::make_shared<CopyTransform>(input_header, grouping_sets_size);
                connect(*port, copier->getInputPort());
                copiers.push_back(copier);
            }

            return copiers;
        });

        pipeline.transform([&](OutputPortRawPtrs ports)
        {
            assert(streams * grouping_sets_size == ports.size());

            Processors processors;
            for (size_t i = 0; i < grouping_sets_size; ++i)
            {
                Aggregator::Params params_for_set
                {
                    grouping_sets_params[i].used_keys,
                    transform_params->params.aggregates,
                    transform_params->params.overflow_row,
                    transform_params->params.max_rows_to_group_by,
                    transform_params->params.group_by_overflow_mode,
                    transform_params->params.group_by_two_level_threshold,
                    transform_params->params.group_by_two_level_threshold_bytes,
                    transform_params->params.max_bytes_before_external_group_by,
                    transform_params->params.empty_result_for_aggregation_by_empty_set,
                    transform_params->params.tmp_data_scope,
                    transform_params->params.max_threads,
                    transform_params->params.min_free_disk_space,
                    transform_params->params.compile_aggregate_expressions,
                    transform_params->params.min_count_to_compile_aggregate_expression,
                    transform_params->params.max_block_size,
                    transform_params->params.enable_prefetch,
                    /* only_merge */ false,
                    transform_params->params.stats_collecting_params};
                auto transform_params_for_set = std::make_shared<AggregatingTransformParams>(src_header, std::move(params_for_set), final);

                if (streams > 1)
                {
                    auto many_data = std::make_shared<ManyAggregatedData>(streams);
                    for (size_t j = 0; j < streams; ++j)
                    {
                        auto aggregation_for_set = std::make_shared<AggregatingTransform>(input_header, transform_params_for_set, many_data, j, merge_threads, temporary_data_merge_threads);
                        // For each input stream we have `grouping_sets_size` copies, so port index
                        // for transform #j should skip ports of first (j-1) streams.
                        connect(*ports[i + grouping_sets_size * j], aggregation_for_set->getInputs().front());
                        ports[i + grouping_sets_size * j] = &aggregation_for_set->getOutputs().front();

                        processors.push_back(aggregation_for_set);
                    }
                }
                else
                {
                    auto aggregation_for_set = std::make_shared<AggregatingTransform>(input_header, transform_params_for_set);
                    connect(*ports[i], aggregation_for_set->getInputs().front());
                    ports[i] = &aggregation_for_set->getOutputs().front();

                    processors.push_back(aggregation_for_set);
                }
            }

            if (streams > 1)
            {
                OutputPortRawPtrs new_ports;
                new_ports.reserve(grouping_sets_size);

                for (size_t i = 0; i < grouping_sets_size; ++i)
                {
                    size_t output_it = i;
                    auto resize = std::make_shared<ResizeProcessor>(ports[output_it]->getHeader(), streams, 1);
                    auto & inputs = resize->getInputs();

                    for (auto input_it = inputs.begin(); input_it != inputs.end(); output_it += grouping_sets_size, ++input_it)
                        connect(*ports[output_it], *input_it);
                    new_ports.push_back(&resize->getOutputs().front());
                    processors.push_back(resize);
                }

                ports.swap(new_ports);
            }

            assert(ports.size() == grouping_sets_size);

            auto output_header = transform_params->getHeader();
            if (group_by_use_nulls)
                convertToNullable(output_header, params.keys);

            for (size_t set_counter = 0; set_counter < grouping_sets_size; ++set_counter)
            {
                const auto & header = ports[set_counter]->getHeader();

                /// Here we create a DAG which fills missing keys and adds `__grouping_set` column
                auto dag = std::make_shared<ActionsDAG>(header.getColumnsWithTypeAndName());
                ActionsDAG::NodeRawConstPtrs outputs;
                outputs.reserve(output_header.columns() + 1);

                auto grouping_col = ColumnConst::create(ColumnUInt64::create(1, set_counter), 0);
                const auto * grouping_node = &dag->addColumn(
                    {ColumnPtr(std::move(grouping_col)), std::make_shared<DataTypeUInt64>(), "__grouping_set"});

                grouping_node = &dag->materializeNode(*grouping_node);
                outputs.push_back(grouping_node);

                const auto & missing_columns = grouping_sets_params[set_counter].missing_keys;
                const auto & used_keys = grouping_sets_params[set_counter].used_keys;

                auto to_nullable_function = FunctionFactory::instance().get("toNullable", nullptr);
                for (size_t i = 0; i < output_header.columns(); ++i)
                {
                    auto & col = output_header.getByPosition(i);
                    const auto missing_it = std::find_if(
                        missing_columns.begin(), missing_columns.end(), [&](const auto & missing_col) { return missing_col == col.name; });
                    const auto used_it = std::find_if(
                        used_keys.begin(), used_keys.end(), [&](const auto & used_col) { return used_col == col.name; });
                    if (missing_it != missing_columns.end())
                    {
                        auto column_with_default = col.column->cloneEmpty();
                        col.type->insertDefaultInto(*column_with_default);
                        auto column = ColumnConst::create(std::move(column_with_default), 0);
                        const auto * node = &dag->addColumn({ColumnPtr(std::move(column)), col.type, col.name});
                        node = &dag->materializeNode(*node);
                        outputs.push_back(node);
                    }
                    else
                    {
                        const auto * column_node = dag->getOutputs()[header.getPositionByName(col.name)];
                        if (used_it != used_keys.end() && group_by_use_nulls && column_node->result_type->canBeInsideNullable())
                            outputs.push_back(&dag->addFunction(to_nullable_function, { column_node }, col.name));
                        else
                            outputs.push_back(column_node);
                    }
                }

                dag->getOutputs().swap(outputs);
                auto expression = std::make_shared<ExpressionActions>(dag, settings.getActionsSettings());
                auto transform = std::make_shared<ExpressionTransform>(header, expression);

                connect(*ports[set_counter], transform->getInputPort());
                processors.emplace_back(std::move(transform));
            }

            return processors;
        });

        aggregating = collector.detachProcessors(0);
        return;
    }

    if (!sort_description_for_merging.empty())
    {
        if (pipeline.getNumStreams() > 1)
        {
            /** The pipeline is the following:
              *
              * --> AggregatingInOrder                                            --> MergingAggregatedBucket
              * --> AggregatingInOrder --> FinishAggregatingInOrder --> ResizeProcessor --> MergingAggregatedBucket
              * --> AggregatingInOrder                                            --> MergingAggregatedBucket
              */
            auto many_data = std::make_shared<ManyAggregatedData>(pipeline.getNumStreams());
            size_t counter = 0;
            pipeline.addSimpleTransform([&](const Block & header)
            {
                /// We want to merge aggregated data in batches of size
                /// not greater than 'aggregation_in_order_max_block_bytes'.
                /// So, we reduce 'max_bytes' value for aggregation in 'merge_threads' times.
                return std::make_shared<AggregatingInOrderTransform>(
                    header, transform_params,
                    sort_description_for_merging, group_by_sort_description,
                    max_block_size, aggregation_in_order_max_block_bytes / merge_threads,
                    many_data, counter++);
            });

            aggregating_in_order = collector.detachProcessors(0);

            auto transform = std::make_shared<FinishAggregatingInOrderTransform>(
                pipeline.getHeader(),
                pipeline.getNumStreams(),
                transform_params,
                group_by_sort_description,
                max_block_size,
                aggregation_in_order_max_block_bytes);

            pipeline.addTransform(std::move(transform));

            /// Do merge of aggregated data in parallel.
            pipeline.resize(merge_threads);

            const auto & required_sort_description = memoryBoundMergingWillBeUsed() ? group_by_sort_description : SortDescription{};
            pipeline.addSimpleTransform(
                [&](const Block &)
                { return std::make_shared<MergingAggregatedBucketTransform>(transform_params, required_sort_description); });

            if (memoryBoundMergingWillBeUsed())
            {
                pipeline.addTransform(
                    std::make_shared<SortingAggregatedForMemoryBoundMergingTransform>(pipeline.getHeader(), pipeline.getNumStreams()));
            }

            aggregating_sorted = collector.detachProcessors(1);
        }
        else
        {
            pipeline.addSimpleTransform([&](const Block & header)
            {
                return std::make_shared<AggregatingInOrderTransform>(
                    header, transform_params,
                    sort_description_for_merging, group_by_sort_description,
                    max_block_size, aggregation_in_order_max_block_bytes);
            });

            pipeline.addSimpleTransform([&](const Block & header)
            {
                return std::make_shared<FinalizeAggregatedTransform>(header, transform_params);
            });

            aggregating_in_order = collector.detachProcessors(0);
        }

        finalizing = collector.detachProcessors(2);
        return;
    }

    /// If there are several sources, then we perform parallel aggregation
    if (pipeline.getNumStreams() > 1)
    {
        /// Add resize transform to uniformly distribute data between aggregating streams.
        if (!storage_has_evenly_distributed_read)
            pipeline.resize(pipeline.getNumStreams(), true, true);

        auto many_data = std::make_shared<ManyAggregatedData>(pipeline.getNumStreams());

        size_t counter = 0;
        pipeline.addSimpleTransform([&](const Block & header)
        {
            return std::make_shared<AggregatingTransform>(header, transform_params, many_data, counter++, merge_threads, temporary_data_merge_threads);
        });

        pipeline.resize(should_produce_results_in_order_of_bucket_number ? 1 : params.max_threads, true /* force */);

        aggregating = collector.detachProcessors(0);
    }
    else
    {
        pipeline.addSimpleTransform([&](const Block & header) { return std::make_shared<AggregatingTransform>(header, transform_params); });

        pipeline.resize(should_produce_results_in_order_of_bucket_number ? 1 : params.max_threads, false /* force */);

        aggregating = collector.detachProcessors(0);
    }
}
Aggregator::executeImplBatch
template <bool no_more_keys, bool use_compiled_functions, bool prefetch, typename Method>
void NO_INLINE Aggregator::executeImplBatch(
    Method & method,
    typename Method::State & state,
    Arena * aggregates_pool,
    size_t row_begin,
    size_t row_end,
    AggregateFunctionInstruction * aggregate_instructions,
    AggregateDataPtr overflow_row) const
{
    using KeyHolder = decltype(state.getKeyHolder(0, std::declval<Arena &>()));

    /// During processing of row #i we will prefetch HashTable cell for row #(i + prefetch_look_ahead).
    PrefetchingHelper prefetching;
    size_t prefetch_look_ahead = prefetching.getInitialLookAheadValue();

    /// Optimization for special case when there are no aggregate functions.
    if (params.aggregates_size == 0)
    {
        if constexpr (no_more_keys)
            return;

        /// For all rows.
        AggregateDataPtr place = aggregates_pool->alloc(0);
        for (size_t i = row_begin; i < row_end; ++i)
        {
            if constexpr (prefetch && HasPrefetchMemberFunc<decltype(method.data), KeyHolder>)
            {
                if (i == row_begin + prefetching.iterationsToMeasure())
                    prefetch_look_ahead = prefetching.calcPrefetchLookAhead();

                if (i + prefetch_look_ahead < row_end)
                {
                    auto && key_holder = state.getKeyHolder(i + prefetch_look_ahead, *aggregates_pool);
                    method.data.prefetch(std::move(key_holder));
                }
            }

            state.emplaceKey(method.data, i, *aggregates_pool).setMapped(place);
        }
        return;
    }

    /// Optimization for special case when aggregating by 8bit key.
    if constexpr (!no_more_keys && std::is_same_v<Method, typename decltype(AggregatedDataVariants::key8)::element_type>)
    {
        /// We use another method if there are aggregate functions with -Array combinator.
        bool has_arrays = false;
        for (AggregateFunctionInstruction * inst = aggregate_instructions; inst->that; ++inst)
        {
            if (inst->offsets)
            {
                has_arrays = true;
                break;
            }
        }

        if (!has_arrays && !hasSparseArguments(aggregate_instructions))
        {
            for (AggregateFunctionInstruction * inst = aggregate_instructions; inst->that; ++inst)
            {
                inst->batch_that->addBatchLookupTable8(
                    row_begin,
                    row_end,
                    reinterpret_cast<AggregateDataPtr *>(method.data.data()),
                    inst->state_offset,
                    [&](AggregateDataPtr & aggregate_data)
                    {
                        aggregate_data = aggregates_pool->alignedAlloc(total_size_of_aggregate_states, align_aggregate_states);
                        createAggregateStates(aggregate_data);
                    },
                    state.getKeyData(),
                    inst->batch_arguments,
                    aggregates_pool);
            }
            return;
        }
    }

    /// NOTE: only row_end - row_begin elements are required, but:
    /// - this affects only optimize_aggregation_in_order,
    /// - this is just a pointer, so it should not be significant,
    /// - and plus this will require other changes in the interface.
    std::unique_ptr<AggregateDataPtr[]> places(new AggregateDataPtr[row_end]);

    /// For all rows.
    for (size_t i = row_begin; i < row_end; ++i)
    {
        AggregateDataPtr aggregate_data = nullptr;

        if constexpr (!no_more_keys)
        {
            if constexpr (prefetch && HasPrefetchMemberFunc<decltype(method.data), KeyHolder>)
            {
                if (i == row_begin + prefetching.iterationsToMeasure())
                    prefetch_look_ahead = prefetching.calcPrefetchLookAhead();

                if (i + prefetch_look_ahead < row_end)
                {
                    auto && key_holder = state.getKeyHolder(i + prefetch_look_ahead, *aggregates_pool);
                    method.data.prefetch(std::move(key_holder));
                }
            }

            auto emplace_result = state.emplaceKey(method.data, i, *aggregates_pool);

            /// If a new key is inserted, initialize the states of the aggregate functions, and possibly something related to the key.
            if (emplace_result.isInserted())
            {
                /// exception-safety - if you can not allocate memory or create states, then destructors will not be called.
                emplace_result.setMapped(nullptr);

                aggregate_data = aggregates_pool->alignedAlloc(total_size_of_aggregate_states, align_aggregate_states);

#if USE_EMBEDDED_COMPILER
                if constexpr (use_compiled_functions)
                {
                    const auto & compiled_aggregate_functions = compiled_aggregate_functions_holder->compiled_aggregate_functions;
                    compiled_aggregate_functions.create_aggregate_states_function(aggregate_data);
                    if (compiled_aggregate_functions.functions_count != aggregate_functions.size())
                    {
                        static constexpr bool skip_compiled_aggregate_functions = true;
                        createAggregateStates<skip_compiled_aggregate_functions>(aggregate_data);
                    }

#if defined(MEMORY_SANITIZER)
                    /// We compile only functions that do not allocate some data in Arena. Only store necessary state in AggregateData place.
                    for (size_t aggregate_function_index = 0; aggregate_function_index < aggregate_functions.size(); ++aggregate_function_index)
                    {
                        if (!is_aggregate_function_compiled[aggregate_function_index])
                            continue;

                        auto aggregate_data_with_offset = aggregate_data + offsets_of_aggregate_states[aggregate_function_index];
                        auto data_size = params.aggregates[aggregate_function_index].function->sizeOfData();
                        __msan_unpoison(aggregate_data_with_offset, data_size);
                    }
#endif
                }
                else
#endif
                {
                    createAggregateStates(aggregate_data);
                }

                emplace_result.setMapped(aggregate_data);
            }
            else
                aggregate_data = emplace_result.getMapped();

            assert(aggregate_data != nullptr);
        }
        else
        {
            /// Add only if the key already exists.
            auto find_result = state.findKey(method.data, i, *aggregates_pool);
            if (find_result.isFound())
                aggregate_data = find_result.getMapped();
            else
                aggregate_data = overflow_row;
        }

        places[i] = aggregate_data;
    }

#if USE_EMBEDDED_COMPILER
    if constexpr (use_compiled_functions)
    {
        std::vector<ColumnData> columns_data;

        for (size_t i = 0; i < aggregate_functions.size(); ++i)
        {
            if (!is_aggregate_function_compiled[i])
                continue;

            AggregateFunctionInstruction * inst = aggregate_instructions + i;
            size_t arguments_size = inst->that->getArgumentTypes().size(); // NOLINT
            for (size_t argument_index = 0; argument_index < arguments_size; ++argument_index)
                columns_data.emplace_back(getColumnData(inst->batch_arguments[argument_index]));
        }

        auto add_into_aggregate_states_function = compiled_aggregate_functions_holder->compiled_aggregate_functions.add_into_aggregate_states_function;
        add_into_aggregate_states_function(row_begin, row_end, columns_data.data(), places.get());
    }
#endif

    /// Add values to the aggregate functions.
    for (size_t i = 0; i < aggregate_functions.size(); ++i)
    {
#if USE_EMBEDDED_COMPILER
        if constexpr (use_compiled_functions)
            if (is_aggregate_function_compiled[i])
                continue;
#endif

        AggregateFunctionInstruction * inst = aggregate_instructions + i;

        if (inst->offsets)
            inst->batch_that->addBatchArray(row_begin, row_end, places.get(), inst->state_offset, inst->batch_arguments, inst->offsets, aggregates_pool);
        else if (inst->has_sparse_arguments)
            inst->batch_that->addBatchSparse(row_begin, row_end, places.get(), inst->state_offset, inst->batch_arguments, aggregates_pool);
        else
            inst->batch_that->addBatch(row_begin, row_end, places.get(), inst->state_offset, inst->batch_arguments, aggregates_pool);
    }
}
Log output when aggregation spills to disk:
2023.02.10 20:37:31.132939 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> AggregatingTransform: Aggregating
2023.02.10 20:37:31.133521 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Aggregation method: serialized
2023.02.10 20:37:31.145087 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> AggregatingTransform: Aggregating
2023.02.10 20:37:31.146234 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Aggregation method: serialized
2023.02.10 20:37:31.212701 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> AggregatingTransform: Aggregating
2023.02.10 20:37:31.212986 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Aggregation method: serialized
2023.02.10 20:37:31.235648 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> AggregatingTransform: Aggregating
2023.02.10 20:37:31.236181 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Aggregation method: serialized
2023.02.10 20:37:31.255100 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> AggregatingTransform: Aggregating
2023.02.10 20:37:31.255397 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Aggregation method: serialized
2023.02.10 20:37:31.257787 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> AggregatingTransform: Aggregating
2023.02.10 20:37:31.257909 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Aggregation method: serialized
2023.02.10 20:37:31.262983 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> AggregatingTransform: Aggregating
2023.02.10 20:37:31.263184 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Aggregation method: serialized
2023.02.10 20:37:31.427304 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> MemoryTracker: Current memory usage (for query): 3.00 GiB.
2023.02.10 20:37:32.005273 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.48 GiB, peak 4.48 GiB, free memory in arenas 78.00 MiB, will set to 4.35 GiB (RSS), difference: -136.76 MiB
2023.02.10 20:37:32.775464 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Converting aggregation data to two-level.
2023.02.10 20:37:32.779167 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Converting aggregation data to two-level.
2023.02.10 20:37:32.810370 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Converting aggregation data to two-level.
2023.02.10 20:37:32.829024 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Converting aggregation data to two-level.
2023.02.10 20:37:32.840999 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Converting aggregation data to two-level.
2023.02.10 20:37:32.872675 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Converting aggregation data to two-level.
2023.02.10 20:37:32.904684 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> Aggregator: Converting aggregation data to two-level.
2023.02.10 20:37:33.002875 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.32 GiB, peak 4.48 GiB, free memory in arenas 132.90 MiB, will set to 4.56 GiB (RSS), difference: 245.42 MiB
2023.02.10 20:37:34.001386 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.25 GiB on local disk `_tmp_default`, having unreserved 39.26 GiB.
2023.02.10 20:37:34.002735 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613aaaaaa
2023.02.10 20:37:34.003989 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.70 GiB, peak 4.73 GiB, free memory in arenas 140.14 MiB, will set to 4.71 GiB (RSS), difference: 5.61 MiB
2023.02.10 20:37:34.060384 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.26 GiB on local disk `_tmp_default`, having unreserved 39.26 GiB.
2023.02.10 20:37:34.062218 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613baaaaa
2023.02.10 20:37:34.065246 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.26 GiB on local disk `_tmp_default`, having unreserved 39.26 GiB.
2023.02.10 20:37:34.066696 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613caaaaa
2023.02.10 20:37:34.105046 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.27 GiB on local disk `_tmp_default`, having unreserved 39.26 GiB.
2023.02.10 20:37:34.107155 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613daaaaa
2023.02.10 20:37:34.137376 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.24 GiB on local disk `_tmp_default`, having unreserved 39.26 GiB.
2023.02.10 20:37:34.138497 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613eaaaaa
2023.02.10 20:37:34.150694 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.23 GiB on local disk `_tmp_default`, having unreserved 39.26 GiB.
2023.02.10 20:37:34.152425 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613faaaaa
2023.02.10 20:37:34.171662 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.23 GiB on local disk `_tmp_default`, having unreserved 39.26 GiB.
2023.02.10 20:37:34.172282 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613gaaaaa
2023.02.10 20:37:35.003550 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.46 GiB, peak 4.73 GiB, free memory in arenas 141.16 MiB, will set to 4.43 GiB (RSS), difference: -34.19 MiB
2023.02.10 20:37:35.623713 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 964 rows, 49.89 KiB.
2023.02.10 20:37:35.635407 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.634 sec., 225234 rows, 11.42 MiB uncompressed, 5.33 MiB compressed, 53.186 uncompressed bytes per row, 24.800 compressed bytes per row, compression rate: 2.145 (137808.606 rows/sec., 6.99 MiB/sec. uncompressed, 3.26 MiB/sec. compressed)
2023.02.10 20:37:35.672945 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 947 rows, 49.01 KiB.
2023.02.10 20:37:35.676381 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 952 rows, 49.27 KiB.
2023.02.10 20:37:35.686735 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.622 sec., 225577 rows, 11.44 MiB uncompressed, 5.40 MiB compressed, 53.186 uncompressed bytes per row, 25.098 compressed bytes per row, compression rate: 2.119 (139053.417 rows/sec., 7.05 MiB/sec. uncompressed, 3.33 MiB/sec. compressed)
2023.02.10 20:37:35.690445 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 966 rows, 50.00 KiB.
2023.02.10 20:37:35.695309 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.635 sec., 225149 rows, 11.42 MiB uncompressed, 5.42 MiB compressed, 53.186 uncompressed bytes per row, 25.249 compressed bytes per row, compression rate: 2.106 (137689.501 rows/sec., 6.98 MiB/sec. uncompressed, 3.32 MiB/sec. compressed)
2023.02.10 20:37:35.697783 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 955 rows, 49.43 KiB.
2023.02.10 20:37:35.700710 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.564 sec., 224876 rows, 11.41 MiB uncompressed, 5.40 MiB compressed, 53.187 uncompressed bytes per row, 25.160 compressed bytes per row, compression rate: 2.114 (143824.545 rows/sec., 7.30 MiB/sec. uncompressed, 3.45 MiB/sec. compressed)
2023.02.10 20:37:35.710832 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.606 sec., 225478 rows, 11.44 MiB uncompressed, 5.39 MiB compressed, 53.186 uncompressed bytes per row, 25.065 compressed bytes per row, compression rate: 2.122 (140396.883 rows/sec., 7.12 MiB/sec. uncompressed, 3.36 MiB/sec. compressed)
2023.02.10 20:37:35.716700 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 957 rows, 49.53 KiB.
2023.02.10 20:37:35.726570 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.576 sec., 224881 rows, 11.41 MiB uncompressed, 5.39 MiB compressed, 53.187 uncompressed bytes per row, 25.144 compressed bytes per row, compression rate: 2.115 (142677.853 rows/sec., 7.24 MiB/sec. uncompressed, 3.42 MiB/sec. compressed)
2023.02.10 20:37:35.737304 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 961 rows, 49.74 KiB.
2023.02.10 20:37:35.746642 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.575 sec., 226129 rows, 11.47 MiB uncompressed, 5.38 MiB compressed, 53.186 uncompressed bytes per row, 24.964 compressed bytes per row, compression rate: 2.131 (143555.250 rows/sec., 7.28 MiB/sec. uncompressed, 3.42 MiB/sec. compressed)
2023.02.10 20:37:36.002963 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 3.75 GiB, peak 4.73 GiB, free memory in arenas 127.23 MiB, will set to 3.91 GiB (RSS), difference: 157.30 MiB
2023.02.10 20:37:37.003561 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.07 GiB, peak 4.73 GiB, free memory in arenas 127.21 MiB, will set to 4.03 GiB (RSS), difference: -44.86 MiB
2023.02.10 20:37:38.004000 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.24 GiB, peak 4.73 GiB, free memory in arenas 127.21 MiB, will set to 4.22 GiB (RSS), difference: -25.07 MiB
2023.02.10 20:37:39.006610 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.50 GiB, peak 4.73 GiB, free memory in arenas 100.41 MiB, will set to 4.45 GiB (RSS), difference: -51.03 MiB
2023.02.10 20:37:39.163363 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.14 GiB on local disk `_tmp_default`, having unreserved 39.22 GiB.
2023.02.10 20:37:39.163826 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613haaaaa
2023.02.10 20:37:39.166541 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.14 GiB on local disk `_tmp_default`, having unreserved 39.22 GiB.
2023.02.10 20:37:39.168046 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613iaaaaa
2023.02.10 20:37:39.185181 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.14 GiB on local disk `_tmp_default`, having unreserved 39.22 GiB.
2023.02.10 20:37:39.186032 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613jaaaaa
2023.02.10 20:37:39.199760 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.14 GiB on local disk `_tmp_default`, having unreserved 39.22 GiB.
2023.02.10 20:37:39.202190 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613kaaaaa
2023.02.10 20:37:39.218797 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.14 GiB on local disk `_tmp_default`, having unreserved 39.22 GiB.
2023.02.10 20:37:39.220288 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613laaaaa
2023.02.10 20:37:39.234810 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.13 GiB on local disk `_tmp_default`, having unreserved 39.22 GiB.
2023.02.10 20:37:39.236573 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613maaaaa
2023.02.10 20:37:39.244683 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.13 GiB on local disk `_tmp_default`, having unreserved 39.22 GiB.
2023.02.10 20:37:39.245990 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613naaaaa
2023.02.10 20:37:40.002968 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.37 GiB, peak 4.73 GiB, free memory in arenas 136.52 MiB, will set to 4.38 GiB (RSS), difference: 9.56 MiB
2023.02.10 20:37:40.670828 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 782 rows, 40.47 KiB.
2023.02.10 20:37:40.677275 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.511 sec., 179166 rows, 9.10 MiB uncompressed, 4.27 MiB compressed, 53.234 uncompressed bytes per row, 24.987 compressed bytes per row, compression rate: 2.130 (118588.282 rows/sec., 6.02 MiB/sec. uncompressed, 2.83 MiB/sec. compressed)
2023.02.10 20:37:40.689326 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 778 rows, 40.27 KiB.
2023.02.10 20:37:40.698772 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.536 sec., 181235 rows, 9.20 MiB uncompressed, 4.32 MiB compressed, 53.232 uncompressed bytes per row, 24.969 compressed bytes per row, compression rate: 2.132 (118008.022 rows/sec., 5.99 MiB/sec. uncompressed, 2.81 MiB/sec. compressed)
2023.02.10 20:37:40.747567 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 783 rows, 40.53 KiB.
2023.02.10 20:37:40.755982 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 778 rows, 40.27 KiB.
2023.02.10 20:37:40.756288 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.571 sec., 180632 rows, 9.17 MiB uncompressed, 4.30 MiB compressed, 53.232 uncompressed bytes per row, 24.965 compressed bytes per row, compression rate: 2.132 (114952.107 rows/sec., 5.84 MiB/sec. uncompressed, 2.74 MiB/sec. compressed)
2023.02.10 20:37:40.762947 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.544 sec., 180871 rows, 9.18 MiB uncompressed, 4.31 MiB compressed, 53.232 uncompressed bytes per row, 24.962 compressed bytes per row, compression rate: 2.133 (117117.113 rows/sec., 5.95 MiB/sec. uncompressed, 2.79 MiB/sec. compressed)
2023.02.10 20:37:40.783162 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 779 rows, 40.32 KiB.
2023.02.10 20:37:40.791678 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.557 sec., 181056 rows, 9.19 MiB uncompressed, 4.31 MiB compressed, 53.232 uncompressed bytes per row, 24.990 compressed bytes per row, compression rate: 2.130 (116272.947 rows/sec., 5.90 MiB/sec. uncompressed, 2.77 MiB/sec. compressed)
2023.02.10 20:37:40.794420 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 784 rows, 40.58 KiB.
2023.02.10 20:37:40.801395 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.602 sec., 180245 rows, 9.15 MiB uncompressed, 4.29 MiB compressed, 53.233 uncompressed bytes per row, 24.975 compressed bytes per row, compression rate: 2.131 (112525.423 rows/sec., 5.71 MiB/sec. uncompressed, 2.68 MiB/sec. compressed)
2023.02.10 20:37:40.808499 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 792 rows, 40.99 KiB.
2023.02.10 20:37:40.814891 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.570 sec., 179589 rows, 9.12 MiB uncompressed, 4.28 MiB compressed, 53.234 uncompressed bytes per row, 25.002 compressed bytes per row, compression rate: 2.129 (114361.607 rows/sec., 5.81 MiB/sec. uncompressed, 2.73 MiB/sec. compressed)
2023.02.10 20:37:41.004178 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 3.80 GiB, peak 4.73 GiB, free memory in arenas 124.26 MiB, will set to 3.90 GiB (RSS), difference: 108.72 MiB
2023.02.10 20:37:42.003161 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.07 GiB, peak 4.73 GiB, free memory in arenas 124.24 MiB, will set to 4.01 GiB (RSS), difference: -54.51 MiB
2023.02.10 20:37:43.003969 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.22 GiB, peak 4.73 GiB, free memory in arenas 118.21 MiB, will set to 4.19 GiB (RSS), difference: -30.12 MiB
2023.02.10 20:37:44.004587 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.47 GiB, peak 4.73 GiB, free memory in arenas 93.00 MiB, will set to 4.42 GiB (RSS), difference: -47.73 MiB
2023.02.10 20:37:44.232463 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.14 GiB on local disk `_tmp_default`, having unreserved 39.19 GiB.
2023.02.10 20:37:44.236322 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613oaaaaa
2023.02.10 20:37:44.286675 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.14 GiB on local disk `_tmp_default`, having unreserved 39.19 GiB.
2023.02.10 20:37:44.287196 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613paaaaa
2023.02.10 20:37:44.315295 [ 1797 ] {} <Debug> DNSResolver: Updating DNS cache
2023.02.10 20:37:44.315497 [ 1797 ] {} <Debug> DNSResolver: Updated DNS cache
2023.02.10 20:37:44.340855 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.14 GiB on local disk `_tmp_default`, having unreserved 39.19 GiB.
2023.02.10 20:37:44.341405 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613qaaaaa
2023.02.10 20:37:44.395594 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.13 GiB on local disk `_tmp_default`, having unreserved 39.19 GiB.
2023.02.10 20:37:44.396113 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613raaaaa
2023.02.10 20:37:44.404822 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.13 GiB on local disk `_tmp_default`, having unreserved 39.19 GiB.
2023.02.10 20:37:44.407145 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613saaaaa
2023.02.10 20:37:44.416001 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.12 GiB on local disk `_tmp_default`, having unreserved 39.19 GiB.
2023.02.10 20:37:44.418103 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613taaaaa
2023.02.10 20:37:44.471882 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.12 GiB on local disk `_tmp_default`, having unreserved 39.19 GiB.
2023.02.10 20:37:44.472391 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613uaaaaa
2023.02.10 20:37:45.004051 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.37 GiB, peak 4.73 GiB, free memory in arenas 136.04 MiB, will set to 4.39 GiB (RSS), difference: 23.79 MiB
2023.02.10 20:37:45.648999 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 776 rows, 40.16 KiB.
2023.02.10 20:37:45.656111 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.424 sec., 180015 rows, 9.14 MiB uncompressed, 4.29 MiB compressed, 53.233 uncompressed bytes per row, 24.984 compressed bytes per row, compression rate: 2.131 (126429.931 rows/sec., 6.42 MiB/sec. uncompressed, 3.01 MiB/sec. compressed)
2023.02.10 20:37:45.787908 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 768 rows, 39.75 KiB.
2023.02.10 20:37:45.795987 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.509 sec., 180286 rows, 9.15 MiB uncompressed, 4.35 MiB compressed, 53.233 uncompressed bytes per row, 25.299 compressed bytes per row, compression rate: 2.104 (119435.798 rows/sec., 6.06 MiB/sec. uncompressed, 2.88 MiB/sec. compressed)
2023.02.10 20:37:45.858919 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 785 rows, 40.63 KiB.
2023.02.10 20:37:45.865490 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.525 sec., 179875 rows, 9.13 MiB uncompressed, 4.28 MiB compressed, 53.233 uncompressed bytes per row, 24.976 compressed bytes per row, compression rate: 2.131 (117960.245 rows/sec., 5.99 MiB/sec. uncompressed, 2.81 MiB/sec. compressed)
2023.02.10 20:37:45.906580 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 775 rows, 40.11 KiB.
2023.02.10 20:37:45.912077 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.517 sec., 179376 rows, 9.11 MiB uncompressed, 4.27 MiB compressed, 53.234 uncompressed bytes per row, 24.966 compressed bytes per row, compression rate: 2.132 (118268.088 rows/sec., 6.00 MiB/sec. uncompressed, 2.82 MiB/sec. compressed)
2023.02.10 20:37:45.970291 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 767 rows, 39.70 KiB.
2023.02.10 20:37:45.976009 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 800 rows, 41.41 KiB.
2023.02.10 20:37:45.976910 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.572 sec., 180338 rows, 9.16 MiB uncompressed, 4.33 MiB compressed, 53.233 uncompressed bytes per row, 25.153 compressed bytes per row, compression rate: 2.116 (114696.222 rows/sec., 5.82 MiB/sec. uncompressed, 2.75 MiB/sec. compressed)
2023.02.10 20:37:45.984986 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.569 sec., 180442 rows, 9.16 MiB uncompressed, 4.30 MiB compressed, 53.233 uncompressed bytes per row, 25.008 compressed bytes per row, compression rate: 2.129 (114977.647 rows/sec., 5.84 MiB/sec. uncompressed, 2.74 MiB/sec. compressed)
2023.02.10 20:37:46.003618 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 3.87 GiB, peak 4.73 GiB, free memory in arenas 125.11 MiB, will set to 3.95 GiB (RSS), difference: 88.39 MiB
2023.02.10 20:37:46.004664 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 785 rows, 40.63 KiB.
2023.02.10 20:37:46.016651 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.545 sec., 180275 rows, 9.15 MiB uncompressed, 4.29 MiB compressed, 53.233 uncompressed bytes per row, 24.966 compressed bytes per row, compression rate: 2.132 (116686.123 rows/sec., 5.92 MiB/sec. uncompressed, 2.78 MiB/sec. compressed)
2023.02.10 20:37:47.003911 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.00 GiB, peak 4.73 GiB, free memory in arenas 123.44 MiB, will set to 3.96 GiB (RSS), difference: -35.08 MiB
2023.02.10 20:37:48.003410 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.10 GiB, peak 4.73 GiB, free memory in arenas 117.40 MiB, will set to 4.09 GiB (RSS), difference: -19.71 MiB
2023.02.10 20:37:48.996628 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 2.99 GiB on local disk `_tmp_default`, having unreserved 39.16 GiB.
2023.02.10 20:37:48.999196 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613vaaaaa
2023.02.10 20:37:49.005089 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.36 GiB, peak 4.73 GiB, free memory in arenas 99.82 MiB, will set to 4.34 GiB (RSS), difference: -19.97 MiB
2023.02.10 20:37:49.573831 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.12 GiB on local disk `_tmp_default`, having unreserved 39.16 GiB.
2023.02.10 20:37:49.575041 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613waaaaa
2023.02.10 20:37:49.810581 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.12 GiB on local disk `_tmp_default`, having unreserved 39.16 GiB.
2023.02.10 20:37:49.811255 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613xaaaaa
2023.02.10 20:37:49.853701 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.12 GiB on local disk `_tmp_default`, having unreserved 39.16 GiB.
2023.02.10 20:37:49.855350 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613yaaaaa
2023.02.10 20:37:49.882184 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.12 GiB on local disk `_tmp_default`, having unreserved 39.16 GiB.
2023.02.10 20:37:49.883280 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.11 GiB on local disk `_tmp_default`, having unreserved 39.16 GiB.
2023.02.10 20:37:49.884476 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613zaaaaa
2023.02.10 20:37:49.885926 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613abaaaa
2023.02.10 20:37:49.919542 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.11 GiB on local disk `_tmp_default`, having unreserved 39.16 GiB.
2023.02.10 20:37:49.920430 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613bbaaaa
2023.02.10 20:37:50.004006 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.45 GiB, peak 4.73 GiB, free memory in arenas 136.23 MiB, will set to 4.44 GiB (RSS), difference: -9.73 MiB
2023.02.10 20:37:50.573485 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 783 rows, 40.53 KiB.
2023.02.10 20:37:50.580516 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.584 sec., 180225 rows, 9.15 MiB uncompressed, 4.29 MiB compressed, 53.233 uncompressed bytes per row, 24.974 compressed bytes per row, compression rate: 2.132 (113754.239 rows/sec., 5.77 MiB/sec. uncompressed, 2.71 MiB/sec. compressed)
2023.02.10 20:37:51.003710 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.24 GiB, peak 4.73 GiB, free memory in arenas 134.98 MiB, will set to 4.26 GiB (RSS), difference: 15.66 MiB
2023.02.10 20:37:51.055345 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 783 rows, 40.53 KiB.
2023.02.10 20:37:51.065132 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.491 sec., 180996 rows, 9.19 MiB uncompressed, 4.31 MiB compressed, 53.232 uncompressed bytes per row, 24.969 compressed bytes per row, compression rate: 2.132 (121353.009 rows/sec., 6.16 MiB/sec. uncompressed, 2.89 MiB/sec. compressed)
2023.02.10 20:37:51.288408 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 784 rows, 40.58 KiB.
2023.02.10 20:37:51.295473 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.485 sec., 179344 rows, 9.10 MiB uncompressed, 4.33 MiB compressed, 53.234 uncompressed bytes per row, 25.313 compressed bytes per row, compression rate: 2.103 (120765.782 rows/sec., 6.13 MiB/sec. uncompressed, 2.92 MiB/sec. compressed)
2023.02.10 20:37:51.377213 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 784 rows, 40.58 KiB.
2023.02.10 20:37:51.383646 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 774 rows, 40.06 KiB.
2023.02.10 20:37:51.386705 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.505 sec., 180690 rows, 9.17 MiB uncompressed, 4.30 MiB compressed, 53.232 uncompressed bytes per row, 24.958 compressed bytes per row, compression rate: 2.133 (120077.044 rows/sec., 6.10 MiB/sec. uncompressed, 2.86 MiB/sec. compressed)
2023.02.10 20:37:51.391441 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.538 sec., 179228 rows, 9.10 MiB uncompressed, 4.30 MiB compressed, 53.234 uncompressed bytes per row, 25.162 compressed bytes per row, compression rate: 2.116 (116531.085 rows/sec., 5.92 MiB/sec. uncompressed, 2.80 MiB/sec. compressed)
2023.02.10 20:37:51.401773 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 782 rows, 40.47 KiB.
2023.02.10 20:37:51.408625 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.489 sec., 180680 rows, 9.17 MiB uncompressed, 4.31 MiB compressed, 53.232 uncompressed bytes per row, 24.995 compressed bytes per row, compression rate: 2.130 (121311.116 rows/sec., 6.16 MiB/sec. uncompressed, 2.89 MiB/sec. compressed)
2023.02.10 20:37:51.416294 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 764 rows, 39.54 KiB.
2023.02.10 20:37:51.422094 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.539 sec., 179923 rows, 9.13 MiB uncompressed, 4.32 MiB compressed, 53.233 uncompressed bytes per row, 25.181 compressed bytes per row, compression rate: 2.114 (116899.583 rows/sec., 5.93 MiB/sec. uncompressed, 2.81 MiB/sec. compressed)
2023.02.10 20:37:52.003859 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 3.86 GiB, peak 4.73 GiB, free memory in arenas 116.44 MiB, will set to 3.93 GiB (RSS), difference: 67.98 MiB
2023.02.10 20:37:53.002999 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.11 GiB, peak 4.73 GiB, free memory in arenas 110.43 MiB, will set to 4.06 GiB (RSS), difference: -48.56 MiB
2023.02.10 20:37:54.003883 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.28 GiB, peak 4.73 GiB, free memory in arenas 16.57 MiB, will set to 4.27 GiB (RSS), difference: -17.29 MiB
2023.02.10 20:37:54.044228 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 2.93 GiB on local disk `_tmp_default`, having unreserved 39.13 GiB.
2023.02.10 20:37:54.045063 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613cbaaaa
2023.02.10 20:37:54.757869 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.02 GiB on local disk `_tmp_default`, having unreserved 39.13 GiB.
2023.02.10 20:37:54.759505 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613dbaaaa
2023.02.10 20:37:55.004887 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.42 GiB, peak 4.73 GiB, free memory in arenas 24.93 MiB, will set to 4.39 GiB (RSS), difference: -29.13 MiB
2023.02.10 20:37:55.162643 [ 1615 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Information> TCPHandler: Query was cancelled.
2023.02.10 20:37:55.322442 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.10 GiB on local disk `_tmp_default`, having unreserved 39.13 GiB.
2023.02.10 20:37:55.323246 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613ebaaaa
2023.02.10 20:37:55.692368 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.09 GiB on local disk `_tmp_default`, having unreserved 39.12 GiB.
2023.02.10 20:37:55.692916 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613fbaaaa
2023.02.10 20:37:55.696002 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.09 GiB on local disk `_tmp_default`, having unreserved 39.12 GiB.
2023.02.10 20:37:55.697863 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613gbaaaa
2023.02.10 20:37:55.704975 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.09 GiB on local disk `_tmp_default`, having unreserved 39.12 GiB.
2023.02.10 20:37:55.705756 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613hbaaaa
2023.02.10 20:37:55.709936 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Trace> DiskLocal: Reserved 3.09 GiB on local disk `_tmp_default`, having unreserved 39.12 GiB.
2023.02.10 20:37:55.711498 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Writing part of aggregation data into temporary file ./tmp/1613ibaaaa
2023.02.10 20:37:55.825763 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 794 rows, 41.10 KiB.
2023.02.10 20:37:55.831723 [ 1831 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.788 sec., 180747 rows, 9.18 MiB uncompressed, 4.31 MiB compressed, 53.232 uncompressed bytes per row, 24.992 compressed bytes per row, compression rate: 2.130 (101099.002 rows/sec., 5.13 MiB/sec. uncompressed, 2.41 MiB/sec. compressed)
2023.02.10 20:37:56.003683 [ 1827 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 4.33 GiB, peak 4.73 GiB, free memory in arenas 53.52 MiB, will set to 4.33 GiB (RSS), difference: 2.70 MiB
2023.02.10 20:37:56.286569 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 796 rows, 41.20 KiB.
2023.02.10 20:37:56.292846 [ 1809 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.535 sec., 179932 rows, 9.13 MiB uncompressed, 4.29 MiB compressed, 53.233 uncompressed bytes per row, 24.983 compressed bytes per row, compression rate: 2.131 (117210.093 rows/sec., 5.95 MiB/sec. uncompressed, 2.79 MiB/sec. compressed)
2023.02.10 20:37:56.560494 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 771 rows, 39.91 KiB.
2023.02.10 20:37:56.565874 [ 1806 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.244 sec., 179893 rows, 9.13 MiB uncompressed, 4.29 MiB compressed, 53.233 uncompressed bytes per row, 24.979 compressed bytes per row, compression rate: 2.131 (144631.558 rows/sec., 7.34 MiB/sec. uncompressed, 3.45 MiB/sec. compressed)
2023.02.10 20:37:56.806713 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 775 rows, 40.11 KiB.
2023.02.10 20:37:56.820729 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 795 rows, 41.15 KiB.
2023.02.10 20:37:56.827589 [ 1820 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.136 sec., 179598 rows, 9.12 MiB uncompressed, 4.28 MiB compressed, 53.234 uncompressed bytes per row, 24.996 compressed bytes per row, compression rate: 2.130 (158159.948 rows/sec., 8.03 MiB/sec. uncompressed, 3.77 MiB/sec. compressed)
2023.02.10 20:37:56.837133 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 776 rows, 40.16 KiB.
2023.02.10 20:37:56.840970 [ 1823 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.131 sec., 180683 rows, 9.17 MiB uncompressed, 4.30 MiB compressed, 53.232 uncompressed bytes per row, 24.971 compressed bytes per row, compression rate: 2.132 (159740.939 rows/sec., 8.11 MiB/sec. uncompressed, 3.80 MiB/sec. compressed)
2023.02.10 20:37:56.841142 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Max size of temporary block: 804 rows, 41.61 KiB.
2023.02.10 20:37:56.842784 [ 1812 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.147 sec., 180033 rows, 9.14 MiB uncompressed, 4.29 MiB compressed, 53.233 uncompressed bytes per row, 24.980 compressed bytes per row, compression rate: 2.131 (156947.772 rows/sec., 7.97 MiB/sec. uncompressed, 3.74 MiB/sec. compressed)
2023.02.10 20:37:56.846135 [ 1836 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Debug> Aggregator: Written part in 1.141 sec., 180762 rows, 9.18 MiB uncompressed, 4.30 MiB compressed, 53.232 uncompressed bytes per row, 24.960 compressed bytes per row, compression rate: 2.133 (158368.440 rows/sec., 8.04 MiB/sec. uncompressed, 3.77 MiB/sec. compressed)
2023.02.10 20:37:56.966232 [ 1615 ] {ab3ed4f5-ef6b-4d48-88ea-61515f4fb3a9} <Error> executeQuery: Code: 210. DB::NetException: I/O error: Broken pipe, while writing to socket ([::1]:42420). (NETWORK_ERROR) (version 23.2.1.1) (from [::1]:42420) (in query: select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity) from customer, orders, lineitem where c_custkey = o_custkey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice order by o_totalprice desc, o_orderdate limit 10;), Stack trace (when copying this message, always include the lines below):