DataHub Installation and Startup


1. Docker image installation (independent of the local source project)

The DataHub environment is complex; you do not have to install all of it locally.

*Note: specify the Python version explicitly; the Docker quickstart may not be compatible with Python 3.10 (the commands below use Python 3.12).

/opt/homebrew/anaconda3/bin/python3.12 -m pip install --upgrade pip wheel setuptools
/opt/homebrew/anaconda3/bin/python3.12 -m pip install --upgrade acryl-datahub
/opt/homebrew/anaconda3/bin/python3.12 -m datahub version
/opt/homebrew/anaconda3/bin/python3.12 -m datahub docker quickstart
/opt/homebrew/anaconda3/bin/python3.12 -m datahub docker ingest-sample-data
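Once the quickstart finishes, a quick way to verify it is up (a sketch; port 9002 and the datahub/datahub login are the quickstart defaults):

docker ps    # the quickstart containers should be running
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9002    # DataHub UI, default login datahub / datahub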

2. Local installation:

2.1 Main reference:

Local Development | DataHub

2.2 Prerequisites

# Install Java
brew install openjdk@17

# Install Python (very important: with the wrong version the later build steps will hit many problems; make sure your PATH points at this Python)
brew install python@3.10  # you may need to add this to your PATH
# alternatively, you can use pyenv to manage your python versions

# Install docker and docker compose
brew install --cask docker
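A quick sanity check that the prerequisites are on the PATH (a sketch; the exact version strings will differ on your machine):

java -version            # should report openjdk 17.x
python3.10 --version     # should report Python 3.10.x
docker --version
docker compose version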

2.3 Clone the code from Git

Main branch:

git clone https://github.com/datahub-project/datahub.git;

Internationalization (i18n) branch:

git clone https://github.com/luizhsalazar/datahub.git
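If you want the i18n branch checked out directly, clone it with -b; the branch name feature/i18n-support is taken from the fork link referenced further down in this note:

git clone -b feature/i18n-support https://github.com/luizhsalazar/datahub.git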

Output (main branch clone):

(base) phoenix@phoenixdeMacBook-Pro ~ % git clone https://github.com/datahub-project/datahub.git;
Cloning into 'datahub'...
remote: Enumerating objects: 4477163, done.
remote: Counting objects: 100% (15782/15782), done.
remote: Compressing objects: 100% (1209/1209), done.
remote: Total 4477163 (delta 14145), reused 14814 (delta 13277), pack-reused 4461381 (from 2)
Receiving objects: 100% (4477163/4477163), 6.39 GiB | 17.23 MiB/s, done.
Resolving deltas: 100% (2102360/2102360), done.
Updating files: 100% (8581/8581), done.

2.4 Build the project:

Build the whole project with the Gradle wrapper. Switch to the repository root first:

cd datahub
./gradlew build

Note that this also runs the tests and a number of validations, which makes the process quite slow (see the sketch after the list below for skipping them in a full build). It is therefore recommended to build only the parts of DataHub you need:

  • Build DataHub's backend GMS (Generalized Metadata Service):

    ./gradlew :metadata-service:war:build

  • Build DataHub's frontend:

    ./gradlew :datahub-frontend:dist -x yarnTest -x yarnLint

  • Build DataHub's command-line tool:

    ./gradlew :metadata-ingestion:installDev

  • Build DataHub's documentation:

    ./gradlew :docs-website:yarnLintFix :docs-website:build -x :metadata-ingestion:runPreFlightScript
    # To preview the documentation
    ./gradlew :docs-website:serve
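If you do want a single full build but without the slow test and lint steps, generic Gradle task exclusion can be used (a sketch, not from the DataHub docs; -x test -x check follows standard Gradle conventions and may not skip every validation in this repository):

./gradlew build -x test -x check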

Plugin installation is walked through in this article:

DataHub安装配置详细过程_datahub部署-CSDN博客

2.5 Install Python 3.10 as DataHub's Python environment:

2.5.1 Install:

brew install python@3.10;  (This is important: if a newer Python is present on the system, ./gradlew build will pick up Python 3.12/3.13 automatically.) See the $PATH environment variable setup below.

2.5.2 Set the default Python environment on macOS

open -e ~/.bash_profile;    # open the shell startup file

open -e ~/.zshrc;    # open the shell startup file

brew list python@3.10;    # show where Python 3.10 is installed

Make both files contain the same configuration:


# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/opt/homebrew/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/opt/homebrew/anaconda3/etc/profile.d/conda.sh" ]; then
        . "/opt/homebrew/anaconda3/etc/profile.d/conda.sh"
    else
#        export PATH="/opt/homebrew/anaconda3/bin:$PATH"
	 export PATH="/opt/homebrew/bin:$PATH"

    fi
fi
unset __conda_setup
# <<< conda initialize <<<


# Python 3.10 (Homebrew)
alias python3='/opt/homebrew/Cellar/python@3.10/3.10.16/bin/python3.10' 
alias python=python3
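After editing the profile, reload it and confirm the alias resolves to 3.10 (a sketch; the path follows the Homebrew layout shown above):

source ~/.zshrc
type python3         # should resolve to /opt/homebrew/Cellar/python@3.10/3.10.16/bin/python3.10
python3 --version    # Python 3.10.x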

2.6 Cleaning the build

The run below mistakenly used ./gradlew clear; as the error at the end shows, 'clear' is not a Gradle task and the correct one is clean (see the command after the log).

Starting a Gradle Daemon, 1 busy Daemon could not be reused, use --status for details
Configuration on demand is an incubating feature.
<-------------> 1% CONFIGURING [1m 43s]
<-------------> 1% CONFIGURING [2m 42s]
<-------------> 1% CONFIGURING [2m 49s]
> root project > Resolve dependencies of classpath > gradle-node-plugin-7.0.2.pom
<-------------> 1% CONFIGURING [2m 50s]
> root project > Resolve dependencies of classpath > gradle-nexus-staging-plugin-0.30.0.pom


<-------------> 1% CONFIGURING [2m 51s]


> Configure project :datahub-frontend
fullVersion=v0.15.0rc3-62-g5946558
cliMajorVersion=0.15.0rc3
version=0.15.0rc3-SNAPSHOT

> Configure project :datahub-upgrade
fullVersion=v0.15.0rc3-62-g5946558
cliMajorVersion=0.15.0rc3
version=0.15.0rc3-SNAPSHOT

> Configure project :docker
fullVersion=v0.15.0rc3-62-g5946558
cliMajorVersion=0.15.0rc3
version=0.15.0rc3-SNAPSHOT

> Configure project :smoke-test
Root directory:  /Users/phoenix/datahub

> Configure project :docker:datahub-ingestion
fullVersion=v0.15.0rc3-62-g5946558
cliMajorVersion=0.15.0rc3
version=0.15.0rc3-SNAPSHOT

> Configure project :docker:datahub-ingestion-base
fullVersion=v0.15.0rc3-62-g5946558
cliMajorVersion=0.15.0rc3
version=0.15.0rc3-SNAPSHOT

> Configure project :docker:elasticsearch-setup
fullVersion=v0.15.0rc3-62-g5946558
cliMajorVersion=0.15.0rc3
version=0.15.0rc3-SNAPSHOT

> Configure project :docker:kafka-setup
fullVersion=v0.15.0rc3-62-g5946558
cliMajorVersion=0.15.0rc3
version=0.15.0rc3-SNAPSHOT

> Configure project :docker:mysql-setup
fullVersion=v0.15.0rc3-62-g5946558
cliMajorVersion=0.15.0rc3
version=0.15.0rc3-SNAPSHOT

> Configure project :docker:postgres-setup
fullVersion=v0.15.0rc3-62-g5946558
cliMajorVersion=0.15.0rc3
version=0.15.0rc3-SNAPSHOT

> Configure project :metadata-jobs:mae-consumer-job
fullVersion=v0.15.0rc3-62-g5946558
cliMajorVersion=0.15.0rc3
version=0.15.0rc3-SNAPSHOT

> Configure project :metadata-jobs:mce-consumer-job
fullVersion=v0.15.0rc3-62-g5946558
cliMajorVersion=0.15.0rc3
version=0.15.0rc3-SNAPSHOT

> Configure project :metadata-service:configuration
fullVersion=v0.15.0rc3-62-g5946558
cliMajorVersion=0.15.0rc3
version=0.15.0rc3-SNAPSHOT

> Configure project :metadata-service:war
fullVersion=v0.15.0rc3-62-g5946558
cliMajorVersion=0.15.0rc3
version=0.15.0rc3-SNAPSHOT
[Incubating] Problems report is available at: file:///Users/phoenix/datahub/build/reports/problems/problems-report.html

FAILURE: Build failed with an exception.

* What went wrong:
Task 'clear' not found in root project 'datahub' and its subprojects. Some candidates are: 'clean'.

* Try:
> Run gradlew tasks to get a list of available tasks.
> For more on name expansion, please refer to https://docs.gradle.org/8.11.1/userguide/command_line_interface.html#sec:name_abbreviation in the Gradle documentation.
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
> Get more help at https://help.gradle.org.

Deprecated Gradle features were used in this build, making it incompatible with Gradle 9.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

For more on this, please refer to https://docs.gradle.org/8.11.1/userguide/command_line_interface.html#sec:command_line_warnings in the Gradle documentation.

BUILD FAILED in 8m 4s
2 actionable tasks: 2 up-to-date
(base) phoenix@phoenixdeMacBook-Pro datahub % 
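The correct task, as the error message itself suggests, is clean:

./gradlew clean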

2.7 Start the frontend locally (the other components can keep running in Docker):

Problem encountered: enter the corresponding virtual environment first:

python3 -m venv venv
source venv/bin/activate

python3 -m pip install --upgrade pip wheel setuptools

Install individual packages separately as needed: python3 -m pip install pyarrow==11.0.0;

/opt/homebrew/opt/python@3.13/bin/python3.13 -m venv /Users/phoenix/datahub/metadata-ingestion/venv
/Users/phoenix/datahub/metadata-ingestion/venv/bin/pip install pyarrow==11.0.0;
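A quick check that the package actually landed in that venv (a sketch, reusing the venv path created above):

/Users/phoenix/datahub/metadata-ingestion/venv/bin/python -c "import pyarrow; print(pyarrow.__version__)"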

Downloading the i18n branch from Git:

Source code:

datahub/metadata-service at feature/ing-623 · datahub-project/datahub · GitHub

Documentation:

GitHub - luizhsalazar/datahub at feature/i18n-support

https://blog.datahubproject.io/how-we-implemented-internationalization-in-datahub-d3e9f6349a6a

Start the frontend locally:

cd datahub-frontend/run && ./run-local-frontend
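Once the script is running, the frontend should be reachable locally; 9002 is the default datahub-frontend port (an assumption worth checking against the script's own output):

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9002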

Installing the CLI from source:

DataHub CLI | DataHub

3. Plugin installation

Check the installed DataHub plugins: python3 -m datahub check plugins;

Install a plugin: python3 -m pip install 'acryl-datahub[postgres]'

(Note: this installs into the local environment. The Docker containers also install plugins when they run, but network issues make timeouts common; just rerun a few times until the installation succeeds.)

/opt/homebrew/anaconda3/bin/python3.12 -m datahub check plugins;

/opt/homebrew/anaconda3/bin/python3.12 -m pip install 'acryl-datahub[postgres]';    # note: use '-m pip', not '-m pip3' (pip3 is not a runnable module)
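With the postgres plugin installed, ingestion is driven by a recipe file passed to datahub ingest. A minimal sketch (database name, credentials, and file name are placeholders; the sink address assumes the default local GMS on port 8080):

cat > postgres_recipe.yml <<'EOF'
source:
  type: postgres
  config:
    host_port: localhost:5432
    database: mydb
    username: datahub_reader
    password: example_password
sink:
  type: datahub-rest
  config:
    server: http://localhost:8080
EOF

/opt/homebrew/anaconda3/bin/python3.12 -m datahub ingest -c postgres_recipe.yml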

References:

Datahub部署 | Datahub中文社区

Plugin installation, see also:

安装datahub - 编程好6博客
