MySQL: adding a column fails with error 1005 Can't create table '#sql-12d23_4bd' (errno: 28)

This post addresses the MySQL error 1005 "Can't create table '#sql-12d23_4bd' (errno: 28)" raised when adding a column, and offers two remedies: first check whether the disk holding the data directory is full; if that does not resolve it, try dropping and recreating the table.

Problem: running an ALTER TABLE ... ADD COLUMN statement fails with error 1005 Can't create table '#sql-12d23_4bd' (errno: 28). The '#sql-...' name refers to the intermediate table MySQL creates while rebuilding the table for the ALTER.

Solutions:

1. Check whether the disk (or partition) holding the MySQL data directory is full. errno 28 corresponds to ENOSPC ("No space left on device"), so a full filesystem is the most likely cause.

2. If freeing disk space does not resolve the problem, try dropping the table and recreating it.
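Since errno 28 maps to ENOSPC, a quick way to confirm the first cause is to check free space on the filesystem that holds the MySQL data directory. A minimal sketch (assumes a Linux host; /var/lib/mysql is the default datadir on most installs and may differ on yours):

```shell
# Show free space on the filesystem holding the MySQL data directory.
# /var/lib/mysql is the default datadir on most Linux installs; adjust if needed.
DATADIR="${DATADIR:-/var/lib/mysql}"
[ -d "$DATADIR" ] || DATADIR=/    # fall back to / so the check still runs

df -h "$DATADIR"

# MySQL ships a small utility, perror, that translates OS error numbers;
# if it is on your PATH, it will confirm what errno 28 means.
command -v perror >/dev/null 2>&1 && perror 28
```

If the partition is at or near 100% usage, free some space (for example by purging old binary logs or rotating the slow/general logs) and then retry the ALTER TABLE.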

我用conda下载了mkl,我现在应该怎么用: libblas conda-forge/linux-64::libblas-3.9.0-35_h4a7cf45_openblas libcblas conda-forge/linux-64::libcblas-3.9.0-35_h0358290_openblas libgfortran conda-forge/linux-64::libgfortran-15.1.0-h69a702a_5 libgfortran5 conda-forge/linux-64::libgfortran5-15.1.0-hcea5267_5 libhwloc conda-forge/linux-64::libhwloc-2.12.1-default_h7f8ec31_1002 libiconv conda-forge/linux-64::libiconv-1.18-h3b78370_2 liblapack conda-forge/linux-64::liblapack-3.9.0-35_h47877c9_openblas libopenblas conda-forge/linux-64::libopenblas-0.3.30-pthreads_h94d23a6_2 libstdcxx conda-forge/linux-64::libstdcxx-15.1.0-h8f9b012_5 libstdcxx-ng conda-forge/linux-64::libstdcxx-ng-15.1.0-h4852527_5 libxml2 conda-forge/linux-64::libxml2-2.14.6-h26afc86_2 libxml2-16 conda-forge/linux-64::libxml2-16-2.14.6-ha9997c6_2 llvm-openmp conda-forge/linux-64::llvm-openmp-21.1.0-h4922eb0_0 mkl conda-forge/linux-64::mkl-2024.2.2-ha770c72_17 numpy conda-forge/linux-64::numpy-2.3.3-py311h2e04523_0 python_abi conda-forge/noarch::python_abi-3.11-8_cp311 scipy conda-forge/linux-64::scipy-1.16.2-py311h1e13796_0 tbb conda-forge/linux-64::tbb-2021.13.0-hb60516a_3 The following packages will be DOWNGRADED: _openmp_mutex 4.5-2_gnu --> 4.5-4_kmp_llvm Proceed ([y]/n)? y Downloading and Extracting Packages: Preparing transaction: done Verifying transaction: done (wan) root@dev-kunlin-yang-aieditor-1-6bnr-5b859cbcb8-zr88t:/data/colmap_build/colmap/build# cmake .. 
-GNinja -DBLA_VENDOR=Intel10_64lp -DCMAKE_CUDA_ARCHITECTURES=80 -- BUILD_SHARED_LIBS: OFF -- CMAKE_BUILD_TYPE: Release -- CMAKE_GENERATOR: Single-config -- CMAKE_GENERATOR: Ninja -- CMAKE_REGISTRY_FOLDER: OFF -- Could NOT find MKL (missing: MKL_LIBRARIES) CMake Error at /data/tool/opt/cmake-3.29.9-linux-x86_64/share/cmake-3.29/Modules/FindPackageHandleStandardArgs.cmake:230 (message): Could NOT find BLAS (missing: BLAS_LIBRARIES) Call Stack (most recent call first): /data/tool/opt/cmake-3.29.9-linux-x86_64/share/cmake-3.29/Modules/FindPackageHandleStandardArgs.cmake:600 (_FPHSA_FAILURE_MESSAGE) /data/tool/opt/cmake-3.29.9-linux-x86_64/share/cmake-3.29/Modules/FindBLAS.cmake:1387 (find_package_handle_standard_args) /data/code/faiss-36b77353dc435383e0c23a709e7997a29d049041/faiss/CMakeLists.txt:396 (find_package) -- Configuring incomplete, errors occurred!
最新发布
09-16
PS C:\Users\Administrator\Desktop> cd E:\PyTorch_Build\pytorch PS E:\PyTorch_Build\pytorch> # 1. 激活虚拟环境 PS E:\PyTorch_Build\pytorch> .\pytorch_env\Scripts\activate (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 2. 修复conda路径(执行一次即可) (pytorch_env) PS E:\PyTorch_Build\pytorch> $condaPath = "${env:USERPROFILE}\miniconda3\Scripts" (pytorch_env) PS E:\PyTorch_Build\pytorch> $env:PATH += ";$condaPath" (pytorch_env) PS E:\PyTorch_Build\pytorch> [Environment]::SetEnvironmentVariable("PATH", $env:PATH, "Machine") (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 3. 验证修复 (pytorch_env) PS E:\PyTorch_Build\pytorch> conda --version # 应显示conda版本 conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch> # 1. 安装正确版本的MKL (pytorch_env) PS E:\PyTorch_Build\pytorch> pip uninstall -y mkl-static mkl-include Found existing installation: mkl-static 2024.1.0 Uninstalling mkl-static-2024.1.0: Successfully uninstalled mkl-static-2024.1.0 Found existing installation: mkl-include 2024.1.0 Uninstalling mkl-include-2024.1.0: Successfully uninstalled mkl-include-2024.1.0 (pytorch_env) PS E:\PyTorch_Build\pytorch> pip install mkl-static==2024.1 mkl-include==2024.1 Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Collecting mkl-static==2024.1 Using cached https://pypi.tuna.tsinghua.edu.cn/packages/d8/f0/3b9976df82906d8f3244213b6d8beb67cda19ab5b0645eb199da3c826127/mkl_static-2024.1.0-py2.py3-none-win_amd64.whl (220.8 MB) Collecting mkl-include==2024.1 Using cached https://pypi.tuna.tsinghua.edu.cn/packages/06/1b/f05201146f7f12bf871fa2c62096904317447846b5d23f3560a89b4bbaae/mkl_include-2024.1.0-py2.py3-none-win_amd64.whl (1.3 MB) Requirement already satisfied: intel-openmp==2024.* in 
e:\pytorch_build\pytorch\pytorch_env\lib\site-packages (from mkl-static==2024.1) (2024.2.1) Requirement already satisfied: tbb-devel==2021.* in e:\pytorch_build\pytorch\pytorch_env\lib\site-packages (from mkl-static==2024.1) (2021.13.1) Requirement already satisfied: intel-cmplr-lib-ur==2024.2.1 in e:\pytorch_build\pytorch\pytorch_env\lib\site-packages (from intel-openmp==2024.*->mkl-static==2024.1) (2024.2.1) Requirement already satisfied: tbb==2021.13.1 in e:\pytorch_build\pytorch\pytorch_env\lib\site-packages (from tbb-devel==2021.*->mkl-static==2024.1) (2021.13.1) Installing collected packages: mkl-include, mkl-static Successfully installed mkl-include-2024.1.0 mkl-static-2024.1.0 (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 2. 安装libuv (pytorch_env) PS E:\PyTorch_Build\pytorch> conda install -c conda-forge libuv=1.46 conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 3. 安装OpenSSL (pytorch_env) PS E:\PyTorch_Build\pytorch> conda install -c conda-forge openssl=3.1 conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 4. 
验证安装 (pytorch_env) PS E:\PyTorch_Build\pytorch> python -c "import mkl; print('MKL版本:', mkl.__version__)" Traceback (most recent call last): File "<string>", line 1, in <module> ModuleNotFoundError: No module named 'mkl' (pytorch_env) PS E:\PyTorch_Build\pytorch> conda list | Select-String "libuv|openssl" conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (pytorch_env) PS E:\PyTorch_Build\pytorch> # 验证所有关键组件 (pytorch_env) PS E:\PyTorch_Build\pytorch> python -c "import mkl; print('✓ MKL已安装')" Traceback (most recent call last): File "<string>", line 1, in <module> ModuleNotFoundError: No module named 'mkl' (pytorch_env) PS E:\PyTorch_Build\pytorch> conda list | Select-String "libuv|openssl" conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. 
(pytorch_env) PS E:\PyTorch_Build\pytorch> dir "E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0\bin\cudnn*" (pytorch_env) PS E:\PyTorch_Build\pytorch> (pytorch_env) PS E:\PyTorch_Build\pytorch> # 验证环境变量 (pytorch_env) PS E:\PyTorch_Build\pytorch> python -c "import os; print('环境变量检查:'); >> print('CUDNN_PATH:', os.getenv('CUDA_PATH')); >> print('CONDA_PREFIX:', os.getenv('CONDA_PREFIX'))" 环境变量检查: CUDNN_PATH: E:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0 CONDA_PREFIX: None (pytorch_env) PS E:\PyTorch_Build\pytorch> # 清理并重建 (pytorch_env) PS E:\PyTorch_Build\pytorch> Remove-Item -Recurse -Force build (pytorch_env) PS E:\PyTorch_Build\pytorch> python setup.py install Building wheel torch-2.9.0a0+git2d31c3d -- Building version 2.9.0a0+git2d31c3d E:\PyTorch_Build\pytorch\pytorch_env\lib\site-packages\setuptools\_distutils\_msvccompiler.py:12: UserWarning: _get_vc_env is private; find an alternative (pypa/distutils#340) warnings.warn( -- Checkout nccl release tag: v2.27.5-1 cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=E:\PyTorch_Build\pytorch\torch -DCMAKE_PREFIX_PATH=E:\PyTorch_Build\pytorch\pytorch_env\Lib\site-packages -DPython_EXECUTABLE=E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe -DTORCH_BUILD_VERSION=2.9.0a0+git2d31c3d -DUSE_NUMPY=True E:\PyTorch_Build\pytorch CMake Deprecation Warning at CMakeLists.txt:18 (cmake_policy): The OLD behavior for policy CMP0126 will be removed from a future version of CMake. The cmake-policies(7) manual explains that the OLD behaviors of all policies are deprecated and that a policy should be set to OLD only under specific short-term circumstances. Projects should be ported to the NEW behavior and not rely on setting a policy to OLD. 
-- The CXX compiler identification is MSVC 19.44.35215.0 -- The C compiler identification is MSVC 19.44.35215.0 -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe - skipped -- Detecting C compile features -- Detecting C compile features - done -- Not forcing any particular BLAS to be found CMake Warning at CMakeLists.txt:425 (message): TensorPipe cannot be used on Windows. Set it to OFF CMake Warning at CMakeLists.txt:427 (message): KleidiAI cannot be used on Windows. Set it to OFF CMake Warning at CMakeLists.txt:439 (message): Libuv is not installed in current conda env. Set USE_DISTRIBUTED to OFF. Please run command 'conda install -c conda-forge libuv=1.39' to install libuv. -- Performing Test C_HAS_AVX_1 -- Performing Test C_HAS_AVX_1 - Success -- Performing Test C_HAS_AVX2_1 -- Performing Test C_HAS_AVX2_1 - Success -- Performing Test C_HAS_AVX512_1 -- Performing Test C_HAS_AVX512_1 - Success -- Performing Test CXX_HAS_AVX_1 -- Performing Test CXX_HAS_AVX_1 - Success -- Performing Test CXX_HAS_AVX2_1 -- Performing Test CXX_HAS_AVX2_1 - Success -- Performing Test CXX_HAS_AVX512_1 -- Performing Test CXX_HAS_AVX512_1 - Success -- Current compiler supports avx2 extension. Will build perfkernels. 
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Failed -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY -- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Failed -- Could not find hardware support for NEON on this machine. -- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. -- Compiler does not support SVE extension. Will not build perfkernels. CMake Warning at CMakeLists.txt:845 (message): x64 operating system is required for FBGEMM. Not compiling with FBGEMM. Turn this warning off by USE_FBGEMM=OFF. -- Performing Test HAS/UTF_8 -- Performing Test HAS/UTF_8 - Success -- Found CUDA: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 (found version "13.0") -- The CUDA compiler identification is NVIDIA 13.0.48 with host compiler MSVC 19.44.35215.0 -- Detecting CUDA compiler ABI info -- Detecting CUDA compiler ABI info - done -- Check for working CUDA compiler: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe - skipped -- Detecting CUDA compile features -- Detecting CUDA compile features - done -- Found CUDAToolkit: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include (found version "13.0.48") -- PyTorch: CUDA detected: 13.0 -- PyTorch: CUDA nvcc is: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe -- PyTorch: CUDA toolkit directory: E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- PyTorch: Header version is: 13.0 -- Found Python: E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe (found version "3.10.10") found components: Interpreter CMake Warning at cmake/public/cuda.cmake:140 (message): Failed to compute shorthash for libnvrtc.so Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Could NOT find CUDNN (missing: CUDNN_LIBRARY_PATH CUDNN_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:201 (message): Cannot find cuDNN 
library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Could NOT find CUSPARSELT (missing: CUSPARSELT_LIBRARY_PATH CUSPARSELT_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:226 (message): Cannot find cuSPARSELt library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Could NOT find CUDSS (missing: CUDSS_LIBRARY_PATH CUDSS_INCLUDE_PATH) CMake Warning at cmake/public/cuda.cmake:242 (message): Cannot find CUDSS library. Turning the option off Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- USE_CUFILE is set to 0. Compiling without cuFile support -- Autodetected CUDA architecture(s): 12.0 CMake Warning at cmake/public/cuda.cmake:317 (message): pytorch is not compatible with `CMAKE_CUDA_ARCHITECTURES` and will ignore its value. Please configure `TORCH_CUDA_ARCH_LIST` instead. Call Stack (most recent call first): cmake/Dependencies.cmake:44 (include) CMakeLists.txt:873 (include) -- Added CUDA NVCC flags for: -gencode;arch=compute_120,code=sm_120 CMake Warning at cmake/Dependencies.cmake:95 (message): Not compiling with XPU. Could NOT find SYCL. Suppress this warning with -DUSE_XPU=OFF. Call Stack (most recent call first): CMakeLists.txt:873 (include) -- Building using own protobuf under third_party per request. -- Use custom protobuf build. CMake Warning at cmake/ProtoBuf.cmake:37 (message): Ancient protobuf forces CMake compatibility Call Stack (most recent call first): cmake/ProtoBuf.cmake:87 (custom_protobuf_find) cmake/Dependencies.cmake:107 (include) CMakeLists.txt:873 (include) CMake Deprecation Warning at third_party/protobuf/cmake/CMakeLists.txt:2 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. 
Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. -- -- 3.13.0.0 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed -- Looking for pthread_create in pthreads -- Looking for pthread_create in pthreads - not found -- Looking for pthread_create in pthread -- Looking for pthread_create in pthread - not found -- Found Threads: TRUE -- Caffe2 protobuf include directory: $<BUILD_INTERFACE:E:/PyTorch_Build/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include> -- Trying to find preferred BLAS backend of choice: MKL -- MKL_THREADING = OMP -- Looking for sys/types.h -- Looking for sys/types.h - found -- Looking for stdint.h -- Looking for stdint.h - found -- Looking for stddef.h -- Looking for stddef.h - found -- Check size of void* -- Check size of void* - done -- MKL_THREADING = OMP CMake Warning at cmake/Dependencies.cmake:213 (message): MKL could not be found. 
Defaulting to Eigen Call Stack (most recent call first): CMakeLists.txt:873 (include) CMake Warning at cmake/Dependencies.cmake:279 (message): Preferred BLAS (MKL) cannot be found, now searching for a general BLAS library Call Stack (most recent call first): CMakeLists.txt:873 (include) -- MKL_THREADING = OMP -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_sequential - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - libiomp5md - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - libiomp5md - pthread] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - pthread] -- Library mkl_intel: not found -- Checking for [mkl - guide - pthread - m] -- Library mkl: not found -- MKL library not found -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Checking for [Accelerate] -- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND -- Checking for [vecLib] -- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND -- Checking for [flexiblas] -- Library flexiblas: BLAS_flexiblas_LIBRARY-NOTFOUND -- Checking for [openblas] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m - gomp] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [libopenblas] -- Library 
libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [goto2 - gfortran - pthread] -- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND -- Checking for [acml - gfortran] -- Library acml: BLAS_acml_LIBRARY-NOTFOUND -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY) -- Checking for [ptf77blas - atlas - gfortran] -- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND -- Checking for [] -- Looking for sgemm_ -- Looking for sgemm_ - not found -- Cannot find a library with BLAS API. Not using BLAS. -- Using pocketfft in directory: E:/PyTorch_Build/pytorch/third_party/pocketfft/ CMake Deprecation Warning at third_party/pthreadpool/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/FXdiv/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/cpuinfo/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. 
-- The ASM compiler identification is MSVC CMake Warning (dev) at pytorch_env/Lib/site-packages/cmake/data/share/cmake-4.1/Modules/CMakeDetermineASMCompiler.cmake:234 (message): Policy CMP194 is not set: MSVC is not an assembler for language ASM. Run "cmake --help-policy CMP194" for policy details. Use the cmake_policy command to set the policy and suppress this warning. Call Stack (most recent call first): third_party/XNNPACK/CMakeLists.txt:18 (PROJECT) This warning is for project developers. Use -Wno-dev to suppress it. -- Found assembler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- Building for XNNPACK_TARGET_PROCESSOR: x86_64 -- Generating microkernels.cmake Duplicate microkernel definition: src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avx256vnni.c and src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avxvnni.c (1th function) Duplicate microkernel definition: src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-avxvnni.c and src\qs8-qc4w-packw\gen\qs8-qc4w-packw-x8c8-gemm-goi-scalar.c No microkernel found in src\reference\binary-elementwise.cc No microkernel found in src\reference\packing.cc No microkernel found in src\reference\unary-elementwise.cc -- Found Git: E:/Program Files/Git/cmd/git.exe (found version "2.51.0.windows.1") -- Google Benchmark version: v1.9.3, normalized to 1.9.3 -- Looking for shm_open in rt -- Looking for shm_open in rt - not found -- Performing Test HAVE_CXX_FLAG_WX -- Performing Test HAVE_CXX_FLAG_WX - Success -- Compiling and running to test HAVE_STD_REGEX -- Performing Test HAVE_STD_REGEX -- success -- Compiling and running to test HAVE_GNU_POSIX_REGEX -- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile -- Compiling and running to test HAVE_POSIX_REGEX -- Performing Test HAVE_POSIX_REGEX -- failed to compile -- Compiling and running to test HAVE_STEADY_CLOCK -- Performing Test HAVE_STEADY_CLOCK -- success -- Compiling and running to test 
HAVE_PTHREAD_AFFINITY -- Performing Test HAVE_PTHREAD_AFFINITY -- failed to compile CMake Deprecation Warning at third_party/ittapi/CMakeLists.txt:7 (cmake_minimum_required): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Warning at cmake/Dependencies.cmake:749 (message): FP16 is only cmake-2.8 compatible Call Stack (most recent call first): CMakeLists.txt:873 (include) CMake Deprecation Warning at third_party/FP16/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. CMake Deprecation Warning at third_party/psimd/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED): Compatibility with CMake < 3.10 will be removed from a future version of CMake. Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell CMake that the project requires at least <min> but has been updated to work with policies introduced by <max> or earlier. -- Using third party subdirectory Eigen. -- Found Python: E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe (found version "3.10.10") found components: Interpreter Development.Module missing components: NumPy CMake Warning at cmake/Dependencies.cmake:826 (message): NumPy could not be found. Not building with NumPy. Suppress this warning with -DUSE_NUMPY=OFF Call Stack (most recent call first): CMakeLists.txt:873 (include) -- Using third_party/pybind11. 
-- pybind11 include dirs: E:/PyTorch_Build/pytorch/cmake/../third_party/pybind11/include -- Could NOT find OpenTelemetryApi (missing: OpenTelemetryApi_INCLUDE_DIRS) -- Using third_party/opentelemetry-cpp. -- opentelemetry api include dirs: E:/PyTorch_Build/pytorch/cmake/../third_party/opentelemetry-cpp/api/include -- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS) -- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS) -- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND) CMake Warning at cmake/Dependencies.cmake:894 (message): Not compiling with MPI. Suppress this warning with -DUSE_MPI=OFF Call Stack (most recent call first): CMakeLists.txt:873 (include) -- MKL_THREADING = OMP -- Check OMP with lib C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib and flags -openmp:experimental -- MKL_THREADING = OMP -- Check OMP with lib C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib and flags -openmp:experimental -- Found OpenMP_C: -openmp:experimental -- Found OpenMP_CXX: -openmp:experimental -- Found OpenMP: TRUE -- Adding OpenMP CXX_FLAGS: -openmp:experimental -- Will link against OpenMP libraries: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/lib/x64/libomp.lib -- Found nvtx3: E:/PyTorch_Build/pytorch/third_party/NVTX/c/include -- ROCM_PATH environment variable is not set and C:/opt/rocm does not exist. Building without ROCm support. 
-- Found Python3: E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe (found version "3.10.10") found components: Interpreter -- ONNX_PROTOC_EXECUTABLE: $<TARGET_FILE:protobuf::protoc> -- Protobuf_VERSION: Protobuf_VERSION_NOTFOUND Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto Generated: E:/PyTorch_Build/pytorch/build/third_party/onnx/onnx/onnx-data_onnx_torch.proto -- -- ******** Summary ******** -- CMake version : 4.1.0 -- CMake command : E:/PyTorch_Build/pytorch/pytorch_env/Lib/site-packages/cmake/data/bin/cmake.exe -- System : Windows -- C++ compiler : C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe -- C++ compiler version : 19.44.35215.0 -- CXX flags : /DWIN32 /D_WINDOWS /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL /EHsc /wd26812 -- Build type : Release -- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1 -- CMAKE_PREFIX_PATH : E:\PyTorch_Build\pytorch\pytorch_env\Lib\site-packages;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0 -- CMAKE_INSTALL_PREFIX : E:/PyTorch_Build/pytorch/torch -- CMAKE_MODULE_PATH : E:/PyTorch_Build/pytorch/cmake/Modules;E:/PyTorch_Build/pytorch/cmake/public/../Modules_CUDA_fix -- -- ONNX version : 1.18.0 -- ONNX NAMESPACE : onnx_torch -- ONNX_USE_LITE_PROTO : OFF -- USE_PROTOBUF_SHARED_LIBS : OFF -- ONNX_DISABLE_EXCEPTIONS : OFF -- ONNX_DISABLE_STATIC_REGISTRATION : OFF -- ONNX_WERROR : OFF -- ONNX_BUILD_TESTS : OFF -- BUILD_SHARED_LIBS : OFF -- -- Protobuf compiler : $<TARGET_FILE:protobuf::protoc> -- Protobuf includes : -- Protobuf libraries : -- ONNX_BUILD_PYTHON : OFF -- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor -- Adding -DNDEBUG to compile flags -- Checking 
prototype magma_get_sgeqrf_nb for MAGMA_V2 -- Checking prototype magma_get_sgeqrf_nb for MAGMA_V2 - False -- MAGMA not found. Compiling without MAGMA support -- Could not find hardware support for NEON on this machine. -- No OMAP3 processor on this machine. -- No OMAP4 processor on this machine. -- MKL_THREADING = OMP -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_intel_thread - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_sequential - mkl_core] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - libiomp5md - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - libiomp5md - pthread] -- Library mkl_intel: not found -- Checking for [mkl_intel_lp64 - mkl_core - pthread] -- Library mkl_intel_lp64: not found -- Checking for [mkl_intel - mkl_core - pthread] -- Library mkl_intel: not found -- Checking for [mkl - guide - pthread - m] -- Library mkl: not found -- MKL library not found -- Checking for [blis] -- Library blis: BLAS_blis_LIBRARY-NOTFOUND -- Checking for [Accelerate] -- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND -- Checking for [vecLib] -- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND -- Checking for [flexiblas] -- Library flexiblas: BLAS_flexiblas_LIBRARY-NOTFOUND -- Checking for [openblas] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [openblas - pthread - m - gomp] -- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND -- Checking for [libopenblas] -- Library 
libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND
-- Checking for [goto2 - gfortran]
-- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND
-- Checking for [goto2 - gfortran - pthread]
-- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND
-- Checking for [acml - gfortran]
-- Library acml: BLAS_acml_LIBRARY-NOTFOUND
-- Checking for [blis]
-- Library blis: BLAS_blis_LIBRARY-NOTFOUND
-- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY)
-- Checking for [ptf77blas - atlas - gfortran]
-- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND
-- Checking for []
-- Cannot find a library with BLAS API. Not using BLAS.
-- LAPACK requires BLAS
-- Cannot find a library with LAPACK API. Not using LAPACK.
disabling ROCM because NOT USE_ROCM is set
-- MIOpen not found. Compiling without MIOpen support
disabling MKLDNN because USE_MKLDNN is not set
-- {fmt} version: 11.2.0
-- Build type: Release
-- Using Kineto with CUPTI support
-- Configuring Kineto dependency:
-- KINETO_SOURCE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto
-- KINETO_BUILD_TESTS = OFF
-- KINETO_LIBRARY_TYPE = static
-- CUDA_SOURCE_DIR = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0
-- CUDA_INCLUDE_DIRS = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include
-- CUPTI_INCLUDE_DIR = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/extras/CUPTI/include
-- CUDA_cupti_LIBRARY = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/extras/CUPTI/lib64/cupti.lib
-- Found CUPTI
CMake Deprecation Warning at third_party/kineto/libkineto/CMakeLists.txt:7 (cmake_minimum_required):
  Compatibility with CMake < 3.10 will be removed from a future version of CMake.
  Update the VERSION argument <min> value. Or, use the <min>...<max> syntax to tell
  CMake that the project requires at least <min> but has been updated to work with
  policies introduced by <max> or earlier.
CMake Warning (dev) at third_party/kineto/libkineto/CMakeLists.txt:15 (find_package):
  Policy CMP0148 is not set: The FindPythonInterp and FindPythonLibs modules are removed.
  Run "cmake --help-policy CMP0148" for policy details. Use the cmake_policy command to
  set the policy and suppress this warning.
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found PythonInterp: E:/PyTorch_Build/pytorch/pytorch_env/Scripts/python.exe (found version "3.10.10")
-- ROCM_SOURCE_DIR =
-- Kineto: FMT_SOURCE_DIR = E:/PyTorch_Build/pytorch/third_party/fmt
-- Kineto: FMT_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/fmt/include
-- CUPTI_INCLUDE_DIR = E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/extras/CUPTI/include
-- ROCTRACER_INCLUDE_DIR = /include/roctracer
-- DYNOLOG_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto/third_party/dynolog/
-- IPCFABRIC_INCLUDE_DIR = E:/PyTorch_Build/pytorch/third_party/kineto/libkineto/third_party/dynolog//dynolog/src/ipcfabric/
-- Configured Kineto
-- Performing Test HAS/WD4624
-- Performing Test HAS/WD4624 - Success
-- Performing Test HAS/WD4068
-- Performing Test HAS/WD4068 - Success
-- Performing Test HAS/WD4067
-- Performing Test HAS/WD4067 - Success
-- Performing Test HAS/WD4267
-- Performing Test HAS/WD4267 - Success
-- Performing Test HAS/WD4661
-- Performing Test HAS/WD4661 - Success
-- Performing Test HAS/WD4717
-- Performing Test HAS/WD4717 - Success
-- Performing Test HAS/WD4244
-- Performing Test HAS/WD4244 - Success
-- Performing Test HAS/WD4804
-- Performing Test HAS/WD4804 - Success
-- Performing Test HAS/WD4273
-- Performing Test HAS/WD4273 - Success
-- Performing Test HAS_WNO_STRINGOP_OVERFLOW
-- Performing Test HAS_WNO_STRINGOP_OVERFLOW - Failed
--
-- Architecture: x64
-- Use the C++ compiler to compile (MI_USE_CXX=ON)
--
-- Library name : mimalloc
-- Version : 2.2.4
-- Build type : release
-- C++ Compiler : C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe
-- Compiler flags : /Zc:__cplusplus
-- Compiler defines : MI_CMAKE_BUILD_TYPE=release;MI_BUILD_RELEASE
-- Link libraries : psapi;shell32;user32;advapi32;bcrypt
-- Build targets : static
--
CMake Error at CMakeLists.txt:1264 (add_subdirectory):
  The source directory E:/PyTorch_Build/pytorch/torch/headeronly does not contain a
  CMakeLists.txt file.
-- don't use NUMA
-- Looking for backtrace
-- Looking for backtrace - not found
-- Could NOT find Backtrace (missing: Backtrace_LIBRARY Backtrace_INCLUDE_DIR)
-- Autodetected CUDA architecture(s): 12.0
-- Autodetected CUDA architecture(s): 12.0
-- Autodetected CUDA architecture(s): 12.0
-- headers outputs:
torch\csrc\inductor\aoti_torch\generated\c_shim_cpu.h not found
torch\csrc\inductor\aoti_torch\generated\c_shim_aten.h not found
torch\csrc\inductor\aoti_torch\generated\c_shim_cuda.h not found
-- sources outputs:
-- declarations_yaml outputs:
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Failed
-- Using ATen parallel backend: OMP
-- Could NOT find OpenSSL, try to set the path to OpenSSL root folder in the system variable OPENSSL_ROOT_DIR (missing: OPENSSL_CRYPTO_LIBRARY OPENSSL_INCLUDE_DIR)
-- Check size of long double
-- Check size of long double - done
-- Performing Test COMPILER_SUPPORTS_FLOAT128
-- Performing Test COMPILER_SUPPORTS_FLOAT128 - Failed
-- Performing Test COMPILER_SUPPORTS_SSE2
-- Performing Test COMPILER_SUPPORTS_SSE2 - Success
-- Performing Test COMPILER_SUPPORTS_SSE4
-- Performing Test COMPILER_SUPPORTS_SSE4 - Success
-- Performing Test COMPILER_SUPPORTS_AVX
-- Performing Test COMPILER_SUPPORTS_AVX - Success
-- Performing Test COMPILER_SUPPORTS_FMA4
-- Performing Test COMPILER_SUPPORTS_FMA4 - Success
-- Performing Test COMPILER_SUPPORTS_AVX2
-- Performing Test COMPILER_SUPPORTS_AVX2 - Success
-- Performing Test COMPILER_SUPPORTS_AVX512F
-- Performing Test COMPILER_SUPPORTS_AVX512F - Success
-- Found OpenMP_C: -openmp:experimental (found version "2.0")
-- Found OpenMP_CXX: -openmp:experimental (found version "2.0")
-- Found OpenMP_CUDA: -openmp (found version "2.0")
-- Found OpenMP: TRUE (found version "2.0")
-- Performing Test COMPILER_SUPPORTS_OPENMP
-- Performing Test COMPILER_SUPPORTS_OPENMP - Success
-- Performing Test COMPILER_SUPPORTS_OMP_SIMD
-- Performing Test COMPILER_SUPPORTS_OMP_SIMD - Failed
-- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES
-- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Failed
-- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH
-- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Failed
-- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM
-- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM - Failed
-- Configuring build for SLEEF-v3.8.0
   Target system: Windows-10.0.26100
   Target processor: AMD64
   Host system: Windows-10.0.26100
   Host processor: AMD64
   Detected C compiler: MSVC @ C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe
   CMake: 4.1.0
   Make program: E:/PyTorch_Build/pytorch/pytorch_env/Scripts/ninja.exe
-- Using option `/D_CRT_SECURE_NO_WARNINGS /D_CRT_NONSTDC_NO_DEPRECATE ` to compile libsleef
-- Building shared libs : OFF
-- Building static test bins: OFF
-- MPFR : LIB_MPFR-NOTFOUND
-- GMP : LIBGMP-NOTFOUND
-- RT :
-- FFTW3 : LIBFFTW3-NOTFOUND
-- OPENSSL :
-- SDE : SDE_COMMAND-NOTFOUND
-- COMPILER_SUPPORTS_OPENMP : FALSE
AT_INSTALL_INCLUDE_DIR include/ATen/core
core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/aten_interned_strings.h
core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/enum_tag.h
core header install: E:/PyTorch_Build/pytorch/build/aten/src/ATen/core/TensorBody.h
CMake Error: File E:/PyTorch_Build/pytorch/torch/_utils_internal.py does not exist.
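The errors about `torch/headeronly` lacking a `CMakeLists.txt` and `torch/_utils_internal.py` not existing usually point at an incomplete or partially deleted source checkout rather than a configuration problem. A hedged recovery sketch (paths are taken from the log; whether these files are git-tracked in this particular checkout is an assumption):

```powershell
# Sketch, not a verified fix: restore missing tracked files and submodules
# in the checkout at E:\PyTorch_Build\pytorch before re-running cmake.
cd E:\PyTorch_Build\pytorch
git status                                # see which tracked files are missing
git checkout -- torch                     # restore deleted tracked files under torch/
git submodule update --init --recursive   # re-sync third_party submodules
```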
CMake Error at caffe2/CMakeLists.txt:241 (configure_file):
  configure_file Problem configuring file
CMake Error: File E:/PyTorch_Build/pytorch/torch/csrc/api/include/torch/version.h.in does not exist.
CMake Error at caffe2/CMakeLists.txt:246 (configure_file):
  configure_file Problem configuring file
-- NVSHMEM not found, not building with NVSHMEM support.
CMake Error at caffe2/CMakeLists.txt:1398 (add_subdirectory):
  The source directory E:/PyTorch_Build/pytorch/torch does not contain a CMakeLists.txt file.
CMake Warning at CMakeLists.txt:1285 (message):
  Generated cmake files are only fully tested if one builds with system glog, gflags,
  and protobuf. Other settings may generate files that are not well tested.
--
-- ******** Summary ********
-- General:
-- CMake version : 4.1.0
-- CMake command : E:/PyTorch_Build/pytorch/pytorch_env/Lib/site-packages/cmake/data/bin/cmake.exe
-- System : Windows
-- C++ compiler : C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.44.35207/bin/Hostx64/x64/cl.exe
-- C++ compiler id : MSVC
-- C++ compiler version : 19.44.35215.0
-- Using ccache if found : OFF
-- CXX flags : /DWIN32 /D_WINDOWS /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE /wd4624 /wd4068 /wd4067 /wd4267 /wd4661 /wd4717 /wd4244 /wd4804 /wd4273
-- Shared LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099
-- Static LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099
-- Module LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099
-- Build type : Release
-- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;_CRT_SECURE_NO_DEPRECATE=1;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS;EXPORT_AOTI_FUNCTIONS;WIN32_LEAN_AND_MEAN;_UCRT_LEGACY_INFINITY;NOMINMAX;USE_MIMALLOC
-- CMAKE_PREFIX_PATH : E:\PyTorch_Build\pytorch\pytorch_env\Lib\site-packages;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0
-- CMAKE_INSTALL_PREFIX : E:/PyTorch_Build/pytorch/torch
-- USE_GOLD_LINKER : OFF
--
-- TORCH_VERSION : 2.9.0
-- BUILD_STATIC_RUNTIME_BENCHMARK: OFF
-- BUILD_BINARY : OFF
-- BUILD_CUSTOM_PROTOBUF : ON
-- Link local protobuf : ON
-- BUILD_PYTHON : True
-- Python version : 3.10.10
-- Python executable : E:\PyTorch_Build\pytorch\pytorch_env\Scripts\python.exe
-- Python library : E:/Python310/libs/python310.lib
-- Python includes : E:/Python310/Include
-- Python site-package : E:\PyTorch_Build\pytorch\pytorch_env\Lib\site-packages
-- BUILD_SHARED_LIBS : ON
-- CAFFE2_USE_MSVC_STATIC_RUNTIME : OFF
-- BUILD_TEST : True
-- BUILD_JNI : OFF
-- BUILD_MOBILE_AUTOGRAD : OFF
-- BUILD_LITE_INTERPRETER: OFF
-- INTERN_BUILD_MOBILE :
-- TRACING_BASED : OFF
-- USE_BLAS : 0
-- USE_LAPACK : 0
-- USE_ASAN : OFF
-- USE_TSAN : OFF
-- USE_CPP_CODE_COVERAGE : OFF
-- USE_CUDA : ON
-- CUDA static link : OFF
-- USE_CUDNN : OFF
-- USE_CUSPARSELT : OFF
-- USE_CUDSS : OFF
-- USE_CUFILE : OFF
-- CUDA version : 13.0
-- USE_FLASH_ATTENTION : OFF
-- USE_MEM_EFF_ATTENTION : ON
-- CUDA root directory : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0
-- CUDA library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cuda.lib
-- cudart library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cudart.lib
-- cublas library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cublas.lib
-- cufft library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cufft.lib
-- curand library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/curand.lib
-- cusparse library : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cusparse.lib
-- nvrtc : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/nvrtc.lib
-- CUDA include path : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/include
-- NVCC executable : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe
-- CUDA compiler : E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/bin/nvcc.exe
-- CUDA flags : -DLIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS -Xcompiler /Zc:__cplusplus -Xcompiler /w -w -Xcompiler /FS -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch --use-local-env -gencode arch=compute_120,code=sm_120 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=bad_friend_decl --Werror cross-execution-space-call --no-host-device-move-forward --expt-relaxed-constexpr --expt-extended-lambda -Xcompiler=/wd4819,/wd4503,/wd4190,/wd4244,/wd4251,/wd4275,/wd4522 -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUB_WRAPPED_NAMESPACE=at_cuda_detail -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__
-- CUDA host compiler :
-- CUDA --device-c : OFF
-- USE_TENSORRT :
-- USE_XPU : OFF
-- USE_ROCM : OFF
-- BUILD_NVFUSER :
-- USE_EIGEN_FOR_BLAS : ON
-- USE_EIGEN_FOR_SPARSE : OFF
-- USE_FBGEMM : OFF
-- USE_KINETO : ON
-- USE_GFLAGS : OFF
-- USE_GLOG : OFF
-- USE_LITE_PROTO : OFF
-- USE_PYTORCH_METAL : OFF
-- USE_PYTORCH_METAL_EXPORT : OFF
-- USE_MPS : OFF
-- CAN_COMPILE_METAL :
-- USE_MKL : OFF
-- USE_MKLDNN : OFF
-- USE_UCC : OFF
-- USE_ITT : ON
-- USE_XCCL : OFF
-- USE_NCCL : OFF
-- Found NVSHMEM :
-- USE_NNPACK : OFF
-- USE_NUMPY : OFF
-- USE_OBSERVERS : ON
-- USE_OPENCL : OFF
-- USE_OPENMP : ON
-- USE_MIMALLOC : ON
-- USE_MIMALLOC_ON_MKL : OFF
-- USE_VULKAN : OFF
-- USE_PROF : OFF
-- USE_PYTORCH_QNNPACK : OFF
-- USE_XNNPACK : ON
-- USE_DISTRIBUTED : OFF
-- Public Dependencies :
-- Private Dependencies : Threads::Threads;pthreadpool;cpuinfo;XNNPACK;microkernels-prod;ittnotify;fp16;caffe2::openmp;fmt::fmt-header-only;kineto
-- Public CUDA Deps. :
-- Private CUDA Deps. : caffe2::curand;caffe2::cufft;caffe2::cublas;fmt::fmt-header-only;E:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v13.0/lib/x64/cudart_static.lib;CUDA::cusparse;CUDA::cufft;CUDA::cusolver;ATEN_CUDA_FILES_GEN_LIB
-- USE_COREML_DELEGATE : OFF
-- BUILD_LAZY_TS_BACKEND : ON
-- USE_ROCM_KERNEL_ASSERT : OFF
-- Performing Test HAS_WMISSING_PROTOTYPES
-- Performing Test HAS_WMISSING_PROTOTYPES - Failed
-- Performing Test HAS_WERROR_MISSING_PROTOTYPES
-- Performing Test HAS_WERROR_MISSING_PROTOTYPES - Failed
-- Configuring incomplete, errors occurred!
(pytorch_env) PS E:\PyTorch_Build\pytorch> # Permanently fix the unavailable conda command
(pytorch_env) PS E:\PyTorch_Build\pytorch> $condaPaths = @(
>>     "$env:USERPROFILE\miniconda3\Scripts",
>>     "$env:USERPROFILE\anaconda3\Scripts",
>>     "C:\ProgramData\miniconda3\Scripts"
>> )
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> foreach ($path in $condaPaths) {
>>     if (Test-Path $path) {
>>         $env:PATH = "$path;$env:PATH"
>>         [Environment]::SetEnvironmentVariable("PATH", $env:PATH, "Machine")
>>         break
>>     }
>> }
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> # Verify the fix
(pytorch_env) PS E:\PyTorch_Build\pytorch> conda --version
conda: The term 'conda' is not recognized as a name of a cmdlet, function, script file, or executable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
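The summary above reports `USE_BLAS : 0` and `USE_LAPACK : 0`, which follows directly from the failed BLAS search earlier in the log. A hedged sketch of one way to supply a BLAS before reconfiguring (it assumes a conda environment is active, that conda-forge's OpenBLAS package is suitable here, and that this PyTorch build honors the `BLAS` cache variable and the `OpenBLAS_HOME` hint; the `Library` subdirectory is the usual Windows conda layout, also an assumption):

```powershell
# Sketch only: give CMake a BLAS implementation, then reconfigure.
conda install -y -c conda-forge openblas           # or mkl, matching the env
$env:OpenBLAS_HOME = "$env:CONDA_PREFIX\Library"   # Windows conda layout (assumed)
cmake .. -GNinja -DBLAS=OpenBLAS "-DCMAKE_PREFIX_PATH=$env:CONDA_PREFIX"
```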
(pytorch_env) PS E:\PyTorch_Build\pytorch> # Set the cuDNN v9.12 path
(pytorch_env) PS E:\PyTorch_Build\pytorch> $cudnnPath = "E:\Program Files\NVIDIA\CUNND\v9.12"
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> # Add to the environment variables
(pytorch_env) PS E:\PyTorch_Build\pytorch> $env:CUDNN_ROOT_DIR = $cudnnPath
(pytorch_env) PS E:\PyTorch_Build\pytorch> $env:CUDNN_INCLUDE_DIR = "$cudnnPath\include"
(pytorch_env) PS E:\PyTorch_Build\pytorch> $env:CUDNN_LIBRARY = "$cudnnPath\lib\x64\cudnn.lib"
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> # Make it permanent
(pytorch_env) PS E:\PyTorch_Build\pytorch> [Environment]::SetEnvironmentVariable("CUDNN_ROOT_DIR", $cudnnPath, "Machine")
(pytorch_env) PS E:\PyTorch_Build\pytorch> [Environment]::SetEnvironmentVariable("CUDNN_INCLUDE_DIR", "$cudnnPath\include", "Machine")
(pytorch_env) PS E:\PyTorch_Build\pytorch> [Environment]::SetEnvironmentVariable("CUDNN_LIBRARY", "$cudnnPath\lib\x64\cudnn.lib", "Machine")
(pytorch_env) PS E:\PyTorch_Build\pytorch> # The original code is at around line 190
(pytorch_env) PS E:\PyTorch_Build\pytorch> # Replace it with the following to force v9.12
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> set(CUDNN_VERSION "9.12.0")  # manually pin the version
CUDNN_VERSION: The term 'CUDNN_VERSION' is not recognized as a name of a cmdlet, function, script file, or executable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
(pytorch_env) PS E:\PyTorch_Build\pytorch> set(CUDNN_FOUND TRUE)
CUDNN_FOUND: The term 'CUDNN_FOUND' is not recognized as a name of a cmdlet, function, script file, or executable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
(pytorch_env) PS E:\PyTorch_Build\pytorch> set(CUDNN_INCLUDE_DIR $ENV{CUDNN_INCLUDE_DIR})
InvalidOperation: The variable '$ENV' cannot be retrieved because it has not been set.
(pytorch_env) PS E:\PyTorch_Build\pytorch> set(CUDNN_LIBRARY $ENV{CUDNN_LIBRARY})
InvalidOperation: The variable '$ENV' cannot be retrieved because it has not been set.
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> message(STATUS "Using manually configured cuDNN v${CUDNN_VERSION}")
InvalidOperation: The variable '$CUDNN_VERSION' cannot be retrieved because it has not been set.
(pytorch_env) PS E:\PyTorch_Build\pytorch> message(STATUS "  Include path: ${CUDNN_INCLUDE_DIR}")
InvalidOperation: The variable '$CUDNN_INCLUDE_DIR' cannot be retrieved because it has not been set.
(pytorch_env) PS E:\PyTorch_Build\pytorch> message(STATUS "  Library path: ${CUDNN_LIBRARY}")
InvalidOperation: The variable '$CUDNN_LIBRARY' cannot be retrieved because it has not been set.
(pytorch_env) PS E:\PyTorch_Build\pytorch> # Search for conda.bat precisely
(pytorch_env) PS E:\PyTorch_Build\pytorch> $condaPath = Get-ChildItem -Path C:\ -Recurse -Filter conda.bat -ErrorAction SilentlyContinue |
>>     Select-Object -First 1 |
>>     ForEach-Object { $_.DirectoryName }
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch> if ($condaPath) {
>>     $env:PATH = "$condaPath;$env:PATH"
>>     [Environment]::SetEnvironmentVariable("PATH", $env:PATH, "Machine")
>>     Write-Host "Conda found at: $condaPath" -ForegroundColor Green
>> } else {
>>     Write-Host "Conda not found! Installing miniconda..." -ForegroundColor Yellow
>>     # Automatically install miniconda
>>     Invoke-WebRequest -Uri "https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe" -OutFile "$env:TEMP\miniconda.exe"
>>     Start-Process -FilePath "$env:TEMP\miniconda.exe" -ArgumentList "/S", "/AddToPath=1", "/InstallationType=AllUsers", "/D=C:\Miniconda3" -Wait
>>     $env:PATH = "C:\Miniconda3\Scripts;$env:PATH"
>> }
Conda not found! Installing miniconda...
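The failing `set(...)` and `message(STATUS ...)` lines are CMake-language commands, which is why PowerShell rejects every one of them: they belong inside a CMake file (e.g. the cuDNN find module the comment refers to), not at the prompt. From the shell, the usual equivalent is to pass the same values as `-D` cache variables when configuring. A sketch (variable names are copied from the session above; whether this build consumes them is an assumption):

```powershell
# PowerShell-side equivalent of the CMake set() calls above (sketch only).
cmake .. -GNinja `
  "-DCUDNN_INCLUDE_DIR=$env:CUDNN_INCLUDE_DIR" `
  "-DCUDNN_LIBRARY=$env:CUDNN_LIBRARY" `
  -DUSE_CUDNN=ON
```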
/AddToPath=1 is disabled and ignored in 'All Users' installations
Welcome to Miniconda3 py313_25.7.0-2
By continuing this installation you are accepting this license agreement: C:\Miniconda3\EULA.txt
Please run the installer in GUI mode to read the details.
Miniconda3 will now be installed into this location: C:\Miniconda3
Unpacking payload...
Setting up the package cache...
Setting up the base environment...
Installing packages for base, creating shortcuts if necessary...
Initializing conda directories...
Setting installation directory permissions...
Done!
(pytorch_env) PS E:\PyTorch_Build\pytorch>
(pytorch_env) PS E:\PyTorch_Build\pytorch>
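Note that the installer itself warns `/AddToPath=1 is disabled and ignored in 'All Users' installations`, and `[Environment]::SetEnvironmentVariable(..., "Machine")` only affects processes started afterwards, so `conda` can still be missing from the current session. A hedged way to activate the fresh install in place (`C:\Miniconda3` is the install path chosen above):

```powershell
# Activate the just-installed Miniconda in the current PowerShell session.
& "C:\Miniconda3\Scripts\conda.exe" init powershell  # writes the profile hook
. $PROFILE                                           # reload the profile here
conda --version                                      # should resolve once the hook is loaded
```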
==> /var/log/gitlab/gitlab-rails/exceptions_json.log <====> /var/log/gitlab/gitlab-rails/application.log <==2025-08-28T04:45:08.052Z: {:message=>"Dropping detached postgres partitions"}2025-08-28T04:45:08.053Z: {:message=>"Switched database connection", :connection_name=>"main"}2025-08-28T04:45:08.053Z: {:message=>"Checking for previously detached partitions to drop"}2025-08-28T04:45:08.144Z: {:message=>"Finished dropping detached postgres partitions"}2025-08-28T04:45:08.144Z: {:message=>"Switched database connection", :connection_name=>"main"}2025-08-28T04:45:08.237Z: {:message=>"Switched database connection", :connection_name=>"main"}2025-08-28T04:45:08.266Z: {:message=>"Switched database connection", :connection_name=>"main"}2025-08-28T04:45:08.386Z: {:message=>"Switched database connection", :connection_name=>"main"}2025-08-28T05:20:37.702Z: {:message=>"Excluding unhealthy shards", :failed_checks=>[{:status=>"failed", :message=>"7:permission denied. debug_error_string:{\"created\":\"@1756358407.690139809\",\"description\":\"Error received from peer unix:/var/opt/gitlab/gitaly/gitaly.socket\",\"file\":\"src/core/lib/surface/call.cc\",\"file_line\":1063,\"grpc_message\":\"permission denied\",\"grpc_status\":7}", :labels=>{:shard=>"gitaly-2"}}, {:status=>"failed", :message=>"gitaly node connectivity & disk access: the following nodes are not healthy: tcp://192.168.41.3:8075", :labels=>{:shard=>"default"}}], :class=>"RepositoryCheck::DispatchWorker"}2025-08-28T05:40:08.600Z: Ci::StuckBuilds::DropScheduledService: Cleaning scheduled, timed-out builds==> /var/log/gitlab/gitlab-rails/grpc.log <====> /var/log/gitlab/gitlab-rails/git_json.log <====> /var/log/gitlab/gitlab-rails/auth.log <====> /var/log/gitlab/gitlab-rails/api_json.log <====> /var/log/gitlab/gitlab-rails/service_measurement.log <====> /var/log/gitlab/gitlab-rails/database_load_balancing.log <====> /var/log/gitlab/gitlab-rails/production_json.log 
<=={"method":"GET","path":"/-/metrics","format":"html","controller":"MetricsController","action":"index","status":200,"time":"2025-08-28T06:09:57.670Z","params":[],"db_count":0,"db_write_count":0,"db_cached_count":0,"db_replica_count":0,"db_primary_count":0,"db_main_count":0,"db_main_replica_count":0,"db_replica_cached_count":0,"db_primary_cached_count":0,"db_main_cached_count":0,"db_main_replica_cached_count":0,"db_replica_wal_count":0,"db_primary_wal_count":0,"db_main_wal_count":0,"db_main_replica_wal_count":0,"db_replica_wal_cached_count":0,"db_primary_wal_cached_count":0,"db_main_wal_cached_count":0,"db_main_replica_wal_cached_count":0,"db_replica_duration_s":0.0,"db_primary_duration_s":0.0,"db_main_duration_s":0.0,"db_main_replica_duration_s":0.0,"cpu_s":0.098239,"mem_objects":1992,"mem_bytes":867576,"mem_mallocs":4906,"mem_total_bytes":947256,"pid":29061,"worker_id":"puma_2","rate_limiting_gates":[],"correlation_id":"7710a24b-ad5e-45c2-b3fb-731d8e630232","db_duration_s":0.0,"view_duration_s":0.00095,"duration_s":0.08899}{"method":"GET","path":"/-/metrics","format":"html","controller":"MetricsController","action":"index","status":200,"time":"2025-08-28T06:10:12.646Z","params":[],"redis_calls":3,"redis_duration_s":0.00208,"redis_read_bytes":608,"redis_write_bytes":206,"redis_cache_calls":3,"redis_cache_duration_s":0.00208,"redis_cache_read_bytes":608,"redis_cache_write_bytes":206,"db_count":0,"db_write_count":0,"db_cached_count":0,"db_replica_count":0,"db_primary_count":0,"db_main_count":0,"db_main_replica_count":0,"db_replica_cached_count":0,"db_primary_cached_count":0,"db_main_cached_count":0,"db_main_replica_cached_count":0,"db_replica_wal_count":0,"db_primary_wal_count":0,"db_main_wal_count":0,"db_main_replica_wal_count":0,"db_replica_wal_cached_count":0,"db_primary_wal_cached_count":0,"db_main_wal_cached_count":0,"db_main_replica_wal_cached_count":0,"db_replica_duration_s":0.0,"db_primary_duration_s":0.0,"db_main_duration_s":0.0,"db_main_replica_duration_s"
:0.0,"cpu_s":0.07088,"mem_objects":2529,"mem_bytes":1023080,"mem_mallocs":5170,"mem_total_bytes":1124240,"pid":29064,"worker_id":"puma_3","rate_limiting_gates":[],"correlation_id":"acab1e51-5825-47d1-96dc-1faa70268d7a","db_duration_s":0.0,"view_duration_s":0.00102,"duration_s":0.05915}{"method":"GET","path":"/-/metrics","format":"html","controller":"MetricsController","action":"index","status":200,"time":"2025-08-28T06:10:27.640Z","params":[],"db_count":0,"db_write_count":0,"db_cached_count":0,"db_replica_count":0,"db_primary_count":0,"db_main_count":0,"db_main_replica_count":0,"db_replica_cached_count":0,"db_primary_cached_count":0,"db_main_cached_count":0,"db_main_replica_cached_count":0,"db_replica_wal_count":0,"db_primary_wal_count":0,"db_main_wal_count":0,"db_main_replica_wal_count":0,"db_replica_wal_cached_count":0,"db_primary_wal_cached_count":0,"db_main_wal_cached_count":0,"db_main_replica_wal_cached_count":0,"db_replica_duration_s":0.0,"db_primary_duration_s":0.0,"db_main_duration_s":0.0,"db_main_replica_duration_s":0.0,"cpu_s":0.066442,"mem_objects":1992,"mem_bytes":867576,"mem_mallocs":4906,"mem_total_bytes":947256,"pid":29064,"worker_id":"puma_3","rate_limiting_gates":[],"correlation_id":"8151f690-67a9-478a-a6cb-19d703acc7c5","db_duration_s":0.0,"view_duration_s":0.00098,"duration_s":0.05607}{"method":"GET","path":"/-/metrics","format":"html","controller":"MetricsController","action":"index","status":200,"time":"2025-08-28T06:10:42.640Z","params":[],"redis_calls":3,"redis_duration_s":0.001468,"redis_read_bytes":608,"redis_write_bytes":206,"redis_cache_calls":3,"redis_cache_duration_s":0.001468,"redis_cache_read_bytes":608,"redis_cache_write_bytes":206,"db_count":0,"db_write_count":0,"db_cached_count":0,"db_replica_count":0,"db_primary_count":0,"db_main_count":0,"db_main_replica_count":0,"db_replica_cached_count":0,"db_primary_cached_count":0,"db_main_cached_count":0,"db_main_replica_cached_count":0,"db_replica_wal_count":0,"db_primary_wal_count":0,"db_ma
in_wal_count":0,"db_main_replica_wal_count":0,"db_replica_wal_cached_count":0,"db_primary_wal_cached_count":0,"db_main_wal_cached_count":0,"db_main_replica_wal_cached_count":0,"db_replica_duration_s":0.0,"db_primary_duration_s":0.0,"db_main_duration_s":0.0,"db_main_replica_duration_s":0.0,"cpu_s":0.066602,"mem_objects":2529,"mem_bytes":1023080,"mem_mallocs":5170,"mem_total_bytes":1124240,"pid":29053,"worker_id":"puma_0","rate_limiting_gates":[],"correlation_id":"e0a6709c-3d88-4d2c-9b5f-c6a220ecb137","db_duration_s":0.0,"view_duration_s":0.00093,"duration_s":0.05519}{"method":"GET","path":"/-/metrics","format":"html","controller":"MetricsController","action":"index","status":200,"time":"2025-08-28T06:10:57.633Z","params":[],"db_count":0,"db_write_count":0,"db_cached_count":0,"db_replica_count":0,"db_primary_count":0,"db_main_count":0,"db_main_replica_count":0,"db_replica_cached_count":0,"db_primary_cached_count":0,"db_main_cached_count":0,"db_main_replica_cached_count":0,"db_replica_wal_count":0,"db_primary_wal_count":0,"db_main_wal_count":0,"db_main_replica_wal_count":0,"db_replica_wal_cached_count":0,"db_primary_wal_cached_count":0,"db_main_wal_cached_count":0,"db_main_replica_wal_cached_count":0,"db_replica_duration_s":0.0,"db_primary_duration_s":0.0,"db_main_duration_s":0.0,"db_main_replica_duration_s":0.0,"cpu_s":0.059788,"mem_objects":1992,"mem_bytes":867576,"mem_mallocs":4906,"mem_total_bytes":947256,"pid":29053,"worker_id":"puma_0","rate_limiting_gates":[],"correlation_id":"5a3bfca8-2182-4000-9ede-11d89bbf155c","db_duration_s":0.0,"view_duration_s":0.00085,"duration_s":0.05267}{"method":"GET","path":"/-/metrics","format":"html","controller":"MetricsController","action":"index","status":200,"time":"2025-08-28T06:11:12.642Z","params":[],"db_count":0,"db_write_count":0,"db_cached_count":0,"db_replica_count":0,"db_primary_count":0,"db_main_count":0,"db_main_replica_count":0,"db_replica_cached_count":0,"db_primary_cached_count":0,"db_main_cached_count":0,"db_main_
replica_cached_count":0,"db_replica_wal_count":0,"db_primary_wal_count":0,"db_main_wal_count":0,"db_main_replica_wal_count":0,"db_replica_wal_cached_count":0,"db_primary_wal_cached_count":0,"db_main_wal_cached_count":0,"db_main_replica_wal_cached_count":0,"db_replica_duration_s":0.0,"db_primary_duration_s":0.0,"db_main_duration_s":0.0,"db_main_replica_duration_s":0.0,"cpu_s":0.068716,"mem_objects":1992,"mem_bytes":867576,"mem_mallocs":4906,"mem_total_bytes":947256,"pid":29053,"worker_id":"puma_0","rate_limiting_gates":[],"correlation_id":"ac47d522-332d-4b1d-9d3a-d855ff239712","db_duration_s":0.0,"view_duration_s":0.00093,"duration_s":0.05931}{"method":"GET","path":"/-/metrics","format":"html","controller":"MetricsController","action":"index","status":200,"time":"2025-08-28T06:11:27.659Z","params":[],"redis_calls":3,"redis_duration_s":0.002131,"redis_read_bytes":608,"redis_write_bytes":206,"redis_cache_calls":3,"redis_cache_duration_s":0.002131,"redis_cache_read_bytes":608,"redis_cache_write_bytes":206,"db_count":0,"db_write_count":0,"db_cached_count":0,"db_replica_count":0,"db_primary_count":0,"db_main_count":0,"db_main_replica_count":0,"db_replica_cached_count":0,"db_primary_cached_count":0,"db_main_cached_count":0,"db_main_replica_cached_count":0,"db_replica_wal_count":0,"db_primary_wal_count":0,"db_main_wal_count":0,"db_main_replica_wal_count":0,"db_replica_wal_cached_count":0,"db_primary_wal_cached_count":0,"db_main_wal_cached_count":0,"db_main_replica_wal_cached_count":0,"db_replica_duration_s":0.0,"db_primary_duration_s":0.0,"db_main_duration_s":0.0,"db_main_replica_duration_s":0.0,"cpu_s":0.083974,"mem_objects":2529,"mem_bytes":1023080,"mem_mallocs":5170,"mem_total_bytes":1124240,"pid":29064,"worker_id":"puma_3","rate_limiting_gates":[],"correlation_id":"2667866f-fcbe-4817-ab97-7df209d37897","db_duration_s":0.0,"view_duration_s":0.00146,"duration_s":0.06867}{"method":"GET","path":"/-/metrics","format":"html","controller":"MetricsController","action":"index","
status":200,"time":"2025-08-28T06:11:42.648Z","params":[],"redis_calls":3,"redis_duration_s":0.001244,"redis_read_bytes":608,"redis_write_bytes":206,"redis_cache_calls":3,"redis_cache_duration_s":0.001244,"redis_cache_read_bytes":608,"redis_cache_write_bytes":206,"db_count":0,"db_write_count":0,"db_cached_count":0,"db_replica_count":0,"db_primary_count":0,"db_main_count":0,"db_main_replica_count":0,"db_replica_cached_count":0,"db_primary_cached_count":0,"db_main_cached_count":0,"db_main_replica_cached_count":0,"db_replica_wal_count":0,"db_primary_wal_count":0,"db_main_wal_count":0,"db_main_replica_wal_count":0,"db_replica_wal_cached_count":0,"db_primary_wal_cached_count":0,"db_main_wal_cached_count":0,"db_main_replica_wal_cached_count":0,"db_replica_duration_s":0.0,"db_primary_duration_s":0.0,"db_main_duration_s":0.0,"db_main_replica_duration_s":0.0,"cpu_s":0.075447,"mem_objects":2529,"mem_bytes":1023080,"mem_mallocs":5170,"mem_total_bytes":1124240,"pid":29057,"worker_id":"puma_1","rate_limiting_gates":[],"correlation_id":"1f51f1b2-c732-4cee-927f-480ae2563dd4","db_duration_s":0.0,"view_duration_s":0.00118,"duration_s":0.06432}{"method":"GET","path":"/-/metrics","format":"html","controller":"MetricsController","action":"index","status":200,"time":"2025-08-28T06:11:57.647Z","params":[],"redis_calls":3,"redis_duration_s":0.001642,"redis_read_bytes":608,"redis_write_bytes":206,"redis_cache_calls":3,"redis_cache_duration_s":0.001642,"redis_cache_read_bytes":608,"redis_cache_write_bytes":206,"db_count":0,"db_write_count":0,"db_cached_count":0,"db_replica_count":0,"db_primary_count":0,"db_main_count":0,"db_main_replica_count":0,"db_replica_cached_count":0,"db_primary_cached_count":0,"db_main_cached_count":0,"db_main_replica_cached_count":0,"db_replica_wal_count":0,"db_primary_wal_count":0,"db_main_wal_count":0,"db_main_replica_wal_count":0,"db_replica_wal_cached_count":0,"db_primary_wal_cached_count":0,"db_main_wal_cached_count":0,"db_main_replica_wal_cached_count":0,"db_r
eplica_duration_s":0.0,"db_primary_duration_s":0.0,"db_main_duration_s":0.0,"db_main_replica_duration_s":0.0,"cpu_s":0.07402,"mem_objects":2529,"mem_bytes":1023080,"mem_mallocs":5170,"mem_total_bytes":1124240,"pid":29061,"worker_id":"puma_2","rate_limiting_gates":[],"correlation_id":"91a88641-1e5f-4c61-b7f6-b58c9ba15e68","db_duration_s":0.0,"view_duration_s":0.00082,"duration_s":0.0627}{"method":"GET","path":"/-/metrics","format":"html","controller":"MetricsController","action":"index","status":200,"time":"2025-08-28T06:12:12.650Z","params":[],"redis_calls":3,"redis_duration_s":0.00147,"redis_read_bytes":608,"redis_write_bytes":206,"redis_cache_calls":3,"redis_cache_duration_s":0.00147,"redis_cache_read_bytes":608,"redis_cache_write_bytes":206,"db_count":0,"db_write_count":0,"db_cached_count":0,"db_replica_count":0,"db_primary_count":0,"db_main_count":0,"db_main_replica_count":0,"db_replica_cached_count":0,"db_primary_cached_count":0,"db_main_cached_count":0,"db_main_replica_cached_count":0,"db_replica_wal_count":0,"db_primary_wal_count":0,"db_main_wal_count":0,"db_main_replica_wal_count":0,"db_replica_wal_cached_count":0,"db_primary_wal_cached_count":0,"db_main_wal_cached_count":0,"db_main_replica_wal_cached_count":0,"db_replica_duration_s":0.0,"db_primary_duration_s":0.0,"db_main_duration_s":0.0,"db_main_replica_duration_s":0.0,"cpu_s":0.076819,"mem_objects":2529,"mem_bytes":1023080,"mem_mallocs":5170,"mem_total_bytes":1124240,"pid":29053,"worker_id":"puma_0","rate_limiting_gates":[],"correlation_id":"90764384-aa97-40bb-9d13-f381f8285583","db_duration_s":0.0,"view_duration_s":0.0012,"duration_s":0.06532}==> /var/log/gitlab/gitlab-rails/sidekiq_client.log <====> /var/log/gitlab/gitlab-rails/gitlab-rails-db-migrate-2025-08-09-16-03-52.log <====> /var/log/gitlab/gitlab-rails/application_json.log <=={"severity":"INFO","time":"2025-08-28T04:45:08.052Z","correlation_id":"f49f6fc97aa56ff192e961d548cef24d","message":"Dropping detached postgres 
partitions"}{"severity":"DEBUG","time":"2025-08-28T04:45:08.053Z","correlation_id":"f49f6fc97aa56ff192e961d548cef24d","message":"Switched database connection","connection_name":"main"}{"severity":"INFO","time":"2025-08-28T04:45:08.053Z","correlation_id":"f49f6fc97aa56ff192e961d548cef24d","message":"Checking for previously detached partitions to drop"}{"severity":"INFO","time":"2025-08-28T04:45:08.144Z","correlation_id":"f49f6fc97aa56ff192e961d548cef24d","message":"Finished dropping detached postgres partitions"}{"severity":"DEBUG","time":"2025-08-28T04:45:08.144Z","correlation_id":"f49f6fc97aa56ff192e961d548cef24d","message":"Switched database connection","connection_name":"main"}{"severity":"DEBUG","time":"2025-08-28T04:45:08.238Z","correlation_id":"f49f6fc97aa56ff192e961d548cef24d","message":"Switched database connection","connection_name":"main"}{"severity":"DEBUG","time":"2025-08-28T04:45:08.266Z","correlation_id":"f49f6fc97aa56ff192e961d548cef24d","message":"Switched database connection","connection_name":"main"}{"severity":"DEBUG","time":"2025-08-28T04:45:08.387Z","correlation_id":"f49f6fc97aa56ff192e961d548cef24d","message":"Switched database connection","connection_name":"main"}{"severity":"ERROR","time":"2025-08-28T05:20:37.702Z","correlation_id":"5e2caba4a96c082232b68c79ad5d9c34","message":"Excluding unhealthy shards","failed_checks":[{"status":"failed","message":"7:permission denied. 
debug_error_string:{\"created\":\"@1756358407.690139809\",\"description\":\"Error received from peer unix:/var/opt/gitlab/gitaly/gitaly.socket\",\"file\":\"src/core/lib/surface/call.cc\",\"file_line\":1063,\"grpc_message\":\"permission denied\",\"grpc_status\":7}","labels":{"shard":"gitaly-2"}},{"status":"failed","message":"gitaly node connectivity \u0026 disk access: the following nodes are not healthy: tcp://192.168.41.3:8075","labels":{"shard":"default"}}],"class":"RepositoryCheck::DispatchWorker"}{"severity":"INFO","time":"2025-08-28T05:40:08.600Z","correlation_id":"996ed303f842c5f001a402c9e726bdbb","message":"Ci::StuckBuilds::DropScheduledService: Cleaning scheduled, timed-out builds"}==> /var/log/gitlab/gitlab-rails/production.log <==Started GET "/-/health" for 127.0.0.1 at 2025-08-28 14:12:15 +0800Started GET "/-/health" for 127.0.0.1 at 2025-08-28 14:12:16 +0800Started GET "/-/health" for 127.0.0.1 at 2025-08-28 14:12:17 +0800Started GET "/-/health" for 127.0.0.1 at 2025-08-28 14:12:18 +0800Started GET "/-/health" for 127.0.0.1 at 2025-08-28 14:12:19 +0800Started GET "/-/health" for 127.0.0.1 at 2025-08-28 14:12:20 +0800Started GET "/-/health" for 127.0.0.1 at 2025-08-28 14:12:21 +0800Started GET "/-/health" for 127.0.0.1 at 2025-08-28 14:12:22 +0800Started GET "/-/health" for 127.0.0.1 at 2025-08-28 14:12:23 +0800Started GET "/-/health" for 127.0.0.1 at 2025-08-28 14:12:24 +0800==> /var/log/gitlab/prometheus/current <==2025-08-28_03:57:30.77464 ts=2025-08-28T03:57:30.774Z caller=checkpoint.go:100 level=info component=tsdb msg="Creating checkpoint" from_segment=214 to_segment=215 mint=17562168000002025-08-28_03:57:30.93603 ts=2025-08-28T03:57:30.935Z caller=head.go:1013 level=info component=tsdb msg="WAL checkpoint complete" first=214 last=215 duration=161.418949ms2025-08-28_03:57:32.87013 ts=2025-08-28T03:57:32.870Z caller=manager.go:213 level=error component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to 
load specified CA cert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory" scrape_pool=kubernetes-cadvisor2025-08-28_03:57:32.87017 ts=2025-08-28T03:57:32.870Z caller=manager.go:213 level=error component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory" scrape_pool=kubernetes-nodes2025-08-28_03:57:32.87040 ts=2025-08-28T03:57:32.870Z caller=manager.go:213 level=error component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory" scrape_pool=kubernetes-pods2025-08-28_05:57:32.13015 ts=2025-08-28T05:57:32.129Z caller=compact.go:519 level=info component=tsdb msg="write block" mint=1756349850661 maxt=1756353600000 ulid=01K3QM1CY67XX3HNZ43AT89XEM duration=411.731549ms2025-08-28_05:57:32.14118 ts=2025-08-28T05:57:32.141Z caller=db.go:1294 level=info component=tsdb msg="Deleting obsolete block" block=01K2D169FEGBGWT15E3SG4EECN2025-08-28_05:57:32.14605 ts=2025-08-28T05:57:32.145Z caller=db.go:1294 level=info component=tsdb msg="Deleting obsolete block" block=01K2GWSC54DD7BBKBVNC8NSPM02025-08-28_05:57:32.15102 ts=2025-08-28T05:57:32.150Z caller=db.go:1294 level=info component=tsdb msg="Deleting obsolete block" block=01K2EYZW7RMFE1D0FD6YTRFB3N2025-08-28_05:57:32.18483 ts=2025-08-28T05:57:32.184Z caller=head.go:844 level=info component=tsdb msg="Head GC completed" duration=33.616389ms==> /var/log/gitlab/prometheus/state <====> /var/log/gitlab/puma/puma_stderr.log <====> /var/log/gitlab/puma/puma_stdout.log <====> /var/log/gitlab/puma/current <==2025-08-28_03:57:29.57607 
{"timestamp":"2025-08-28T03:57:29.576Z","pid":25248,"message":"* Environment: production"}2025-08-28_03:57:29.57610 {"timestamp":"2025-08-28T03:57:29.576Z","pid":25248,"message":"* Master PID: 25248"}2025-08-28_03:57:29.57611 {"timestamp":"2025-08-28T03:57:29.576Z","pid":25248,"message":"* Workers: 4"}2025-08-28_03:57:29.57616 {"timestamp":"2025-08-28T03:57:29.576Z","pid":25248,"message":"* Restarts: (✔) hot (✔) phased"}2025-08-28_03:57:29.57616 {"timestamp":"2025-08-28T03:57:29.576Z","pid":25248,"message":"* Preloading application"}2025-08-28_03:58:43.04424 {"timestamp":"2025-08-28T03:58:43.043Z","pid":25248,"message":"* Listening on unix:///var/opt/gitlab/gitlab-rails/sockets/gitlab.socket"}2025-08-28_03:58:43.04504 {"timestamp":"2025-08-28T03:58:43.044Z","pid":25248,"message":"* Listening on http://127.0.0.1:8080"}2025-08-28_03:58:43.04515 {"timestamp":"2025-08-28T03:58:43.045Z","pid":25248,"message":"! WARNING: Detected 1 Thread(s) started in app boot:"}2025-08-28_03:58:43.04528 {"timestamp":"2025-08-28T03:58:43.045Z","pid":25248,"message":"!
#\u003cThread:0x00007fe68f0ea588 /opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/rack-timeout-0.6.3/lib/rack/timeout/support/scheduler.rb:73 sleep\u003e - /opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/rack-timeout-0.6.3/lib/rack/timeout/support/scheduler.rb:91:in `sleep'"}2025-08-28_03:58:43.04543 {"timestamp":"2025-08-28T03:58:43.045Z","pid":25248,"message":"Use Ctrl-C to stop"}==> /var/log/gitlab/puma/state <====> /var/log/gitlab/gitlab-kas/current <==2025-08-28_03:54:24.78884 {"level":"info","time":"2025-08-28T11:54:24.788+0800","msg":"Private API endpoint is up","net_network":"tcp","net_address":"127.0.0.1:8155"}2025-08-28_03:54:24.78891 {"level":"info","time":"2025-08-28T11:54:24.788+0800","msg":"API endpoint is up","net_network":"tcp","net_address":"127.0.0.1:8153"}2025-08-28_03:54:24.78892 {"level":"info","time":"2025-08-28T11:54:24.788+0800","msg":"Agentk API endpoint is up","net_network":"tcp","net_address":"127.0.0.1:8150","is_websocket":true}2025-08-28_03:54:24.78893 {"level":"info","time":"2025-08-28T11:54:24.788+0800","msg":"Observability endpoint is up","mod_name":"observability","net_network":"tcp","net_address":"127.0.0.1:8151"}2025-08-28_03:54:24.78893 {"level":"info","time":"2025-08-28T11:54:24.788+0800","msg":"Kubernetes API endpoint is up","mod_name":"kubernetes_api","net_network":"tcp","net_address":"127.0.0.1:8154"}2025-08-28_03:57:24.87723 {"level":"info","time":"2025-08-28T11:57:24.877+0800","msg":"Private API endpoint is up","net_network":"tcp","net_address":"127.0.0.1:8155"}2025-08-28_03:57:24.87727 {"level":"info","time":"2025-08-28T11:57:24.877+0800","msg":"Kubernetes API endpoint is up","mod_name":"kubernetes_api","net_network":"tcp","net_address":"127.0.0.1:8154"}2025-08-28_03:57:24.87728 {"level":"info","time":"2025-08-28T11:57:24.877+0800","msg":"API endpoint is up","net_network":"tcp","net_address":"127.0.0.1:8153"}2025-08-28_03:57:24.87728 {"level":"info","time":"2025-08-28T11:57:24.877+0800","msg":"Agentk API endpoint is 
up","net_network":"tcp","net_address":"127.0.0.1:8150","is_websocket":true}2025-08-28_03:57:24.87744 {"level":"info","time":"2025-08-28T11:57:24.877+0800","msg":"Observability endpoint is up","mod_name":"observability","net_network":"tcp","net_address":"127.0.0.1:8151"}==> /var/log/gitlab/gitlab-kas/state <====> /var/log/gitlab/praefect/praefect-sql-migrate-2025-08-28-11-56-22.log <==praefect sql-migrate: all migrations are up==> /var/log/gitlab/praefect/praefect-sql-migrate-2025-08-09-16-03-47.log <==praefect sql-migrate: all migrations are up==> /var/log/gitlab/praefect/praefect-sql-migrate-2025-08-25-16-02-47.log <==praefect sql-migrate: all migrations are up==> /var/log/gitlab/praefect/praefect-sql-migrate-2025-08-09-19-26-13.log <==praefect sql-migrate: all migrations are up==> /var/log/gitlab/praefect/current <=={"component":"HealthManager","correlation_id":"01K3QMW441YR5FPC7281XC8428","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 192.168.41.3:8075: connect: connection refused\"","level":"error","msg":"failed checking node health","pid":25138,"storage":"gitaly-3","time":"2025-08-28T06:12:07.425Z","virtual_storage":"default"}{"component":"HealthManager","correlation_id":"01K3QMW60ZRCRGG5A8CCWG7KFS","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 192.168.41.3:8075: connect: connection refused\"","level":"error","msg":"failed checking node health","pid":25138,"storage":"gitaly-3","time":"2025-08-28T06:12:09.375Z","virtual_storage":"default"}{"component":"HealthManager","correlation_id":"01K3QMW7V9SG05V7RTV1FWGZC5","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 192.168.41.3:8075: connect: connection refused\"","level":"error","msg":"failed checking node 
health","pid":25138,"storage":"gitaly-3","time":"2025-08-28T06:12:11.241Z","virtual_storage":"default"}{"component":"HealthManager","correlation_id":"01K3QMW9KMRXV0PBDZK1H7AH5J","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 192.168.41.3:8075: connect: connection refused\"","level":"error","msg":"failed checking node health","pid":25138,"storage":"gitaly-3","time":"2025-08-28T06:12:13.045Z","virtual_storage":"default"}{"component":"HealthManager","correlation_id":"01K3QMWBHRFXSF6F2N002A5AT3","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 192.168.41.3:8075: connect: connection refused\"","level":"error","msg":"failed checking node health","pid":25138,"storage":"gitaly-3","time":"2025-08-28T06:12:15.033Z","virtual_storage":"default"}{"component":"HealthManager","correlation_id":"01K3QMWDDMN13NJGVYHMWFZXD4","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 192.168.41.3:8075: connect: connection refused\"","level":"error","msg":"failed checking node health","pid":25138,"storage":"gitaly-3","time":"2025-08-28T06:12:16.948Z","virtual_storage":"default"}{"component":"HealthManager","correlation_id":"01K3QMWF6JZP9K1Z9W0TV43XH1","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 192.168.41.3:8075: connect: connection refused\"","level":"error","msg":"failed checking node health","pid":25138,"storage":"gitaly-3","time":"2025-08-28T06:12:18.770Z","virtual_storage":"default"}{"component":"HealthManager","correlation_id":"01K3QMWH0M10MF7FK1AVW6R663","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 192.168.41.3:8075: connect: connection refused\"","level":"error","msg":"failed checking node 
health","pid":25138,"storage":"gitaly-3","time":"2025-08-28T06:12:20.628Z","virtual_storage":"default"}{"component":"HealthManager","correlation_id":"01K3QMWJYMJZBZ9WQD24YZRWM2","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 192.168.41.3:8075: connect: connection refused\"","level":"error","msg":"failed checking node health","pid":25138,"storage":"gitaly-3","time":"2025-08-28T06:12:22.613Z","virtual_storage":"default"}{"component":"HealthManager","correlation_id":"01K3QMWMY57263MBG9SCS66EZY","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 192.168.41.3:8075: connect: connection refused\"","level":"error","msg":"failed checking node health","pid":25138,"storage":"gitaly-3","time":"2025-08-28T06:12:24.646Z","virtual_storage":"default"}==> /var/log/gitlab/praefect/praefect-sql-migrate-2025-08-26-12-08-33.log <==praefect sql-migrate: all migrations are up==> /var/log/gitlab/praefect/praefect-sql-migrate-2025-08-09-22-20-50.log <==praefect sql-migrate: all migrations are up==> /var/log/gitlab/praefect/state <====> /var/log/gitlab/praefect/praefect-sql-migrate-2025-08-09-21-38-32.log <==praefect sql-migrate: all migrations are up==> /var/log/gitlab/praefect/praefect-sql-migrate-2025-08-25-19-09-07.log <==praefect sql-migrate: all migrations are up==> /var/log/gitlab/sidekiq/current 
<=={"severity":"INFO","time":"2025-08-28T06:11:05.493Z","retry":0,"queue":"cronjob:schedule_merge_request_cleanup_refs","version":0,"queue_namespace":"cronjob","args":[],"class":"ScheduleMergeRequestCleanupRefsWorker","jid":"45ab0f892d791776de9a0b42","created_at":"2025-08-28T06:11:05.467Z","meta.caller_id":"Cronjob","correlation_id":"e6f219114252350a4ccfc656a401e2cc","meta.root_caller_id":"Cronjob","meta.feature_category":"code_review","worker_data_consistency":"always","idempotency_key":"resque:gitlab:duplicate:cronjob:schedule_merge_request_cleanup_refs:33e8a9dcd4c9780ad0ea123ad7ccbabde1aa1e90ffcbb928434ba4b5800a5811","size_limiter":"validated","enqueued_at":"2025-08-28T06:11:05.469Z","job_size_bytes":2,"pid":25493,"message":"ScheduleMergeRequestCleanupRefsWorker JID-45ab0f892d791776de9a0b42: done: 0.016838 sec","job_status":"done","scheduling_latency_s":0.007451,"redis_calls":3,"redis_duration_s":0.002085,"redis_read_bytes":204,"redis_write_bytes":283,"redis_cache_calls":1,"redis_cache_duration_s":0.000723,"redis_cache_read_bytes":202,"redis_cache_write_bytes":61,"redis_queues_calls":2,"redis_queues_duration_s":0.001362,"redis_queues_read_bytes":2,"redis_queues_write_bytes":222,"db_count":0,"db_write_count":0,"db_cached_count":0,"db_replica_count":0,"db_primary_count":0,"db_main_count":0,"db_main_replica_count":0,"db_replica_cached_count":0,"db_primary_cached_count":0,"db_main_cached_count":0,"db_main_replica_cached_count":0,"db_replica_wal_count":0,"db_primary_wal_count":0,"db_main_wal_count":0,"db_main_replica_wal_count":0,"db_replica_wal_cached_count":0,"db_primary_wal_cached_count":0,"db_main_wal_cached_count":0,"db_main_replica_wal_cached_count":0,"db_replica_duration_s":0.0,"db_primary_duration_s":0.0,"db_main_duration_s":0.0,"db_main_replica_duration_s":0.0,"cpu_s":0.010796,"mem_objects":1471,"mem_bytes":122488,"mem_mallocs":380,"mem_total_bytes":181328,"worker_id":"sidekiq_0","rate_limiting_gates":[],"duration_s":0.016838,"completed_at":"2025-08-28T06:11:
05.493Z","load_balancing_strategy":"primary","db_duration_s":0.0}{"severity":"INFO","time":"2025-08-28T06:12:04.467Z","retry":0,"queue":"cronjob:users_migrate_records_to_ghost_user_in_batches","version":0,"queue_namespace":"cronjob","args":[],"class":"Users::MigrateRecordsToGhostUserInBatchesWorker","jid":"bdba58df550f25a5c4502ce3","created_at":"2025-08-28T06:12:04.425Z","meta.caller_id":"Cronjob","correlation_id":"f3325d7eef2e720bda14b20f84b55d23","meta.root_caller_id":"Cronjob","meta.feature_category":"users","worker_data_consistency":"always","idempotency_key":"resque:gitlab:duplicate:cronjob:users_migrate_records_to_ghost_user_in_batches:4bdb3193c92ce2ad4e73652ad55816e507f3dd7f07576150fc7c08572353b9e8","size_limiter":"validated","enqueued_at":"2025-08-28T06:12:04.462Z","job_size_bytes":2,"pid":25493,"message":"Users::MigrateRecordsToGhostUserInBatchesWorker JID-bdba58df550f25a5c4502ce3: start","job_status":"start","scheduling_latency_s":0.005215}{"severity":"INFO","time":"2025-08-28T06:12:04.631Z","retry":0,"queue":"cronjob:database_batched_background_migration_ci_database","version":0,"queue_namespace":"cronjob","args":[],"class":"Database::BatchedBackgroundMigration::CiDatabaseWorker","jid":"b8e2653e2bbe232da2228b57","created_at":"2025-08-28T06:12:04.622Z","meta.caller_id":"Cronjob","correlation_id":"f89e8db756fe29aef54362387433b49c","meta.root_caller_id":"Cronjob","meta.feature_category":"database","worker_data_consistency":"always","idempotency_key":"resque:gitlab:duplicate:cronjob:database_batched_background_migration_ci_database:6ba8adee4a8c1e77d2f087a2765c43226ceffa1fd65abc34b95725a7c9abd857","enqueued_at":"2025-08-28T06:12:04.627Z","job_size_bytes":2,"pid":25493,"message":"Database::BatchedBackgroundMigration::CiDatabaseWorker JID-b8e2653e2bbe232da2228b57: 
start","job_status":"start","scheduling_latency_s":0.004391}{"severity":"INFO","time":"2025-08-28T06:12:04.655Z","class":"Database::BatchedBackgroundMigration::CiDatabaseWorker","database":"ci","message":"skipping migration execution for unconfigured database","retry":0}{"severity":"INFO","time":"2025-08-28T06:12:04.664Z","retry":0,"queue":"cronjob:database_batched_background_migration_ci_database","version":0,"queue_namespace":"cronjob","args":[],"class":"Database::BatchedBackgroundMigration::CiDatabaseWorker","jid":"b8e2653e2bbe232da2228b57","created_at":"2025-08-28T06:12:04.622Z","meta.caller_id":"Cronjob","correlation_id":"f89e8db756fe29aef54362387433b49c","meta.root_caller_id":"Cronjob","meta.feature_category":"database","worker_data_consistency":"always","idempotency_key":"resque:gitlab:duplicate:cronjob:database_batched_background_migration_ci_database:6ba8adee4a8c1e77d2f087a2765c43226ceffa1fd65abc34b95725a7c9abd857","enqueued_at":"2025-08-28T06:12:04.627Z","job_size_bytes":2,"pid":25493,"message":"Database::BatchedBackgroundMigration::CiDatabaseWorker JID-b8e2653e2bbe232da2228b57: done: 0.032335 sec","job_status":"done","scheduling_latency_s":0.004391,"redis_calls":2,"redis_duration_s":0.006605,"redis_read_bytes":2,"redis_write_bytes":236,"redis_queues_calls":2,"redis_queues_duration_s":0.006605,"redis_queues_read_bytes
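The GitLab dump above is a stream of JSON objects with no separators between them, so the unavailable period (here, the gitaly-3 "connection refused" errors around 06:12) has to be pulled out programmatically. A minimal sketch of one way to do this, walking the stream with `json.JSONDecoder.raw_decode` and collecting ERROR-level timestamps; the function name is illustrative and it assumes levels appear under `severity` or `level` as in the sample:

```python
import json

def error_window(raw: str):
    """Scan concatenated JSON log objects and return the (first, last)
    timestamps of ERROR-level entries, or None if there are none."""
    dec = json.JSONDecoder()
    idx, times = 0, []
    while idx < len(raw):
        start = raw.find("{", idx)          # skip any non-JSON text between objects
        if start == -1:
            break
        try:
            obj, end = dec.raw_decode(raw, start)
        except json.JSONDecodeError:
            idx = start + 1                 # not a valid object here; keep scanning
            continue
        idx = end
        level = obj.get("severity") or obj.get("level")
        if level and level.upper() == "ERROR":
            times.append(obj.get("time"))
    times = [t for t in times if t]
    # ISO-8601 strings sort chronologically, so min/max give the window bounds
    return (min(times), max(times)) if times else None
```

Applied to the praefect log above, this would bound the unhealthy window by the first and last HealthManager errors (06:12:07.425Z to 06:12:24.646Z in the visible excerpt).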
Determine the service-unavailable window from the logs below:

Build 湘雅三GCP(生产环境) - dockerbuild - Default Job #83 (GCPXY3-DOC-JOB1-83) started building on agent Local Agent1, bamboo version: 8.1.1 simple 12-Aug-2025 17:57:36 Local Agent1 simple 12-Aug-2025 17:57:36 Build working directory is /var/atlassian/application-data/bamboo/local-working-dir/622593/GCPXY3-DOC-JOB1 simple 12-Aug-2025 17:57:36 Executing build 湘雅三GCP(生产环境) - dockerbuild - Default Job #83 (GCPXY3-DOC-JOB1-83) simple 12-Aug-2025 17:57:36 Running pre-build action: VCS Version Collector simple 12-Aug-2025 17:57:36 Running pre-build action: Build Log Labeller Pre Build Action command 12-Aug-2025 17:57:36 Substituting variable: ${bamboo.prod_xy3} with 119.91.104.191 simple 12-Aug-2025 17:57:36 Starting task 'SSH Task' of type 'com.atlassian.bamboo.plugins.bamboo-scp-plugin:sshtask' simple 12-Aug-2025 17:57:36 Connecting to 119.91.104.191 on port: 2022 simple 12-Aug-2025 17:57:37 Executing [ simple 12-Aug-2025 17:57:37 cd /opt/xy3/binaries/ simple 12-Aug-2025 17:57:37 docker-compose -f docker-compose-web-prod.yml up -d --build --force-recreate ghc-web simple 12-Aug-2025 17:57:37 docker-compose -f docker-compose-prod.yml up -d --build --force-recreate ctms-auth-prod simple 12-Aug-2025 17:57:37 docker-compose -f docker-compose-prod.yml up -d --build --force-recreate ctms-gateway-prod simple 12-Aug-2025 17:57:37 docker-compose -f docker-compose-prod.yml up -d --build --force-recreate ctms-system-prod simple 12-Aug-2025 17:57:37 docker-compose -f docker-compose-prod.yml up -d --build --force-recreate ctms-irbs-prod simple 12-Aug-2025 17:57:37 simple 12-Aug-2025 17:57:37 docker system prune -f simple 12-Aug-2025 17:57:37 ] build 12-Aug-2025 18:00:09 Step 1/6 : FROM java:8 build 12-Aug-2025 18:00:09 ---> d23bdf5b1b1b build 12-Aug-2025 18:00:09 Step 2/6 : VOLUME /tmp build 12-Aug-2025 18:00:09 ---> Using cache build 12-Aug-2025 18:00:09 ---> 65ebb77cad09 build 12-Aug-2025 18:00:09 Step 3/6 : ARG version build 12-Aug-2025 18:00:09 ---> Using cache build 
12-Aug-2025 18:00:09 ---> 227aa09399d2 build 12-Aug-2025 18:00:09 Step 4/6 : ARG app build 12-Aug-2025 18:00:09 ---> Using cache build 12-Aug-2025 18:00:09 ---> 81b3499de399 build 12-Aug-2025 18:00:09 Step 5/6 : COPY services/${version}/lib/${app}/ ./lib/ build 12-Aug-2025 18:00:09 ---> Using cache build 12-Aug-2025 18:00:09 ---> 1e7dd4b95377 build 12-Aug-2025 18:00:09 Step 6/6 : ADD services/${version}/${app}.jar ${app}.jar build 12-Aug-2025 18:00:15 ---> cab7592fa7f9 build 12-Aug-2025 18:00:15 build 12-Aug-2025 18:00:17 Successfully built cab7592fa7f9 build 12-Aug-2025 18:00:17 Successfully tagged ctms-auth-prod:1.0.0 build 12-Aug-2025 18:02:14 Step 1/6 : FROM java:8 build 12-Aug-2025 18:02:15 ---> d23bdf5b1b1b build 12-Aug-2025 18:02:15 Step 2/6 : VOLUME /tmp build 12-Aug-2025 18:02:15 ---> Using cache build 12-Aug-2025 18:02:15 ---> 65ebb77cad09 build 12-Aug-2025 18:02:15 Step 3/6 : ARG version build 12-Aug-2025 18:02:15 ---> Using cache build 12-Aug-2025 18:02:15 ---> 227aa09399d2 build 12-Aug-2025 18:02:15 Step 4/6 : ARG app build 12-Aug-2025 18:02:15 ---> Using cache build 12-Aug-2025 18:02:15 ---> 81b3499de399 build 12-Aug-2025 18:02:15 Step 5/6 : COPY services/${version}/lib/${app}/ ./lib/ build 12-Aug-2025 18:02:16 ---> Using cache build 12-Aug-2025 18:02:16 ---> 9036f99d9220 build 12-Aug-2025 18:02:16 Step 6/6 : ADD services/${version}/${app}.jar ${app}.jar build 12-Aug-2025 18:02:29 ---> 7cdc7da77ff7 build 12-Aug-2025 18:02:29 build 12-Aug-2025 18:02:30 Successfully built 7cdc7da77ff7 build 12-Aug-2025 18:02:30 Successfully tagged ctms-gateway-prod:1.0.0 build 12-Aug-2025 18:04:17 Step 1/6 : FROM java:8 build 12-Aug-2025 18:04:18 ---> d23bdf5b1b1b build 12-Aug-2025 18:04:18 Step 2/6 : VOLUME /tmp build 12-Aug-2025 18:04:18 ---> Using cache build 12-Aug-2025 18:04:18 ---> 65ebb77cad09 build 12-Aug-2025 18:04:18 Step 3/6 : ARG version build 12-Aug-2025 18:04:18 ---> Using cache build 12-Aug-2025 18:04:18 ---> 227aa09399d2 build 12-Aug-2025 18:04:18 Step 
4/6 : ARG app build 12-Aug-2025 18:04:18 ---> Using cache build 12-Aug-2025 18:04:18 ---> 81b3499de399 build 12-Aug-2025 18:04:18 Step 5/6 : COPY services/${version}/lib/${app}/ ./lib/ build 12-Aug-2025 18:04:18 ---> Using cache build 12-Aug-2025 18:04:18 ---> aa994c85a058 build 12-Aug-2025 18:04:18 Step 6/6 : ADD services/${version}/${app}.jar ${app}.jar build 12-Aug-2025 18:04:27 ---> 52ea93d68a7c build 12-Aug-2025 18:04:27 build 12-Aug-2025 18:04:28 Successfully built 52ea93d68a7c build 12-Aug-2025 18:04:28 Successfully tagged ctms-system-prod:1.0.0 build 12-Aug-2025 18:06:09 Step 1/6 : FROM java:8 build 12-Aug-2025 18:06:10 ---> d23bdf5b1b1b build 12-Aug-2025 18:06:10 Step 2/6 : VOLUME /tmp build 12-Aug-2025 18:06:10 ---> Using cache build 12-Aug-2025 18:06:10 ---> 65ebb77cad09 build 12-Aug-2025 18:06:10 Step 3/6 : ARG version build 12-Aug-2025 18:06:10 ---> Using cache build 12-Aug-2025 18:06:10 ---> 227aa09399d2 build 12-Aug-2025 18:06:10 Step 4/6 : ARG app build 12-Aug-2025 18:06:11 ---> Using cache build 12-Aug-2025 18:06:11 ---> 81b3499de399 build 12-Aug-2025 18:06:11 Step 5/6 : COPY services/${version}/lib/${app}/ ./lib/ build 12-Aug-2025 18:06:12 ---> Using cache build 12-Aug-2025 18:06:12 ---> 06af45c2f79d build 12-Aug-2025 18:06:12 Step 6/6 : ADD services/${version}/${app}.jar ${app}.jar build 12-Aug-2025 18:06:23 ---> d28fed9c95a3 build 12-Aug-2025 18:06:23 build 12-Aug-2025 18:06:24 Successfully built d28fed9c95a3 build 12-Aug-2025 18:06:24 Successfully tagged ctms-irbs-prod:1.0.0 build 12-Aug-2025 18:06:26 Deleted Images: build 12-Aug-2025 18:06:26 deleted: sha256:b5ba06737e2e551082b8096fbc83ce3675dac3ccfd413b9fe1b15121fbf97cf9 build 12-Aug-2025 18:06:26 deleted: sha256:35c76628a0ea6c92bf3f1e23a65d39b65cb9cfaf83af57df5293573bff5b944f build 12-Aug-2025 18:06:26 deleted: sha256:904ebc147f8ad93396e7ee55cb978baf913c7a9934b2249d22da7c798f15f5c1 build 12-Aug-2025 18:06:26 deleted: sha256:6587fc1c0e72fe65af594441b34436d18df5546688aa564f1419dfa50b022112 
build 12-Aug-2025 18:06:26 deleted: sha256:1c8813bff1f6ce731675cf4229211dd4bf3421e7548792cf7657591a970494ac build 12-Aug-2025 18:06:26 deleted: sha256:16b2e7700101600360923d8b05101ff56e06d8da0928ef41edbf2b3f7a68c30c build 12-Aug-2025 18:06:26 deleted: sha256:943137eae2d76ad2baa2753da688278509d17f3b13c36b819dd66126db3db594 build 12-Aug-2025 18:06:26 deleted: sha256:ad52f9ba3affa27cdfc761c46ee7233344dae05480051f6f00da31088f608174 build 12-Aug-2025 18:06:26 build 12-Aug-2025 18:06:26 Total reclaimed space: 24.83MB error 12-Aug-2025 18:06:26 Found orphan containers (ghc-nacos, ctms-irbs-prod, ghc-minio, ctms-auth-prod, ctms-system-prod, ctms-gateway-prod, ghc-mysql, ghc-mysql2, ghc-redis) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. error 12-Aug-2025 18:06:26 Recreating ghc-web ... error 12-Aug-2025 18:06:26  error 12-Aug-2025 18:06:26 Recreating ghc-web ... done error 12-Aug-2025 18:06:26 Found orphan containers (ghc-mysql, ghc-minio, ghc-web, ghc-mysql2, ghc-nacos, ghc-redis) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. error 12-Aug-2025 18:06:26 Building ctms-auth-prod error 12-Aug-2025 18:06:26 Recreating ctms-auth-prod ... error 12-Aug-2025 18:06:26  error 12-Aug-2025 18:06:26 Recreating ctms-auth-prod ... done error 12-Aug-2025 18:06:26 Found orphan containers (ghc-mysql, ghc-web, ghc-mysql2, ghc-nacos, ghc-minio, ghc-redis) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. error 12-Aug-2025 18:06:26 Building ctms-gateway-prod error 12-Aug-2025 18:06:26 Recreating ctms-gateway-prod ... error 12-Aug-2025 18:06:26  error 12-Aug-2025 18:06:26 Recreating ctms-gateway-prod ... 
done error 12-Aug-2025 18:06:26 Found orphan containers (ghc-redis, ghc-nacos, ghc-web, ghc-mysql2, ghc-mysql, ghc-minio) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. error 12-Aug-2025 18:06:26 Building ctms-system-prod error 12-Aug-2025 18:06:26 Recreating ctms-system-prod ... error 12-Aug-2025 18:06:26  error 12-Aug-2025 18:06:26 Recreating ctms-system-prod ... done error 12-Aug-2025 18:06:26 Found orphan containers (ghc-mysql2, ghc-redis, ghc-nacos, ghc-mysql, ghc-minio, ghc-web) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. error 12-Aug-2025 18:06:26 Building ctms-irbs-prod error 12-Aug-2025 18:06:26 Recreating ctms-irbs-prod ... error 12-Aug-2025 18:06:26  error 12-Aug-2025 18:06:26 Recreating ctms-irbs-prod ... done error 12-Aug-2025 18:06:26  simple 12-Aug-2025 18:06:26 [ simple 12-Aug-2025 18:06:26 cd /opt/xy3/binaries/ simple 12-Aug-2025 18:06:26 docker-compose -f docker-compose-web-prod.yml up -d --build --force-recreate ghc-web simple 12-Aug-2025 18:06:26 docker-compose -f docker-compose-prod.yml up -d --build --force-recreate ctms-auth-prod simple 12-Aug-2025 18:06:26 docker-compose -f docker-compose-prod.yml up -d --build --force-recreate ctms-gateway-prod simple 12-Aug-2025 18:06:26 docker-compose -f docker-compose-prod.yml up -d --build --force-recreate ctms-system-prod simple 12-Aug-2025 18:06:26 docker-compose -f docker-compose-prod.yml up -d --build --force-recreate ctms-irbs-prod simple 12-Aug-2025 18:06:26 simple 12-Aug-2025 18:06:26 docker system prune -f simple 12-Aug-2025 18:06:26 ] has finished. 
simple 12-Aug-2025 18:06:26 Result: exit code = 0 simple 12-Aug-2025 18:06:26 Finished task 'SSH Task' with result: Success simple 12-Aug-2025 18:06:26 Running post build plugin 'NCover Results Collector' simple 12-Aug-2025 18:06:26 Running post build plugin 'Artifact Copier' simple 12-Aug-2025 18:06:26 Running post build plugin 'npm Cache Cleanup' simple 12-Aug-2025 18:06:26 Running post build plugin 'Build Results Label Collector' simple 12-Aug-2025 18:06:26 Running post build plugin 'Clover Results Collector' simple 12-Aug-2025 18:06:26 Running post build plugin 'Docker Container Cleanup' simple 12-Aug-2025 18:06:26 Finalising the build... simple 12-Aug-2025 18:06:26 Stopping timer. simple 12-Aug-2025 18:06:26 Build GCPXY3-DOC-JOB1-83 completed. simple 12-Aug-2025 18:06:26 Running on server: post build plugin 'NCover Results Collector' simple 12-Aug-2025 18:06:26 Running on server: post build plugin 'Build Hanging Detection Configuration' simple 12-Aug-2025 18:06:26 Running on server: post build plugin 'Build Labeller' simple 12-Aug-2025 18:06:26 Running on server: post build plugin 'Clover Delta Calculator' simple 12-Aug-2025 18:06:26 Running on server: post build plugin 'Maven Dependencies Postprocessor' simple 12-Aug-2025 18:06:26 All post build plugins have finished simple 12-Aug-2025 18:06:26 Generating build results summary... simple 12-Aug-2025 18:06:26 Saving build results to disk... simple 12-Aug-2025 18:06:26 Store variable context... simple 12-Aug-2025 18:06:26 Indexing build results... simple 12-Aug-2025 18:06:26 Finished building GCPXY3-DOC-JOB1-83.
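Because each `docker-compose up -d --build --force-recreate` stops and recreates its container, the recreated services are unavailable at some point inside the SSH task's run, so the outage is bounded by the task's first and last timestamps (17:57:36 to 18:06:26 above, about nine minutes). A sketch that extracts this bound from Bamboo's `dd-MMM-yyyy HH:mm:ss` prefixes; the regex and function name are illustrative, not Bamboo APIs:

```python
import re
from datetime import datetime

# Bamboo prefixes every log line with e.g. "simple 12-Aug-2025 17:57:36"
TS = re.compile(r"\b(\d{2}-[A-Z][a-z]{2}-\d{4} \d{2}:\d{2}:\d{2})\b")

def build_window(log: str):
    """Return (start, end, duration) of the logged build activity;
    the recreated containers were unavailable somewhere in this window."""
    stamps = [datetime.strptime(m, "%d-%b-%Y %H:%M:%S") for m in TS.findall(log)]
    start, end = min(stamps), max(stamps)
    return start, end, end - start
```

For a tighter window per service, the same idea can be applied to just the lines between "Recreating <name> ..." and "Recreating <name> ... done", though in this log those lines all carry the same 18:06:26 stamp, so the coarse bound is the best the log supports.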