Jinji Lake Competition Notes 2: Hands-On


import torch
ModuleNotFoundError: No module named 'torch'

Solution

pip install --user torch -i https://pypi.tuna.tsinghua.edu.cn/simple


# Check the PyTorch version
import torch
print(torch.__version__)


import yaml
ModuleNotFoundError: No module named 'yaml'

Solution

# The package name is PyYAML ('pyymal' in the first attempt was a typo)
pip install --user PyYAML -i https://pypi.tuna.tsinghua.edu.cn/simple


# Check the PyYAML version
import yaml
print(yaml.__version__)


    from tqdm import tqdm
ModuleNotFoundError: No module named 'tqdm'

Solution

pip install --user tqdm -i https://pypi.tuna.tsinghua.edu.cn/simple


import torchvision
ModuleNotFoundError: No module named 'torchvision'
# torchvision 0.8.1
pip install --user torchvision -i https://pypi.tuna.tsinghua.edu.cn/simple
# Check the torchvision version
import torchvision
print(torchvision.__version__)


Fix 1
Fix 2

D:\Program Files (x86)\python\lib\site-packages\requests\__init__.py:102: RequestsDependencyWarning: urllib3 (1.26.9) or chardet (5.0.0)/charset_normalizer (2.0.12) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({})/charset_normalizer ({}) doesn't match a supported "
C:\Users\Administrator\AppData\Roaming\Python\Python310\site-packages\torchvision\models\detection\anchor_utils.py:63: UserWarning: Failed to initialize NumPy: module compiled against API version 0x10 but this version of numpy is 0xf (Triggered internally at  ..\torch\csrc\utils\tensor_numpy.cpp:68.)
  device: torch.device = torch.device("cpu"),
pip uninstall urllib3
pip uninstall chardet
pip install --user urllib3==1.25 -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install --user chardet==3.0.4 -i https://pypi.tuna.tsinghua.edu.cn/simple


UserWarning: Failed to initialize NumPy: module compiled against API version 0x10 but this version of numpy is 0xf (Triggered internally at  ..\torch\csrc\utils\tensor_numpy.cpp:68.)
  device: torch.device = torch.device("cpu"),

Solution

pip install --upgrade numpy  # slow without a mirror
pip uninstall numpy
pip install --user numpy -i https://pypi.tuna.tsinghua.edu.cn/simple


import seaborn as sn
ModuleNotFoundError: No module named 'seaborn'

Solution

pip install --user seaborn -i https://pypi.tuna.tsinghua.edu.cn/simple


Running auto-py-to-exe v2.21.0
Building directory: C:\Users\ADMINI~1\AppData\Local\Temp\tmpdyajuur2
Provided command: pyinstaller --noconfirm --onedir --windowed --icon "C:/Users/Administrator/Desktop/HkTemCollect/sources/images/TemCollect.ico" --add-data "C:/Users/Administrator/Desktop/HkTemCollect;HkTemCollect/"  "C:/Users/Administrator/Desktop/HkTemCollect/main.py"
Recursion Limit is set to 5000
Executing: pyinstaller --noconfirm --onedir --windowed --icon C:/Users/Administrator/Desktop/HkTemCollect/sources/images/TemCollect.ico --add-data C:/Users/Administrator/Desktop/HkTemCollect;HkTemCollect/ C:/Users/Administrator/Desktop/HkTemCollect/main.py --distpath C:\Users\ADMINI~1\AppData\Local\Temp\tmpdyajuur2\application --workpath C:\Users\ADMINI~1\AppData\Local\Temp\tmpdyajuur2\build --specpath C:\Users\ADMINI~1\AppData\Local\Temp\tmpdyajuur2

1667231 INFO: PyInstaller: 5.2
1667244 INFO: Python: 3.10.4
1667245 INFO: Platform: Windows-10-10.0.19041-SP0
1667259 INFO: wrote C:\Users\ADMINI~1\AppData\Local\Temp\tmpdyajuur2\main.spec
1667278 INFO: UPX is not available.
1667291 INFO: Extending PYTHONPATH with paths
['C:\\Users\\Administrator\\Desktop\\HkTemCollect']
1668088 INFO: checking Analysis
1668100 INFO: Building Analysis because Analysis-03.toc is non existent
1668116 INFO: Reusing cached module dependency graph...
1668169 INFO: Caching module graph hooks...
1668186 WARNING: Several hooks defined for module 'numpy'. Please take care they do not conflict.
1668283 INFO: running Analysis Analysis-03.toc
1668300 INFO: Adding Microsoft.Windows.Common-Controls to dependent assemblies of final executable
  required by d:\program files (x86)\python\python.exe
1668540 INFO: Analyzing C:\Users\Administrator\Desktop\HkTemCollect\main.py
1677243 INFO: Processing pre-find module path hook site from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks\\pre_find_module_path\\hook-site.py'.
1677257 INFO: site: retargeting to fake-dir 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\fake-modules'
1684509 INFO: Processing pre-find module path hook PyQt5.uic.port_v2 from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks\\pre_find_module_path\\hook-PyQt5.uic.port_v2.py'.
1687312 INFO: Processing pre-safe import module hook gi from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks\\pre_safe_import_module\\hook-gi.py'.
1689224 INFO: Processing pre-safe import module hook six.moves from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks\\pre_safe_import_module\\hook-six.moves.py'.
1692753 INFO: Processing module hooks...
1692768 INFO: Loading module hook 'hook-numpy.py' from 'C:\\Users\\Administrator\\AppData\\Roaming\\Python\\Python310\\site-packages\\numpy\\_pyinstaller'...
1692868 INFO: Import to be excluded not found: 'f2py'
1692905 INFO: Loading module hook 'hook-certifi.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
1692922 INFO: Loading module hook 'hook-eel.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
1693115 INFO: Loading module hook 'hook-jinja2.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
1693117 INFO: Loading module hook 'hook-pycparser.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
1693122 INFO: Loading module hook 'hook-pyqtgraph.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
1694034 INFO: Loading module hook 'hook-difflib.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1694050 INFO: Loading module hook 'hook-distutils.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1694065 INFO: Loading module hook 'hook-distutils.util.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1694065 INFO: Loading module hook 'hook-django.core.cache.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1694760 INFO: Loading module hook 'hook-django.core.mail.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1694860 INFO: Loading module hook 'hook-django.core.management.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1696703 INFO: Loading module hook 'hook-django.db.backends.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1697920 WARNING: Hidden import "django.db.backends.__pycache__.base" not found!
1697932 INFO: Loading module hook 'hook-django.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1705816 INFO: Packages required by django:
['asgiref', 'pytz', 'sqlparse']
1714578 INFO: Loading module hook 'hook-django.template.loaders.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1715337 INFO: Loading module hook 'hook-encodings.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1715774 INFO: Loading module hook 'hook-gevent.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1716551 WARNING: Unable to find package for requirement zope.event from package gevent.
1716560 WARNING: Unable to find package for requirement zope.interface from package gevent.
1716576 INFO: Packages required by gevent:
['setuptools', 'greenlet', 'cffi']
1717698 INFO: Loading module hook 'hook-heapq.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1717712 INFO: Loading module hook 'hook-lib2to3.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1717808 INFO: Loading module hook 'hook-matplotlib.backends.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1717817 INFO: Matplotlib backend selection method: automatic discovery of used backends
1718159 INFO: Trying determine the default backend as first importable candidate from the list: ['Qt5Agg', 'Gtk3Agg', 'TkAgg', 'WxAgg']
1718698 INFO: Selected matplotlib backends: ['Qt5Agg']
1718700 INFO: Loading module hook 'hook-matplotlib.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1719036 INFO: Loading module hook 'hook-multiprocessing.util.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1719043 INFO: Loading module hook 'hook-numpy._pytesttester.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1719056 INFO: Loading module hook 'hook-packaging.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1719058 INFO: Loading module hook 'hook-pickle.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1719074 INFO: Loading module hook 'hook-PIL.Image.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1719800 INFO: Loading module hook 'hook-PIL.ImageFilter.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1719808 INFO: Loading module hook 'hook-PIL.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1719834 INFO: Loading module hook 'hook-PIL.SpiderImagePlugin.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1719841 INFO: Loading module hook 'hook-pkg_resources.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1721126 INFO: Processing pre-safe import module hook win32com from 'd:\\program files (x86)\\python\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\pre_safe_import_module\\hook-win32com.py'.
1721746 WARNING: Hidden import "pkg_resources.py2_warn" not found!
1721749 WARNING: Hidden import "pkg_resources.markers" not found!
1721759 INFO: Loading module hook 'hook-platform.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1721773 INFO: Loading module hook 'hook-PyQt5.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1721786 INFO: Loading module hook 'hook-PyQt5.Qt.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1721841 INFO: Loading module hook 'hook-PyQt5.QtCore.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1721995 INFO: Loading module hook 'hook-PyQt5.QtGui.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1722216 INFO: Loading module hook 'hook-PyQt5.QtHelp.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1722616 INFO: Loading module hook 'hook-PyQt5.QtLocation.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1723202 INFO: Loading module hook 'hook-PyQt5.QtMultimedia.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1723622 INFO: Loading module hook 'hook-PyQt5.QtMultimediaWidgets.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1724183 INFO: Loading module hook 'hook-PyQt5.QtNetwork.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1724580 INFO: Loading module hook 'hook-PyQt5.QtOpenGL.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1725129 INFO: Loading module hook 'hook-PyQt5.QtPositioning.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1725279 INFO: Loading module hook 'hook-PyQt5.QtPrintSupport.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1725688 INFO: Loading module hook 'hook-PyQt5.QtQml.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1726157 INFO: Loading module hook 'hook-PyQt5.QtQuick.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1726598 INFO: Loading module hook 'hook-PyQt5.QtQuickWidgets.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1727162 INFO: Loading module hook 'hook-PyQt5.QtSensors.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1727447 INFO: Loading module hook 'hook-PyQt5.QtSerialPort.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1727584 INFO: Loading module hook 'hook-PyQt5.QtSql.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1727904 INFO: Loading module hook 'hook-PyQt5.QtSvg.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1728329 INFO: Loading module hook 'hook-PyQt5.QtTest.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1728759 INFO: Loading module hook 'hook-PyQt5.QtWidgets.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1729132 INFO: Loading module hook 'hook-PyQt5.QtXml.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1729338 INFO: Loading module hook 'hook-PyQt5.uic.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1729470 INFO: Loading module hook 'hook-PySide2.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1729479 INFO: Loading module hook 'hook-PySide2.QtCore.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1729799 INFO: Loading module hook 'hook-PySide2.QtGui.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1730327 INFO: Loading module hook 'hook-PySide2.QtNetwork.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1731005 INFO: Loading module hook 'hook-PySide2.QtSvg.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1731609 INFO: Loading module hook 'hook-PySide2.QtTest.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1732137 INFO: Loading module hook 'hook-PySide2.QtUiTools.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1732701 INFO: Loading module hook 'hook-PySide2.QtWidgets.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1733310 INFO: Loading module hook 'hook-PySide2.QtXml.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1733694 INFO: Loading module hook 'hook-pytz.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1733939 INFO: Loading module hook 'hook-scipy.linalg.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1733945 INFO: Loading module hook 'hook-scipy.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1733955 INFO: Loading module hook 'hook-scipy.sparse.csgraph.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1733967 INFO: Loading module hook 'hook-scipy.special._ellip_harm_2.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1736883 INFO: Loading module hook 'hook-scipy.special._ufuncs.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1736899 INFO: Loading module hook 'hook-scipy.stats._stats.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1736902 INFO: Loading module hook 'hook-setuptools.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1738680 INFO: Loading module hook 'hook-sqlite3.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1739126 INFO: Loading module hook 'hook-sysconfig.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1739142 INFO: Loading module hook 'hook-win32ctypes.core.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1739754 INFO: Loading module hook 'hook-xml.dom.domreg.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1739773 INFO: Loading module hook 'hook-xml.etree.cElementTree.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1739788 INFO: Loading module hook 'hook-xml.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1739794 INFO: Loading module hook 'hook-zope.interface.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1739812 INFO: Loading module hook 'hook-_tkinter.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1739966 INFO: checking Tree
1739969 INFO: Building Tree because Tree-06.toc is non existent
1739972 INFO: Building Tree Tree-06.toc
1740040 INFO: checking Tree
1740058 INFO: Building Tree because Tree-07.toc is non existent
1740076 INFO: Building Tree Tree-07.toc
1740158 INFO: checking Tree
1740167 INFO: Building Tree because Tree-08.toc is non existent
1740169 INFO: Building Tree Tree-08.toc
1740187 INFO: Loading module hook 'hook-psycopg2.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
1740191 WARNING: Hidden import "mx.DateTime" not found!
1740198 INFO: Loading module hook 'hook-django.contrib.sessions.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1740713 INFO: Loading module hook 'hook-django.db.backends.mysql.base.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1740729 INFO: Loading module hook 'hook-django.db.backends.oracle.base.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1740749 WARNING: Hidden import "django.db.backends.oracle.compiler" not found!
1740764 INFO: Loading module hook 'hook-scipy.spatial.transform.rotation.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1740843 INFO: Loading module hook 'hook-setuptools.msvc.py' from 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks'...
1740936 INFO: Looking for ctypes DLLs
1740951 WARNING: Ignoring ./lib/HCNetSDK.dll imported from C:\Users\Administrator\Desktop\HkTemCollect\lib\hk_dll.py - only basenames are supported with ctypes imports!
1740951 WARNING: Ignoring ./lib/libhcnetsdk.so imported from C:\Users\Administrator\Desktop\HkTemCollect\lib\hk_dll.py - only basenames are supported with ctypes imports!
1741157 INFO: Analyzing run-time hooks ...
1741179 INFO: Including run-time hook 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks\\rthooks\\pyi_rth_inspect.py'
1741183 INFO: Including run-time hook 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks\\rthooks\\pyi_rth_subprocess.py'
1741202 INFO: Including run-time hook 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks\\rthooks\\pyi_rth_pkgutil.py'
1741208 INFO: Including run-time hook 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks\\rthooks\\pyi_rth_multiprocessing.py'
1741218 INFO: Including run-time hook 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks\\rthooks\\pyi_rth_pkgres.py'
1741224 INFO: Including run-time hook 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks\\rthooks\\pyi_rth_mplconfig.py'
1741232 INFO: Including run-time hook 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks\\rthooks\\pyi_rth_pyside2.py'
1741247 INFO: Including run-time hook 'd:\\program files (x86)\\python\\lib\\site-packages\\PyInstaller\\hooks\\rthooks\\pyi_rth_pyqt5.py'
1741273 INFO: Looking for dynamic libraries
1746379 INFO: Looking for eggs
1746394 INFO: Using Python library d:\program files (x86)\python\python310.dll
1746394 INFO: Found binding redirects: 
[]
1746410 INFO: Warnings written to C:\Users\ADMINI~1\AppData\Local\Temp\tmpdyajuur2\build\main\warn-main.txt
1746619 INFO: Graph cross-reference written to C:\Users\ADMINI~1\AppData\Local\Temp\tmpdyajuur2\build\main\xref-main.html
1746713 INFO: Appending 'datas' from .spec
1746729 INFO: checking PYZ
1746744 INFO: Building PYZ because PYZ-02.toc is non existent
1746760 INFO: Building PYZ (ZlibArchive) C:\Users\ADMINI~1\AppData\Local\Temp\tmpdyajuur2\build\main\PYZ-02.pyz
1748762 INFO: Building PYZ (ZlibArchive) C:\Users\ADMINI~1\AppData\Local\Temp\tmpdyajuur2\build\main\PYZ-02.pyz completed successfully.
1748809 INFO: checking PKG
1748824 INFO: Building PKG because PKG-02.toc is non existent
1748840 INFO: Building PKG (CArchive) main.pkg
1748887 INFO: Building PKG (CArchive) main.pkg completed successfully.
1748902 INFO: Bootloader d:\program files (x86)\python\lib\site-packages\PyInstaller\bootloader\Windows-64bit\runw.exe
1748918 INFO: checking EXE
1748934 INFO: Building EXE because EXE-02.toc is non existent
1748949 INFO: Building EXE from EXE-02.toc
1748965 INFO: Copying bootloader EXE to C:\Users\ADMINI~1\AppData\Local\Temp\tmpdyajuur2\build\main\main.exe.notanexecutable
1748996 INFO: Copying icon to EXE
1749373 INFO: Copying icons from ['C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\tmpdyajuur2\\build\\main\\generated-a52cbcf0af20b6a8e61bb4aa9d65588034ba0fc3b02a7eb3799d17d1883bde46.ico']
1749373 INFO: Writing RT_GROUP_ICON 0 resource with 104 bytes
1749389 INFO: Writing RT_ICON 1 resource with 631 bytes
1749389 INFO: Writing RT_ICON 2 resource with 1124 bytes
1749404 INFO: Writing RT_ICON 3 resource with 1606 bytes
1749420 INFO: Writing RT_ICON 4 resource with 2827 bytes
1749435 INFO: Writing RT_ICON 5 resource with 4261 bytes
1749435 INFO: Writing RT_ICON 6 resource with 11337 bytes
1749451 INFO: Writing RT_ICON 7 resource with 31438 bytes
1749467 INFO: Copying 0 resources to EXE
1749467 INFO: Embedding manifest in EXE
1749482 INFO: Updating manifest in C:\Users\ADMINI~1\AppData\Local\Temp\tmpdyajuur2\build\main\main.exe.notanexecutable
1749498 INFO: Updating resource type 24 name 1 language 0
1749517 INFO: Appending PKG archive to EXE
1749535 INFO: Fixing EXE headers
1749648 INFO: Building EXE from EXE-02.toc completed successfully.
1749664 INFO: checking COLLECT
1749686 INFO: Building COLLECT because COLLECT-00.toc is non existent
1749702 INFO: Building COLLECT COLLECT-00.toc
An error occurred while packaging
Traceback (most recent call last):
  File "d:\program files (x86)\python\lib\site-packages\auto_py_to_exe\packaging.py", line 131, in package
    run_pyinstaller()
  File "d:\program files (x86)\python\lib\site-packages\PyInstaller\__main__.py", line 178, in run
    run_build(pyi_config, spec_file, **vars(args))
  File "d:\program files (x86)\python\lib\site-packages\PyInstaller\__main__.py", line 59, in run_build
    PyInstaller.building.build_main.main(pyi_config, spec_file, **kwargs)
  File "d:\program files (x86)\python\lib\site-packages\PyInstaller\building\build_main.py", line 842, in main
    build(specfile, distpath, workpath, clean_build)
  File "d:\program files (x86)\python\lib\site-packages\PyInstaller\building\build_main.py", line 764, in build
    exec(code, spec_namespace)
  File "C:\Users\ADMINI~1\AppData\Local\Temp\tmpdyajuur2\main.spec", line 42, in <module>
    coll = COLLECT(
  File "d:\program files (x86)\python\lib\site-packages\PyInstaller\building\api.py", line 864, in __init__
    self.__postinit__()
  File "d:\program files (x86)\python\lib\site-packages\PyInstaller\building\datastruct.py", line 173, in __postinit__
    self.assemble()
  File "d:\program files (x86)\python\lib\site-packages\PyInstaller\building\api.py", line 896, in assemble
    fnm = checkCache(
  File "d:\program files (x86)\python\lib\site-packages\PyInstaller\building\utils.py", line 244, in checkCache
    os.remove(cachedfile)
FileNotFoundError: [WinError 2] 系统找不到指定的文件。(The system cannot find the file specified.): 'C:\\Users\\Administrator\\AppData\\Local\\pyinstaller\\bincache00_py310_64bit\\d3dcompiler_47.dll'

Project output will not be moved to output folder
Complete.

Solution
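Since `checkCache` failed on a cached file that was already gone, removing PyInstaller's stale binary cache and rebuilding is the usual remedy. A minimal sketch, assuming the default per-user Windows cache location shown in the traceback:

```python
# Delete PyInstaller's on-disk binary cache so it is rebuilt on the next run.
# The path below mirrors the one in the traceback (default per-user location);
# adjust it if your environment differs.
import os
import shutil

cache_dir = os.path.join(os.path.expanduser("~"), "AppData", "Local", "pyinstaller")
if os.path.isdir(cache_dir):
    shutil.rmtree(cache_dir)
print("PyInstaller cache removed:", not os.path.isdir(cache_dir))
```

Re-run auto-py-to-exe afterwards; the cache is repopulated automatically.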



Traceback (most recent call last):
  File "main.py", line 27, in <module>
  File "C:\Users\ADMINI~1\AppData\Local\Temp\embedded.gnhq1_6j.zip\shibokensupport\__feature__.py", line 142, in _import
  File "PyInstaller\loader\pyimod03_importers.py", line 495, in exec_module
  File "pyqtgraph\__init__.py", line 201, in <module>
  File "C:\Users\ADMINI~1\AppData\Local\Temp\embedded.gnhq1_6j.zip\shibokensupport\__feature__.py", line 142, in _import
  File "PyInstaller\loader\pyimod03_importers.py", line 495, in exec_module
  File "pyqtgraph\graphicsItems\ColorBarItem.py", line 10, in <module>
  File "C:\Users\ADMINI~1\AppData\Local\Temp\embedded.gnhq1_6j.zip\shibokensupport\__feature__.py", line 142, in _import
  File "PyInstaller\loader\pyimod03_importers.py", line 495, in exec_module
  File "pyqtgraph\graphicsItems\LinearRegionItem.py", line 5, in <module>
  File "C:\Users\ADMINI~1\AppData\Local\Temp\embedded.gnhq1_6j.zip\shibokensupport\__feature__.py", line 142, in _import
  File "PyInstaller\loader\pyimod03_importers.py", line 495, in exec_module
  File "pyqtgraph\graphicsItems\InfiniteLine.py", line 11, in <module>
  File "C:\Users\ADMINI~1\AppData\Local\Temp\embedded.gnhq1_6j.zip\shibokensupport\__feature__.py", line 142, in _import
  File "PyInstaller\loader\pyimod03_importers.py", line 495, in exec_module
  File "pyqtgraph\graphicsItems\ViewBox\__init__.py", line 1, in <module>
  File "C:\Users\ADMINI~1\AppData\Local\Temp\embedded.gnhq1_6j.zip\shibokensupport\__feature__.py", line 142, in _import
  File "PyInstaller\loader\pyimod03_importers.py", line 495, in exec_module
  File "pyqtgraph\graphicsItems\ViewBox\ViewBox.py", line 1794, in <module>
  File "C:\Users\ADMINI~1\AppData\Local\Temp\embedded.gnhq1_6j.zip\shibokensupport\__feature__.py", line 142, in _import
  File "PyInstaller\loader\pyimod03_importers.py", line 495, in exec_module
  File "pyqtgraph\graphicsItems\ViewBox\ViewBoxMenu.py", line 6, in <module>
  File "importlib\__init__.py", line 126, in import_module
ModuleNotFoundError: No module named 'pyqtgraph.graphicsItems.ViewBox.axisCtrlTemplate_pyqt5'

Solution 1
Solution 2

pip uninstall pyqtgraph
pip install --user pyqtgraph -i https://pypi.tuna.tsinghua.edu.cn/simple
# Check the PyInstaller version, then pin 4.8
pip uninstall pyinstaller
pip install --user pyinstaller==4.8 -i https://pypi.tuna.tsinghua.edu.cn/simple
import PyInstaller
print('PyInstaller:', PyInstaller.__version__)
# PyInstaller==4.8


import tensorboard
ModuleNotFoundError: No module named 'tensorboard'


pip install --user tensorboard -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install --user tb-nightly -i https://pypi.tuna.tsinghua.edu.cn/simple

Solution

pip install --user wandb -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install --user clearml -i https://pypi.tuna.tsinghua.edu.cn/simple


pip install --user tornado -i https://pypi.tuna.tsinghua.edu.cn/simple


from sklearn.linear_model import LinearRegression
ModuleNotFoundError: No module named 'sklearn'

Solution

# The PyPI name is scikit-learn; the bare 'sklearn' package is a deprecated placeholder
pip install --user scikit-learn -i https://pypi.tuna.tsinghua.edu.cn/simple


from sklearn.externals import joblib
ImportError: cannot import name 'joblib' from 'sklearn.externals' 	

Solution

pip install --user joblib -i https://pypi.tuna.tsinghua.edu.cn/simple
# joblib was removed from sklearn.externals; change the import to `import joblib`


raise ImportError(msg)
ImportError: Missing optional dependency 'openpyxl'.  Use pip or conda to install openpyxl.
# openpyxl
pip install --user openpyxl -i https://pypi.tuna.tsinghua.edu.cn/simple


Traceback (most recent call last):
  File "TemCollect.py", line 538, in <module>
  File "TemCollect.py", line 290, in __init__
  File "configparser.py", line 964, in __getitem__
KeyError: 'interface'


pip install --user tensorflow -i https://pypi.tuna.tsinghua.edu.cn/simple


Traceback (most recent call last):
  File "TemCollect.py", line 539, in <module>
    MainWindow = MyWindow()
  File "TemCollect.py", line 291, in __init__
    self.setWindowIcon(QIcon(cf['interface']['pic_ico']))  # set the application icon
  File "configparser.py", line 964, in __getitem__
KeyError: 'interface'
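The KeyError means configparser never loaded an [interface] section: after packaging, the relative path to the .ini no longer resolves. A hypothetical sketch of a guarded loader (the file name `config.ini` and the section layout are assumptions based on the traceback):

```python
# Hypothetical loader: resolve the .ini relative to the PyInstaller bundle
# (sys._MEIPASS) when frozen, and fail loudly if the section is missing
# instead of raising a bare KeyError later.
import configparser
import os
import sys

def load_config(name="config.ini"):
    base = getattr(sys, "_MEIPASS", os.getcwd())  # bundle dir when frozen
    path = os.path.join(base, name)
    cf = configparser.ConfigParser()
    cf.read(path, encoding="utf-8")
    if not cf.has_section("interface"):
        raise FileNotFoundError(f"no [interface] section found in {path}")
    return cf
```

Bundled data files must also be declared with `--add-data` so they actually end up next to `sys._MEIPASS` at run time.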
## Setting a request timeout
    # requires: import json, urllib.request
    def tornado(self, jobCat, scene, tag, desc):
        try:
            url = 'http://10.5.13.48/tornado-cloud/SendTornadoMessage'
            paramObj = {
                "corp_id": "0029357da1c3291945c445c556c060cb",
                "api_key": "5ae1074c-fd8a-4d50-bf25-4e07aeafb2c3",
                "jobCat": jobCat,
                "scene": scene,
                "tag": tag,
                "desc": desc,
                "picUrl": "",
                "userId": ""
            }
            paramsJson = bytes(json.dumps(paramObj), 'utf-8')
            header = {"Content-Type": "application/json;charset=UTF-8"}
            request = urllib.request.Request(url, paramsJson, header)
            result = urllib.request.urlopen(request,timeout=5)
            res = result.read().decode('utf-8')
        except Exception as e:
            print('tornado error:',e)
  File "C:\Users\Administrator\AppData\Roaming\Python\Python310\site-packages\PyInstaller\utils\win32\icon.py", line 70, in fromfile
    self._fields_ = list(struct.unpack(self._format_, data))
struct.error: unpack requires a buffer of 16 bytes
This usually means the .ico file is not a valid Windows icon (for example a renamed PNG or a truncated file); re-exporting a genuine .ico typically resolves it.
# -*- coding: utf-8 -*-
"""
Created on Tue Dec  7 16:14:28 2021

@author: zhoug
"""

import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
# from sklearn.externals import joblib
import joblib
import os
from pylab import mpl 
import time
mpl.rcParams["font.sans-serif"] = ["SimHei"]
mpl.rcParams["axes.unicode_minus"] = False  # render minus signs correctly in saved figures
os.chdir(r"E:\文档\Projects\Pycharm\stoctlist")  # set the working directory (default is C:\Users\<username>); put the three data files here
class OnefactorRegression:
    def __init__(self, factor_name="货币资金(万元)", stock_num=5):
        self.factor_name = factor_name  # factor (column) name
        self.stock_num = stock_num  # number of stocks to pick (top 5)
    
    def get_data(self,data_path):
        time.sleep(3)
        print("Current time:", time.ctime(), "- fetching data")
        if data_path=="./data/train.xlsx":
            dftmp=pd.read_excel(data_path,dtype={'stockcode':str},index_col=[-2])
            dftmp=dftmp.loc[:,["stockcode","name","industry","circ_mv",self.factor_name,"月收益率"]]
        else:
            dftmp=pd.read_excel(data_path,dtype={'stockcode':str},index_col=[-1])
            dftmp=dftmp.loc[:,["stockcode","name","industry","circ_mv",self.factor_name]]
        return dftmp
        # Data preprocessing in three steps: winsorize (remove extremes),
        # standardize, and neutralize.
        #
        # Winsorization, step 1: limit the influence of extreme values.
        # Two common rules:
        #   1) 3-sigma: keep (mu - 3*sigma, mu + 3*sigma), where mu is the mean
        #      and sigma the standard deviation; values too far from the mean
        #      count as extremes.
        #   2) Percentile method: sort all observations; the smallest X% and the
        #      largest X% are extremes. Typically X = 2.5, so 5% of the values
        #      are removed in total.
        # Step 2, how to handle the extremes, two options:
        #   Trimming: drop the extreme values entirely.
        #   Winsorizing (clipping): set values above the upper bound to the
        #   upper bound and values below the lower bound to the lower bound,
        #   i.e. pull points outside the bounds back to the boundary.
        # In practice, the UQER (优矿) platform's winsorize function supports
        # both the 3-sigma and percentile methods; core call:
        #   pe['winsorized PE'] = winsorize(pe['PE'], win_type='NormDistDraw', n_draw=5)

        # Standardization: put data of different magnitudes on a comparable scale.
        # Methods:
        #   Log transform: taking logs of price and turnover roughly compares
        #   growth rates, which are small and comparable.
        #   Min-max scaling: (value - min) / (max - min)
        #   z-score: (value - mean) / std, i.e. how many standard deviations an
        #   observation lies from the mean. (Finance also has an unrelated
        #   "z-score" bankruptcy index built from weighted financial ratios.)

        # Neutralization: essentially a purification step, similar to handling
        # multicollinearity in econometrics. Common forms: market-cap
        # neutralization and industry neutralization.

    def med_method(self,data):
        factor=data[self.factor_name]
        # 1. find the MAD (median absolute deviation)
        med = np.median(factor)
        distance = abs(factor - med)
        MAD = np.median(distance)
        # 2. scale MAD so it is comparable to a standard deviation
        MAD_e = 1.4826 * MAD
        # 3. boundaries of the normal range
        up_scale = med + 3 * MAD_e
        down_scale = med - 3 * MAD_e
        # 4. clip values outside the range back to the boundary
        factor = np.where(factor > up_scale, up_scale, factor)
        factor = np.where(factor < down_scale, down_scale, factor)
        return factor
    
    # z-score standardization from scratch:
    # (x - mean) / std
    def stand_method(self,data):
        factor=data[self.factor_name]
        mean = np.mean(factor)
        std = np.std(factor)
        factor = (factor - mean) / std
        return factor
    
    # remove the influence of the market-cap factor (market-cap neutralization)
    def 去市值化(self,data):
        factor=data[self.factor_name]
        x_market_cap = data['circ_mv'].values.reshape(-1,1)  # market-cap factor
        y_factor = factor
        # regress the factor on market cap; the residual is the neutralized factor
        estimator = LinearRegression()
        estimator.fit(x_market_cap, y_factor)
        y_predict = estimator.predict(x_market_cap)
        factor = y_factor - y_predict
        return factor
    # data preprocessing: drop missing values, winsorize, standardize, remove the market-cap effect
    def processing_data(self,data):
        time.sleep(3)
        print("当前时间:",time.ctime(),",开始清洗数据")
        data.dropna(inplace=True)  # drop missing values
        data[self.factor_name]=self.med_method(data)  # winsorize (MAD method)
        data[self.factor_name]=self.stand_method(data)  # standardize
        data[self.factor_name]=self.去市值化(data)  # market-cap neutralization
        return data[self.factor_name]
        
    # train the model, then pick the top-5 stocks by predicted return
    def regression(self):
        data_path="./data/train.xlsx"
        data=self.get_data(data_path)  # load the training data
        data[self.factor_name]=self.processing_data(data)  # preprocessing: drop NA, winsorize, standardize, remove the market-cap effect
        time.sleep(3)
        print("当前时间:",time.ctime(),",建立模型")
        # estimator workflow: instantiate an estimator,
        # then fit it on the training data
        estimator1=LinearRegression()
        estimator1.fit(data[self.factor_name].values.reshape(-1,1),data['月收益率'])
        print("当前时间:",time.ctime(),"回归系数:",estimator1.coef_)
        print("当前时间:",time.ctime(),"偏置:",estimator1.intercept_)
        joblib.dump(estimator1, "./data/regression.pkl")
#        weights,bias=estimator1.coef_,estimator1.intercept_
#        return
#        # prediction on the test sample:
#        testdata="./data/test.xlsx"
#        df_test=self.get_data(testdata)  # load the test data
#        df_test[self.factor_name]=self.processing_data(df_test)  # preprocess the test factor data
    # load the trained model and pick the top-5 stocks by predicted return
    def predict(self,data_path):
        estimator1 = joblib.load("./data/regression.pkl")
#        data_path="./data/train.xlsx"
        data=self.get_data(data_path)  # load the data
        data[self.factor_name]=self.processing_data(data)  # preprocessing: drop NA, winsorize, standardize, remove the market-cap effect
        data['预测月收益率']=estimator1.predict(data[self.factor_name].values.reshape(-1,1))  # predict the stocks' returns
        if data_path=="./data/test.xlsx":
            data.to_excel("./data/result.xlsx")
        stockpool=data.sort_values(by="预测月收益率",ascending=False).index[:self.stock_num]  # sort descending and keep the top stock_num codes
        print(stockpool.tolist())
        return stockpool.tolist()
        
    def get_plot(self,stockpool,data_path):
        if data_path=="./data/train.xlsx":
            mytitle="基于训练样本模型预测:2021三季报"+self.factor_name+"因子投资组合11月走势 "
            fig_path="./data/训练样本模型表现——投资组合净值走势"
        else:
            mytitle="基于测试样本模型预测:2021三季报"+self.factor_name+"因子投资组合11月走势 "
            fig_path="./data/测试样本模型表现——投资组合净值走势"
        time.sleep(3)
        print("当前时间:",time.ctime(),",画投资组合走势图")
        df_plot=pd.read_excel("./data/stocklist.xlsx",index_col=0)
        dfhs300=df_plot['000300.SH']
        df_plot=df_plot.loc[:,stockpool]
        df_plot['投资组合']=df_plot.mean(axis=1)
        df_plot['000300.SH']=dfhs300
        df_plot[['投资组合','000300.SH']].plot(figsize=(10,4),legend=True,title=mytitle)
        plt.savefig(fig_path)
        
        
        
    
if __name__=='__main__':
    self=OnefactorRegression()  # default factor is 货币资金(万元); change factor_name to test other factors (the column names in the two spreadsheets are the factor names)
    self.regression()
    stockpool=self.predict(data_path="./data/train.xlsx")
    self.get_plot(stockpool,data_path="./data/train.xlsx")
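The preprocessing notes in OnefactorRegression describe two winsorization methods (3-sigma and percentile), but only the MAD variant is implemented above. A minimal standalone NumPy sketch of both, for comparison; the `winsorize` function quoted in the notes belongs to the uqer/优矿 platform and is not assumed available here:

```python
import numpy as np

def winsorize_3sigma(x):
    """Clip values outside (mean - 3*std, mean + 3*std) back to the boundary."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    return np.clip(x, mu - 3 * sigma, mu + 3 * sigma)

def winsorize_percentile(x, pct=2.5):
    """Clip the smallest pct% and largest pct% of observations (shrinking, not trimming)."""
    x = np.asarray(x, dtype=float)
    lo, hi = np.percentile(x, [pct, 100 - pct])
    return np.clip(x, lo, hi)
```

Both shrink rather than trim: outliers are pulled back to the boundary instead of dropped, which keeps the cross-section aligned for the regression step.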
    
    
   
    
#ts.set_token('5e187186732eb93f7c72a21548a4f02644b32d91eaa104fb32382c29')
#pro=ts.pro_api()
#df_stockA = pro.stock_basic(exchange='', list_status='L', fields='ts_code,symbol,name,area,industry,list_date')
#df_stockA['list_date']=df_stockA['list_date'].astype('int')  # type conversion
#df_stockA=df_stockA[df_stockA['list_date']<=20200630]  # drop recently listed stocks
#df_stockA.drop('area',axis=1,inplace=True)  # drop the region column
#df_daily = pro.query('daily_basic', ts_code='', trade_date='20210630',fields='ts_code,trade_date,pe,pb,ps,dv_ratio,circ_mv')
#df_daily.drop('trade_date',axis=1,inplace=True)  # drop the date column
#df_stockA=pd.merge(df_stockA,df_daily,how='inner',on='ts_code')
#df_stockA.dropna(inplace=True)
#df_stockA['stockcode']=[stock[:6] for stock in df_stockA['ts_code']]
#df_stockA.set_index('stockcode',drop=True,inplace=True)  # reset the index for easier lookups
#fdata20210630=df_stockA.iloc[:,2:]
#fdata20210930=df_stockA.iloc[:,[2,3]+list(range(10,118))]
#for file in file_list:
#    if file[-10:-4] in df_stockA.index:
#        dftmp=pd.read_csv("E:/guobing-studio/wyfe_stockdata/2021Q3/"+file,encoding='GB18030',engine='python',header=0,index_col=0,nrows=108,keep_default_na=False,na_values="--")
#        dftmp=dftmp.iloc[:,:2]
#        dftmp.columns=["2021-09-30","2021-06-30"]
#        for ix in dftmp.index:
#            fdata20210630.loc[file[-10:-4],ix]=dftmp.loc[ix,'2021-06-30']
#            fdata20210930.loc[file[-10:-4],ix]=dftmp.loc[ix,'2021-09-30']

#df_train=pd.read_excel("./asset_debt20210630.xlsx",dtype={'stockcode':str})
# -*- coding: utf-8 -*-
"""
Created on Sat Aug 20 15:17:09 2022

@author: zhoug
"""

import pandas as pd
import numpy as np
import glob
import random
import tensorflow.compat.v1 as tf
tf.compat.v1.disable_eager_execution()
import os
os.chdir(r"E:\文档\Projects\Pycharm\image_detect")
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
# keep these parameters modest: no more than about 500, in multiples of 100
batch_size=100
capacity=200
num_epoch=500  # number of training iterations
tf.app.flags.DEFINE_integer('batch_size', batch_size, 'samples per batch')
tf.app.flags.DEFINE_integer('capacity', capacity, 'batch-queue capacity')
FLAGS = tf.app.flags.FLAGS
tf.reset_default_graph()
def parse_csv():
    """
    Parse the CSV label file and build a table mapping file index to label values
    :return: the label table (DataFrame)
    """

    # read the csv label file
    csv_data = pd.read_csv('./data/train/labels.csv', names=['index', 'chars'], index_col='index')
    # print(csv_data)

    # add a labels column
    csv_data['labels'] = None

    # convert letters to label values: A -> 0, B -> 1, ... Z -> 25
    for i, row in csv_data.iterrows():
        labels = []
        # convert each letter to a numeric label
        for char in row['chars']:
            # subtract the ASCII value of 'A' from the letter's ASCII value
            labels.append(ord(char) - ord('A'))

        # store the label values in the table
        csv_data.loc[i, 'labels'] = labels

    return csv_data


def filenames_2_labels(filenames, csv_data):
    """
    Convert file names to label values
    :param filenames: array of file names
    :param csv_data: table mapping file names to label values
    :return: label values
    """
    # look up the label for each file name
    labels = []
    for file in filenames:
        index, _ = os.path.splitext(os.path.basename(file))
        # find the label by file name and append it to the list
        labels.append(csv_data.loc[int(index), 'labels'])

    return np.array(labels)


def pic_read(files):
    """
    Read images from a file-name queue
    :return: batches of images and file names
    """
    # create the file-name queue
#    filename_queue = tf.train.string_input_producer(files)
    filename_queue = tf.train.input_producer(files)

    # create a reader and read the images;
    # the first return value is the file name,
    # the second is the image content
    filename, value = tf.WholeFileReader().read(filename_queue)

    # decode the image
    image = tf.image.decode_jpeg(value)
    print('image:', image)

    # set the shape; since the rank of the tensor does not change,
    # set_shape can be called directly, no tf.reshape needed
    image.set_shape([20, 80, 3])

    # build the batching queue
    image_batch, filename_batch = tf.train.batch([image, filename],
            batch_size=FLAGS.batch_size, num_threads=2, capacity=FLAGS.capacity)

    return image_batch, filename_batch


def weight_var(shape, name=None):
    return tf.Variable(tf.truncated_normal(shape, mean=0.0, stddev=0.01, dtype=tf.float32), name=name)


def bias_var(shape, name=None):
    return tf.Variable(tf.zeros(shape, dtype=tf.float32), name=name)


def create_cnn_model():
    """
    Build the CNN: two conv blocks and one fully connected layer
    :return: x, y_true, logits
    """
    # placeholders for the input data
    with tf.variable_scope('data'):
        x = tf.placeholder(tf.float32, [None, 20, 80, 3])
        y_true = tf.placeholder(tf.float32, [None, 4*26])

    # conv block 1: convolution, activation, pooling
    with tf.variable_scope('conv1'):
        # conv layer input: [None, 20, 80, 3]
        # filter: size=[3,3], in_channels: 3, out_channels: 32, strides=1*1, padding='SAME'
        # weight shape: [3, 3, 3, 32]
        # output shape: [None, 20, 80, 32]
        w_conv1 = weight_var([3,3,3,32], name='w_conv1')
        b_conv1 = bias_var([32], name='b_conv1')

        x_conv1 = tf.nn.conv2d(x, filter=w_conv1, strides=[1, 1, 1, 1],
                               padding='SAME', name= 'conv1_2d') + b_conv1

        # activation
        x_relu1 = tf.nn.relu(x_conv1, name='relu1')

        # pooling layer input shape: [None, 20, 80, 32]
        # ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1]
        # output shape: [None, 10, 40, 32]
        x_pool1 = tf.nn.max_pool(x_relu1, ksize=[1,2,2,1], strides=[1, 2, 2, 1], padding='SAME', name='pool1')

    # conv block 2: convolution, activation, pooling
    with tf.variable_scope('conv2'):
        # conv layer input: [None, 10, 40, 32]
        # filter: size=[3,3], in_channels: 32, out_channels: 64, strides=1*1, padding='SAME'
        # weight shape: [3, 3, 32, 64]
        # output shape: [None, 10, 40, 64]
        w_conv2 = weight_var([3, 3, 32, 64], name='w_conv2')
        b_conv2 = bias_var([64], name='b_conv2')

        x_conv2 = tf.nn.conv2d(x_pool1, filter=w_conv2, strides=[1, 1, 1, 1],
                               padding='SAME', name='conv2_2d') + b_conv2

        # activation
        x_relu2 = tf.nn.relu(x_conv2, name='relu2')

        # pooling layer input shape: [None, 10, 40, 64]
        # ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1]
        # output shape: [None, 5, 20, 64]
        x_pool2 = tf.nn.max_pool(x_relu2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool2')


    # fully connected layer
    with tf.variable_scope('fc'):
        # input shape: [None, 5, 20, 64] => [None, 5*20*64]
        # output shape: [None, 4*26]
        # weight matrix: [5*20*64, 4*26]
        w_fc = weight_var([5*20*64, 4*26], name='w_fc')
        b_fc = bias_var([4*26])

        # weighted sum
        logits = tf.matmul(tf.reshape(x_pool2, [-1, 5*20*64]), w_fc) + b_fc

    return x, y_true, logits

def captcha(labels_data,num_epoch):
    """
    Captcha recognition with the convolutional neural network
    :return: None
    """
    # build the list of training file names
    files = glob.glob('./data/train/*.jpg')
    random.shuffle(files)

    # file-reading pipeline: load the images
    image_batch, filename_batch = pic_read(files)

    # build the convolutional neural network
    x, y_true, logits = create_cnn_model()

    # sigmoid cross-entropy loss
    with tf.variable_scope('loss'):
        # y_true: ground truth, [100, 104], one-hot encoded
        # logits: weighted output of the fully connected layer, [100, 104]
        # average the returned per-example cross-entropies
        loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=y_true, logits=logits))

    # optimizer
    with tf.variable_scope('optimize'):
        train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
        # train_op = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(loss)

    # accuracy: a captcha counts as correct only if all four letters match
    with tf.variable_scope('accuracy'):
        equal_list = tf.reduce_all(
            tf.equal(tf.argmax(tf.reshape(logits, [-1, 4, 26]), axis=-1),
                     tf.argmax(tf.reshape(y_true, [-1, 4, 26]), axis=-1)), axis=-1)

        accuracy = tf.reduce_mean(tf.cast(equal_list, tf.float32))

    # saver for checkpointing the model
    saver = tf.train.Saver()

    # open a session and train
    with tf.Session() as sess:
        # initialize global variables
        sess.run(tf.global_variables_initializer())

        # create the thread coordinator and start the file-queue threads
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess, coord)

        # restore the model if a checkpoint exists
        if os.path.exists('./models/checkpoint'):
            saver.restore(sess, './models/captcha')

        # training loop
        for i in range(num_epoch):
            # fetch a batch of images and file names
            images, filenames = sess.run([image_batch, filename_batch])
            # convert the file names to a label array
            labels = filenames_2_labels(filenames, labels_data)
            # print(labels)
            labels_onehot = tf.reshape(tf.one_hot(labels, 26), [-1, 4*26]).eval()

            _, loss_value, acc = sess.run([train_op, loss, accuracy], feed_dict={x: images, y_true: labels_onehot})
            print('第 {} 次的 损失值 {} 和 准确率 {}'.format(i, loss_value, acc))

            # save the model periodically
            if (i+1) % 150 == 0:
                saver.save(sess, './models/captcha')

        # stop and join the queue threads
        coord.request_stop()
        coord.join(threads)
def del_all_flags(FLAGS):
    flags_dict = FLAGS._flags()
    keys_list = [keys for keys in flags_dict]
    for keys in keys_list:
        FLAGS.__delattr__(keys)
if __name__ == '__main__':
    # build the table mapping file names to label values
    csv_data = parse_csv()
    # print(csv_data)

    captcha(csv_data,num_epoch)
    del_all_flags(tf.app.flags.FLAGS)
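The label pipeline above (parse_csv plus the tf.one_hot reshape inside captcha) boils down to one mapping: each 4-letter captcha string becomes four class indices (A -> 0 ... Z -> 25), one-hot encoded and flattened into a 4*26 = 104-dimensional target. A standalone NumPy sketch of that encoding, for illustration only; the training script itself does this with tf.one_hot:

```python
import numpy as np

def encode_captcha(chars):
    """'ABCZ' -> flattened one-hot vector of length 4*26 = 104."""
    labels = [ord(c) - ord('A') for c in chars]    # e.g. 'ABCZ' -> [0, 1, 2, 25]
    onehot = np.eye(26, dtype=np.float32)[labels]  # shape (4, 26)
    return onehot.reshape(-1)                      # shape (104,)
```

This is also why the accuracy block reshapes to [-1, 4, 26]: argmax along the last axis recovers the four class indices, and a prediction counts as correct only when all four match.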
   
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
Run inference on images, videos, directories, streams, etc.

Usage - sources:
    $ python path/to/detect.py --weights yolov5s.pt --source 0              # webcam
                                                             img.jpg        # image
                                                             vid.mp4        # video
                                                             path/          # directory
                                                             path/*.jpg     # glob

"""

import argparse
import os
import platform
import sys
from pathlib import Path

import torch
import torch.backends.cudnn as cudnn

FILE = Path(__file__).resolve()
ROOT = FILE.parents[0]  # YOLOv5 root directory
if str(ROOT) not in sys.path:
    sys.path.append(str(ROOT))  # add ROOT to PATH
ROOT = Path(os.path.relpath(ROOT, Path.cwd()))  # relative

from models.common import DetectMultiBackend
from utils.dataloaders import IMG_FORMATS, VID_FORMATS, LoadImages, LoadStreams
from utils.general import (LOGGER, check_file, check_img_size, check_imshow, check_requirements, colorstr, cv2,
                           increment_path, non_max_suppression, print_args, scale_coords, strip_optimizer, xyxy2xywh)
from utils.plots import Annotator, colors, save_one_box
from utils.torch_utils import select_device, time_sync


@torch.no_grad()
def run(
        # weights=ROOT / 'yolov5s.pt',  # model.pt path(s)
        # source=ROOT / 'data/images',  # file/dir/URL/glob, 0 for webcam
        # data=ROOT / 'data/coco128.yaml',  # dataset.yaml path
        weights=ROOT /'runs/train/exp16/weights/best.pt',  # model.pt path(s)
        # source=ROOT /'datasets/e-bike/test/images',  # file/dir/URL/glob, 0 for webcam
        source=ROOT / 'datasets/e-bike/train/images',
        data=ROOT /'data/e-bike.yaml',  # dataset.yaml path

        imgsz=(640, 640),  # inference size (height, width)
        conf_thres=0.25,  # confidence threshold
        iou_thres=0.45,  # NMS IOU threshold
        max_det=1000,  # maximum detections per image
        device='',  # cuda device, i.e. 0 or 0,1,2,3 or cpu
        view_img=False,  # show results
        save_txt=False,  # save results to *.txt
        save_conf=False,  # save confidences in --save-txt labels
        save_crop=False,  # save cropped prediction boxes
        nosave=False,  # do not save images/videos
        classes=None,  # filter by class: --class 0, or --class 0 2 3
        agnostic_nms=False,  # class-agnostic NMS
        augment=False,  # augmented inference
        visualize=False,  # visualize features
        update=False,  # update all models
        project=ROOT / 'runs/detect',  # save results to project/name
        name='exp',  # save results to project/name
        exist_ok=False,  # existing project/name ok, do not increment
        line_thickness=3,  # bounding box thickness (pixels)
        hide_labels=False,  # hide labels
        hide_conf=False,  # hide confidences
        half=False,  # use FP16 half-precision inference
        dnn=False,  # use OpenCV DNN for ONNX inference
):
    print("weights:",weights)
    print("source:",source)
    print("data:",data)

    source = str(source)
    save_img = not nosave and not source.endswith('.txt')  # save inference images
    is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS)
    is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://'))
    webcam = source.isnumeric() or source.endswith('.txt') or (is_url and not is_file)
    if is_url and is_file:
        source = check_file(source)  # download

    # Directories
    save_dir = increment_path(Path(project) / name, exist_ok=exist_ok)  # increment run
    (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True)  # make dir

    # Load model
    device = select_device(device)
    model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half)
    stride, names, pt = model.stride, model.names, model.pt
    imgsz = check_img_size(imgsz, s=stride)  # check image size

    # Dataloader
    if webcam:
        view_img = check_imshow()
        cudnn.benchmark = True  # set True to speed up constant image size inference
        dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt)
        bs = len(dataset)  # batch_size
    else:
        dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt)
        bs = 1  # batch_size
    vid_path, vid_writer = [None] * bs, [None] * bs

    # Run inference
    model.warmup(imgsz=(1 if pt else bs, 3, *imgsz))  # warmup
    seen, windows, dt = 0, [], [0.0, 0.0, 0.0]
    for path, im, im0s, vid_cap, s in dataset:
        t1 = time_sync()
        im = torch.from_numpy(im).to(device)
        im = im.half() if model.fp16 else im.float()  # uint8 to fp16/32
        im /= 255  # 0 - 255 to 0.0 - 1.0
        if len(im.shape) == 3:
            im = im[None]  # expand for batch dim
        t2 = time_sync()
        dt[0] += t2 - t1

        # Inference
        visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False
        pred = model(im, augment=augment, visualize=visualize)
        t3 = time_sync()
        dt[1] += t3 - t2

        # NMS
        pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)
        dt[2] += time_sync() - t3

        # Second-stage classifier (optional)
        # pred = utils.general.apply_classifier(pred, classifier_model, im, im0s)

        # Process predictions
        for i, det in enumerate(pred):  # per image
            seen += 1
            if webcam:  # batch_size >= 1
                p, im0, frame = path[i], im0s[i].copy(), dataset.count
                s += f'{i}: '
            else:
                p, im0, frame = path, im0s.copy(), getattr(dataset, 'frame', 0)

            p = Path(p)  # to Path
            save_path = str(save_dir / p.name)  # im.jpg
            txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}')  # im.txt
            s += '%gx%g ' % im.shape[2:]  # print string
            gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
            imc = im0.copy() if save_crop else im0  # for save_crop
            annotator = Annotator(im0, line_width=line_thickness, example=str(names))
            if len(det):
                # Rescale boxes from img_size to im0 size
                det[:, :4] = scale_coords(im.shape[2:], det[:, :4], im0.shape).round()

                # Print results
                for c in det[:, -1].unique():
                    n = (det[:, -1] == c).sum()  # detections per class
                    s += f"{n} {names[int(c)]}{'s' * (n > 1)}, "  # add to string

                # Write results
                for *xyxy, conf, cls in reversed(det):
                    if save_txt:  # Write to file
                        xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
                        line = (cls, *xywh, conf) if save_conf else (cls, *xywh)  # label format
                        with open(f'{txt_path}.txt', 'a') as f:
                            f.write(('%g ' * len(line)).rstrip() % line + '\n')

                    if save_img or save_crop or view_img:  # Add bbox to image
                        c = int(cls)  # integer class
                        label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}')
                        annotator.box_label(xyxy, label, color=colors(c, True))
                    if save_crop:
                        save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True)

            # Stream results
            im0 = annotator.result()
            if view_img:
                if platform.system() == 'Linux' and p not in windows:
                    windows.append(p)
                    cv2.namedWindow(str(p), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO)  # allow window resize (Linux)
                    cv2.resizeWindow(str(p), im0.shape[1], im0.shape[0])
                cv2.imshow(str(p), im0)
                cv2.waitKey(1)  # 1 millisecond

            # Save results (image with detections)
            if save_img:
                if dataset.mode == 'image':
                    cv2.imwrite(save_path, im0)
                else:  # 'video' or 'stream'
                    if vid_path[i] != save_path:  # new video
                        vid_path[i] = save_path
                        if isinstance(vid_writer[i], cv2.VideoWriter):
                            vid_writer[i].release()  # release previous video writer
                        if vid_cap:  # video
                            fps = vid_cap.get(cv2.CAP_PROP_FPS)
                            w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
                            h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
                        else:  # stream
                            fps, w, h = 30, im0.shape[1], im0.shape[0]
                        save_path = str(Path(save_path).with_suffix('.mp4'))  # force *.mp4 suffix on results videos
                        vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
                    vid_writer[i].write(im0)

        # Print time (inference-only)
        LOGGER.info(f'{s}Done. ({t3 - t2:.3f}s)')

    # Print results
    t = tuple(x / seen * 1E3 for x in dt)  # speeds per image
    LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}' % t)
    if save_txt or save_img:
        s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ''
        LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}")
    if update:
        strip_optimizer(weights[0])  # update model (to fix SourceChangeWarning)

def parse_opt():
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', nargs='+', type=str, default=ROOT /'e-bike.pt', help='model path(s)')
    # parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model path(s)')
    parser.add_argument('--source', type=str, default=ROOT / 'datasets/e-bike/test/images', help='file/dir/URL/glob, 0 for webcam')
    # parser.add_argument('--source', type=str, default=ROOT / 'datasets/e-bike/train/images',help='file/dir/URL/glob, 0 for webcam')
    parser.add_argument('--data', type=str, default=ROOT / 'data/e-bike.yaml', help='(optional) dataset.yaml path')
    parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w')
    parser.add_argument('--conf-thres', type=float, default=0.25, help='confidence threshold')
    parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold')
    parser.add_argument('--max-det', type=int, default=1000, help='maximum detections per image')
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--view-img', action='store_true', help='show results')
    parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
    parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
    parser.add_argument('--save-crop', action='store_true', help='save cropped prediction boxes')
    parser.add_argument('--nosave', action='store_true', help='do not save images/videos')
    parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --classes 0, or --classes 0 2 3')
    parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
    parser.add_argument('--augment', action='store_true', help='augmented inference')
    parser.add_argument('--visualize', action='store_true', help='visualize features')
    parser.add_argument('--update', action='store_true', help='update all models')
    parser.add_argument('--project', default=ROOT /'runs/detect', help='save results to project/name')
    parser.add_argument('--name', default='exp', help='save results to project/name')
    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
    parser.add_argument('--line-thickness', default=3, type=int, help='bounding box thickness (pixels)')
    parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels')
    parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences')
    parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')
    parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference')
    opt = parser.parse_args()
    opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1  # expand
    print_args(vars(opt))
    return opt


def main(opt):
    check_requirements(exclude=('tensorboard', 'thop'))
    run(**vars(opt))


if __name__ == "__main__":
    opt = parse_opt()
    main(opt)
