Running the RetinexNet Code

1. Download the code from GitHub

https://github.com/weichen582/RetinexNet

I could not get the environment installed locally, but it ran successfully on AutoDL; the instance configuration I used is shown in the figure below.

First run `python main.py` (training), then run `python main.py --phase=test` (testing).
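The two commands above differ only in the `--phase` flag that main.py parses. A minimal sketch of that dispatch (the real main.py defines many more arguments such as epochs, batch size, and checkpoint paths; the `choices` restriction here is my addition):

```python
import argparse

def build_parser():
    # Sketch of the flag that distinguishes training from testing.
    parser = argparse.ArgumentParser(description='RetinexNet (sketch)')
    parser.add_argument('--phase', default='train', choices=['train', 'test'],
                        help="'train' trains the network; 'test' enhances the test images")
    return parser

print(build_parser().parse_args([]).phase)                # python main.py        -> train
print(build_parser().parse_args(['--phase=test']).phase)  # python main.py --phase=test -> test
```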

Environment setup:

```shell
conda create --name tensorflow-env tensorflow python=3.8
conda activate tensorflow-env
pip install pillow
```

I first tried porting the code to TensorFlow 2 by changing `import tensorflow as tf` to `import tensorflow.compat.v1 as tf`, but there was far too much to change. Instead, install TensorFlow 1.15.5:

```shell
# remove the current TensorFlow, then install 1.15.5
pip uninstall tensorflow
pip install tensorflow==1.15.5
```

```
root@autodl-container-d823119b52-2480a5ee:~# cd autodl-tmp/RetinexNet-master/
root@autodl-container-d823119b52-2480a5ee:~/autodl-tmp/RetinexNet-master# python main.py
2023-10-28 11:44:00.543486: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
[*] GPU
2023-10-28 11:44:01.398316: I tensorflow/core/platform/cpu_feature_guard.cc:145] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX
2023-10-28 11:44:01.430188: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2500000000 Hz
2023-10-28 11:44:01.434284: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x56022f2193a0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2023-10-28 11:44:01.434302: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2023-10-28 11:44:01.436500: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2023-10-28 11:44:01.590961: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x56022f21da30 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2023-10-28 11:44:01.590988: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): NVIDIA GeForce RTX 3090, Compute Capability 8.6
2023-10-28 11:44:01.591478: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1666] Found device 0 with properties:
name: NVIDIA GeForce RTX 3090 major: 8 minor: 6 memoryClockRate(GHz): 1.695
pciBusID: 0000:43:00.0
2023-10-28 11:44:01.591513: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2023-10-28 11:44:01.599500: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2023-10-28 11:44:01.602809: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2023-10-28 11:44:01.603925: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2023-10-28 11:44:01.604635: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
2023-10-28 11:44:01.605980: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2023-10-28 11:44:01.606270: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2023-10-28 11:44:01.606938: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1794] Adding visible gpu devices: 0
2023-10-28 11:44:01.606964: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2023-10-28 11:44:01.996859: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1206] Device interconnect StreamExecutor with strength 1 edge matrix:
2023-10-28 11:44:01.996892: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1212]      0
2023-10-28 11:44:01.996900: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1225] 0:   N
2023-10-28 11:44:01.997700: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1351] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 12134 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:43:00.0, compute capability: 8.6)
WARNING:tensorflow:From /root/autodl-tmp/RetinexNet-master/model.py:58: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
WARNING:tensorflow:From /root/autodl-tmp/RetinexNet-master/model.py:19: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.
WARNING:tensorflow:From /root/autodl-tmp/RetinexNet-master/model.py:19: The name tf.AUTO_REUSE is deprecated. Please use tf.compat.v1.AUTO_REUSE instead.
WARNING:tensorflow:From /root/autodl-tmp/RetinexNet-master/model.py:38: The name tf.image.resize_nearest_neighbor is deprecated. Please use tf.compat.v1.image.resize_nearest_neighbor instead.
WARNING:tensorflow:The operation `tf.image.convert_image_dtype` will be skipped since the input and output dtypes are identical.
WARNING:tensorflow:From /root/autodl-tmp/RetinexNet-master/model.py:91: The name tf.train.AdamOptimizer is deprecated. Please use tf.compat.v1.train.AdamOptimizer instead.
WARNING:tensorflow:From /root/autodl-tmp/RetinexNet-master/model.py:93: The name tf.trainable_variables is deprecated. Please use tf.compat.v1.trainable_variables instead.
WARNING:tensorflow:From /root/autodl-tmp/RetinexNet-master/model.py:99: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead.
WARNING:tensorflow:From /root/autodl-tmp/RetinexNet-master/model.py:101: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.
[*] Initialize model successfully...
[*] Number of training data: 0
INFO:tensorflow:Restoring parameters from ./checkpoint/Decom/RetinexNet-Decom-0
Traceback (most recent call last):
  File "main.py", line 118, in <module>
    tf.app.run()
  File "/root/miniconda3/lib/python3.8/site-packages/tensorflow_core/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/root/miniconda3/lib/python3.8/site-packages/absl/app.py", line 312, in run
    _run_main(main, args)
  File "/root/miniconda3/lib/python3.8/site-packages/absl/app.py", line 258, in _run_main
    sys.exit(main(argv))
  File "main.py", line 99, in main
    lowlight_train(model)
  File "main.py", line 68, in lowlight_train
    lowlight_enhance.train(train_low_data, train_high_data, eval_low_data, batch_size=args.batch_size, patch_size=args.patch_size, epoch=args.epoch, lr=lr, sample_dir=args.sample_dir, ckpt_dir=os.path.join(args.ckpt_dir, 'Decom'), eval_every_epoch=args.eval_every_epoch, train_phase="Decom")
  File "/root/autodl-tmp/RetinexNet-master/model.py", line 153, in train
    start_epoch = global_step // numBatch
ZeroDivisionError: integer division or modulo by zero
```
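The traceback points at `start_epoch = global_step // numBatch` in model.py line 153. The log line `[*] Number of training data: 0` shows why it fails: no training images were loaded, so `numBatch` is 0 and the integer division raises `ZeroDivisionError`. The fix is to download the training data (the dataset linked in the repo README) into the data directory main.py reads from before running training. A sketch of the failing computation with a guard (the variable names mirror the traceback; the helper function itself is hypothetical):

```python
# Hypothetical guard around the computation that crashed in model.py:153.
# global_step comes from the restored checkpoint; num_train is the number
# of training images loaded from the data directory.
def start_epoch_and_step(global_step, num_train, batch_size):
    num_batch = num_train // batch_size
    if num_batch == 0:
        # Exactly the situation in the log: 0 training images were found.
        raise RuntimeError("numBatch is 0 -- no training data loaded; "
                           "download the training set into the expected data directory")
    # Resume position: which epoch and which batch within it.
    return global_step // num_batch, global_step % num_batch

print(start_epoch_and_step(0, 485, 16))  # fresh start with 485 image pairs -> (0, 0)
```

With the training set in place (for example 485 image pairs at the default batch size of 16), `num_batch` becomes 30 and training resumes normally instead of crashing.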
