Loading the CIFAR-10 dataset with keras.datasets in Python (local import)

If `keras.datasets` cannot download CIFAR-10 automatically in Python, the only option is to import it from a local copy.

First, download cifar-10-python.tar.gz.

Then place cifar-10-python.tar.gz in the local folder where Keras keeps its dataset cache (typically `~/.keras/datasets/`).


Here I renamed cifar-10-python.tar.gz to cifar-10-batches-py.tar.gz. Strictly speaking the rename is optional and loading works either way, but it is best to rename it anyway!
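As a sketch, the copy-and-rename step above can be scripted. This assumes the archive was saved to `~/Downloads` (adjust `src` to wherever you downloaded it) and relies on Keras keeping its dataset cache in `~/.keras/datasets`:

```python
import os
import shutil

# Assumption: the archive was downloaded to ~/Downloads; change src to your path.
src = os.path.expanduser("~/Downloads/cifar-10-python.tar.gz")

# Keras looks for cached datasets under ~/.keras/datasets
datasets_dir = os.path.expanduser(os.path.join("~", ".keras", "datasets"))
os.makedirs(datasets_dir, exist_ok=True)

# Copy under the name load_data() expects, so the download step is skipped
dst = os.path.join(datasets_dir, "cifar-10-batches-py.tar.gz")
if os.path.exists(src):
    shutil.copy(src, dst)
print(dst)
```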

Once that is done, run the following in Python:

```python
from keras import datasets

(x_train, y_train), (x_test, y_test) = datasets.cifar10.load_data()
```

If it does not succeed and an error like the one below appears:


close the Python session first, then open a fresh one and run it again.

 

If it still reports an error, try going through `tf.keras` instead:

```python
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
```

That should do it!

If you get a file-not-found error, as shown below:


delete cifar-10-python.tar.gz and download it again.
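A file-not-found or corrupt-archive error often means the download was incomplete. Before re-downloading, a quick sanity check (a sketch, assuming the archive sits at the cache path used above) can confirm whether the archive is actually broken:

```python
import os
import tarfile

def looks_valid(path):
    """Return True if the archive opens as a gzipped tar and
    contains the expected CIFAR-10 batch files."""
    try:
        with tarfile.open(path, "r:gz") as tar:
            names = tar.getnames()
        return any(name.endswith("data_batch_1") for name in names)
    except (tarfile.TarError, OSError):
        return False

archive = os.path.expanduser("~/.keras/datasets/cifar-10-batches-py.tar.gz")
print(looks_valid(archive))
```

If this prints `False`, the archive is truncated or missing and should be deleted and fetched again.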

For reference, `keras.datasets` bundles these modules:

- `boston_housing`: Boston housing price regression dataset.
- `cifar10`: CIFAR10 small images classification dataset.
- `cifar100`: CIFAR100 small images classification dataset.
- `fashion_mnist`: Fashion-MNIST dataset.
- `imdb`: IMDB sentiment classification dataset.
- `mnist`: MNIST handwritten digits dataset.
- `reuters`: Reuters topic classification dataset.

Each one exposes the same `load_data()` interface:

```python
import tensorflow as tf
from tensorflow import keras

fashion_mnist = keras.datasets.fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

mnist = keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

cifar100 = keras.datasets.cifar100
(x_train, y_train), (x_test, y_test) = cifar100.load_data()

cifar10 = keras.datasets.cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

imdb = keras.datasets.imdb
(x_train, y_train), (x_test, y_test) = imdb.load_data()

# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding",
# "start of sequence", and "unknown"
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in x_train[0]])
print(decoded_review)

boston_housing = keras.datasets.boston_housing
(x_train, y_train), (x_test, y_test) = boston_housing.load_data()

reuters = keras.datasets.reuters
(x_train, y_train), (x_test, y_test) = reuters.load_data()
tf.keras.datasets.reuters.get_word_index(path='reuters_word_index.json')
```
CIFAR-10 is a dataset of 60,000 32x32-pixel color images spanning 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. The basic steps for classifying CIFAR-10 in Python are:

1. Import the necessary libraries and modules:

```python
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt
```

2. Load the dataset:

```python
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
```

3. Preprocess the data:

```python
train_images, test_images = train_images / 255.0, test_images / 255.0
```

4. Define the model:

```python
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))
```

5. Compile the model:

```python
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
```

6. Train the model:

```python
history = model.fit(train_images, train_labels, epochs=10,
                    validation_data=(test_images, test_labels))
```

7. Evaluate the model:

```python
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print(test_acc)
```

With these steps you can classify the CIFAR-10 dataset in Python.
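Note that the final `Dense(10)` layer outputs raw logits (no softmax), which is why the loss is built with `from_logits=True`. A minimal NumPy sketch, illustrative rather than the actual Keras implementation, of what that loss computes for a single example:

```python
import numpy as np

def sparse_ce_from_logits(logits, label):
    # Softmax over the logits, then the negative log-probability of the
    # true class -- what SparseCategoricalCrossentropy(from_logits=True)
    # computes per example.
    z = logits - logits.max()          # subtract max for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return -np.log(probs[label])

logits = np.array([2.0, 0.5, -1.0])    # raw model outputs for 3 classes
print(sparse_ce_from_logits(logits, 0))
```

Mismatching the two, i.e. passing post-softmax probabilities with `from_logits=True` or raw logits with the default `from_logits=False`, silently produces wrong loss values.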
