Importing the MNIST dataset locally for use with Keras

As many know, Keras downloads the MNIST dataset over the network the first time it is used, and the download URL is blocked in some regions. As a result, code that starts with the following import fails with an error when `load_data()` tries to fetch the data:

from keras.datasets import mnist

The steps to use a local copy are as follows:
1. Download the dataset locally:
Lanzou cloud download address: link
After extraction you get a single .npz file.
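Before touching the Keras source, it can help to sanity-check the downloaded archive with NumPy alone: a valid mnist.npz exposes exactly four arrays. A minimal sketch; a tiny stand-in archive is synthesized in a temp directory so the snippet runs anywhere, and the real download path (e.g. D:\Download\mnist.npz) is an assumption you substitute yourself:

```python
import os
import tempfile
import numpy as np

# Stand-in archive so the snippet is self-contained; with the real download,
# just point archive_path at it, e.g. r'D:\Download\mnist.npz'.
tmp_dir = tempfile.mkdtemp()
archive_path = os.path.join(tmp_dir, 'mnist.npz')
np.savez(archive_path,
         x_train=np.zeros((100, 28, 28), dtype=np.uint8),  # real file: 60000 images
         y_train=np.zeros(100, dtype=np.uint8),
         x_test=np.zeros((20, 28, 28), dtype=np.uint8),    # real file: 10000 images
         y_test=np.zeros(20, dtype=np.uint8))

# A usable mnist.npz contains exactly these four keys:
with np.load(archive_path) as f:
    print(sorted(f.files))  # ['x_test', 'x_train', 'y_test', 'y_train']
```

If the four keys print as expected, the archive is ready to be wired into Keras below.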

2. Edit mnist.py (mine is at C:\Users\LFY\envs\deep\Lib\site-packages\keras\datasets); alternatively, in PyCharm hold Ctrl and click `mnist` in the import line above to jump straight into mnist.py.
Edit the file as follows (comment out the original body and add the lines below). As you can see, the original code downloads the MNIST data from Amazon S3 by default:

def load_data(path='mnist.npz'):
    """Loads the MNIST dataset.

    # Arguments
        path: path where to cache the dataset locally
            (relative to ~/.keras/datasets).

    # Returns
        Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.
    """
    # path = get_file(path,
    #                 origin='https://s3.amazonaws.com/img-datasets/mnist.npz',
    #                 file_hash='8a61469f7ea1b51cbae51d4f78837e45')
    # with np.load(path, allow_pickle=True) as f:
    #     x_train, y_train = f['x_train'], f['y_train']
    #     x_test, y_test = f['x_test'], f['y_test']
    # return (x_train, y_train), (x_test, y_test)
    # The path below points to the local mnist.npz you downloaded. The r prefix
    # keeps the backslashes '\' from being treated as escape sequences; replacing
    # '\' with '/' works the same way without the r prefix.
    path = r'D:\Download\mnist.npz'
    with np.load(path) as f:
        x_train, y_train = f['x_train'], f['y_train']
        x_test, y_test = f['x_test'], f['y_test']
    return (x_train, y_train), (x_test, y_test)
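The patched load_data can also be exercised without Keras at all; the same pattern works as a standalone helper. A minimal sketch, with a tiny archive synthesized so the example is self-contained (in practice you would pass the path of your downloaded mnist.npz):

```python
import os
import tempfile
import numpy as np

def load_data(path):
    """Load an MNIST-style .npz archive from a local path."""
    with np.load(path) as f:
        x_train, y_train = f['x_train'], f['y_train']
        x_test, y_test = f['x_test'], f['y_test']
    return (x_train, y_train), (x_test, y_test)

# Synthesize a tiny archive so this runs without the real download;
# with the real file, call load_data(r'D:\Download\mnist.npz') instead.
path = os.path.join(tempfile.mkdtemp(), 'mnist.npz')
np.savez(path,
         x_train=np.zeros((6, 28, 28), np.uint8), y_train=np.arange(6, dtype=np.uint8),
         x_test=np.zeros((2, 28, 28), np.uint8), y_test=np.arange(2, dtype=np.uint8))

(x_train, y_train), (x_test, y_test) = load_data(path)
print(x_train.shape, y_train.shape)  # (6, 28, 28) (6,)
```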

OK, now it loads.

The tf.keras.datasets package ships several built-in datasets:

boston_housing module: Boston housing price regression dataset.
cifar10 module: CIFAR10 small images classification dataset.
cifar100 module: CIFAR100 small images classification dataset.
fashion_mnist module: Fashion-MNIST dataset.
imdb module: IMDB sentiment classification dataset.
mnist module: MNIST handwritten digits dataset.
reuters module: Reuters topic classification dataset.

Each module is loaded the same way:

import tensorflow as tf
from tensorflow import keras

fashion_mnist = keras.datasets.fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

mnist = keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

cifar100 = keras.datasets.cifar100
(x_train, y_train), (x_test, y_test) = cifar100.load_data()

cifar10 = keras.datasets.cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

imdb = keras.datasets.imdb
(x_train, y_train), (x_test, y_test) = imdb.load_data()
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the first review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in x_train[0]])
print(decoded_review)

boston_housing = keras.datasets.boston_housing
(x_train, y_train), (x_test, y_test) = boston_housing.load_data()

reuters = keras.datasets.reuters
(x_train, y_train), (x_test, y_test) = reuters.load_data()
word_index = tf.keras.datasets.reuters.get_word_index(path='reuters_word_index.json')
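With the tf.keras variants, there is also a way to use the local file without editing any library source: load_data resolves its path argument relative to the ~/.keras/datasets cache, and as far as I understand, get_file skips the download when a file with a matching hash is already cached there. So copying the downloaded archive into that directory should be enough. A sketch of the copy step; a temp directory stands in for the home directory and a synthetic file stands in for the download so the snippet runs anywhere:

```python
import os
import shutil
import tempfile
import numpy as np

# Stand-ins so the sketch is self-contained: fake_home plays the role of your
# home directory, and src plays the role of the downloaded archive
# (e.g. r'D:\Download\mnist.npz').
fake_home = tempfile.mkdtemp()
src = os.path.join(fake_home, 'mnist.npz')
np.savez(src, x_train=np.zeros((1, 28, 28), np.uint8))

# tf.keras caches datasets under ~/.keras/datasets; drop the file there
# so a later mnist.load_data() finds it instead of downloading.
cache_dir = os.path.join(fake_home, '.keras', 'datasets')
os.makedirs(cache_dir, exist_ok=True)
shutil.copy(src, os.path.join(cache_dir, 'mnist.npz'))

print(os.path.exists(os.path.join(cache_dir, 'mnist.npz')))  # True
```

Note that if the cached file's hash does not match the official archive, get_file may still attempt a re-download, so this only works with an intact copy of the original mnist.npz.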