
How to Fix Errors When Loading the MNIST Dataset in Keras

Published: 2022-06-07 14:31 | Source: 腳本之家

1. Locate the mnist.py file in your local Keras installation, for example:

F:\python_enter_anaconda510\Lib\site-packages\tensorflow\python\keras\datasets

2. Download the mnist.npz file locally from:

https://s3.amazonaws.com/img-datasets/mnist.npz

3. Replace the contents of mnist.py with the following and save:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from ..utils.data_utils import get_file
import numpy as np

def load_data(path='mnist.npz'):
    """Loads the MNIST dataset.

    # Arguments
        path: path where to cache the dataset locally
            (relative to ~/.keras/datasets).

    # Returns
        Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.
    """
    path = 'E:/Data/Mnist/mnist.npz'  # set this to where you placed mnist.npz; mind the slashes
    f = np.load(path)
    x_train, y_train = f['x_train'], f['y_train']
    x_test, y_test = f['x_test'], f['y_test']
    f.close()
    return (x_train, y_train), (x_test, y_test)
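The patched loader simply pulls four named arrays out of the .npz archive. A minimal sketch of the same access pattern, run against a tiny synthetic archive (the file name fake_mnist.npz and the shapes are illustrative stand-ins for the real mnist.npz):

```python
import numpy as np

# Build a tiny stand-in for mnist.npz with the same four keys.
np.savez('fake_mnist.npz',
         x_train=np.zeros((6, 28, 28), dtype='uint8'),
         y_train=np.arange(6, dtype='uint8'),
         x_test=np.zeros((2, 28, 28), dtype='uint8'),
         y_test=np.arange(2, dtype='uint8'))

# Same access pattern as the patched load_data().
f = np.load('fake_mnist.npz')
x_train, y_train = f['x_train'], f['y_train']
x_test, y_test = f['x_test'], f['y_test']
f.close()

print(x_train.shape, y_train.shape)  # (6, 28, 28) (6,)
```

With the real file, the shapes become (60000, 28, 28) and (10000, 28, 28) for the training and test images.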

Supplement: the Keras MNIST handwritten-digit dataset

Downloading the MNIST data

1 Import the required modules

import keras
import numpy as np
from keras.utils import np_utils
import os
from keras.datasets import mnist

2 Downloading the MNIST data for the first time

(x_train_image, y_train_label), (x_test_image, y_test_label) = mnist.load_data()

The first time mnist.load_data() is called, it checks whether the MNIST dataset file already exists under the user directory; if not, it downloads it automatically (which is why the first run is slow).
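By default the cache lives under ~/.keras/datasets. A small sketch (the exact cache layout is an assumption about the Keras version in use) to check whether the file is already present before calling load_data():

```python
import os

# Default Keras cache location for downloaded datasets (assumed layout).
cache_path = os.path.expanduser(os.path.join('~', '.keras', 'datasets', 'mnist.npz'))

print(cache_path)
print('cached already' if os.path.exists(cache_path)
      else 'will be downloaded on the first load_data() call')
```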

3 Check the downloaded MNIST data file

4 Inspect the MNIST data

print('train data = ', len(x_train_image))
print('test data = ', len(x_test_image))

Inspecting the training data

1 The training set consists of images and labels: each image is a 28 x 28 single-channel (grayscale) digit image, and each label is the decimal digit that image represents.

2 Display a digit image

import matplotlib.pyplot as plt

def plot_image(image):
    fig = plt.gcf()
    fig.set_size_inches(2, 2)  # set the figure size
    plt.imshow(image, cmap='binary')  # cmap='binary' renders the image in grayscale
    plt.show()

3 View the first item in the training data

plot_image(x_train_image[0])

View the corresponding label (ground truth)

print(y_train_label[0])

Output: 5

Viewing multiple training images and labels

Above we displayed only one image; below we display several handwritten digits at once, so the data is easier to inspect.

def plot_images_labels_prediction(images, labels, prediction, idx, num=10):
    fig = plt.gcf()
    fig.set_size_inches(12, 14)  # set the figure size
    if num > 25: num = 25
    for i in range(0, num):
        ax = plt.subplot(5, 5, 1 + i)  # 5 x 5 grid of subplots; the third argument selects the subplot
        ax.imshow(images[idx], cmap='binary')
        title = "label=" + str(labels[idx])
        if len(prediction) > 0:  # if predictions were supplied
            title += ",predict=" + str(prediction[idx])
        ax.set_title(title, fontsize=10)
        ax.set_xticks([])
        ax.set_yticks([])
        idx += 1
    plt.show()

plot_images_labels_prediction(x_train_image, y_train_label, [], 0, 10)

View the first ten handwritten digits of the test set

plot_images_labels_prediction(x_test_image, y_test_label, [], 0, 10)
 

Data preprocessing for the multilayer perceptron model

Preprocessing of the features (the digit-image values) takes two steps:

(1) reshape each 28 x 28 digit image into a one-dimensional vector of length 784 and convert it to float;

(2) normalize the image pixel values.

1 Check the shape of the images

print("x_train_image : " ,len(x_train_image) , x_train_image.shape )
print("y_train_label : ", len(y_train_label) , y_train_label.shape)
#output : 
 
x_train_image :  60000 (60000, 28, 28)
y_train_label :  60000 (60000,)

2 Reshape the images

# reshape each image into a flat 784-element vector
x_Train = x_train_image.reshape(60000, 784).astype('float32')
x_Test = x_test_image.reshape(10000, 784).astype('float32')

print('x_Train : ', x_Train.shape)
print('x_Test', x_Test.shape)

3 Normalization

Normalizing the image values improves the accuracy of the model trained later: the raw values range from 0 to 255, representing the gray level of each pixel.

# normalize to the range [0, 1]
x_Test_normalize = x_Test / 255
x_Train_normalize = x_Train / 255

4 View a normalized training image

print(x_Train_normalize[0])  # the first training digit after normalization

The output is a 784-element float vector. Most entries are 0.; where the digit's strokes pass, the values lie between 0 and 1, for example 0.21568628, 0.6745098, 0.8862745, 0.99215686, ...

Preprocessing the label data

The label field is originally a digit from 0 to 9; it must be converted by one-hot encoding into a combination of ten 0/1 values. For example, 7 becomes 0000000100, which corresponds exactly to the 10 neurons of the output layer.
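The same conversion can be sketched with plain NumPy (np.eye builds an identity matrix whose row k is the one-hot vector for digit k); np_utils.to_categorical, used below, does this for a whole label array at once:

```python
import numpy as np

def one_hot(digit, num_classes=10):
    # row `digit` of the identity matrix is the one-hot vector for that digit
    return np.eye(num_classes)[digit]

print(one_hot(7))  # [0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
```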

# one-hot encode both the training and test labels
y_TrainOneHot = np_utils.to_categorical(y_train_label)
y_TestOneHot = np_utils.to_categorical(y_test_label)
print(y_TrainOneHot[:5])  # view the first 5 labels
[[0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]  5
 [1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]  0
 [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]  4
 [0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]  1
 [0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]] 9

Recognizing MNIST handwritten digits with a Keras multilayer perceptron

1 We will build the multilayer perceptron model shown in the figure.

2 After building the model, we must train it before it can predict (recognize) handwritten digits.

The data preprocessing is already done: normalization of the input digit images and one-hot encoding of the labels.

Next we build the model: a multilayer perceptron with 784 neurons in the input layer, 256 neurons in the hidden layer, and 10 neurons in the output layer.

1 Import the required modules

from keras.models import Sequential
from keras.layers import Dense

2 Create a Sequential model

# create a Sequential model
model = Sequential()

3 Add the input layer and the hidden layer

Use the model.add() method to add a Dense layer:

model.add(Dense(units=256,
                input_dim=784,
                kernel_initializer='normal',
                activation='relu'))

Parameter  Meaning
units=256  sets the number of hidden-layer neurons to 256
input_dim=784  sets the number of input-layer neurons to 784
kernel_initializer='normal'  initializes the weights and biases with normally distributed random numbers
activation='relu'  uses relu as the activation function

4 Add the output layer

model.add(Dense(units=10,
                kernel_initializer='normal',
                activation='softmax'))

Parameter  Meaning
units=10  sets the number of output-layer neurons to 10
kernel_initializer='normal'  as above
activation='softmax'  uses softmax as the activation function

5 View a summary of the model

print(model.summary())

The parameter count of each layer is (neurons in the previous layer) x (neurons in this layer) + (neurons in this layer, for the biases).
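For the 784-256-10 model above, that formula gives the figures a model summary would report; a quick check you can run by hand:

```python
# parameters = previous-layer neurons * this-layer neurons + this-layer biases
hidden_params = 784 * 256 + 256   # input layer -> hidden layer
output_params = 256 * 10 + 10     # hidden layer -> output layer
total = hidden_params + output_params

print(hidden_params)  # 200960
print(output_params)  # 2570
print(total)          # 203530
```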

Training

1 Define how the model will be trained

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

loss: the loss function; here we use cross-entropy.

optimizer: the optimizer, which helps training converge faster.

metrics: evaluate the model by its accuracy.

2 Start training

train_history = model.fit(x=x_Train_normalize, y=y_TrainOneHot, validation_split=0.2, epochs=10, batch_size=200, verbose=2)

model.fit() performs the training, and the training history is stored in the train_history variable.

(1) Training data arguments:

x = x_Train_normalize

y = y_TrainOneHot

(2) Split between training and validation data:

validation_split=0.2 (training : validation = 8 : 2)

(3) Number of training epochs and batch size:

epochs=10, batch_size=200

(4) Display the training progress:

verbose=2
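With these settings the number of weight updates is easy to work out (assuming the 60000-image MNIST training set):

```python
train_size = 60000
validation_split = 0.2
batch_size = 200
epochs = 10

train_samples = int(train_size * (1 - validation_split))  # 48000 samples used for training
steps_per_epoch = train_samples // batch_size             # 240 batches per epoch
total_updates = steps_per_epoch * epochs                  # 2400 weight updates in total

print(train_samples, steps_per_epoch, total_updates)  # 48000 240 2400
```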

3 Define show_train_history to display the training history

def show_train_history(train_history, train, validation):
    plt.plot(train_history.history[train])
    plt.plot(train_history.history[validation])
    plt.title("Train_history")
    plt.ylabel(train)
    plt.xlabel('Epoch')
    plt.legend(['train', 'validation'], loc='upper left')
    plt.show()
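Note that the keys in train_history.history depend on the Keras version: older standalone Keras logs 'acc'/'val_acc', while tf.keras logs 'accuracy'/'val_accuracy'. A small sketch (the helper name metric_keys is mine, and the dict stands in for a real history) that picks whichever pair is present:

```python
def metric_keys(history):
    # older Keras logs 'acc'; tf.keras logs 'accuracy'
    acc = 'acc' if 'acc' in history else 'accuracy'
    return acc, 'val_' + acc

# simulated history dict, shaped like the one model.fit() returns
history = {'accuracy': [0.91, 0.95], 'val_accuracy': [0.93, 0.96], 'loss': [0.3, 0.2]}
print(metric_keys(history))  # ('accuracy', 'val_accuracy')
```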

Evaluating model accuracy on the test data

scores = model.evaluate(x_Test_normalize, y_TestOneHot)
print()
print('accuracy=', scores[1])

accuracy= 0.9769

Making predictions

Through the previous steps we built the model and trained it to an acceptable accuracy of 0.97. Now we use the model to make predictions.

1 Run prediction

prediction = model.predict_classes(x_Test)
print(prediction)

Result: [7 2 1 ... 4 5 6]
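predict_classes() was removed from newer tf.keras releases; the equivalent is to take the argmax over each row of class probabilities returned by model.predict(). A sketch of that step on simulated softmax output (the array here is a stand-in for the model's real predictions):

```python
import numpy as np

# stand-in for model.predict(x_Test): one row of class probabilities per sample
probs = np.array([[0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.60, 0.03, 0.02],
                  [0.10, 0.05, 0.70, 0.05, 0.02, 0.02, 0.02, 0.02, 0.01, 0.01]])

# index of the largest probability in each row is the predicted class
prediction = np.argmax(probs, axis=1)
print(prediction)  # [7 2]
```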

2 Display 10 prediction results

plot_images_labels_prediction(x_test_image, y_test_label, prediction, idx=340)

We can see that the first digit shown, whose label is 5, was predicted as 3.

Displaying the confusion matrix

Above, the digit 5 at index 340 of the test set was wrongly predicted as 3. To see in more detail which digits our model predicts most accurately and which ones it tends to confuse, we can build a confusion matrix.

A confusion matrix is also called an error matrix.

1 Build the confusion matrix with Pandas. The first argument of pd.crosstab supplies the rows, so the true labels form the rows and the predictions the columns:

showMetrix = pd.crosstab(y_test_label, prediction, rownames=['label'], colnames=['predict'])
print(showMetrix)

predict    0     1     2    3    4    5    6    7    8    9
label
0        971     0     1    1    1    0    2    1    3    0
1          0  1124     4    0    0    1    2    0    4    0
2          5     0  1009    2    1    0    3    4    8    0
3          0     0     5  993    0    1    0    3    4    4
4          1     0     5    1  961    0    3    0    3    8
5          3     0     0   16    1  852    7    2    8    3
6          5     3     3    1    3    3  939    0    1    0
7          0     5    13    7    1    0    0  988    5    9
8          4     0     3    7    1    1    1    2  954    1
9          3     6     0   11    7    2    1    4    4  971

2 Use a DataFrame

df = pd.DataFrame({'label': y_test_label, 'predict': prediction})
print(df)

      label  predict
0         7        7
1         2        2
2         1        1
3         0        0
4         4        4
...     ...      ...
9982      5        6
...     ...      ...
9997      4        4
9998      5        5
9999      6        6

Increasing the hidden layer to 1000 neurons

model.add(Dense(units=1000,
                input_dim=784,
                kernel_initializer='normal',
                activation='relu'))

With more hidden-layer neurons there are more parameters, so training the model becomes slower.
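Using the parameter formula from the model-summary section, the growth from 256 to 1000 hidden neurons is easy to quantify (the helper name mlp_params is mine):

```python
def mlp_params(n_in, n_hidden, n_out):
    # each Dense layer: inputs * units + units (biases)
    return (n_in * n_hidden + n_hidden) + (n_hidden * n_out + n_out)

small = mlp_params(784, 256, 10)   # 203530 parameters
large = mlp_params(784, 1000, 10)  # 795010 parameters
print(small, large)
```

Nearly four times as many parameters to update on every batch explains the slower training.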

Adding Dropout to avoid overfitting

# create a Sequential model
model = Sequential()

model.add(Dense(units=1000,
                input_dim=784,
                kernel_initializer='normal',
                activation='relu'))
model.add(Dropout(0.5))  # add Dropout
model.add(Dense(units=10,
                kernel_initializer='normal',
                activation='softmax'))

The gap between training accuracy and validation accuracy becomes smaller.

Building a multilayer perceptron with two hidden layers

# create a Sequential model
model = Sequential()
# input layer + hidden layer 1
model.add(Dense(units=1000,
                input_dim=784,
                kernel_initializer='normal',
                activation='relu'))
model.add(Dropout(0.5))  # add Dropout
# hidden layer 2
model.add(Dense(units=1000,
                kernel_initializer='normal',
                activation='relu'))
model.add(Dropout(0.5))  # add Dropout
# output layer
model.add(Dense(units=10,
                kernel_initializer='normal',
                activation='softmax'))

print(model.summary())

Full code:

import tensorflow as tf
import keras
import matplotlib.pyplot as plt
import numpy as np
from keras.utils import np_utils
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
import pandas as pd
import os

np.random.seed(10)
os.environ["CUDA_VISIBLE_DEVICES"] = "2"

(x_train_image, y_train_label), (x_test_image, y_test_label) = mnist.load_data()

# print('train data = ', len(x_train_image))
# print('test data = ', len(x_test_image))

def plot_image(image):
    fig = plt.gcf()
    fig.set_size_inches(2, 2)  # set the figure size
    plt.imshow(image, cmap='binary')  # cmap='binary' renders the image in grayscale
    plt.show()

def plot_images_labels_prediction(images, labels, prediction, idx, num=10):
    fig = plt.gcf()
    fig.set_size_inches(12, 14)
    if num > 25: num = 25
    for i in range(0, num):
        ax = plt.subplot(5, 5, 1 + i)  # 5 x 5 grid of subplots; the third argument selects the subplot
        ax.imshow(images[idx], cmap='binary')
        title = "label=" + str(labels[idx])
        if len(prediction) > 0:
            title += ",predict=" + str(prediction[idx])
        ax.set_title(title, fontsize=10)
        ax.set_xticks([])
        ax.set_yticks([])
        idx += 1
    plt.show()

def show_train_history(train_history, train, validation):
    plt.plot(train_history.history[train])
    plt.plot(train_history.history[validation])
    plt.title("Train_history")
    plt.ylabel(train)
    plt.xlabel('Epoch')
    plt.legend(['train', 'validation'], loc='upper left')
    plt.show()

# plot_images_labels_prediction(x_train_image, y_train_label, [], 0, 10)
# plot_images_labels_prediction(x_test_image, y_test_label, [], 0, 10)
print("x_train_image : ", len(x_train_image), x_train_image.shape)
print("y_train_label : ", len(y_train_label), y_train_label.shape)

# reshape each image into a flat 784-element vector
x_Train = x_train_image.reshape(60000, 784).astype('float32')
x_Test = x_test_image.reshape(10000, 784).astype('float32')

# print('x_Train : ', x_Train.shape)
# print('x_Test', x_Test.shape)
# normalize to the range [0, 1]
x_Test_normalize = x_Test / 255
x_Train_normalize = x_Train / 255

# print(x_Train_normalize[0])  # the first training digit after normalization
# one-hot encode both the training and test labels
y_TrainOneHot = np_utils.to_categorical(y_train_label)
y_TestOneHot = np_utils.to_categorical(y_test_label)
print(y_TrainOneHot[:5])  # view the first 5 labels

# create a Sequential model
model = Sequential()
# input layer + hidden layer 1
model.add(Dense(units=1000,
                input_dim=784,
                kernel_initializer='normal',
                activation='relu'))
model.add(Dropout(0.5))  # add Dropout
# hidden layer 2
model.add(Dense(units=1000,
                kernel_initializer='normal',
                activation='relu'))
model.add(Dropout(0.5))  # add Dropout
# output layer
model.add(Dense(units=10,
                kernel_initializer='normal',
                activation='softmax'))
print(model.summary())

# define how the model will be trained
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# start training
train_history = model.fit(x=x_Train_normalize, y=y_TrainOneHot, validation_split=0.2, epochs=10, batch_size=200, verbose=2)

show_train_history(train_history, 'acc', 'val_acc')
scores = model.evaluate(x_Test_normalize, y_TestOneHot)
print()
print('accuracy=', scores[1])
prediction = model.predict_classes(x_Test)
print(prediction)
plot_images_labels_prediction(x_test_image, y_test_label, prediction, idx=340)
showMetrix = pd.crosstab(y_test_label, prediction, rownames=['label'], colnames=['predict'])
print(showMetrix)
df = pd.DataFrame({'label': y_test_label, 'predict': prediction})
print(df)

# plot_image(x_train_image[0])
# print(y_train_label[0])

Code 2:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.utils import np_utils
from keras.datasets import mnist
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "2"

def load_data():
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    number = 10000  # use only the first 10000 training samples
    x_train = x_train[0:number]
    y_train = y_train[0:number]

    x_train = x_train.reshape(number, 28 * 28)
    x_test = x_test.reshape(x_test.shape[0], 28 * 28)
    x_train = x_train.astype('float32')
    x_test = x_test.astype('float32')
    y_train = np_utils.to_categorical(y_train, 10)
    y_test = np_utils.to_categorical(y_test, 10)
    x_train = x_train / 255
    x_test = x_test / 255
    return (x_train, y_train), (x_test, y_test)

(x_train, y_train), (x_test, y_test) = load_data()

model = Sequential()
model.add(Dense(input_dim=28 * 28, units=689, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=689, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=689, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=10000, epochs=20)
res1 = model.evaluate(x_train, y_train, batch_size=10000)
print("\n Train Acc :", res1[1])
res2 = model.evaluate(x_test, y_test, batch_size=10000)
print("\n Test Acc :", res2[1])

The above reflects personal experience; I hope it serves as a useful reference.
