TensorFlow Tutorial: Softmax Logistic Regression for Handwritten-Digit Recognition on the MNIST Dataset
A ten-class classification task on the MNIST dataset using a logistic-regression (Softmax) model.
A Softmax regression without hidden layers can only infer which digit an image shows directly from its raw pixels; there is no feature-abstraction step. A multi-layer neural network, in contrast, relies on its hidden layers to compose higher-order features such as horizontal strokes, vertical strokes, and loops, and can then combine these higher-order features (or components) into digits, enabling much more accurate matching and classification (a minimal sketch of such a network appears after the training results below).
import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data  # MNIST helper shipped with TF 1.x; a local input_data.py copy also works
print('Download and Extract MNIST dataset')
mnist = input_data.read_data_sets('data/', one_hot=True)  # one_hot=True: labels are one-hot encoded, e.g. digit 3 -> [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
print("tpye of 'mnist' is %s" % (type(mnist)))
print("number of train data is %d" % (mnist.train.num_examples))
print("number of test data is %d" % (mnist.test.num_examples))
trainimg = mnist.train.images
trainlabel = mnist.train.labels
testimg = mnist.test.images
testlabel = mnist.test.labels
print("MNIST loaded")
"""
print("type of 'trainimg' is %s" % (type(trainimg)))
print("type of 'trainlabel' is %s" % (type(trainlabel)))
print("type of 'testimg' is %s" % (type(testimg)))
print("type of 'testlabel' is %s"% (type(testlabel)))
print("------------------------------------------------")
print("shape of 'trainimg' is %s"% (trainimg.shape,))
print("shape of 'trainlabel' is %s" % (trainlabel.shape,))
print("shape of 'testimg' is %s" % (testimg.shape,))
print("shape of 'testlabel' is %s" % (testlabel.shape,))
"""
x = tf.placeholder(tf.float32, [None, 784])  # each image is a flattened 28*28 = 784-pixel vector
y = tf.placeholder(tf.float32, [None, 10])  # None lets the batch dimension take any size
w = tf.Variable(tf.zeros([784, 10]))  # zero-initialized for simplicity; a Gaussian initialization also works
b = tf.Variable(tf.zeros([10]))  # a ten-class task has ten labels, so only ten biases are needed
pred = tf.nn.softmax(tf.matmul(x, w) + b)  # forward pass: predicted class probabilities
cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(pred), reduction_indices=[1]))  # cross-entropy loss
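# Aside (not in the original tutorial): letting TF compute the cross-entropy from the
# raw logits is numerically stabler, since tf.log(pred) yields -inf when a predicted
# probability underflows to 0. The equivalent, stabler formulation would be:
# logits = tf.matmul(x, w) + b
# cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))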
optm = tf.train.GradientDescentOptimizer(0.01).minimize(cost)  # gradient descent with learning rate 0.01
corr = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))  # tf.equal() compares the index of the predicted maximum with the index of the true label: True if they match, False otherwise
accr = tf.reduce_mean(tf.cast(corr, tf.float32))  # cast the booleans to 0/1 and average them to get the accuracy
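# For example (hypothetical numbers): if a predicted row is [0.05, 0.80, 0.15, ...],
# tf.argmax gives 1; if the one-hot label row is [0, 1, 0, ...], tf.argmax also gives 1,
# so corr is True for that sample and it contributes 1.0 to the accuracy average.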
init = tf.global_variables_initializer()  # initializer op for all global variables
training_epochs = 100  # pass over the full training set 100 times
batch_size = 100  # 100 samples per gradient step
display_step = 5
# SESSION
sess = tf.Session()  # create a Session
sess.run(init)  # run the variable-initialization op inside the session
# MINI-BATCH LEARNING
for epoch in range(training_epochs):  # loop over epochs
    avg_cost = 0.  # running average of the loss, reset at the start of each epoch
    num_batch = int(mnist.train.num_examples / batch_size)
    for i in range(num_batch):  # loop over mini-batches
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)  # next_batch() hands out the data one batch at a time
        sess.run(optm, feed_dict={x: batch_xs, y: batch_ys})  # one gradient-descent step; x and y are fed in through the placeholders
        avg_cost += sess.run(cost, feed_dict={x: batch_xs, y: batch_ys}) / num_batch
    # DISPLAY
    if epoch % display_step == 0:  # display_step was set to 5 above, so report every 5 epochs
        train_acc = sess.run(accr, feed_dict={x: batch_xs, y: batch_ys})
        test_acc = sess.run(accr, feed_dict={x: mnist.test.images, y: mnist.test.labels})
        print("Epoch: %03d/%03d cost: %.9f TRAIN ACCURACY: %.3f TEST ACCURACY: %.3f"
              % (epoch, training_epochs, avg_cost, train_acc, test_acc))
print("DONE")
Running the model for 100 epochs, it ends up reaching about 92.2% accuracy on the test set. That is respectable, but still short of practical use. The main application of handwritten-digit recognition is reading bank checks, where an accuracy that is too low could have serious consequences.
Epoch: 095/100 cost: 0.283259882 TRAIN ACCURACY: 0.940 TEST ACCURACY: 0.922
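As noted in the introduction, hidden layers are the usual remedy, since they let the network compose higher-order features instead of working on raw pixels. The following is a minimal sketch of a one-hidden-layer variant; it is not part of the original tutorial, the 256-unit width, ReLU activation, and Gaussian initialization are assumptions, and it reuses the x placeholder from the script above (the training loop would mirror the one above, with pred_mlp in place of pred):

h_units = 256  # hypothetical hidden-layer width
w1 = tf.Variable(tf.random_normal([784, h_units], stddev=0.1))  # Gaussian init instead of zeros
b1 = tf.Variable(tf.zeros([h_units]))
w2 = tf.Variable(tf.random_normal([h_units, 10], stddev=0.1))
b2 = tf.Variable(tf.zeros([10]))
hidden = tf.nn.relu(tf.matmul(x, w1) + b1)  # hidden layer: composes higher-order features from pixels
pred_mlp = tf.nn.softmax(tf.matmul(hidden, w2) + b2)  # output layer: combines features into digit probabilities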
A few extra notes on how some of the TensorFlow functions used above behave:

sess = tf.InteractiveSession()
arr = np.array([[31, 23,  4, 24, 27, 34],
                [18,  3, 25,  0,  6, 35],
                [28, 14, 33, 22, 30,  8],
                [13, 30, 21, 19,  7,  9],
                [16,  1, 26, 32,  2, 29],
                [17, 12,  5, 11, 10, 15]])
# With an InteractiveSession, a tensor can be evaluated (and its value printed) via .eval()
tf.rank(arr).eval()       # rank (number of dimensions) of arr
tf.shape(arr).eval()      # shape of arr
tf.argmax(arr, 0).eval()  # indices of the maxima; axis 0 = per column, axis 1 = per row
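For reference, computed from the array above, these evaluate to:

# tf.rank(arr).eval()      -> 2
# tf.shape(arr).eval()     -> [6 6]
# tf.argmax(arr, 0).eval() -> [0 3 2 4 2 1]  (row index of each column's maximum)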