In the previous post in this series (Part 1), we cleaned and organized the data and saved it as 'notMNIST.pickle'.
This post shows how to build a simple neural network with TensorFlow and train it with gradient descent and stochastic gradient descent.
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
First, load the 'notMNIST.pickle' data prepared in Part 1.
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
    save = pickle.load(f)
    train_dataset = save['train_dataset']
    train_labels = save['train_labels']
    valid_dataset = save['valid_dataset']
    valid_labels = save['valid_labels']
    test_dataset = save['test_dataset']
    test_labels = save['test_labels']
    del save  # hint to help gc free up memory
    print('Training set', train_dataset.shape, train_labels.shape)
    print('Validation set', valid_dataset.shape, valid_labels.shape)
    print('Test set', test_dataset.shape, test_labels.shape)
The output:
Training set (200000, 28, 28) (200000,)
Validation set (10000, 28, 28) (10000,)
Test set (10000, 28, 28) (10000,)
The next step is to reformat the data.
Each image is flattened into a 1-D vector, so the dataset becomes a 2-D array.
The labels are one-hot encoded, so they become a 2-D array as well:
0 maps to [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
1 maps to [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
image_size = 28
num_labels = 10
def reformat(dataset, labels):
    dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)  # -1 lets numpy infer this dimension
    # Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
    labels = (np.arange(num_labels) == labels[:, None]).astype(np.float32)
    return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
The output:
Training set (200000, 784) (200000, 10)
Validation set (10000, 784) (10000, 10)
Test set (10000, 784) (10000, 10)
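The one-hot line in reformat relies on NumPy broadcasting: comparing the (num_labels,) row np.arange(num_labels) against an (n, 1) column of labels yields an (n, num_labels) boolean matrix with exactly one True per row. A quick standalone check (toy labels, for illustration only):

toy_labels = np.array([0, 1, 9])
one_hot = (np.arange(10) == toy_labels[:, None]).astype(np.float32)
print(one_hot.shape)   # (3, 10)
print(one_hot[2])      # 1.0 in position 9, zeros elsewhere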
TensorFlow works like this: first you describe your inputs, variables, and operations; together these form a computation graph. All of those definitions must go inside the graph's block, for example:

with graph.as_default():
    ...

You then execute the operations you defined with session.run(). A context manager is used to create the session, and every operation you run must likewise be invoked inside the session's block:

with tf.Session(graph=graph) as session:
    ...
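As a minimal end-to-end sketch of this pattern (a toy example of my own, using the same TF 0.x-era API as the rest of this post), here is a graph that adds two constants and a session that evaluates it:

graph = tf.Graph()
with graph.as_default():
    a = tf.constant(2.0)
    b = tf.constant(3.0)
    total = a + b                      # just a node in the graph; nothing runs yet

with tf.Session(graph=graph) as session:
    print(session.run(total))          # 5.0 -- computation happens only here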
Now we can load the data and start training.
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
    # Input data: define and load the inputs. -----------------------------------------1
    # Load the training, validation and test data into constants that are
    # attached to the graph.
    tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
    tf_train_labels = tf.constant(train_labels[:train_subset])
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)

    # Variables: the parameters (weights, biases) we are going to train. ----------------------------------------2
    # The weight matrix is initialized with random values drawn from a
    # (truncated) normal distribution; the biases are initialized to zero.
    weights = tf.Variable(tf.truncated_normal([image_size * image_size, num_labels]))  # updated during training
    biases = tf.Variable(tf.zeros([num_labels]))  # updated during training

    # tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)
    # outputs random values from a truncated normal distribution: values whose
    # magnitude is more than 2 standard deviations from the mean are dropped
    # and re-picked.
    # tf.zeros([10]) -> <tf.Tensor 'zeros:0' shape=(10,) dtype=float32>

    # Training computation. ----------------------------------------3
    # We multiply the inputs with the weight matrix and add the biases. We compute
    # the softmax and cross-entropy (it's one operation in TensorFlow, because
    # it's very common, and it can be optimized). We take the average of this
    # cross-entropy across all training examples: that's our loss.
    logits = tf.matmul(tf_train_dataset, weights) + biases  # tf.matmul: matrix multiplication
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))  # average cross-entropy loss

    # tf.nn.softmax_cross_entropy_with_logits fuses the softmax activation and
    # the cross-entropy into one numerically stable op. (tf.nn also provides the
    # other common nonlinearities: smooth ones such as `sigmoid`, `tanh`, `elu`,
    # `softplus`, and `softsign`; piecewise-linear ones such as `relu` and
    # `relu6`; and random regularization via `dropout`.)

    # tf.reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None)
    # computes the mean of elements across dimensions of a tensor.

    # Optimizer. -----------------------------------------4
    # We are going to find the minimum of this loss using gradient descent.
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)  # 0.5 is the learning rate

    # tf.train.GradientDescentOptimizer(learning_rate, use_locking=False, name='GradientDescent')

    # Predictions for the training, validation, and test data. ---------------------------------------5
    # These are not part of training, but merely here so that we can report
    # accuracy figures as we train.
    train_prediction = tf.nn.softmax(logits)
    valid_prediction = tf.nn.softmax(tf.matmul(tf_valid_dataset, weights) + biases)
    test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
    # tf.nn.softmax returns a Tensor with the same type and shape as its logits:
    # (num, 784) x (784, 10) + (10,) -> (num, 10)
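To make the loss concrete: for each example, softmax turns the logits into probabilities, cross-entropy measures how far those probabilities are from the one-hot label, and the loss is the batch average. Here is a plain-NumPy sketch of that math (an illustration only; the function name is mine, and the real TF op is a fused, optimized kernel):

def softmax_cross_entropy_numpy(logits, one_hot_labels):
    # subtract the row-wise max before exponentiating, for numerical stability
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    # per-example cross-entropy is -sum(label * log(prob)); average over the batch
    return -np.sum(one_hot_labels * np.log(probs), axis=1).mean()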
Next we run plain gradient descent and start iterating.
num_steps = 801
def accuracy(predictions, labels):
    '''Percentage of rows where the arg-max of `predictions` matches the
    arg-max of the one-hot `labels`, e.g. this pair counts as a match:
        predictions = [0.8, 0, 0, 0, 0.1, 0, 0, 0.1, 0, 0]
        labels      = [1,   0, 0, 0, 0,   0, 0, 0,   0, 0]
    '''
    return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
            / predictions.shape[0])
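A quick sanity check of accuracy on a hand-made batch (values invented for illustration):

preds = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.7, 0.1]])
labs  = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
print(accuracy(preds, labs))  # 50.0: row 0 matches, row 1 does not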
with tf.Session(graph=graph) as session:
    # This is a one-time operation which ensures the parameters get initialized
    # as we described in the graph: random weights for the matrix, zeros for
    # the biases.
    tf.initialize_all_variables().run()
    print('Initialized')
    for step in range(num_steps):
        # Run the computations. We tell .run() that we want to run the optimizer,
        # and get the loss value and the training predictions returned as numpy
        # arrays.
        _, l, predictions = session.run([optimizer, loss, train_prediction])
        if (step % 100 == 0):
            print('Loss at step %d: %f' % (step, l))
            print('Training accuracy: %.1f%%' % accuracy(
                predictions, train_labels[:train_subset, :]))
            # Calling .eval() on valid_prediction is basically like calling run(),
            # but just to get that one numpy array. Note that it recomputes all
            # its graph dependencies.
            print('Validation accuracy: %.1f%%' % accuracy(valid_prediction.eval(), valid_labels))
    print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
The output:
Initialized
Loss at step 0: 17.639723
Training accuracy: 8.9%
Validation accuracy: 11.4%
Loss at step 100: 2.268863
Training accuracy: 71.8%
Validation accuracy: 70.8%
Loss at step 200: 1.818829
Training accuracy: 74.9%
Validation accuracy: 73.6%
Loss at step 300: 1.580101
Training accuracy: 76.5%
Validation accuracy: 74.5%
Loss at step 400: 1.419103
Training accuracy: 77.1%
Validation accuracy: 75.1%
Loss at step 500: 1.299344
Training accuracy: 77.7%
Validation accuracy: 75.3%
Loss at step 600: 1.205005
Training accuracy: 78.3%
Validation accuracy: 75.3%
Loss at step 700: 1.127984
Training accuracy: 78.8%
Validation accuracy: 75.5%
Loss at step 800: 1.063572
Training accuracy: 79.3%
Validation accuracy: 75.7%
Test accuracy: 82.6%
Next, we can train with a faster method: stochastic gradient descent (SGD).
The graph definition is similar to before; the difference is that the training data is fed in small batches.
So instead of baking the training data into the graph as constants, we define placeholders that receive a fresh minibatch each time session.run() is called.
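To see the placeholder/feed_dict mechanism in isolation, here is a minimal sketch (a toy example of my own, same TF 0.x-era API): the placeholder fixes only the shape and dtype, and the actual values arrive through feed_dict at run time.

graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, shape=(2,))
    doubled = x * 2.0

with tf.Session(graph=graph) as session:
    print(session.run(doubled, feed_dict={x: np.array([1.0, 2.0])}))  # [2. 4.]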
batch_size = 128
graph = tf.Graph()
with graph.as_default():
    # Input data. For the training data, we use a placeholder that will be fed ----------------------------------------1
    # at run time with a training minibatch. This only reserves space; the
    # actual values are supplied when the session runs.
    tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)

    # Variables. ------------------------------------------2
    weights = tf.Variable(tf.truncated_normal([image_size * image_size, num_labels]))
    biases = tf.Variable(tf.zeros([num_labels]))

    # Training computation. ------------------------------------------3
    logits = tf.matmul(tf_train_dataset, weights) + biases
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))

    # Optimizer. -------------------------------------------4
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    # Predictions for the training, validation, and test data. --------------------------------------------5
    train_prediction = tf.nn.softmax(logits)
    valid_prediction = tf.nn.softmax(tf.matmul(tf_valid_dataset, weights) + biases)
    test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
The corresponding training code:

num_steps = 3001
with tf.Session(graph=graph) as session:
    tf.initialize_all_variables().run()
    print("Initialized")
    for step in range(num_steps):
        # Pick an offset within the training data, which has been randomized.
        # Note: we could use better randomization across epochs.
        offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
        # Generate a minibatch.
        batch_data = train_dataset[offset:(offset + batch_size), :]
        batch_labels = train_labels[offset:(offset + batch_size), :]
        # Prepare a dictionary telling the session where to feed the minibatch.
        # The key of the dictionary is the placeholder node of the graph to be
        # fed, and the value is the numpy array to feed to it.
        feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
        _, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
        if (step % 500 == 0):
            print("Minibatch loss at step %d: %f" % (step, l))
            print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
            print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
    print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
The output:

Initialized
Minibatch loss at step 0: 16.076256
Minibatch accuracy: 14.1%
Validation accuracy: 17.9%
Minibatch loss at step 500: 1.690020
Minibatch accuracy: 72.7%
Validation accuracy: 75.1%
Minibatch loss at step 1000: 1.430756
Minibatch accuracy: 77.3%
Validation accuracy: 76.1%
Minibatch loss at step 1500: 1.065795
Minibatch accuracy: 81.2%
Validation accuracy: 77.0%
Minibatch loss at step 2000: 1.248749
Minibatch accuracy: 75.0%
Validation accuracy: 77.3%
Minibatch loss at step 2500: 0.934266
Minibatch accuracy: 81.2%
Validation accuracy: 78.1%
Minibatch loss at step 3000: 1.047278
Minibatch accuracy: 76.6%
Validation accuracy: 78.4%
Test accuracy: 85.4%
Of course, the results can be improved further. Next we add a hidden layer of 1024 ReLU units between the input and the output layer.
batch_size = 128
hidden_layer_node_num = 1024
graph = tf.Graph()
with graph.as_default():
    # Input data. -----------------------------------------1
    tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)

    # Variables. ------------------------------------------2
    weights1 = tf.Variable(tf.truncated_normal([image_size * image_size, hidden_layer_node_num]))
    biases1 = tf.Variable(tf.zeros([hidden_layer_node_num]))
    # hidden layer output: (batch_size, hidden_layer_node_num)
    weights2 = tf.Variable(tf.truncated_normal([hidden_layer_node_num, num_labels]))
    biases2 = tf.Variable(tf.zeros([num_labels]))

    # Training computation. ------------------------------------------3
    logits = tf.matmul(tf.nn.relu(tf.matmul(tf_train_dataset, weights1) + biases1), weights2) + biases2
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))

    # Optimizer. -------------------------------------------4
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    # Predictions for the training, validation, and test data. --------------------------------------------5
    train_prediction = tf.nn.softmax(logits)
    valid_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset, weights1) + biases1), weights2) + biases2)
    test_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, weights1) + biases1), weights2) + biases2)
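To double-check the shapes flowing through the two layers, here is a quick NumPy sketch with random data (a shape check only; biases omitted):

x = np.random.rand(128, 784)                         # one fake minibatch
h = np.maximum(x.dot(np.random.rand(784, 1024)), 0)  # relu: (128, 784) x (784, 1024) -> (128, 1024)
out = h.dot(np.random.rand(1024, 10))                # (128, 1024) x (1024, 10) -> (128, 10)
print(h.shape, out.shape)                            # (128, 1024) (128, 10)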
num_steps = 3001
with tf.Session(graph=graph) as session:
    tf.initialize_all_variables().run()
    print("Initialized")
    for step in range(num_steps):
        # Pick an offset within the training data, which has been randomized.
        # Note: we could use better randomization across epochs.
        offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
        # Generate a minibatch.
        batch_data = train_dataset[offset:(offset + batch_size), :]
        batch_labels = train_labels[offset:(offset + batch_size), :]
        # Prepare a dictionary telling the session where to feed the minibatch.
        # The key of the dictionary is the placeholder node of the graph to be
        # fed, and the value is the numpy array to feed to it.
        feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
        _, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
        if (step % 500 == 0):
            print("Minibatch loss at step %d: %f" % (step, l))
            print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
            print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
    print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
The output:

Initialized
Minibatch loss at step 0: 379.534973
Minibatch accuracy: 8.6%
Validation accuracy: 21.7%
Minibatch loss at step 500: 12.951815
Minibatch accuracy: 86.7%
Validation accuracy: 80.8%
Minibatch loss at step 1000: 9.569818
Minibatch accuracy: 82.8%
Validation accuracy: 80.9%
Minibatch loss at step 1500: 7.165316
Minibatch accuracy: 84.4%
Validation accuracy: 78.8%
Minibatch loss at step 2000: 10.387121
Minibatch accuracy: 78.9%
Validation accuracy: 80.8%
Minibatch loss at step 2500: 3.324355
Minibatch accuracy: 80.5%
Validation accuracy: 80.8%
Minibatch loss at step 3000: 4.396149
Minibatch accuracy: 89.8%
Validation accuracy: 81.3%
Test accuracy: 88.9%
The test accuracy reaches 88.9%, a clear improvement over the single-layer model.
And with that, a simple neural network is up and running.