Softmax Assignment
How Softmax Works
The softmax classifier (multinomial logistic regression) models the class probabilities as
$$P(Y=k \mid X=x_i)=\frac{e^{s_k}}{\sum_j e^{s_j}}, \qquad s=f(x_i;W).$$
The more accurate the prediction, the larger $P(Y=y_i \mid X=x_i)$ becomes, so the per-example loss is the negative log-likelihood:
$$L_i=-\log P(Y=y_i \mid X=x_i)=-\log\frac{e^{s_{y_i}}}{\sum_j e^{s_j}}=-s_{y_i}+\log\sum_j e^{s_j}.$$
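A standard fact worth recalling here (it is why the assignment's TODO comments warn about numeric instability): the softmax is unchanged if a constant is subtracted from every score, so in practice the maximum score is subtracted before exponentiating. With $c=\max_j s_j$,
$$\frac{e^{s_k-c}}{\sum_j e^{s_j-c}}=\frac{e^{-c}\,e^{s_k}}{e^{-c}\sum_j e^{s_j}}=\frac{e^{s_k}}{\sum_j e^{s_j}},\qquad L_i=-(s_{y_i}-c)+\log\sum_j e^{s_j-c},$$
which avoids overflow in $e^{s_j}$. (The implementations below follow the assignment template and skip this shift; a stable variant is sketched after the vectorized version.)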
Gradient: differentiating the last expression above with respect to the scores gives $\frac{\partial L_i}{\partial s_j}=p_j-\mathbb{1}[j=y_i]$, where $p_j=\frac{e^{s_j}}{\sum_k e^{s_k}}$ is the predicted probability of class $j$ (the variable `margin` in the code below). With the linear score function $s=x_iW$, the chain rule then gives, for column $j$ of $W$:

- when $j=y_i$: $\nabla_{W_{:,j}}L_i=(p_j-1)\,x_i$, i.e. `dW[:, j] += (-1 + margin) * X[i]`
- when $j\ne y_i$: $\nabla_{W_{:,j}}L_i=p_j\,x_i$, i.e. `dW[:, j] += margin * X[i]`

where `margin = np.exp(scores[j]) / np.sum(np.exp(scores))`. A quick numerical check of these formulas is sketched below.
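The check itself is a minimal sketch (the toy dimensions, data, and names are made up for illustration, not part of the assignment files): it compares the analytic per-example gradient against central finite differences.

```python
import numpy as np

# Toy check of the gradient formulas above on a single example.
np.random.seed(0)
D, C = 5, 3                      # feature dimension, number of classes
x = np.random.randn(D)
W = np.random.randn(D, C) * 0.01
y = 1                            # true label of this example

def loss_fn(W):
    s = x.dot(W)
    return -s[y] + np.log(np.sum(np.exp(s)))

# Analytic gradient: column j of dW is (p_j - 1[j == y]) * x
s = x.dot(W)
p = np.exp(s) / np.sum(np.exp(s))
p[y] -= 1.0
dW_analytic = np.outer(x, p)

# Numerical gradient via central differences
h = 1e-5
dW_numeric = np.zeros_like(W)
for i in range(D):
    for j in range(C):
        W[i, j] += h;      loss_plus = loss_fn(W)
        W[i, j] -= 2 * h;  loss_minus = loss_fn(W)
        W[i, j] += h                      # restore W
        dW_numeric[i, j] = (loss_plus - loss_minus) / (2 * h)

print(np.max(np.abs(dW_analytic - dW_numeric)))   # should be ~1e-9 or smaller
```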
Assignment Implementation
softmax.py
softmax_loss_naive: first, a loop-based implementation of the softmax loss and its gradient.
```python
def softmax_loss_naive(W, X, y, reg):
    """
    Softmax loss function, naive implementation (with loops)

    Inputs have dimension D, there are C classes, and we operate on minibatches
    of N examples.

    Inputs:
    - W: A numpy array of shape (D, C) containing weights.
    - X: A numpy array of shape (N, D) containing a minibatch of data.
    - y: A numpy array of shape (N,) containing training labels; y[i] = c means
      that X[i] has label c, where 0 <= c < C.
    - reg: (float) regularization strength

    Returns a tuple of:
    - loss as single float
    - gradient with respect to weights W; an array of same shape as W
    """
    # Initialize the loss and gradient to zero.
    loss = 0.0
    dW = np.zeros_like(W)

    #############################################################################
    # TODO: Compute the softmax loss and its gradient using explicit loops.     #
    # Store the loss in loss and the gradient in dW. If you are not careful     #
    # here, it is easy to run into numeric instability. Don't forget the        #
    # regularization!                                                           #
    #############################################################################
    num_classes = W.shape[1]
    num_train = X.shape[0]
    for i in range(num_train):
        scores = X[i].dot(W)
        loss += -scores[y[i]] + np.log(np.sum(np.exp(scores)))
        for j in range(num_classes):
            margin = np.exp(scores[j]) / np.sum(np.exp(scores))
            if j == y[i]:
                dW[:, j] += (-1 + margin) * X[i]
            else:
                dW[:, j] += margin * X[i]
    loss /= num_train
    loss += reg * np.sum(W * W)
    dW = dW / num_train + 2 * reg * W
    #############################################################################
    #                          END OF YOUR CODE                                 #
    #############################################################################

    return loss, dW
```
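A quick standalone sanity check of this function (random stand-in data here; the notebook runs the equivalent check on its CIFAR-10 `X_dev` / `y_dev` split): with tiny random weights the scores are almost equal, so the softmax is roughly uniform over the 10 classes and the unregularized loss should land near $-\log(0.1)\approx 2.3$.

```python
import numpy as np

# Tiny random weights -> near-uniform softmax -> loss ~ -log(1/10).
N, D, C = 500, 3073, 10
X_fake = np.random.randn(N, D)           # stand-in data, not CIFAR-10
y_fake = np.random.randint(C, size=N)
W = np.random.randn(D, C) * 0.0001

loss, grad = softmax_loss_naive(W, X_fake, y_fake, 0.0)
print('loss: %f   (sanity check: %f)' % (loss, -np.log(0.1)))
```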
softmax_loss_vectorized(W, X, y, reg): the same loss and gradient computed directly with matrix operations, without explicit loops.
```python
def softmax_loss_vectorized(W, X, y, reg):
    """
    Softmax loss function, vectorized version.

    Inputs and outputs are the same as softmax_loss_naive.
    """
    # Initialize the loss and gradient to zero.
    loss = 0.0
    dW = np.zeros_like(W)

    #############################################################################
    # TODO: Compute the softmax loss and its gradient using no explicit loops.  #
    # Store the loss in loss and the gradient in dW. If you are not careful     #
    # here, it is easy to run into numeric instability. Don't forget the        #
    # regularization!                                                           #
    #############################################################################
    num_classes = W.shape[1]
    num_train = X.shape[0]

    scores = X.dot(W)
    softmax_output = np.exp(scores) / np.sum(np.exp(scores), axis=1).reshape(-1, 1)
    loss = -np.sum(np.log(softmax_output[range(num_train), list(y)])) / num_train
    loss += reg * np.sum(W * W)

    dSoftmax = softmax_output.copy()
    dSoftmax[range(num_train), list(y)] += -1
    dW = (X.T).dot(dSoftmax)
    dW = dW / num_train + 2 * reg * W  # d(reg * sum(W*W))/dW = 2 * reg * W, matching the naive version
    #############################################################################
    #                          END OF YOUR CODE                                 #
    #############################################################################

    return loss, dW
```
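Neither implementation shifts the scores before exponentiating, which is exactly what the TODO comments warn about: a large positive score would overflow `np.exp`. A minimal sketch of a stable variant (my own naming, not part of the assignment files), using the shift-invariance shown earlier:

```python
import numpy as np

def softmax_loss_vectorized_stable(W, X, y, reg):
    """Same computation as softmax_loss_vectorized, but the row-wise max is
    subtracted from the scores before exponentiating, so np.exp never sees
    large positive values (the shift leaves the softmax unchanged)."""
    num_train = X.shape[0]

    scores = X.dot(W)
    scores -= np.max(scores, axis=1, keepdims=True)   # numerical stability
    probs = np.exp(scores) / np.sum(np.exp(scores), axis=1, keepdims=True)

    loss = -np.sum(np.log(probs[np.arange(num_train), y])) / num_train
    loss += reg * np.sum(W * W)

    dscores = probs.copy()
    dscores[np.arange(num_train), y] -= 1
    dW = X.T.dot(dscores) / num_train + 2 * reg * W
    return loss, dW
```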
Softmax.ipynb
The notebook follows the same structure as the SVM one.
First, the notebook sanity-checks the loss value returned by softmax_loss_naive (with random weights it should come out close to -log(0.1); see Inline Question 1 below).
Next, it checks the analytic gradient of softmax_loss_naive numerically.
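That cell looks roughly like the sketch below (it assumes the notebook's `X_dev` / `y_dev` arrays and the course-provided `grad_check_sparse` helper):

```python
import numpy as np
from cs231n.gradient_check import grad_check_sparse

# Spot-check a few random entries of the analytic gradient against numerical
# estimates, first without and then with regularization.
W = np.random.randn(3073, 10) * 0.0001

loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_check_sparse(f, W, grad, 10)

loss, grad = softmax_loss_naive(W, X_dev, y_dev, 5e1)
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 5e1)[0]
grad_check_sparse(f, W, grad, 10)
```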
Then it compares the run time and outputs of the naive and vectorized implementations.
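A sketch of that comparison (again assuming the notebook's `X_dev` / `y_dev` arrays are in scope):

```python
import time
import numpy as np

# Time both implementations and confirm they agree on the loss and gradient.
W = np.random.randn(3073, 10) * 0.0001

tic = time.time()
loss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 5e-6)
print('naive loss: %e computed in %fs' % (loss_naive, time.time() - tic))

tic = time.time()
loss_vec, grad_vec = softmax_loss_vectorized(W, X_dev, y_dev, 5e-6)
print('vectorized loss: %e computed in %fs' % (loss_vec, time.time() - tic))

print('loss difference: %f' % np.abs(loss_naive - loss_vec))
print('gradient difference: %f' % np.linalg.norm(grad_naive - grad_vec, ord='fro'))
```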
Finally, the hyperparameters are tuned on the validation set, using learning_rates = [1e-7, 5e-7] and regularization_strengths = [2.5e4, 5e4].
```python
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of over 0.35 on the validation set.

from cs231n.classifiers import Softmax
results = {}
best_val = -1
best_softmax = None

learning_rates = [1e-7, 5e-7]
regularization_strengths = [2.5e4, 5e4]

################################################################################
# TODO:                                                                        #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save    #
# the best trained softmax classifer in best_softmax.                          #
################################################################################
range_lr = np.linspace(learning_rates[0], learning_rates[1], 3)
range_reg = np.linspace(regularization_strengths[0], regularization_strengths[1], 3)

for cur_lr in range_lr:
    for cur_reg in range_reg:
        softmax = Softmax()
        softmax.train(X_train, y_train, learning_rate=cur_lr, reg=cur_reg,
                      num_iters=1500, verbose=False)

        y_train_pred = softmax.predict(X_train)
        train_acc = np.mean(y_train == y_train_pred)
        y_val_pred = softmax.predict(X_val)
        val_acc = np.mean(y_val == y_val_pred)

        results[(cur_lr, cur_reg)] = (train_acc, val_acc)
        if val_acc > best_val:
            best_val = val_acc
            best_softmax = softmax
################################################################################
#                              END OF YOUR CODE                                #
################################################################################

# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
                lr, reg, train_accuracy, val_accuracy))

print('best validation accuracy achieved during cross-validation: %f' % best_val)
```
The per-setting accuracies and the best validation accuracy are printed at the end of the run.
Inline Question 1:
Why do we expect our loss to be close to -log(0.1)? Explain briefly.
Your answer: There are 10 classes in total. With small random initial weights the scores are all close to each other, so the predicted distribution is roughly uniform and each class, including the correct one, gets probability about 1/10; the loss is therefore about $-\log(1/10)\approx 2.3$.