My walk through the Caffe source closely follows Dr. Zhang Bo's article at http://blog.csdn.net/xizero00/article/details/50914471. I greatly admire the care he put into it, and some of the explanations below are borrowed from him (typed up by hand after reading and checking them myself). If any of this raises a copyright concern, please contact me and I will remove the relevant parts.
Layer defines the basic operations of a layer: setting it up, the forward pass, and the backward pass. The forward pass computes the top blobs from the bottom blobs; the backward pass propagates gradients from the top blobs back to the bottom blobs. A layer can also compute a loss during the forward pass; every layer has this ability, but in practice usually only the last layer does. When a layer provides no GPU implementation of its forward or backward pass, the Layer base class automatically falls back to the CPU implementation. (The original article shows a classic diagram of the Layer class hierarchy at this point.)
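Before diving into the header, here is a rough sketch of how a layer is driven from outside (hypothetical driver code written only for illustration; the real orchestration lives in Net<Dtype>, and the function name is invented):

#include <vector>
#include "caffe/blob.hpp"
#include "caffe/layer.hpp"

// Hypothetical driver: set a layer up, then run one forward/backward pass.
template <typename Dtype>
Dtype RunLayerOnce(caffe::Layer<Dtype>& layer,
                   const std::vector<caffe::Blob<Dtype>*>& bottom,
                   const std::vector<caffe::Blob<Dtype>*>& top) {
  // SetUp = CheckBlobCounts + LayerSetUp + Reshape + SetLossWeights.
  layer.SetUp(bottom, top);
  // Forward: bottom data -> top data, returning the weighted loss (if any).
  Dtype loss = layer.Forward(bottom, top);
  // Backward: top diffs -> bottom diffs for every bottom we want gradients for.
  std::vector<bool> propagate_down(bottom.size(), true);
  layer.Backward(top, propagate_down, bottom);
  return loss;
}

In a real net, SetUp is called exactly once during initialization; only Forward and Backward run on every iteration.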
Layer.hpp
#ifndef CAFFE_LAYER_H_
#define CAFFE_LAYER_H_
#include <algorithm>
#include <string>
#include <vector>
#include "caffe/blob.hpp"
#include "caffe/common.hpp"
#include "caffe/layer_factory.hpp"
#include "caffe/proto/caffe.pb.h"
#include "caffe/util/math_functions.hpp"
/**
Forward declare boost::thread instead of including boost/thread.hpp
to avoid a boost/NVCC issues (#1009, #1010) on OSX.
*/
namespace boost { class mutex; }
namespace caffe {
/**
* @brief An interface for the units of computation which can be composed into a
* Net.
*
* Layer%s must implement a Forward function, in which they take their input
* (bottom) Blob%s (if any) and compute their output Blob%s (if any).
* They may also implement a Backward function, in which they compute the error
* gradients with respect to their input Blob%s, given the error gradients with
* their output Blob%s.
*/
template <typename Dtype>
class Layer {
public:
/**
* You should not implement your own constructor. Any set up code should go
* to SetUp(), where the dimensions of the bottom blobs are provided to the
* layer.
*/
/*
The constructor copies the layer parameters and initializes is_shared_ to false
(only data layers are later shared among multiple nets).
blobs_ is declared as vector<shared_ptr<Blob<Dtype> > > blobs_;
i.e. a container of shared pointers to Blob.
*/
explicit Layer(const LayerParameter& param)
: layer_param_(param), is_shared_(false) {
// Set phase and copy blobs (if there are any).
// TRAIN or TEST phase?
phase_ = param.phase();
if (layer_param_.blobs_size() > 0) {
// Resize blobs_ to the number of blobs stored in the layer parameter
blobs_.resize(layer_param_.blobs_size());
for (int i = 0; i < layer_param_.blobs_size(); ++i) {
// Allocate a new Blob
blobs_[i].reset(new Blob<Dtype>());
// Fill it from the corresponding BlobProto in the layer parameter
blobs_[i]->FromProto(layer_param_.blobs(i));
}
}
}
virtual ~Layer() {}
/**
* @brief Implements common layer setup functionality.
*
* @param bottom the preshaped input blobs
* @param top
* the allocated but unshaped output blobs, to be shaped by Reshape
*
* Checks that the number of bottom and top blobs is correct.
* Calls LayerSetUp to do special layer setup for individual layer types,
* followed by Reshape to set up sizes of top blobs and internal buffers.
* Sets up the loss weight multiplier blobs for any non-zero loss weights.
* This method may not be overridden.
*/
//SetUp initializes the layer's mutex, checks the bottom/top blob counts, and calls LayerSetUp for layer-specific initialization
//LayerSetUp is a virtual function that subclasses may override
//It then shapes the top blobs and sets up the loss weights
void SetUp(const vector<Blob<Dtype>*>& bottom,
const vector<Blob<Dtype>*>& top) {
//Initialize the mutex
InitMutex();
//Check the bottom/top blob counts
CheckBlobCounts(bottom, top);
//Layer-specific setup (virtual; each layer implements its own initialization)
LayerSetUp(bottom, top);
//Shape the top blobs (pure virtual; each layer must derive the top shapes from the bottom blobs)
Reshape(bottom, top);
//Set up the loss weights
SetLossWeights(top);
}
/**
* @brief Does layer-specific setup: your layer should implement this function
* as well as Reshape.
*
* @param bottom
* the preshaped input blobs, whose data fields store the input data for
* this layer
* @param top
* the allocated but unshaped output blobs
*
* This method should do one-time layer specific setup. This includes reading
* and processing relevent parameters from the <code>layer_param_</code>.
* Setting up the shapes of top blobs and internal buffers should be done in
* <code>Reshape</code>, which will be called before the forward pass to
* adjust the top blob sizes.
*/
//Virtual function; your layer should implement it (the default does nothing)
virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,
const vector<Blob<Dtype>*>& top) {}
/**
* @brief Whether a layer should be shared by multiple nets during data
* parallelism. By default, all layers except for data layers should
* not be shared. data layers should be shared to ensure each worker
* solver access data sequentially during data parallelism.
*/
//Whether this layer may be shared by multiple nets during data-parallel training
//By default only data layers may be shared; all other layers may not
//Data layers should be shared so that each worker solver reads the data sequentially during data parallelism
virtual inline bool ShareInParallel() const { return false; }
/** @brief Return whether this layer is actually shared by other nets.
* If ShareInParallel() is true and using more than one GPU and the
* net has TRAIN phase, then this function is expected return true.
*/
//Returns whether this layer is actually shared by other nets (i.e. whether it is running data-parallel)
inline bool IsShared() const { return is_shared_; }
/** @brief Set whether this layer is actually shared by other nets
* If ShareInParallel() is true and using more than one GPU and the
* net has TRAIN phase, then is_shared should be set true.
*/
//Set whether this layer is shared
inline void SetShared(bool is_shared) {
CHECK(ShareInParallel() || !is_shared)
<< type() << "Layer does not support sharing.";
is_shared_ = is_shared;
}
/**
* @brief Adjust the shapes of top blobs and internal buffers to accommodate
* the shapes of the bottom blobs.
*
* @param bottom the input blobs, with the requested input shapes
* @param top the top blobs, which should be reshaped as needed
*
* This method should reshape top blobs as needed according to the shapes
* of the bottom (input) blobs, as well as reshaping any internal buffers
* and making any other necessary adjustments so that the layer can
* accommodate the bottom blobs.
*/
//Pure virtual function (Reshape must be implemented)
virtual void Reshape(const vector<Blob<Dtype>*>& bottom,
const vector<Blob<Dtype>*>& top) = 0;
/**
* @brief Given the bottom blobs, compute the top blobs and the loss.
*
* @param bottom
* the input blobs, whose data fields store the input data for this layer
* @param top
* the preshaped output blobs, whose data fields will store this layers'
* outputs
* \return The total loss from the layer.
*
* The Forward wrapper calls the relevant device wrapper function
* (Forward_cpu or Forward_gpu) to compute the top blob values given the
* bottom blobs. If the layer has any non-zero loss_weights, the wrapper
* then computes and returns the loss.
*
* Your layer should implement Forward_cpu and (optionally) Forward_gpu.
*/
//Forward propagation wrapper
//Computes the top blobs from the bottom blobs
inline Dtype Forward(const vector<Blob<Dtype>*>& bottom,
const vector<Blob<Dtype>*>& top);
/**
* @brief Given the top blob error gradients, compute the bottom blob error
* gradients.
*
* @param top
* the output blobs, whose diff fields store the gradient of the error
* with respect to themselves
* @param propagate_down
* a vector with equal length to bottom, with each index indicating
* whether to propagate the error gradients down to the bottom blob at
* the corresponding index
* @param bottom
* the input blobs, whose diff fields will store the gradient of the error
* with respect to themselves after Backward is run
*
* The Backward wrapper calls the relevant device wrapper function
* (Backward_cpu or Backward_gpu) to compute the bottom blob diffs given the
* top blob diffs.
*
* Your layer should implement Backward_cpu and (optionally) Backward_gpu.
*/
//Backward propagation wrapper
//Takes the top blobs and propagate_down as input
//and writes the gradients into the bottom blobs
inline void Backward(const vector<Blob<Dtype>*>& top,
const vector<bool>& propagate_down,
const vector<Blob<Dtype>*>& bottom);
/**
* @brief Returns the vector of learnable parameter blobs.
*/
vector<shared_ptr<Blob<Dtype> > >& blobs() {
return blobs_;
}
/**
* @brief Returns the layer parameter.
*/
const LayerParameter& layer_param() const { return layer_param_; }
/**
* @brief Writes the layer parameter to a protocol buffer
*/
//Write the layer parameters to a protocol buffer
virtual void ToProto(LayerParameter* param, bool write_diff = false);
/**
* @brief Returns the scalar loss associated with a top blob at a given index.
*/
//Returns the scalar loss associated with the top blob at the given index
inline Dtype loss(const int top_index) const {
return (loss_.size() > top_index) ? loss_[top_index] : Dtype(0);
}
/**
* @brief Sets the loss associated with a top blob at a given index.
*/
//Sets the loss associated with the top blob at the given index
inline void set_loss(const int top_index, const Dtype value) {
if (loss_.size() <= top_index) {
loss_.resize(top_index + 1, Dtype(0));
}
loss_[top_index] = value;
}
/**
* @brief Returns the layer type.
*/
//Virtual (and inline); returns the layer type as a C string
virtual inline const char* type() const { return ""; }
/**
* @brief Returns the exact number of bottom blobs required by the layer,
* or -1 if no exact number is required.
*
* This method should be overridden to return a non-negative value if your
* layer expects some exact number of bottom blobs.
*/
//Virtual; the exact number of bottom blobs this layer requires
virtual inline int ExactNumBottomBlobs() const { return -1; }
/**
* @brief Returns the minimum number of bottom blobs required by the layer,
* or -1 if no minimum number is required.
*
* This method should be overridden to return a non-negative value if your
* layer expects some minimum number of bottom blobs.
*/
//Virtual; the minimum number of bottom blobs this layer requires
virtual inline int MinBottomBlobs() const { return -1; }
/**
* @brief Returns the maximum number of bottom blobs required by the layer,
* or -1 if no maximum number is required.
*
* This method should be overridden to return a non-negative value if your
* layer expects some maximum number of bottom blobs.
*/
//Virtual; the maximum number of bottom blobs this layer accepts
virtual inline int MaxBottomBlobs() const { return -1; }
/**
* @brief Returns the exact number of top blobs required by the layer,
* or -1 if no exact number is required.
*
* This method should be overridden to return a non-negative value if your
* layer expects some exact number of top blobs.
*/
//Virtual; the exact number of top blobs this layer requires
virtual inline int ExactNumTopBlobs() const { return -1; }
/**
* @brief Returns the minimum number of top blobs required by the layer,
* or -1 if no minimum number is required.
*
* This method should be overridden to return a non-negative value if your
* layer expects some minimum number of top blobs.
*/
//Virtual; the minimum number of top blobs this layer requires
virtual inline int MinTopBlobs() const { return -1; }
/**
* @brief Returns the maximum number of top blobs required by the layer,
* or -1 if no maximum number is required.
*
* This method should be overridden to return a non-negative value if your
* layer expects some maximum number of top blobs.
*/
//Virtual; the maximum number of top blobs this layer produces
virtual inline int MaxTopBlobs() const { return -1; }
/**
* @brief Returns true if the layer requires an equal number of bottom and
* top blobs.
*
* This method should be overridden to return true if your layer expects an
* equal number of bottom and top blobs.
*/
//Virtual; whether the layer requires equal numbers of bottom and top blobs
virtual inline bool EqualNumBottomTopBlobs() const { return false; }
/**
* @brief Return whether "anonymous" top blobs are created automatically
* by the layer.
*
* If this method returns true, Net::Init will create enough "anonymous" top
* blobs to fulfill the requirement specified by ExactNumTopBlobs() or
* MinTopBlobs().
*/
//Returns whether the layer creates anonymous top blobs automatically
//If it returns true, Net::Init creates enough anonymous top blobs
//to satisfy the count required by ExactNumTopBlobs() or MinTopBlobs()
virtual inline bool AutoTopBlobs() const { return false; }
/**
* @brief Return whether to allow force_backward for a given bottom blob
* index.
*
* If AllowForceBackward(i) == false, we will ignore the force_backward
* setting and backpropagate to blob i only if it needs gradient information
* (as is done when force_backward == false).
*/
//For a given bottom blob index, returns whether force_backward is allowed
virtual inline bool AllowForceBackward(const int bottom_index) const {
return true;
}
/**
* @brief Specifies whether the layer should compute gradients w.r.t. a
* parameter at a particular index given by param_id.
*
* You can safely ignore false values and always compute gradients
* for all parameters, but possibly with wasteful computation.
*/
//Given param_id, returns whether the gradient for that parameter should be computed
inline bool param_propagate_down(const int param_id) {
return (param_propagate_down_.size() > param_id) ?
param_propagate_down_[param_id] : false;
}
/**
* @brief Sets whether the layer should compute gradients w.r.t. a
* parameter at a particular index given by param_id.
*/
//Given param_id, sets whether the gradient for that parameter should be computed
inline void set_param_propagate_down(const int param_id, const bool value) {
if (param_propagate_down_.size() <= param_id) {
param_propagate_down_.resize(param_id + 1, true);
}
param_propagate_down_[param_id] = value;
}
inline Phase phase() { return phase_; }
/**
* @brief set phase
* enable train and test with one network, for saving memory
*/
virtual inline void set_phase(Phase phase) {
phase_ = phase;
}
//Protected member variables
protected:
/** The protobuf that stores the layer parameters */
//The layer's parameters
LayerParameter layer_param_;
/** The phase: TRAIN or TEST */
//TRAIN or TEST
Phase phase_;
/** The vector that stores the learnable parameters as a set of blobs. */
//blobs_ is a container of shared pointers to the learnable parameter blobs
vector<shared_ptr<Blob<Dtype> > > blobs_;
/** Vector indicating whether to compute the diff of each param blob. */
//Whether the gradient of each parameter blob needs to be computed (i.e. propagated down)
vector<bool> param_propagate_down_;
/** The vector that indicates whether each top blob has a non-zero weight in
* the objective function. */
//Whether each top blob carries a non-zero weight in the objective (loss) function
vector<Dtype> loss_;
/** @brief Using the CPU device, compute the layer output. */
//Pure virtual: every layer must implement the CPU forward pass
virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
const vector<Blob<Dtype>*>& top) = 0;
/**
* @brief Using the GPU device, compute the layer output.
* Fall back to Forward_cpu() if unavailable.
*/
//Virtual: the GPU forward pass is optional;
//if it is not implemented, the default falls back to the CPU code
virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,
const vector<Blob<Dtype>*>& top) {
// LOG(WARNING) << "Using CPU code as backup.";
return Forward_cpu(bottom, top);
}
/**
* @brief Using the CPU device, compute the gradients for any parameters and
* for the bottom blobs if propagate_down is true.
*/
//Pure virtual: the CPU backward pass must be implemented
virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
const vector<bool>& propagate_down,
const vector<Blob<Dtype>*>& bottom) = 0;
/**
* @brief Using the GPU device, compute the gradients for any parameters and
* for the bottom blobs if propagate_down is true.
* Fall back to Backward_cpu() if unavailable.
*/
//Virtual: the GPU backward pass falls back to the CPU version if not implemented
virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,
const vector<bool>& propagate_down,
const vector<Blob<Dtype>*>& bottom) {
// LOG(WARNING) << "Using CPU code as backup.";
Backward_cpu(top, propagate_down, bottom);
}
/**
* Called by the parent Layer's SetUp to check that the number of bottom
* and top Blobs provided as input match the expected numbers specified by
* the {ExactNum,Min,Max}{Bottom,Top}Blobs() functions.
*/
// Called from SetUp.
// Verifies that the numbers of bottom and top blobs match what the layer
// declares via ExactNumBottomBlobs / MinBottomBlobs / MaxBottomBlobs and
// ExactNumTopBlobs / MinTopBlobs / MaxTopBlobs, and, when
// EqualNumBottomTopBlobs() is true, that the bottom and top counts agree.
virtual void CheckBlobCounts(const vector<Blob<Dtype>*>& bottom,
const vector<Blob<Dtype>*>& top) {
if (ExactNumBottomBlobs() >= 0) {
CHECK_EQ(ExactNumBottomBlobs(), bottom.size())
<< type() << " Layer takes " << ExactNumBottomBlobs()
<< " bottom blob(s) as input.";
}
if (MinBottomBlobs() >= 0) {
CHECK_LE(MinBottomBlobs(), bottom.size())
<< type() << " Layer takes at least " << MinBottomBlobs()
<< " bottom blob(s) as input.";
}
if (MaxBottomBlobs() >= 0) {
CHECK_GE(MaxBottomBlobs(), bottom.size())
<< type() << " Layer takes at most " << MaxBottomBlobs()
<< " bottom blob(s) as input.";
}
if (ExactNumTopBlobs() >= 0) {
CHECK_EQ(ExactNumTopBlobs(), top.size())
<< type() << " Layer produces " << ExactNumTopBlobs()
<< " top blob(s) as output.";
}
if (MinTopBlobs() >= 0) {
CHECK_LE(MinTopBlobs(), top.size())
<< type() << " Layer produces at least " << MinTopBlobs()
<< " top blob(s) as output.";
}
if (MaxTopBlobs() >= 0) {
CHECK_GE(MaxTopBlobs(), top.size())
<< type() << " Layer produces at most " << MaxTopBlobs()
<< " top blob(s) as output.";
}
if (EqualNumBottomTopBlobs()) {
CHECK_EQ(bottom.size(), top.size())
<< type() << " Layer produces one top blob as output for each "
<< "bottom blob input.";
}
}
/**
* Called by SetUp to initialize the weights associated with any top blobs in
* the loss function. Store non-zero loss weights in the diff blob.
*/
inline void SetLossWeights(const vector<Blob<Dtype>*>& top) {
const int num_loss_weights = layer_param_.loss_weight_size();
if (num_loss_weights) {
CHECK_EQ(top.size(), num_loss_weights) << "loss_weight must be "
"unspecified or specified once per top blob.";
for (int top_id = 0; top_id < top.size(); ++top_id) {
const Dtype loss_weight = layer_param_.loss_weight(top_id);
if (loss_weight == Dtype(0)) { continue; }
this->set_loss(top_id, loss_weight);
const int count = top[top_id]->count();
Dtype* loss_multiplier = top[top_id]->mutable_cpu_diff();
caffe_set(count, loss_weight, loss_multiplier);
}
}
}
private:
// Whether this layer is actually shared by other nets.
// In practice this can only be true for data layers, since only data layers may be shared across nets.
/** Whether this layer is actually shared by other nets*/
bool is_shared_;
/** The mutex for sequential forward if this layer is shared */
// Pointer to the mutex used during the forward pass
shared_ptr<boost::mutex> forward_mutex_;
/** Initialize forward_mutex_ */
void InitMutex();
//If the layer is shared, the mutex must be locked
/** Lock forward_mutex_ if this layer is shared */
void Lock();
//If the layer is shared, the mutex must be unlocked
/** Unlock forward_mutex_ if this layer is shared */
void Unlock();
DISABLE_COPY_AND_ASSIGN(Layer);
}; // class Layer
// Forward and backward wrappers. You should implement the cpu and
// gpu specific implementations instead, and should not change these
// functions.
template <typename Dtype>
inline Dtype Layer<Dtype>::Forward(const vector<Blob<Dtype>*>& bottom,
const vector<Blob<Dtype>*>& top) {
// Lock during forward to ensure sequential forward
Lock();
Dtype loss = 0;
Reshape(bottom, top);
switch (Caffe::mode()) {
case Caffe::CPU:
Forward_cpu(bottom, top);
for (int top_id = 0; top_id < top.size(); ++top_id) {
if (!this->loss(top_id)) { continue; }
const int count = top[top_id]->count();
const Dtype* data = top[top_id]->cpu_data();
const Dtype* loss_weights = top[top_id]->cpu_diff();
loss += caffe_cpu_dot(count, data, loss_weights);
}
break;
case Caffe::GPU:
Forward_gpu(bottom, top);
#ifndef CPU_ONLY
for (int top_id = 0; top_id < top.size(); ++top_id) {
if (!this->loss(top_id)) { continue; }
const int count = top[top_id]->count();
const Dtype* data = top[top_id]->gpu_data();
const Dtype* loss_weights = top[top_id]->gpu_diff();
Dtype blob_loss = 0;
caffe_gpu_dot(count, data, loss_weights, &blob_loss);
loss += blob_loss;
}
#endif
break;
default:
LOG(FATAL) << "Unknown caffe mode.";
}
Unlock();
return loss;
}
template <typename Dtype>
inline void Layer<Dtype>::Backward(const vector<Blob<Dtype>*>& top,
const vector<bool>& propagate_down,
const vector<Blob<Dtype>*>& bottom) {
switch (Caffe::mode()) {
case Caffe::CPU:
Backward_cpu(top, propagate_down, bottom);
break;
case Caffe::GPU:
Backward_gpu(top, propagate_down, bottom);
break;
default:
LOG(FATAL) << "Unknown caffe mode.";
}
}
// Serialize LayerParameter to protocol buffer
template <typename Dtype>
void Layer<Dtype>::ToProto(LayerParameter* param, bool write_diff) {
param->Clear();
param->CopyFrom(layer_param_);
param->clear_blobs();
for (int i = 0; i < blobs_.size(); ++i) {
blobs_[i]->ToProto(param->add_blobs(), write_diff);
}
}
} // namespace caffe
#endif // CAFFE_LAYER_H_
The concrete implementations of some of these functions are shown below.
The key pieces are the forward and backward wrappers: Forward dispatches to the matching Forward_cpu or Forward_gpu.
Forward_cpu is a pure virtual function that every layer must implement, while Forward_gpu is an ordinary virtual function whose default simply calls Forward_cpu.
Forward pass (you must implement your own Forward_cpu; implementing Forward_gpu is optional):
template <typename Dtype>
inline Dtype Layer<Dtype>::Forward(const vector<Blob<Dtype>*>& bottom,
const vector<Blob<Dtype>*>& top) {
// Lock during forward to ensure sequential forward
// Lock during the forward pass so that a shared layer executes sequentially
Lock();
Dtype loss = 0;
// Shape the top blobs according to the bottom blobs
Reshape(bottom, top);
// Dispatch on the run mode: CPU or GPU
switch (Caffe::mode()) {
case Caffe::CPU:
// CPU forward pass
Forward_cpu(bottom, top);
// After the forward pass, accumulate the loss (normally only the final loss layer has non-zero loss weights, so other layers contribute nothing)
for (int top_id = 0; top_id < top.size(); ++top_id) {
if (!this->loss(top_id)) { continue; }
const int count = top[top_id]->count();
// The data produced by the forward pass
const Dtype* data = top[top_id]->cpu_data();
// The diff of a loss-producing top blob was pre-filled with its loss_weight
// by SetLossWeights, so the dot product below accumulates
// loss_weight * sum(data), i.e. the weighted loss contributed by this top blob.
const Dtype* loss_weights = top[top_id]->cpu_diff();
loss += caffe_cpu_dot(count, data, loss_weights);
}
break;
case Caffe::GPU:
// GPU forward pass
Forward_gpu(bottom, top);
#ifndef CPU_ONLY
// Same as above, except the dot product is computed on the GPU
for (int top_id = 0; top_id < top.size(); ++top_id) {
if (!this->loss(top_id)) { continue; }
const int count = top[top_id]->count();
// The data lives on the GPU
const Dtype* data = top[top_id]->gpu_data();
const Dtype* loss_weights = top[top_id]->gpu_diff();
Dtype blob_loss = 0;
caffe_gpu_dot(count, data, loss_weights, &blob_loss);
loss += blob_loss;
}
#endif
break;
default:
LOG(FATAL) << "Unknown caffe mode.";
}
Unlock();
return loss;
}
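To make the dot-product step above concrete, here is a tiny illustration with made-up numbers (not part of layer.hpp; it only relies on Blob and caffe_cpu_dot from headers that layer.hpp already includes):

#include "caffe/blob.hpp"
#include "caffe/util/math_functions.hpp"

// Hypothetical numbers: a loss layer with a single-element top blob whose
// loss_weight was declared as 1 in the net definition.
float WeightedLossExample() {
  caffe::Blob<float> top_blob(1, 1, 1, 1);
  top_blob.mutable_cpu_data()[0] = 2.5f;  // value produced by Forward_cpu
  top_blob.mutable_cpu_diff()[0] = 1.0f;  // loss_weight written by SetLossWeights
  // The same dot product Forward() performs: 2.5 * 1.0,
  // i.e. the loss_weight-scaled output of this top blob.
  return caffe::caffe_cpu_dot(1, top_blob.cpu_data(), top_blob.cpu_diff());
}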
The backward pass is analogous to the forward pass:
// Backward pass: Backward_cpu must be implemented, Backward_gpu is optional
template <typename Dtype>
inline void Layer<Dtype>::Backward(const vector<Blob<Dtype>*>& top,
const vector<bool>& propagate_down,
const vector<Blob<Dtype>*>& bottom) {
switch (Caffe::mode()) {
case Caffe::CPU:  // CPU backward pass
Backward_cpu(top, propagate_down, bottom);
break;
case Caffe::GPU:  // GPU backward pass
Backward_gpu(top, propagate_down, bottom);
break;
default:
LOG(FATAL) << "Unknown caffe mode.";
}
}
// Serialize the layer (its parameters and weight blobs) into a LayerParameter protocol buffer
template <typename Dtype>
void Layer<Dtype>::ToProto(LayerParameter* param, bool write_diff) {
param->Clear();
param->CopyFrom(layer_param_);
param->clear_blobs();
for (int i = 0; i < blobs_.size(); ++i) {
blobs_[i]->ToProto(param->add_blobs(), write_diff);
}
}
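As a quick usage sketch (hypothetical helper, names invented): this is essentially what happens when a net is snapshotted, with each layer writing its parameters and learned blobs back into a LayerParameter.

#include "caffe/layer.hpp"

// Hypothetical helper: dump a layer's parameters and learned blobs to a proto.
void SnapshotLayer(caffe::Layer<float>& layer_obj, caffe::LayerParameter* snapshot) {
  // Copies layer_param_ and appends one BlobProto per learnable blob.
  layer_obj.ToProto(snapshot, /*write_diff=*/false);
}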
The implementations of the remaining helper functions:
// Initialize the mutex
template <typename Dtype>
void Layer<Dtype>::InitMutex() {
forward_mutex_.reset(new boost::mutex());
}
Locking:
// Lock
template <typename Dtype>
void Layer<Dtype>::Lock() {
if (IsShared()) {
forward_mutex_->lock();
}
}
Unlocking:
// UnLock
template <typename Dtype>
void Layer<Dtype>::Unlock() {
if (IsShared()) {
forward_mutex_->unlock();
}
}
Dr. Zhang's post also mentions a few other headers related to Layer, so I list them here as well.
Layer uses device_alternate.hpp, which only defines some helpers that check whether CUDA calls succeeded, plus a handful of macros.
Honestly I did not fully work through those macros, but this header never changes, so it can safely be ignored.
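For reference, the two macros from device_alternate.hpp that show up most often look roughly like this (reproduced from memory of the Caffe source, so treat it as an approximation and check the actual header):

// CUDA_CHECK aborts with the CUDA error string if a CUDA runtime call fails;
// CUDA_KERNEL_LOOP is the grid-stride loop used inside Caffe's GPU kernels.
#define CUDA_CHECK(condition) \
  do { \
    cudaError_t error = condition; \
    CHECK_EQ(error, cudaSuccess) << " " << cudaGetErrorString(error); \
  } while (0)

#define CUDA_KERNEL_LOOP(i, n) \
  for (int i = blockIdx.x * blockDim.x + threadIdx.x; \
       i < (n); \
       i += blockDim.x * gridDim.x)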
Summary:
The design of Layer centers on the SetUp, Forward and Backward functions.
SetUp relies on CheckBlobCounts, LayerSetUp and Reshape; of these, Reshape is a pure virtual function and therefore must be implemented.
Forward dispatches to Forward_cpu / Forward_gpu, of which Forward_cpu must be implemented.
Backward dispatches to Backward_cpu / Backward_gpu, of which Backward_cpu must be implemented.
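To tie everything together, here is a minimal skeleton of a custom layer (hypothetical: the class name and behavior are invented for illustration). It implements only the mandatory pieces, namely Reshape, Forward_cpu and Backward_cpu, and simply copies bottom to top; because it does not override Forward_gpu or Backward_gpu, in GPU mode it runs through the CPU fallback shown above.

#include <vector>
#include "caffe/blob.hpp"
#include "caffe/layer.hpp"
#include "caffe/util/math_functions.hpp"

namespace caffe {

// Hypothetical identity layer: only the pure virtual functions are implemented.
template <typename Dtype>
class MyIdentityLayer : public Layer<Dtype> {
 public:
  explicit MyIdentityLayer(const LayerParameter& param) : Layer<Dtype>(param) {}
  virtual inline const char* type() const { return "MyIdentity"; }
  virtual inline int ExactNumBottomBlobs() const { return 1; }
  virtual inline int ExactNumTopBlobs() const { return 1; }

  // Mandatory: shape the top blob like the bottom blob.
  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
    top[0]->ReshapeLike(*bottom[0]);
  }

 protected:
  // Mandatory CPU forward: copy bottom data to top data.
  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
    caffe_copy(bottom[0]->count(), bottom[0]->cpu_data(),
               top[0]->mutable_cpu_data());
  }
  // Mandatory CPU backward: copy top diff to bottom diff when requested.
  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down,
      const vector<Blob<Dtype>*>& bottom) {
    if (propagate_down[0]) {
      caffe_copy(top[0]->count(), top[0]->cpu_diff(),
                 bottom[0]->mutable_cpu_diff());
    }
  }
};

}  // namespace caffe

A real layer would additionally be instantiated and registered (Caffe provides the INSTANTIATE_CLASS and REGISTER_LAYER_CLASS macros for this) so that the layer factory included at the top of layer.hpp can create it by name.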