(1) Port the auxiliary files
Copy coords.hpp and modified_permutohedral.hpp from include/caffe/util/ into the corresponding directory of caffe-windows, and copy src/caffe/util/modified_permutohedral.cpp into its corresponding directory as well.
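To see why coords.hpp is needed, the class below is only a rough sketch of its central idea, written here for orientation and not the verbatim header you just copied; it assumes the DiagonalAffineMap design of the crfasrnn / Caffe future branch, where every spatial axis carries an affine map y = a*x + b between the coordinates of two blobs, and such maps can be composed along a path through the net and inverted.

// Sketch only: the idea behind coords.hpp (names and exact signatures may
// differ from the real file). Each axis stores a pair (a, b) meaning
// y = a*x + b between the coordinates of one blob and another.
#include <utility>
#include <vector>

namespace caffe {

template <typename Dtype>
class DiagonalAffineMap {
 public:
  explicit DiagonalAffineMap(
      const std::vector<std::pair<Dtype, Dtype> >& coefs)
      : coefs_(coefs) {}

  // Identity map over nd spatial axes: scale 1, offset 0 on every axis.
  static DiagonalAffineMap identity(const int nd) {
    return DiagonalAffineMap(std::vector<std::pair<Dtype, Dtype> >(
        nd, std::make_pair(Dtype(1), Dtype(0))));
  }

  // this(other(x)): compose per-axis maps (a1, b1) and (a2, b2)
  // into (a1 * a2, a1 * b2 + b1). Both maps must have the same rank.
  DiagonalAffineMap compose(const DiagonalAffineMap& other) const {
    std::vector<std::pair<Dtype, Dtype> > out;
    for (size_t i = 0; i < coefs_.size(); ++i) {
      out.push_back(std::make_pair(
          coefs_[i].first * other.coefs_[i].first,
          coefs_[i].first * other.coefs_[i].second + coefs_[i].second));
    }
    return DiagonalAffineMap(out);
  }

  // Invert every per-axis map: (a, b) -> (1/a, -b/a).
  DiagonalAffineMap inv() const {
    std::vector<std::pair<Dtype, Dtype> > out;
    for (size_t i = 0; i < coefs_.size(); ++i) {
      out.push_back(std::make_pair(Dtype(1) / coefs_[i].first,
                                   -coefs_[i].second / coefs_[i].first));
    }
    return DiagonalAffineMap(out);
  }

  const std::vector<std::pair<Dtype, Dtype> >& coefs() const { return coefs_; }

 private:
  std::vector<std::pair<Dtype, Dtype> > coefs_;
};

}  // namespace caffe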
(2) Port the new features of the Layer class
Add the following include to include/caffe/layer.hpp:
#include "caffe/util/coords.hpp"
and add the following code, the base-class coord_map() method (the body shown here follows the crfasrnn layer.hpp):
virtual DiagonalAffineMap<Dtype> coord_map() {
  NOT_IMPLEMENTED;
  // suppress warnings
  return DiagonalAffineMap<Dtype>(vector<pair<Dtype, Dtype> >());
}
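Concrete layers are then expected to override coord_map() to describe how their output (top) coordinates relate to their input (bottom) coordinates; this is what lets the crfasrnn crop/CRF layers align blobs automatically. The two fragments below are only sketches of typical overrides, modeled on the crfasrnn / Caffe future branch: FilterMap, DiagonalAffineMap<Dtype>::identity() and members such as kernel_h_, stride_h_ and pad_h_ are assumed to exist as in that branch and may need adapting to the layer classes actually present in caffe-windows.

A convolution-like layer inverts its filter geometry so that top coordinates map back to bottom coordinates:

// Sketch: inside a convolution-style layer class (crfasrnn-style members).
virtual inline DiagonalAffineMap<Dtype> coord_map() {
  return FilterMap<Dtype>(this->kernel_h_, this->kernel_w_, this->stride_h_,
      this->stride_w_, this->pad_h_, this->pad_w_).inv();
}

A layer that leaves the spatial layout untouched (for example a neuron/element-wise layer) simply returns the identity map over the two spatial axes:

// Sketch: inside a neuron-style layer class; height and width pass through.
virtual inline DiagonalAffineMap<Dtype> coord_map() {
  return DiagonalAffineMap<Dtype>::identity(2);
}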
After the modification, the beginning of include/caffe/layer.hpp looks like this:

#include <algorithm>
#include <string>
#include <vector>

#include "caffe/blob.hpp"
#include "caffe/common.hpp"
#include "caffe/layer_factory.hpp"
#include "caffe/proto/caffe.pb.h"
#include "caffe/util/coords.hpp"
#include "caffe/util/math_functions.hpp"

/**
 * Forward declare boost::thread instead of including boost/thread.hpp
 * to avoid boost/NVCC issues (#1009, #1010) on OSX.
 */
namespace boost { class mutex; }

namespace caffe {

/**
 * @brief An interface for the units of computation which can be composed into
 *        a Net.
 *
 * Layer%s must implement a Forward function, in which they take their input
 * (bottom) Blob%s (if any) and compute their output Blob%s (if any).
 * They may also implement a Backward function, in which they compute the error
 * gradients with respect to their input Blob%s, given the error gradients with
 * their output Blob%s.
 */
template <typename Dtype>
class Layer {
 public:
  /**
   * @brief Implements common layer setup functionality.
   *
   * @param bottom the preshaped input blobs
   * @param top
   *     the allocated but unshaped output blobs, to be shaped by Reshape
   *
   * Checks that the number of bottom and top blobs is correct.
   * Calls LayerSetUp to do special layer setup for individual layer types,
   * followed by Reshape to set up sizes of top blobs and internal buffers.
   * Sets up the loss weight multiplier blobs for any non-zero loss weights.
   * This method may not be overridden.
   */
  void SetUp(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
    InitMutex();
    CheckBlobCounts(bottom, top);
    LayerSetUp(bottom, top);
    Reshape(bottom, top);
    SetLossWeights(top);
  }

  /**
   * @brief Does layer-specific setup: your layer should implement this
   *        function as well as Reshape.
   *
   * @param bottom
   *     the preshaped input blobs, whose data fields store the input data for
   *     this layer
   * @param top
   *     the allocated but unshaped output blobs
   *
   * This method should do one-time layer specific setup. This includes reading
   * and processing relevant parameters from the layer_param_.
   * Setting up the shapes of top blobs and internal buffers should be done in
   * Reshape, which will be called before the forward pass to adjust the top
   * blob sizes.
   */
  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {}

  /**
   * @brief Whether a layer should be shared by multiple nets during data
   *        parallelism. By default, all layers except for data layers should
   *        not be shared. Data layers should be shared to ensure each worker
   *        solver accesses data sequentially during data parallelism.
   */
  virtual inline bool ShareInParallel() const { return false; }

  /** @brief Return whether this layer is actually shared by other nets.
   *         If ShareInParallel() is true and using more than one GPU and the
   *         net has TRAIN phase, then this function is expected to return
   *         true.
   */
  inline bool IsShared() const { return is_shared_; }

  /** @brief Set whether this layer is actually shared by other nets.
   *         If ShareInParallel() is true and using more than one GPU and the
   *         net has TRAIN phase, then is_shared should be set true.
   */
  inline void SetShared(bool is_shared) {
    CHECK(ShareInParallel() || !is_shared)
        << type() << "Layer does not support sharing.";
    is_shared_ = is_shared;
  }

  /**
   * @brief Adjust the shapes of top blobs and internal buffers to accommodate
   *        the shapes of the bottom blobs.
   *
   * @param bottom the input blobs, with the requested input shapes
   * @param top the top blobs, which should be reshaped as needed
   *
   * This method should reshape top blobs as needed according to the shapes
   * of the bottom (input) blobs, as well as reshaping any internal buffers
   * and making any other necessary adjustments so that the layer can
   * accommodate the bottom blobs.
   */
  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) = 0;

  /**
   * @brief Given the bottom blobs, compute the top blobs and the loss.
   *
   * @param bottom
   *     the input blobs, whose data fields store the input data for this layer
   * @param top
   *     the preshaped output blobs, whose data fields will store this layer's
   *     outputs
   * \return The total loss from the layer.
   *
   * The Forward wrapper calls the relevant device wrapper function
   * (Forward_cpu or Forward_gpu) to compute the top blob values given the
   * bottom blobs. If the layer has any non-zero loss_weights, the wrapper
   * then computes and returns the loss.
   *
   * Your layer should implement Forward_cpu and (optionally) Forward_gpu.
   */
  inline Dtype Forward(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);