MyCaffe  1.11.7.7
Deep learning software for Windows C# programmers.
MyCaffe.param.SolverParameter Class Reference

The SolverParameter is a parameter for the solver, specifying the train and test networks. More...

Inheritance diagram for MyCaffe.param.SolverParameter:
MyCaffe.basecode.BaseParameter

Public Types

enum  EvaluationType { CLASSIFICATION , DETECTION }
 Defines the evaluation method used in the SSD algorithm. More...
 
enum  SnapshotFormat { BINARYPROTO = 1 }
 Defines the format of each snapshot. More...
 
enum  SolverType {
  SGD = 0 , NESTEROV = 1 , ADAGRAD = 2 , RMSPROP = 3 ,
  ADADELTA = 4 , ADAM = 5 , LBFGS = 6 , _MAX = 7
}
 Defines the type of solver. More...
 
enum  LearningRatePolicyType {
  FIXED , STEP , EXP , INV ,
  MULTISTEP , POLY , SIGMOID
}
 Defines the learning rate policy to use. More...
 
enum  RegularizationType { L1 , L2 }
 Defines the regularization type. More...
 

Public Member Functions

 SolverParameter ()
 The SolverParameter constructor. More...
 
SolverParameter Clone ()
 Creates a new copy of the SolverParameter. More...
 
override RawProto ToProto (string strName)
 Converts the SolverParameter into a RawProto. More...
 
string DebugString ()
 Returns a debug string for the SolverParameter. More...
 
- Public Member Functions inherited from MyCaffe.basecode.BaseParameter
 BaseParameter ()
 Constructor for the parameter. More...
 
virtual bool Compare (BaseParameter p)
 Compare this parameter to another parameter. More...
 

Static Public Member Functions

static SolverParameter FromProto (RawProto rp)
 Parses a new SolverParameter from a RawProto. More...
 
- Static Public Member Functions inherited from MyCaffe.basecode.BaseParameter
static double ParseDouble (string strVal)
 Parse double values using the US culture if the decimal separator is '.', then using the native culture, and lastly falling back to the US culture to handle prototypes that contain '.' as the separator yet are parsed in a culture that does not use '.' as a decimal separator. More...
 
static bool TryParse (string strVal, out double df)
 Parse double values using the US culture if the decimal separator is '.', then using the native culture, and lastly falling back to the US culture to handle prototypes that contain '.' as the separator yet are parsed in a culture that does not use '.' as a decimal separator. More...
 
static float ParseFloat (string strVal)
 Parse float values using the US culture if the decimal separator is '.', then using the native culture, and lastly falling back to the US culture to handle prototypes that contain '.' as the separator yet are parsed in a culture that does not use '.' as a decimal separator. More...
 
static bool TryParse (string strVal, out float f)
 Parse float values using the US culture if the decimal separator is '.', then using the native culture, and lastly falling back to the US culture to handle prototypes that contain '.' as the separator yet are parsed in a culture that does not use '.' as a decimal separator. More...
 

Properties

bool output_average_results [getset]
 Specifies to average loss results before they are output - this can be faster when there are a lot of results in a cycle. More...
 
string custom_trainer [getset]
 Specifies the name of the custom trainer (if any) - this is an optional setting used by external software to provide a customized training process. Each custom trainer must implement the IDnnCustomTrainer interface, which contains a 'Name' property - the name returned from that property is the value set here as the 'custom_trainer'. More...
 
string custom_trainer_properties [getset]
 Specifies the custom trainer properties (if any) - this is an optional setting used by external software to provide the properties for a customized training process. More...
 
NetParameter net_param [getset]
 Inline train net param, possibly combined with one or more test nets. More...
 
NetParameter train_net_param [getset]
 Inline train net param, possibly combined with one or more test nets. More...
 
List< NetParameter > test_net_param [getset]
 Inline test net params. More...
 
NetState train_state [getset]
 The states for the train/test nets. Must be unspecified or specified once per net. More...
 
List< NetState > test_state [getset]
 The states for the train/test nets. Must be unspecified or specified once per net. More...
 
List< int > test_iter [getset]
 The number of iterations for each test. More...
 
int test_interval [getset]
 The number of iterations between two testing phases. More...
 
bool test_compute_loss [getset]
 Test the compute loss. More...
 
bool test_initialization [getset]
 If true, run an initial test pass before the first iteration, ensuring memory availability and printing the starting value of the loss. More...
 
double base_lr [getset]
 The base learning rate. More...
 
int display [getset]
 The number of iterations between displaying info. If display = 0, no info will be displayed. More...
 
int average_loss [getset]
 Display the loss averaged over the last average_loss iterations. More...
 
int max_iter [getset]
 The maximum number of iterations. More...
 
int iter_size [getset]
 Accumulate gradients over 'iter_size' x 'batch_size' instances. More...
 
LearningRatePolicyType LearningRatePolicy [getset]
 The learning rate decay policy. More...
 
string lr_policy [getset]
 The learning rate decay policy. More...
 
double gamma [getset]
 The 'gamma' parameter to compute the learning rate. More...
 
double power [getset]
 The 'power' parameter to compute the learning rate. More...
 
double momentum [getset]
 Specifies the momentum value - used by all solvers EXCEPT the 'AdaGrad' and 'RMSProp' solvers. For these latter solvers, momentum should be 0. More...
 
double weight_decay [getset]
 The weight decay. More...
 
RegularizationType Regularization [getset]
 The regularization type. More...
 
string regularization_type [getset]
 The regularization type. More...
 
int stepsize [getset]
 The stepsize for learning rate policy 'step'. More...
 
List< int > stepvalue [getset]
 The step values for learning rate policy 'multistep'. More...
 
double clip_gradients [getset]
 Set clip_gradients to >= 0 to clip parameter gradients to that L2 norm, whenever their actual L2 norm is larger. More...
 
bool enable_clip_gradient_status [getset]
 Optionally, enable status output when gradients are clipped (default = true) More...
 
int snapshot [getset]
 Specifies the snapshot interval. More...
 
string snapshot_prefix [getset]
 The prefix for the snapshot. More...
 
bool snapshot_diff [getset]
 Whether to snapshot diff in the results or not. Snapshotting diff will help debugging but the final protocol buffer size will be much larger. More...
 
SnapshotFormat snapshot_format [getset]
 The snapshot format. More...
 
bool snapshot_include_weights [getset]
 Specifies whether or not the snapshot includes the trained weights. The default = true. More...
 
bool snapshot_include_state [getset]
 Specifies whether or not the snapshot includes the solver state. The default = false. Including the solver state will slow down the time of each snapshot. More...
 
int device_id [getset]
 The device id that will be used when run on the GPU. More...
 
long random_seed [getset]
 If non-negative, the seed with which the Solver will initialize the caffe random number generator – useful for reproducible results. Otherwise (and by default) initialize using a seed derived from the system clock. More...
 
SolverType type [getset]
 Specifies the solver type. More...
 
double delta [getset]
 Numerical stability for RMSProp, AdaGrad, AdaDelta and Adam solvers. More...
 
double momentum2 [getset]
 An additional momentum property for the Adam solver. More...
 
double rms_decay [getset]
 RMSProp decay value. More...
 
bool debug_info [getset]
 If true, print information about the state of the net that may help with debugging learning problems. More...
 
int lbgfs_corrections [getset]
 Specifies the number of L-BFGS corrections used with the L-BFGS solver. More...
 
bool snapshot_after_train [getset]
 If false, don't save a snapshot after training finishes. More...
 
EvaluationType eval_type [getset]
 Specifies the evaluation type to use when using Single-Shot Detection (SSD) - (default = NONE, SSD not used). More...
 
ApVersion ap_version [getset]
 Specifies the AP Version to use for average precision when using Single-Shot Detection (SSD) - (default = INTEGRAL). More...
 
bool show_per_class_result [getset]
 Specifies whether or not to display results per class when using Single-Shot Detection (SSD) - (default = false). More...
 
int accuracy_average_window [getset]
 Specifies the window over which to average the accuracies (default = 0 which ignores averaging). More...
 

Detailed Description

The SolverParameter is a parameter for the solver, specifying the train and test networks.

Exactly one train net must be specified using one of the following fields: train_net_param, train_net, net_param, net

One or more of the test nets may be specified using any of the following fields: test_net_param, test_net, net_param, net

If more than one test net field is specified (e.g., both net and test_net are specified), they will be evaluated in the field order given above: (1) test_net_param, (2) test_net, (3) net_param/net

A test_iter must be specified for each test_net. A test_level and/or test_stage may also be specified for each test_net.
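As a sketch only, a minimal solver configuration using the members documented below might look as follows. This assumes the MyCaffe.param namespace shown on this page and is a configuration fragment, not runnable without the MyCaffe library:

```csharp
using System.Collections.Generic;
using MyCaffe.param;

// Sketch - property names follow the documentation below.
SolverParameter solver = new SolverParameter();
solver.type = SolverParameter.SolverType.SGD;
solver.base_lr = 0.01;                       // base learning rate
solver.lr_policy = "step";                   // learning rate decay policy
solver.gamma = 0.1;
solver.stepsize = 10000;
solver.max_iter = 45000;
solver.test_interval = 500;                  // test every 500 iterations
solver.test_iter = new List<int>() { 100 };  // one entry per test net
solver.snapshot = 5000;
solver.snapshot_prefix = "models/mymodel";
```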

Definition at line 29 of file SolverParameter.cs.

Member Enumeration Documentation

◆ EvaluationType

Defines the evaluation method used in the SSD algorithm.

Enumerator
CLASSIFICATION 

Specifies to run a standard classification evaluation.

DETECTION 

Specifies detection evaluation used in the SSD algorithm.

Definition at line 83 of file SolverParameter.cs.

◆ LearningRatePolicyType

Defines the learning rate policy to use.

Enumerator
FIXED 

Use a fixed learning rate which always returns base_lr.

STEP 

Use a stepped learning rate which returns $ base_lr * gamma ^ {floor{iter/step}} $

EXP 

Use an exponential learning rate which returns $ base_lr * gamma ^ {iter} $

INV 

Use an inverse learning rate which returns $ base_lr * {1 + gamma * iter}^{-power} $

MULTISTEP 

Use a multi-step learning rate which is similar to STEP, but allows for non-uniform steps defined by stepvalue.

POLY 

Use a polynomial learning rate where the effective learning rate follows a polynomial decay, to be zero by the max_iter. Returns $ base_lr * {1 - iter/max_iter}^{power} $

SIGMOID 

Use a sigmoid learning rate where the effective learning rate follows a sigmoid decay. Returns $ base_lr * {1/{1 + exp{-gamma * {iter - stepsize}}}} $

Definition at line 170 of file SolverParameter.cs.

◆ RegularizationType

Defines the regularization type.

Enumerator
L1 

Specifies L1 regularization.

L2 

Specifies L2 regularization.

Definition at line 205 of file SolverParameter.cs.

◆ SnapshotFormat

Defines the format of each snapshot.

Enumerator
BINARYPROTO 

Save snapshots in the binary prototype format.

Definition at line 98 of file SolverParameter.cs.

◆ SolverType

Defines the type of solver.

Enumerator
SGD 

Use the Stochastic Gradient Descent solver with momentum, which updates weights by a linear combination of the negative gradient and the previous weight update.

See also
Stochastic Gradient Descent Wikipedia.
NESTEROV 

Use Nesterov's accelerated gradient, similar to SGD, but the error gradient is computed on the weights with momentum added.

See also
Lecture 6c The momentum method by Hinton, Geoffrey and Srivastava, Nitish and Swersky, Kevin, 2012.
Nesterov's Accelerated Gradient and Momentum as approximations to Regularised Update Descent by Botev, Alexandar and Lever, Guy and Barber, David, 2016.
ADAGRAD 

Use AdaGrad, a gradient-based optimization like SGD that adapts the learning rate to give greater weight to rarely seen features.

See also
Adaptive Subgradient Methods for Online Learning and Stochastic Optimization by Duchi, John and Hazan, Elad, and Singer, Yoram, 2011.
RMSPROP 

Use RMS Prop gradient based optimization like SGD.

See also
Lecture 6e rmsprop: Divide the gradient by a running average of its recent magnitude by Tieleman and Hinton, 2012,
RMSProp and equilibrated adaptive learning rates for non-convex optimization by Dauphin, Yann N. and de Vries, Harm and Chung, Junyoung and Bengio, Yoshua, 2015.
ADADELTA 

Use AdaDelta gradient based optimization like SGD.

See also
ADADELTA: An Adaptive Learning Rate Method by Zeiler, Matthew D., 2012.

ADAM 

Use Adam gradient-based optimization like SGD that includes 'adaptive momentum estimation' and can be thought of as a generalization of AdaGrad.

See also
Adam: A Method for Stochastic Optimization by Kingma, Diederik P. and Ba, Jimmy, 2014.
LBFGS 

Use the L-BFGS solver based on the implementation of minFunc by Marc Schmidt.

See also
minFunc by Marc Schmidt, 2005

Definition at line 109 of file SolverParameter.cs.

Constructor & Destructor Documentation

◆ SolverParameter()

MyCaffe.param.SolverParameter.SolverParameter ( )

The SolverParameter constructor.

Definition at line 220 of file SolverParameter.cs.

Member Function Documentation

◆ Clone()

SolverParameter MyCaffe.param.SolverParameter.Clone ( )

Creates a new copy of the SolverParameter.

Returns
A new instance of the SolverParameter is returned.

Definition at line 229 of file SolverParameter.cs.

◆ DebugString()

string MyCaffe.param.SolverParameter.DebugString ( )

Returns a debug string for the SolverParameter.

Returns
The debug string is returned.

Definition at line 1296 of file SolverParameter.cs.

◆ FromProto()

static SolverParameter MyCaffe.param.SolverParameter.FromProto ( RawProto  rp)
static

Parses a new SolverParameter from a RawProto.

Parameters
rpSpecifies the RawProto representing the SolverParameter.
Returns
The new SolverParameter instance is returned.

Definition at line 1053 of file SolverParameter.cs.

◆ ToProto()

override RawProto MyCaffe.param.SolverParameter.ToProto ( string  strName)
virtual

Converts the SolverParameter into a RawProto.

Parameters
strNameSpecifies a name given to the RawProto.
Returns
The new RawProto representing the SolverParameter is returned.

Implements MyCaffe.basecode.BaseParameter.

Definition at line 929 of file SolverParameter.cs.
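A round trip through the proto format can be sketched as follows. This is an illustrative fragment that assumes a RawProto.Parse method and RawProto.ToString text output in MyCaffe.basecode; it is not runnable without the library:

```csharp
using MyCaffe.basecode;
using MyCaffe.param;

SolverParameter p1 = new SolverParameter();
RawProto proto = p1.ToProto("solver");   // serialize to a RawProto tree
string strProto = proto.ToString();      // prototxt-style text
// Parse the text back into a new SolverParameter instance.
SolverParameter p2 = SolverParameter.FromProto(RawProto.Parse(strProto));
```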

Property Documentation

◆ accuracy_average_window

int MyCaffe.param.SolverParameter.accuracy_average_window
getset

Specifies the window over which to average the accuracies (default = 0 which ignores averaging).

Definition at line 918 of file SolverParameter.cs.

◆ ap_version

ApVersion MyCaffe.param.SolverParameter.ap_version
getset

Specifies the AP Version to use for average precision when using Single-Shot Detection (SSD) - (default = INTEGRAL).

Definition at line 897 of file SolverParameter.cs.

◆ average_loss

int MyCaffe.param.SolverParameter.average_loss
getset

Display the loss averaged over the last average_loss iterations.

Definition at line 408 of file SolverParameter.cs.

◆ base_lr

double MyCaffe.param.SolverParameter.base_lr
getset

The base learning rate.

Definition at line 386 of file SolverParameter.cs.

◆ clip_gradients

double MyCaffe.param.SolverParameter.clip_gradients
getset

Set clip_gradients to >= 0 to clip parameter gradients to that L2 norm, whenever their actual L2 norm is larger.

Definition at line 684 of file SolverParameter.cs.
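The clipping rule itself can be sketched independently of MyCaffe (illustrative names only): when the gradient's L2 norm exceeds clip_gradients, the whole gradient is scaled down uniformly so its norm equals the threshold.

```csharp
using System;
using System.Linq;

public class ClipDemo
{
    // Scale the gradient so its L2 norm does not exceed clipGradients.
    public static double[] ClipByL2Norm(double[] rgGrad, double dfClipGradients)
    {
        double dfL2 = Math.Sqrt(rgGrad.Sum(g => g * g));
        if (dfClipGradients < 0 || dfL2 <= dfClipGradients)
            return rgGrad;                        // disabled, or already small enough
        double dfScale = dfClipGradients / dfL2;  // uniform shrink factor
        return rgGrad.Select(g => g * dfScale).ToArray();
    }

    public static void Main()
    {
        // Norm of (3, 4) is 5; clipping to 1 scales by 0.2.
        double[] rgG = ClipByL2Norm(new double[] { 3.0, 4.0 }, 1.0);
        Console.WriteLine(string.Format(
            System.Globalization.CultureInfo.InvariantCulture, "{0:F3} {1:F3}", rgG[0], rgG[1]));
    }
}
```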

◆ custom_trainer

string MyCaffe.param.SolverParameter.custom_trainer
getset

Specifies the name of the custom trainer (if any) - this is an optional setting used by external software to provide a customized training process. Each custom trainer must implement the IDnnCustomTrainer interface, which contains a 'Name' property - the name returned from that property is the value set here as the 'custom_trainer'.

Definition at line 253 of file SolverParameter.cs.

◆ custom_trainer_properties

string MyCaffe.param.SolverParameter.custom_trainer_properties
getset

Specifies the custom trainer properties (if any) - this is an optional setting used by external software to provide the properties for a customized training process.

Note all spaces are replaced with '~' characters to avoid parsing errors.

Definition at line 268 of file SolverParameter.cs.

◆ debug_info

bool MyCaffe.param.SolverParameter.debug_info
getset

If true, print information about the state of the net that may help with debugging learning problems.

Definition at line 853 of file SolverParameter.cs.

◆ delta

double MyCaffe.param.SolverParameter.delta
getset

Numerical stability for RMSProp, AdaGrad, AdaDelta and Adam solvers.

Definition at line 816 of file SolverParameter.cs.

◆ device_id

int MyCaffe.param.SolverParameter.device_id
getset

The device id that will be used when run on the GPU.

Definition at line 775 of file SolverParameter.cs.

◆ display

int MyCaffe.param.SolverParameter.display
getset

The number of iterations between displaying info. If display = 0, no info will be displayed.

Definition at line 398 of file SolverParameter.cs.

◆ enable_clip_gradient_status

bool MyCaffe.param.SolverParameter.enable_clip_gradient_status
getset

Optionally, enable status output when gradients are clipped (default = true)

Definition at line 694 of file SolverParameter.cs.

◆ eval_type

EvaluationType MyCaffe.param.SolverParameter.eval_type
getset

Specifies the evaluation type to use when using Single-Shot Detection (SSD) - (default = NONE, SSD not used).

Definition at line 886 of file SolverParameter.cs.

◆ gamma

double MyCaffe.param.SolverParameter.gamma
getset

The 'gamma' parameter to compute the learning rate.

Definition at line 560 of file SolverParameter.cs.

◆ iter_size

int MyCaffe.param.SolverParameter.iter_size
getset

Accumulate gradients over 'iter_size' x 'batch_size' instances.

Definition at line 430 of file SolverParameter.cs.

◆ lbgfs_corrections

int MyCaffe.param.SolverParameter.lbgfs_corrections
getset

Specifies the number of L-BFGS corrections used with the L-BFGS solver.

Definition at line 864 of file SolverParameter.cs.

◆ LearningRatePolicy

LearningRatePolicyType MyCaffe.param.SolverParameter.LearningRatePolicy
getset

The learning rate decay policy.

The currently implemented learning rate policies are as follows:

  • fixed: always return $ base_lr $.
  • step: return $ base_lr * gamma ^ {floor{iter / step}} $
  • exp: return $ base_lr * gamma ^ iter $
  • inv: return $ base_lr * {1 + gamma * iter} ^ {-power} $
  • multistep: similar to step but it allows non-uniform steps defined by stepvalue.
  • poly: the effective learning rate follows a polynomial decay, to be zero by the max_iter. return $ base_lr * {1 - iter/max_iter} ^ {power} $
  • sigmoid: the effective learning rate follows a sigmoid decay. return $ base_lr * {1/{1 + exp{-gamma * {iter - stepsize}}}} $

where base_lr, max_iter, gamma, step, stepvalue and power are defined in the solver protocol buffer, and iter is the current iteration.

Definition at line 459 of file SolverParameter.cs.

◆ lr_policy

string MyCaffe.param.SolverParameter.lr_policy
getset

The learning rate decay policy.

The currently implemented learning rate policies are as follows:

  • fixed: always return base_lr.
  • step: return base_lr * gamma ^ (floor(iter / step))
  • exp: return base_lr * gamma ^ iter
  • inv: return base_lr * (1 + gamma * iter) ^ (-power)
  • multistep: similar to step but it allows non-uniform steps defined by stepvalue.
  • poly: the effective learning rate follows a polynomial decay, to be zero by the max_iter. return base_lr * (1 - iter/max_iter) ^ (power)
  • sigmoid: the effective learning rate follows a sigmoid decay. return base_lr * (1/(1 + exp(-gamma * (iter - stepsize))))

where base_lr, max_iter, gamma, step, stepvalue and power are defined in the solver protocol buffer, and iter is the current iteration.

See also
Don't Decay the Learning Rate, Increase the Batch Size by Samuel L. Smith, Pieter-Jan Kindermans, Chris Ying and Quoc V. Le, 2017.

Definition at line 549 of file SolverParameter.cs.
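The policy formulas above can be sketched as a small self-contained function (illustrative only, not the MyCaffe implementation; multistep is omitted since it is STEP with explicit step boundaries):

```csharp
using System;

public class LrPolicyDemo
{
    // Effective learning rate for the policies listed above.
    public static double GetLearningRate(string strPolicy, double dfBaseLr, double dfGamma,
                                         double dfPower, int nIter, int nStepSize, int nMaxIter)
    {
        switch (strPolicy)
        {
            case "fixed":   return dfBaseLr;
            case "step":    return dfBaseLr * Math.Pow(dfGamma, Math.Floor((double)nIter / nStepSize));
            case "exp":     return dfBaseLr * Math.Pow(dfGamma, nIter);
            case "inv":     return dfBaseLr * Math.Pow(1.0 + dfGamma * nIter, -dfPower);
            case "poly":    return dfBaseLr * Math.Pow(1.0 - (double)nIter / nMaxIter, dfPower);
            case "sigmoid": return dfBaseLr * (1.0 / (1.0 + Math.Exp(-dfGamma * (nIter - nStepSize))));
            default: throw new ArgumentException("Unknown policy: " + strPolicy);
        }
    }

    public static void Main()
    {
        // step: base_lr = 0.01, gamma = 0.1, stepsize = 100 -> at iter 250, lr = 0.01 * 0.1^2
        double dfLr = GetLearningRate("step", 0.01, 0.1, 0, 250, 100, 0);
        Console.WriteLine(dfLr.ToString("G4", System.Globalization.CultureInfo.InvariantCulture));
    }
}
```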

◆ max_iter

int MyCaffe.param.SolverParameter.max_iter
getset

The maximum number of iterations.

Definition at line 419 of file SolverParameter.cs.

◆ momentum

double MyCaffe.param.SolverParameter.momentum
getset

Specifies the momentum value - used by all solvers EXCEPT the 'AdaGrad' and 'RMSProp' solvers. For these latter solvers, momentum should be 0.

Definition at line 583 of file SolverParameter.cs.

◆ momentum2

double MyCaffe.param.SolverParameter.momentum2
getset

An additional momentum property for the Adam solver.

Definition at line 827 of file SolverParameter.cs.
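In the Adam update, momentum plays the role of the first-moment decay (beta1), momentum2 the second-moment decay (beta2), and delta the epsilon stability term. A self-contained sketch of an Adam-style update loop (illustrative, not the MyCaffe implementation):

```csharp
using System;

public class AdamDemo
{
    // Minimize f(w) = w^2 with an Adam-style update; returns the final weight.
    public static double Run(int nSteps)
    {
        double dfLr = 0.1, dfBeta1 = 0.9, dfBeta2 = 0.999, dfEps = 1e-8; // momentum, momentum2, delta
        double dfM = 0, dfV = 0, dfW = 1.0;

        for (int t = 1; t <= nSteps; t++)
        {
            double dfGrad = 2.0 * dfW;                                  // gradient of w^2
            dfM = dfBeta1 * dfM + (1 - dfBeta1) * dfGrad;               // first moment (momentum)
            dfV = dfBeta2 * dfV + (1 - dfBeta2) * dfGrad * dfGrad;      // second moment (momentum2)
            double dfMHat = dfM / (1 - Math.Pow(dfBeta1, t));           // bias correction
            double dfVHat = dfV / (1 - Math.Pow(dfBeta2, t));
            dfW -= dfLr * dfMHat / (Math.Sqrt(dfVHat) + dfEps);         // delta avoids divide-by-zero
        }

        return dfW;
    }

    public static void Main()
    {
        Console.WriteLine(AdamDemo.Run(10) < 1.0); // the weight moves toward the minimum
    }
}
```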

◆ net_param

NetParameter MyCaffe.param.SolverParameter.net_param
getset

Inline train net param, possibly combined with one or more test nets.

Definition at line 278 of file SolverParameter.cs.

◆ output_average_results

bool MyCaffe.param.SolverParameter.output_average_results
getset

Specifies to average loss results before they are output - this can be faster when there are a lot of results in a cycle.

Definition at line 240 of file SolverParameter.cs.

◆ power

double MyCaffe.param.SolverParameter.power
getset

The 'power' parameter to compute the learning rate.

Definition at line 571 of file SolverParameter.cs.

◆ random_seed

long MyCaffe.param.SolverParameter.random_seed
getset

If non-negative, the seed with which the Solver will initialize the caffe random number generator – useful for reproducible results. Otherwise (and by default) initialize using a seed derived from the system clock.

Definition at line 787 of file SolverParameter.cs.

◆ Regularization

RegularizationType MyCaffe.param.SolverParameter.Regularization
getset

The regularization type.

The regularization types supported are:

  • L1 and L2 controlled by weight_decay.

Definition at line 608 of file SolverParameter.cs.
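The effect on the gradient can be sketched as follows (illustrative names only): L2 adds weight_decay * w to each gradient, while L1 adds weight_decay * sign(w).

```csharp
using System;

public class RegDemo
{
    // Gradient with the regularizer contribution added, scaled by weight_decay.
    public static double Regularize(string strType, double dfGrad, double dfW, double dfWeightDecay)
    {
        switch (strType)
        {
            case "L2": return dfGrad + dfWeightDecay * dfW;             // decay proportional to w
            case "L1": return dfGrad + dfWeightDecay * Math.Sign(dfW);  // constant pull toward zero
            default: throw new ArgumentException("Unknown type: " + strType);
        }
    }

    public static void Main()
    {
        double dfL2 = Regularize("L2", 0.1, -0.2, 0.0005);
        double dfL1 = Regularize("L1", 0.1, -0.2, 0.0005);
        Console.WriteLine(string.Format(
            System.Globalization.CultureInfo.InvariantCulture, "{0:F4} {1:F4}", dfL2, dfL1));
    }
}
```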

◆ regularization_type

string MyCaffe.param.SolverParameter.regularization_type
getset

The regularization type.

The regularization types supported are:

  • L1 and L2 controlled by weight_decay.

Definition at line 651 of file SolverParameter.cs.

◆ rms_decay

double MyCaffe.param.SolverParameter.rms_decay
getset

RMSProp decay value.

MeanSquare(t) = rms_decay * MeanSquare(t-1) + (1 - rms_decay) * SquareGradient(t)

Definition at line 841 of file SolverParameter.cs.
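The running average above can be sketched directly (illustrative, not the MyCaffe implementation):

```csharp
using System;

public class RmsDecayDemo
{
    // MeanSquare(t) = rms_decay * MeanSquare(t-1) + (1 - rms_decay) * SquareGradient(t)
    public static double MeanSquare(double[] rgGrad, double dfRmsDecay)
    {
        double dfMs = 0.0;
        foreach (double dfG in rgGrad)
            dfMs = dfRmsDecay * dfMs + (1 - dfRmsDecay) * dfG * dfG;
        return dfMs;
    }

    public static void Main()
    {
        // With constant unit gradients the running mean square creeps toward 1.
        double dfMs = MeanSquare(new double[] { 1.0, 1.0, 1.0 }, 0.98);
        Console.WriteLine(dfMs.ToString("F6", System.Globalization.CultureInfo.InvariantCulture));
    }
}
```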

◆ show_per_class_result

bool MyCaffe.param.SolverParameter.show_per_class_result
getset

Specifies whether or not to display results per class when using Single-Shot Detection (SSD) - (default = false).

Definition at line 908 of file SolverParameter.cs.

◆ snapshot

int MyCaffe.param.SolverParameter.snapshot
getset

Specifies the snapshot interval.

Definition at line 705 of file SolverParameter.cs.

◆ snapshot_after_train

bool MyCaffe.param.SolverParameter.snapshot_after_train
getset

If false, don't save a snapshot after training finishes.

Definition at line 875 of file SolverParameter.cs.

◆ snapshot_diff

bool MyCaffe.param.SolverParameter.snapshot_diff
getset

Whether to snapshot diff in the results or not. Snapshotting diff will help debugging but the final protocol buffer size will be much larger.

Definition at line 728 of file SolverParameter.cs.

◆ snapshot_format

SnapshotFormat MyCaffe.param.SolverParameter.snapshot_format
getset

The snapshot format.

Currently only the Binary Proto Buffer format is supported.

Definition at line 742 of file SolverParameter.cs.

◆ snapshot_include_state

bool MyCaffe.param.SolverParameter.snapshot_include_state
getset

Specifies whether or not the snapshot includes the solver state. The default = false. Including the solver state will slow down the time of each snapshot.

Definition at line 764 of file SolverParameter.cs.

◆ snapshot_include_weights

bool MyCaffe.param.SolverParameter.snapshot_include_weights
getset

Specifies whether or not the snapshot includes the trained weights. The default = true.

Definition at line 753 of file SolverParameter.cs.

◆ snapshot_prefix

string MyCaffe.param.SolverParameter.snapshot_prefix
getset

The prefix for the snapshot.

Definition at line 716 of file SolverParameter.cs.

◆ stepsize

int MyCaffe.param.SolverParameter.stepsize
getset

The stepsize for learning rate policy 'step'.

Definition at line 662 of file SolverParameter.cs.

◆ stepvalue

List<int> MyCaffe.param.SolverParameter.stepvalue
getset

The step values for learning rate policy 'multistep'.

Definition at line 673 of file SolverParameter.cs.

◆ test_compute_loss

bool MyCaffe.param.SolverParameter.test_compute_loss
getset

Test the compute loss.

Definition at line 364 of file SolverParameter.cs.

◆ test_initialization

bool MyCaffe.param.SolverParameter.test_initialization
getset

If true, run an initial test pass before the first iteration, ensuring memory availability and printing the starting value of the loss.

Definition at line 376 of file SolverParameter.cs.

◆ test_interval

int MyCaffe.param.SolverParameter.test_interval
getset

The number of iterations between two testing phases.

Definition at line 354 of file SolverParameter.cs.

◆ test_iter

List<int> MyCaffe.param.SolverParameter.test_iter
getset

The number of iterations for each test.

Definition at line 343 of file SolverParameter.cs.

◆ test_net_param

List<NetParameter> MyCaffe.param.SolverParameter.test_net_param
getset

Inline test net params.

Definition at line 298 of file SolverParameter.cs.

◆ test_state

List<NetState> MyCaffe.param.SolverParameter.test_state
getset

The states for the train/test nets. Must be unspecified or specified once per net.

By default, all states will have solver = true; train_state will have phase = TRAIN, and all test_states will have phase = TEST. Other defaults are set according to NetState defaults.

Definition at line 332 of file SolverParameter.cs.

◆ train_net_param

NetParameter MyCaffe.param.SolverParameter.train_net_param
getset

Inline train net param, possibly combined with one or more test nets.

Definition at line 288 of file SolverParameter.cs.

◆ train_state

NetState MyCaffe.param.SolverParameter.train_state
getset

The states for the train/test nets. Must be unspecified or specified once per net.

By default, all states will have solver = true; train_state will have phase = TRAIN, and all test_states will have phase = TEST. Other defaults are set according to NetState defaults.

Definition at line 315 of file SolverParameter.cs.

◆ type

SolverType MyCaffe.param.SolverParameter.type
getset

Specifies the solver type.

Definition at line 805 of file SolverParameter.cs.

◆ weight_decay

double MyCaffe.param.SolverParameter.weight_decay
getset

The weight decay.

Definition at line 593 of file SolverParameter.cs.


The documentation for this class was generated from the following file: SolverParameter.cs