Table 1 Most important hyperparameters for fitting a DBM. These parameters can be specified in the function “ds.monitored_fitdbm” (see Table 2). The parameters for pre-training can also be controlled individually for each layer (i.e. for each RBM in the stack) via the function “ds.bm.defineLayer”. Together with the function “ds.bm.definePartitionedLayer”, this also makes it possible to create models with partitioned architectures.

From: Deep generative models in DataSHIELD

| Hyperparameter name | Meaning of hyperparameter |
| --- | --- |
| `learningrate` | Learning rate for stochastic gradient descent optimization |
| `learningratepretraining` | Learning rate for pre-training; may be specified separately |
| `epochs` | Number of training epochs |
| `epochspretraining` | Number of epochs for pre-training; may be specified separately |
| `nhiddens` | Number of hidden nodes, specified as a vector of numbers containing one number for each hidden layer |
| `batchsizepretraining` | Batch size used in pre-training |
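As an illustration of how the hyperparameters in Table 1 fit together, the following sketch passes them to “ds.monitored_fitdbm”. The connection object `conns`, the server-side data name `"D"`, and all numeric values are hypothetical placeholders chosen for the example; only the function and parameter names come from the table.

```r
# Hypothetical sketch: fitting a DBM via DataSHIELD with the
# hyperparameters of Table 1. Assumes the DataSHIELD client packages
# are loaded and `conns` holds active server connections; the data
# name "D" and all values below are illustrative, not from the article.
result <- ds.monitored_fitdbm(conns,
   data = "D",                      # assumed name of the server-side data
   nhiddens = c(100, 50),           # two hidden layers: 100 and 50 nodes
   epochs = 30,                     # epochs for joint DBM training
   epochspretraining = 20,          # separate epoch count for pre-training
   learningrate = 0.005,            # learning rate for joint training
   learningratepretraining = 0.01,  # separate rate for pre-training
   batchsizepretraining = 5)        # mini-batch size during pre-training
```

For finer control, the per-layer pre-training parameters could instead be set with “ds.bm.defineLayer” for each RBM in the stack, as noted in the table caption.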