Cockpit
- class cockpit.Cockpit(params, quantities=None)[source]
Cockpit class.
Initialize a cockpit.
- Parameters
params (iterable) – List or sequence of parameters on which the quantities will be evaluated. Every parameter must have requires_grad = True, otherwise the computation cannot be executed.
quantities (list, optional) – List of Quantity instances that will be tracked. Defaults to None, in which case no quantities are tracked.
- Raises
ValueError – If not all passed parameters have requires_grad=True.
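The constructor's parameter validation can be sketched in pure Python; FakeParam and check_params below are stand-ins for illustration, not part of the cockpit API:

```python
class FakeParam:
    """Stand-in for a torch parameter (hypothetical, for illustration only)."""

    def __init__(self, requires_grad=True):
        self.requires_grad = requires_grad


def check_params(params):
    """Raise ValueError unless every parameter requires gradients."""
    for p in params:
        if not p.requires_grad:
            raise ValueError("Every parameter must have requires_grad=True.")


check_params([FakeParam(True), FakeParam(True)])  # passes silently
try:
    check_params([FakeParam(True), FakeParam(False)])
except ValueError as e:
    print(e)
```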
- BACKPACK_CONV_SAVE_MEMORY = True
Tell BackPACK to use a more memory-efficient Jacobian-vector product algorithm for weights in convolution layers. Default: True.
- Type
bool
- add(quantity)[source]
Add quantity to tracked quantities.
- Parameters
quantity (cockpit.quantities.Quantity) – The quantity to be added.
- Raises
ValueError – If the passed quantity is not an instance of cockpit.quantities.Quantity.
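The type check behind this ValueError can be illustrated with a pure-Python sketch; Quantity, GradNorm, and MiniCockpit here are stand-ins, not the real cockpit classes:

```python
class Quantity:
    """Stand-in base class for cockpit.quantities.Quantity."""


class GradNorm(Quantity):
    """Stand-in for a concrete quantity subclass."""


class MiniCockpit:
    """Minimal sketch of the add() bookkeeping."""

    def __init__(self):
        self.quantities = []

    def add(self, quantity):
        # Reject anything that is not a Quantity instance.
        if not isinstance(quantity, Quantity):
            raise ValueError(f"Expected Quantity instance, got {type(quantity)}.")
        self.quantities.append(quantity)


c = MiniCockpit()
c.add(GradNorm())       # accepted
try:
    c.add("GradNorm")   # wrong type, raises ValueError
except ValueError as e:
    print(e)
```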
- create_graph(global_step)[source]
Return whether the computation graph should be kept for computing quantities.
- get_output()[source]
Return a nested dictionary that stores the results of all tracked quantities.
First key corresponds to the iteration, second key is the quantity class name, values represent the computational result of the quantity at that iteration.
Example
>>> cockpit = Cockpit(...)
>>> # information tracked at iteration 3
>>> global_step = 3
>>> global_step_output = cockpit.get_output()[global_step]
>>> # information tracked at iteration 3 by Hessian max eigenvalue quantity
>>> key = "HessMaxEV"
>>> max_ev_global_step_output = cockpit.output[global_step][key]
- Returns
Nested dictionary with the results of all tracked quantities.
- Return type
dict
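The nesting of the returned dictionary can be mimicked in plain Python; the quantity names and values below are made up for illustration:

```python
# iteration -> quantity class name -> computational result at that iteration
output = {
    0: {"GradNorm": [0.9], "Loss": [2.31]},
    3: {"GradNorm": [0.4], "Loss": [1.87], "HessMaxEV": [5.2]},
}

global_step = 3
global_step_output = output[global_step]      # all results at iteration 3
max_ev = global_step_output["HessMaxEV"]      # result of a single quantity
print(global_step_output)
print(max_ev)
```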
- log(global_step, epoch_count, train_loss, valid_loss, test_loss, train_accuracy, valid_accuracy, test_accuracy, learning_rate)[source]
Tracking function for quantities computed at every epoch.
- Parameters
global_step (int) – Current number of iteration/global step.
epoch_count (int) – Current epoch number.
train_loss (float) – Loss on the train (eval) set.
valid_loss (float) – Loss on the validation set.
test_loss (float) – Loss on the test set.
train_accuracy (float) – Accuracy on the train (eval) set.
valid_accuracy (float) – Accuracy on the validation set.
test_accuracy (float) – Accuracy on the test set.
learning_rate (float) – Learning rate of the optimizer. We assume that the optimizer uses a single global learning rate, shared by all parameter groups.
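The bookkeeping behind log() can be sketched in pure Python; EpochLog and its storage layout are assumptions for illustration, not the library's actual implementation:

```python
class EpochLog:
    """Stand-in that records per-epoch metrics under the current iteration."""

    def __init__(self):
        self.output = {}

    def log(self, global_step, epoch_count, train_loss, valid_loss, test_loss,
            train_accuracy, valid_accuracy, test_accuracy, learning_rate):
        # Store all epoch-level metrics keyed by the current global step.
        self.output[global_step] = {
            "epoch_count": epoch_count,
            "train_loss": train_loss,
            "valid_loss": valid_loss,
            "test_loss": test_loss,
            "train_accuracy": train_accuracy,
            "valid_accuracy": valid_accuracy,
            "test_accuracy": test_accuracy,
            "learning_rate": learning_rate,
        }


log = EpochLog()
log.log(global_step=100, epoch_count=1, train_loss=1.9, valid_loss=2.0,
        test_loss=2.1, train_accuracy=0.40, valid_accuracy=0.38,
        test_accuracy=0.37, learning_rate=0.01)
print(log.output[100])
```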