mighty.utils.stub.OptimizerStub

class mighty.utils.stub.OptimizerStub[source]

An Optimizer stub for trainers that update model weights in a gradient-free fashion.
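
A minimal usage sketch, assuming the stub's methods behave as safe no-ops (the surrounding gradient-free training loop is hypothetical):

# A gradient-free trainer (e.g. an evolution-strategy loop) can expose an
# OptimizerStub so code written against the torch Optimizer interface keeps
# working without performing any gradient-based update.
from mighty.utils.stub import OptimizerStub

optimizer = OptimizerStub()
optimizer.step()                 # no parameters are updated
state = optimizer.state_dict()   # checkpointing code still runs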

Methods

__init__()

add_param_group(param_group)

Add a param group to the Optimizer's param_groups.

load_state_dict(state_dict)

Loads the optimizer state.

profile_hook_step(func)

register_load_state_dict_post_hook(hook[, ...])

Register a load_state_dict post-hook which will be called after load_state_dict() is called.

register_load_state_dict_pre_hook(hook[, ...])

Register a load_state_dict pre-hook which will be called before load_state_dict() is called.

register_state_dict_post_hook(hook[, prepend])

Register a state dict post-hook which will be called after state_dict() is called.

register_state_dict_pre_hook(hook[, prepend])

Register a state dict pre-hook which will be called before state_dict() is called.

register_step_post_hook(hook)

Register an optimizer step post hook which will be called after the optimizer step.

register_step_pre_hook(hook)

Register an optimizer step pre hook which will be called before the optimizer step.

state_dict()

Returns the state of the optimizer as a dict.

step([closure])

Performs a single optimization step (parameter update).

zero_grad([set_to_none])

Resets the gradients of all optimized torch.Tensors.

Attributes

OptimizerPostHook

alias of Callable[[Self, Tuple[Any, ...], Dict[str, Any]], None]

OptimizerPreHook

alias of Callable[[Self, Tuple[Any, ...], Dict[str, Any]], Optional[Tuple[Tuple[Any, ...], Dict[str, Any]]]]

add_param_group(param_group: Dict[str, Any]) → None

Add a param group to the Optimizer's param_groups.

This can be useful when fine tuning a pre-trained network as frozen layers can be made trainable and added to the Optimizer as training progresses.

Args:
    param_group (dict): Specifies what Tensors should be optimized along with group-specific optimization options.
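
A short sketch of the fine-tuning pattern described above, using a standard torch.optim.SGD (the model and hyperparameters are illustrative):

import torch

model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.Linear(8, 2))
backbone, head = model[0], model[1]

# Start by optimizing only the head ...
opt = torch.optim.SGD(head.parameters(), lr=0.01)
# ... then unfreeze the backbone later with its own learning rate.
opt.add_param_group({"params": backbone.parameters(), "lr": 0.001})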

load_state_dict(state_dict: dict) → None[source]

Loads the optimizer state.

Args:
    state_dict (dict): optimizer state. Should be an object returned from a call to state_dict().
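
A minimal round-trip sketch, assuming the output of state_dict() is fed back unchanged (the parameters are illustrative):

import torch

params = [torch.nn.Parameter(torch.zeros(3))]
opt = torch.optim.SGD(params, lr=0.1)

checkpoint = opt.state_dict()             # object returned by state_dict()
opt_restored = torch.optim.SGD(params, lr=0.1)
opt_restored.load_state_dict(checkpoint)  # restores hyperparameters and state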

register_load_state_dict_post_hook(hook: Callable[[Optimizer], None], prepend: bool = False) → RemovableHandle

Register a load_state_dict post-hook which will be called after load_state_dict() is called. It should have the following signature:

hook(optimizer) -> None

The optimizer argument is the optimizer instance being used.

The hook will be called with argument self after calling load_state_dict on self. The registered hook can be used to perform post-processing after load_state_dict has loaded the state_dict.

Args:
    hook (Callable): The user-defined hook to be registered.
    prepend (bool): If True, the provided post-hook will be fired before all the already registered post-hooks on load_state_dict. Otherwise, the provided hook will be fired after all the already registered post-hooks. (default: False)

Returns:
    torch.utils.hooks.RemovableHandle: a handle that can be used to remove the added hook by calling handle.remove()
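
A sketch of a post-hook that logs after optimizer state is restored (the hook name and setup are illustrative):

import torch

opt = torch.optim.SGD([torch.nn.Parameter(torch.zeros(3))], lr=0.1)

def loaded_hook(optimizer):
    # Runs after load_state_dict(); must return None per the signature above.
    print(f"restored {len(optimizer.param_groups)} param group(s)")

handle = opt.register_load_state_dict_post_hook(loaded_hook)
opt.load_state_dict(opt.state_dict())  # hook fires here
handle.remove()                        # detach the hook when done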

register_load_state_dict_pre_hook(hook: Callable[[Optimizer, Dict[str, Any]], Dict[str, Any] | None], prepend: bool = False) → RemovableHandle

Register a load_state_dict pre-hook which will be called before load_state_dict() is called. It should have the following signature:

hook(optimizer, state_dict) -> state_dict or None

The optimizer argument is the optimizer instance being used and the state_dict argument is a shallow copy of the state_dict the user passed in to load_state_dict. The hook may modify the state_dict in place or optionally return a new one. If a state_dict is returned, it will be the one loaded into the optimizer.

The hook will be called with argument self and state_dict before calling load_state_dict on self. The registered hook can be used to perform pre-processing before the load_state_dict call is made.

Args:
    hook (Callable): The user-defined hook to be registered.
    prepend (bool): If True, the provided pre-hook will be fired before all the already registered pre-hooks on load_state_dict. Otherwise, the provided hook will be fired after all the already registered pre-hooks. (default: False)

Returns:
    torch.utils.hooks.RemovableHandle: a handle that can be used to remove the added hook by calling handle.remove()
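
A sketch of a pre-hook that patches the incoming state_dict before it is loaded, here overriding the stored learning rate (the hook name and value are illustrative):

import torch

opt = torch.optim.SGD([torch.nn.Parameter(torch.zeros(3))], lr=0.1)

def lr_override_hook(optimizer, state_dict):
    # The hook receives a shallow copy; returning it makes the modified
    # dict the one that gets loaded.
    for group in state_dict["param_groups"]:
        group["lr"] = 0.005
    return state_dict

handle = opt.register_load_state_dict_pre_hook(lr_override_hook)
opt.load_state_dict(opt.state_dict())
assert opt.param_groups[0]["lr"] == 0.005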

register_state_dict_post_hook(hook: Callable[[Optimizer, Dict[str, Any]], Dict[str, Any] | None], prepend: bool = False) → RemovableHandle

Register a state dict post-hook which will be called after state_dict() is called. It should have the following signature:

hook(optimizer, state_dict) -> state_dict or None

The hook will be called with arguments self and state_dict after generating a state_dict on self. The hook may modify the state_dict in place or optionally return a new one. The registered hook can be used to perform post-processing on the state_dict before it is returned.

Args:
    hook (Callable): The user-defined hook to be registered.
    prepend (bool): If True, the provided post-hook will be fired before all the already registered post-hooks on state_dict. Otherwise, the provided hook will be fired after all the already registered post-hooks. (default: False)

Returns:
    torch.utils.hooks.RemovableHandle: a handle that can be used to remove the added hook by calling handle.remove()
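
A sketch of a post-hook that augments the serialized dict, e.g. tagging checkpoints with a hypothetical format_version entry (the key and hook name are illustrative, not part of the torch format):

import torch

opt = torch.optim.SGD([torch.nn.Parameter(torch.zeros(3))], lr=0.1)

def add_version(optimizer, state_dict):
    state_dict["format_version"] = 1  # hypothetical extra entry
    return state_dict                 # returned dict replaces the original

handle = opt.register_state_dict_post_hook(add_version)
assert opt.state_dict()["format_version"] == 1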

register_state_dict_pre_hook(hook: Callable[[Optimizer], None], prepend: bool = False) → RemovableHandle

Register a state dict pre-hook which will be called before state_dict() is called. It should have the following signature:

hook(optimizer) -> None

The optimizer argument is the optimizer instance being used. The hook will be called with argument self before calling state_dict on self. The registered hook can be used to perform pre-processing before the state_dict call is made.

Args:
    hook (Callable): The user-defined hook to be registered.
    prepend (bool): If True, the provided pre-hook will be fired before all the already registered pre-hooks on state_dict. Otherwise, the provided hook will be fired after all the already registered pre-hooks. (default: False)

Returns:
    torch.utils.hooks.RemovableHandle: a handle that can be used to remove the added hook by calling handle.remove()
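
A sketch of a pre-hook used as a consistency check right before serialization (the hook name is illustrative):

import torch

opt = torch.optim.SGD([torch.nn.Parameter(torch.zeros(3))], lr=0.1)

def before_save(optimizer):
    # Runs before state_dict() is generated; must return None.
    assert optimizer.param_groups, "no param groups to serialize"

handle = opt.register_state_dict_pre_hook(before_save)
checkpoint = opt.state_dict()  # hook fires first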

register_step_post_hook(hook: Callable[[Self, Tuple[Any, ...], Dict[str, Any]], None]) → RemovableHandle

Register an optimizer step post hook which will be called after optimizer step. It should have the following signature:

hook(optimizer, args, kwargs) -> None

The optimizer argument is the optimizer instance being used.

Args:
    hook (Callable): The user-defined hook to be registered.

Returns:
    torch.utils.hooks.RemovableHandle: a handle that can be used to remove the added hook by calling handle.remove()
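
A sketch of a step post-hook that counts completed updates, e.g. for logging (the counter and hook name are illustrative):

import torch

p = torch.nn.Parameter(torch.zeros(3))
opt = torch.optim.SGD([p], lr=0.1)
step_count = {"n": 0}

def count_steps(optimizer, args, kwargs):
    # Called after every optimizer step; must return None.
    step_count["n"] += 1

handle = opt.register_step_post_hook(count_steps)
p.grad = torch.ones(3)
opt.step()
assert step_count["n"] == 1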

register_step_pre_hook(hook: Callable[[Self, Tuple[Any, ...], Dict[str, Any]], Tuple[Tuple[Any, ...], Dict[str, Any]] | None]) → RemovableHandle

Register an optimizer step pre hook which will be called before optimizer step. It should have the following signature:

hook(optimizer, args, kwargs) -> None or modified args and kwargs

The optimizer argument is the optimizer instance being used. If args and kwargs are modified by the pre-hook, then the transformed values are returned as a tuple containing the new_args and new_kwargs.

Args:
    hook (Callable): The user-defined hook to be registered.

Returns:
    torch.utils.hooks.RemovableHandle: a handle that can be used to remove the added hook by calling handle.remove()
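
A sketch of a step pre-hook that rewrites the arguments passed to step(); returning (new_args, new_kwargs) replaces the originals, returning None leaves them untouched (the hook name is illustrative):

import torch

opt = torch.optim.SGD([torch.nn.Parameter(torch.zeros(3))], lr=0.1)

def strip_arguments(optimizer, args, kwargs):
    # E.g. drop a closure argument before the actual step runs.
    return (), {}

handle = opt.register_step_pre_hook(strip_arguments)
opt.step(lambda: 0.0)  # the pre-hook removes the closure before the real step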

state_dict() → dict[source]

Returns the state of the optimizer as a dict.

It contains two entries:

  • state: a Dict holding current optimization state. Its content differs between optimizer classes, but some common characteristics hold. For example, state is saved per parameter, and the parameter itself is NOT saved. state is a Dictionary mapping parameter ids to a Dict with state corresponding to each parameter.

  • param_groups: a List containing all parameter groups where each parameter group is a Dict. Each parameter group contains metadata specific to the optimizer, such as learning rate and weight decay, as well as a List of parameter IDs of the parameters in the group.

NOTE: The parameter IDs may look like indices but they are just IDs associating state with param_group. When loading from a state_dict, the optimizer will zip the param_group params (int IDs) and the optimizer param_groups (actual nn.Parameters) in order to match state WITHOUT additional verification.

A returned state dict might look something like:

{
    'state': {
        0: {'momentum_buffer': tensor(...), ...},
        1: {'momentum_buffer': tensor(...), ...},
        2: {'momentum_buffer': tensor(...), ...},
        3: {'momentum_buffer': tensor(...), ...}
    },
    'param_groups': [
        {
            'lr': 0.01,
            'weight_decay': 0,
            ...
            'params': [0]
        },
        {
            'lr': 0.001,
            'weight_decay': 0.5,
            ...
            'params': [1, 2, 3]
        }
    ]
}

step(closure: Callable[[], float] | None = ...)[source]

Performs a single optimization step (parameter update).

Args:
    closure (Callable): A closure that reevaluates the model and returns the loss. Optional for most optimizers.

Note

Unless otherwise specified, this function should not modify the .grad field of the parameters.
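
A runnable sketch of the closure protocol on a standard torch.optim.SGD (the quadratic loss is illustrative; OptimizerStub itself performs no gradient-based update):

import torch

p = torch.nn.Parameter(torch.ones(2))
opt = torch.optim.SGD([p], lr=0.1)

def closure():
    # Re-evaluate the model and return the loss, as required by step(closure).
    opt.zero_grad()
    loss = (p ** 2).sum()
    loss.backward()
    return loss

opt.step(closure)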

zero_grad(set_to_none: bool = True) → None

Resets the gradients of all optimized torch.Tensor s.

Args:
    set_to_none (bool): instead of setting to zero, set the grads to None. This will in general have a lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example:

    1. When the user tries to access a gradient and perform manual ops on it, a None attribute or a Tensor full of 0s will behave differently.

    2. If the user requests zero_grad(set_to_none=True) followed by a backward pass, .grads are guaranteed to be None for params that did not receive a gradient.

    3. torch.optim optimizers have a different behavior if the gradient is 0 or None (in one case it does the step with a gradient of 0 and in the other it skips the step altogether).
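
A short sketch contrasting the two modes on a standard torch optimizer: after set_to_none=False the gradient is a zero tensor, while after the default set_to_none=True it is None (the parameter is illustrative):

import torch

p = torch.nn.Parameter(torch.ones(2))
opt = torch.optim.SGD([p], lr=0.1)

p.grad = torch.ones(2)
opt.zero_grad(set_to_none=False)
print(p.grad)    # tensor([0., 0.])

p.grad = torch.ones(2)
opt.zero_grad()  # set_to_none=True is the default
print(p.grad)    # None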