dtw_loss_functions.soft_dtw_implementations.soft_dtw_cuda_ron.module module

class dtw_loss_functions.soft_dtw_implementations.soft_dtw_cuda_ron.module.SoftDTW(*, gamma: float = 1.0, bandwidth: float | None = None, normalize: bool = False, dist: str = 'sqeuclidean', fused: bool | None = None)

Bases: Module

User-facing Soft-DTW loss module.

  • dist: currently only 'sqeuclidean' (squared Euclidean pairwise distance) is supported

  • normalize: if True, return SoftDTW(x, y) - 0.5 * (SoftDTW(x, x) + SoftDTW(y, y)) instead of the raw SoftDTW(x, y) (see the usage sketch below)

  • fused:

    - None -> auto (use the fused kernel only when possible)
    - True -> require fused (error if fused is not possible)
    - False -> never fused (always materialize the distance matrix D and use D-based autograd)
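
As a quick illustration of these parameters, here is a minimal usage sketch. The input shape (batch, sequence_length, feature_dim) and the per-batch-element loss output are assumptions based on common Soft-DTW implementations and should be verified against this module:

    import torch
    from dtw_loss_functions.soft_dtw_implementations.soft_dtw_cuda_ron.module import SoftDTW

    # Assumed shapes: (batch, sequence_length, feature_dim); the two inputs
    # may have different sequence lengths.
    x = torch.randn(8, 50, 3, requires_grad=True)
    y = torch.randn(8, 60, 3)

    # Raw Soft-DTW with the default squared-Euclidean distance; fused=None
    # lets the module pick the fused kernel when it is available.
    sdtw = SoftDTW(gamma=0.1, bandwidth=None, normalize=False, fused=None)
    loss = sdtw(x, y).mean()  # call the instance, not .forward(), so hooks run
    loss.backward()           # gradients flow back to x

    # Normalized variant: SoftDTW(x, y) - 0.5 * (SoftDTW(x, x) + SoftDTW(y, y)),
    # which vanishes when x == y.
    sdtw_div = SoftDTW(gamma=0.1, normalize=True)
    div = sdtw_div(x, y).mean()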

Methods

forward(x, y)

Define the computation performed at every call.

forward(x: Tensor, y: Tensor) → Tensor

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the forward-pass computation must be defined inside this function, call the Module instance itself rather than forward() directly: the instance call takes care of running the registered hooks, while a direct forward() call silently ignores them.
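
To make this note concrete, here is a small sketch contrasting an instance call with a direct forward() call; register_forward_hook is the standard torch.nn.Module mechanism, and the input shapes are assumed as in the earlier sketch:

    import torch
    from dtw_loss_functions.soft_dtw_implementations.soft_dtw_cuda_ron.module import SoftDTW

    sdtw = SoftDTW(gamma=0.1)
    x, y = torch.randn(4, 20, 2), torch.randn(4, 25, 2)

    # A hypothetical hook that just reports when it fires.
    handle = sdtw.register_forward_hook(
        lambda module, inputs, output: print("forward hook fired")
    )

    sdtw(x, y)          # prints: instance calls run registered hooks
    sdtw.forward(x, y)  # silent: calling forward() directly skips the hooks

    handle.remove()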