The issue:
The current ScalerModule implementation inherits the bias and scale parameters from the given Scaler but then applies its own transform implementation. This is not transparent to the user: even if one overrides the base implementation of Scaler.transform, the ScalerModule will still scale the input in its own way.
This may lead to unexpected results, in particular because SpatioTemporalDataset wraps every given Scaler into a ScalerModule. A minimal sketch of the behavior is shown below.
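The following sketch illustrates the problem with simplified stand-ins for Scaler and ScalerModule; the class and attribute names mirror tsl's API, but the bodies are only illustrative, not the library's actual code:

```python
import torch


class Scaler:
    """Simplified stand-in for a fitted Scaler with bias/scale parameters."""

    def __init__(self, bias=0., scale=1.):
        self.bias = bias
        self.scale = scale

    def transform(self, x):
        return (x - self.bias) / self.scale


class LogScaler(Scaler):
    """User-defined scaler that overrides transform with a custom rule."""

    def transform(self, x):
        return torch.log1p((x - self.bias) / self.scale)


class ScalerModule(torch.nn.Module):
    """Current behavior: copies bias/scale but re-implements transform."""

    def __init__(self, scaler):
        super().__init__()
        self.bias = torch.as_tensor(scaler.bias)
        self.scale = torch.as_tensor(scaler.scale)

    def transform(self, x):
        # Ignores the (possibly overridden) Scaler.transform.
        return (x - self.bias) / self.scale


x = torch.tensor([0., 1., 2.])
scaler = LogScaler(bias=0., scale=2.)
module = ScalerModule(scaler)
print(scaler.transform(x))   # log-scaled values
print(module.transform(x))   # plain affine scaling: the override is lost
```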
Proposed Solution:
The ScalerModule should also inherit the way in which the original Scaler implements the transform.
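One possible way to realize this, assuming the wrapped Scaler's transform can operate on torch tensors, is for the module to keep a reference to the original scaler and delegate to its transform. The sketch below reuses the Scaler and LogScaler stand-ins from the example above; DelegatingScalerModule is a hypothetical name used only for illustration, not the library's implementation:

```python
class DelegatingScalerModule(torch.nn.Module):
    """Sketch: inherit the transform logic from the wrapped Scaler."""

    def __init__(self, scaler):
        super().__init__()
        self.bias = torch.as_tensor(scaler.bias)
        self.scale = torch.as_tensor(scaler.scale)
        self._scaler = scaler  # keep the original scaler around

    def transform(self, x):
        # Delegate, so a user-overridden Scaler.transform is respected.
        return self._scaler.transform(x)


module = DelegatingScalerModule(LogScaler(bias=0., scale=2.))
print(module.transform(torch.tensor([0., 1., 2.])))  # matches LogScaler
```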
Hi Luca, you're right. The ScalerModule must be a transparent and clear translation of a Scaler for working with torch tensors. I'll put this in the roadmap.
If you already implemented a smart way to do it, feel free to contribute with a PR!