Oftentimes we just want to get an embedding from the input data, not the output data. Currently we only support embeddings that come with a model prediction.
It would be nice to have embeddings run just once on the input data.
One option is to have this be a @distill function that we annotate as an embedding. We could also generalize this to annotations for projections too?
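A rough sketch of what that annotation could look like, as a standalone example. None of these names (`DistillRegistry`, `kind=...`) are the repo's actual API; they just illustrate registering input-only embedding/projection functions separately from prediction-dependent ones:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class DistillRegistry:
    # Hypothetical buckets: embedding/projection functions only see the input
    # data, so the runner could compute them once up front, before any model runs.
    embeddings: Dict[str, Callable] = field(default_factory=dict)
    projections: Dict[str, Callable] = field(default_factory=dict)
    metrics: Dict[str, Callable] = field(default_factory=dict)

REGISTRY = DistillRegistry()

def distill(kind: str = "metric"):
    """Register a distill function by kind: 'embedding', 'projection', or 'metric'."""
    def wrap(fn: Callable) -> Callable:
        getattr(REGISTRY, kind + "s")[fn.__name__] = fn
        return fn
    return wrap

@distill(kind="embedding")
def input_embedding(inputs):
    # Stand-in encoder: a real one would call a sentence/image encoder on the inputs.
    return [[float(len(str(x))), float(hash(str(x)) % 7)] for x in inputs]
```

With something like this, `REGISTRY.embeddings` holds every input-only function, so the runner could evaluate them once over the dataset instead of once per model prediction.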
Yeah, but for stuff that needs the full data to compute, we would need a different system.
For example, t-SNE could not work on distill unless we allow people to define batch sizes in the distill decorator itself, e.g. `distill(batches=1)`, so it just gets run on the entire dataset.
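A quick sketch of how that `batches` option might behave. The names and defaults here are made up for illustration, not the repo's real decorator:

```python
import numpy as np
from functools import wraps

def distill(batches=None):
    """Hypothetical option: batches=1 hands the whole dataset to the function in
    a single call, which full-data methods like t-SNE require."""
    def wrap(fn):
        @wraps(fn)
        def runner(data):
            if batches == 1:
                return fn(data)  # one call over the entire dataset
            # Default behaviour: chunk the data and stitch the results back together.
            n_chunks = batches or max(1, len(data) // 256)
            return np.concatenate([fn(c) for c in np.array_split(data, n_chunks)])
        return runner
    return wrap

@distill(batches=1)
def tsne_projection(embeddings: np.ndarray) -> np.ndarray:
    from sklearn.manifold import TSNE
    # t-SNE needs every row at once, so this only makes sense with batches=1.
    return TSNE(n_components=2, perplexity=5.0).fit_transform(embeddings)

points = tsne_projection(np.random.rand(100, 16))  # -> (100, 2) array
```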