UpTrain is an open-source, data-secure tool for ML practitioners to observe and refine their ML models by monitoring their performance, checking for (data) distribution shifts, and collecting edge cases to retrain them on. It integrates seamlessly with your existing production pipelines and takes minutes to get started ⚡.
- Data Drift Checks - Identify distribution shifts in your model inputs.
- Performance Monitoring - Track the performance of your models in real time and get alerted as soon as a dip is observed.
- Embeddings Support - Specialized dashboards to understand model-inferred embeddings.
- Edge Case Signals - User-defined signals and statistical techniques to detect out-of-distribution data points.
- Data Integrity Checks - Checks for missing or inconsistent data, duplicate records, data quality, etc. (see the sketch after this list).
- Customizable Metrics - Define custom metrics that make sense for your use case.
- Automated Retraining - Automate model retraining by attaching your training and inference pipelines.
- Model Bias - Track popularity bias in your recommendation models.
- Data Security - Your data never leaves your machine.
- Real-time Dashboards - Visualize your model's health live.
- Slack Integration - Get alerts on Slack.
- Label Shift - Identify drifts in your predictions. Especially useful when ground truth is unavailable.
- Prediction Stability - Filter cases where model predictions are not stable.
- AI Explainability - Understand the relative importance of individual features on predictions.
- Adversarial Checks - Combat adversarial attacks.
And more.
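To give a flavor of what a couple of these checks look like under the hood, here is a minimal, illustrative sketch of a data integrity check and a user-defined edge-case signal on a batch of inference inputs. This is plain pandas for intuition only, not UpTrain's API; the column names and the age rule are made up for the example.

```python
# Minimal illustration of a data integrity check and an edge-case signal.
# Generic pandas code for intuition only, not UpTrain's API; the columns
# and the "age > 120" rule are hypothetical.
import pandas as pd

batch = pd.DataFrame({
    "age": [34, None, 29, 34, 131],
    "country": ["US", "IN", None, "US", "US"],
})

missing_ratio = batch.isna().mean()        # data integrity: share of missing values per column
duplicate_rows = batch.duplicated().sum()  # data integrity: exact duplicate records
edge_cases = batch[batch["age"] > 120]     # edge-case signal: implausible values to collect for retraining

print(missing_ratio)
print("duplicates:", duplicate_rows)
print("edge cases:", len(edge_cases))
```

UpTrain bundles checks along these lines behind its configurable monitors, dashboards, and alerts, so you don't have to hand-roll them.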
You can quickly get started with Google Colab here.
To run it on your machine, follow the steps below:
```bash
# Install the package through pip
pip install uptrain

# Clone the repository and run the example notebooks
git clone git@github.com:uptrain-ai/uptrain.git
cd uptrain/examples
pip install jupyterlab
jupyter lab
```
For more info, visit our get started guide.
UpTrain in action 🎬
One of the most common use cases of ML today is language models, be it text summarization, NER, chatbots, language translation, etc. UpTrain provides ways to visualize differences between the training and real-world data via UMAP clustering of text embeddings (inferred from BERT).
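For intuition, here is a rough sketch of that kind of projection using off-the-shelf libraries (sentence-transformers for BERT-style embeddings and umap-learn for the 2-D projection). The encoder name and the toy sentences are placeholders; UpTrain's dashboards handle this projection for you.

```python
# Rough sketch of projecting training vs. production text embeddings with UMAP.
# Uses off-the-shelf libraries for illustration; the encoder name and the toy
# sentences are placeholders, not what UpTrain uses internally.
import umap
import matplotlib.pyplot as plt
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any BERT-style encoder works

train_texts = [f"order {i} was shipped and delivered on time" for i in range(50)]
prod_texts = [f"the chatbot gave a confusing answer about topic {i}" for i in range(50)]

embeddings = encoder.encode(train_texts + prod_texts)
projection = umap.UMAP(n_components=2, random_state=42).fit_transform(embeddings)

n_train = len(train_texts)
plt.scatter(projection[:n_train, 0], projection[:n_train, 1], label="training", alpha=0.6)
plt.scatter(projection[n_train:, 0], projection[n_train:, 1], label="production", alpha=0.6)
plt.legend()
plt.title("UMAP projection of text embeddings")
plt.show()
```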
UpTrain also provides statistical measures to quantify these differences and enables automated alerts whenever the drift crosses a certain threshold.
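One way such a drift score could be computed (purely illustrative; the metric and threshold below are arbitrary choices, not UpTrain internals) is to compare the embedding distributions dimension by dimension and alert when the score crosses a threshold:

```python
# Illustrative drift score with a threshold alert; the metric (mean per-dimension
# earth mover's distance) and the 0.2 threshold are arbitrary examples.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
train_embeddings = rng.normal(0.0, 1.0, size=(1000, 32))  # stand-in for training embeddings
prod_embeddings = rng.normal(0.5, 1.0, size=(1000, 32))   # stand-in for production embeddings

drift_score = np.mean([
    wasserstein_distance(train_embeddings[:, d], prod_embeddings[:, d])
    for d in range(train_embeddings.shape[1])
])

DRIFT_THRESHOLD = 0.2
if drift_score > DRIFT_THRESHOLD:
    print(f"Drift alert: score {drift_score:.3f} exceeds threshold {DRIFT_THRESHOLD}")
```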
Machine learning (ML) models are widely used to make critical business decisions. Still, no ML model is 100% accurate, and accuracy deteriorates further over time 😣. For example, sales predictions become inaccurate over time due to shifts in consumer buying habits. Additionally, due to the black-box nature of ML models, it's challenging to identify and fix their problems.
UpTrain solves this. We make it easy for data scientists and ML engineers to understand where their models are going wrong and help them fix these issues before anyone else complains 🗣️.
UpTrain can be used with a wide variety of machine learning models, such as LLMs, recommendation models, prediction models, computer vision models, etc.
We are constantly working to make UpTrain better. Want a new feature or need any integrations? Feel free to create an issue or contribute directly to the repository.
This repo is published under Apache 2.0 license. We're currently focused on developing non-enterprise offerings that should cover most use cases. In the future, we will add a hosted version which we might charge for.
We are continuously adding tons of features and use cases. Please support us by giving the project a star ⭐!
We are building UpTrain in public. Help us improve by giving your feedback here.
We welcome contributions to UpTrain. Please see our contribution guide for details.