Name Story: the name Manta comes from Dota 2's Manta Style, an item that creates two images of your hero, much like peers in a P2P network.
We're reframing Manta as a general distributed cache system with POSIX semantics; the current capabilities are still available in the latest v0.0.4 release. Let's see what happens.
Note: llmaz is just one kind of integration; Manta can be deployed and used independently.
- Model Hub Support: Models can be downloaded directly from model hubs (Huggingface, etc.) or object storage with no extra effort.
- Model Preheat: Models can be preloaded to the cluster, or to specified nodes, to accelerate model serving.
- Model Cache: Models will be cached as chunks after downloading for faster model loading.
- Model Lifecycle Management: Model lifecycle is managed automatically with different strategies, like `Retain` or `Delete`.
- Plugin Framework: Filter and Score plugins can be extended to pick the best candidates.
- Memory Management (WIP): Manage the memory reserved for caching, together with an LRU algorithm for GC.
- Manta is not an all-in-one solution for model management; instead, it offers a lightweight way to utilize idle bandwidth and cost-effective disks, helping you save money.
- It requires no additional components like databases or storage systems, simplifying setup and reducing effort.
- All models are stored under the host path `/mnt/models/`.
- After all, it's just a cache system.
Read the Installation for guidance.
Here is a sample to preload the `Qwen/Qwen2.5-0.5B-Instruct` model. Once preheated, models are fetched from the cache instead of from a cold start.
```yaml
apiVersion: manta.io/v1alpha1
kind: Torrent
metadata:
  name: torrent-sample
spec:
  hub:
    name: Huggingface
    repoID: Qwen/Qwen2.5-0.5B-Instruct
```
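You can apply this manifest with `kubectl apply -f`, then watch it with `kubectl get torrents` until preheating completes; the exact status columns reported depend on the Manta release.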
If you want to preload the model to specified nodes, use the `nodeSelector`:
```yaml
apiVersion: manta.io/v1alpha1
kind: Torrent
metadata:
  name: torrent-sample
spec:
  hub:
    name: Huggingface
    repoID: Qwen/Qwen2.5-0.5B-Instruct
  nodeSelector:
    foo: bar
```
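Here, `foo: bar` is just a placeholder; the `nodeSelector` matches Kubernetes node labels, so you can mark a node as a preheat target with, e.g., `kubectl label node <node-name> foo=bar`.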
Once you have a Torrent, you can access the model directly from the host path `/mnt/models/`. All you need to do is set the Pod label like:
```yaml
metadata:
  labels:
    manta.io/torrent-name: "torrent-sample"
```
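For illustration, here is a minimal Pod sketch that puts the pieces together: the `manta.io/torrent-name` label from above plus a `hostPath` volume exposing `/mnt/models/` to the container. The Pod name, image, and in-container mount path are placeholders, not part of Manta's API:

```yaml
# Minimal sketch (placeholders marked): a Pod that consumes the preheated model.
apiVersion: v1
kind: Pod
metadata:
  name: model-server                           # hypothetical name
  labels:
    manta.io/torrent-name: "torrent-sample"    # ties the Pod to the Torrent
spec:
  containers:
  - name: server
    image: your-serving-image:latest           # placeholder image
    volumeMounts:
    - name: models
      mountPath: /mnt/models                   # in-container path is a free choice
  volumes:
  - name: models
    hostPath:
      path: /mnt/models                        # where Manta caches the models
      type: Directory
```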
Note: you can make the Torrent `Standby` by setting `preheat` to `false` (`true` by default); preheating then happens at runtime, which obviously slows down model loading.
```yaml
apiVersion: manta.io/v1alpha1
kind: Torrent
metadata:
  name: torrent-sample
spec:
  preheat: false
```
If you want to remove the model weights once the Torrent is deleted, set `reclaimPolicy: Delete`; the default is `Retain`:
```yaml
apiVersion: manta.io/v1alpha1
kind: Torrent
metadata:
  name: torrent-sample
spec:
  hub:
    name: Huggingface
    repoID: Qwen/Qwen2.5-0.5B-Instruct
  reclaimPolicy: Delete
```
For more details, refer to the APIs.
In the long term, we hope to make Manta a unified cache system for MLOps.
- Preloading datasets from model hubs
- RDMA support for faster model loading
- More integrations with MLOps systems, including training and serving
Join us for more discussions:
- Slack Channel: #manta
All kinds of contributions are welcome! Please follow CONTRIBUTING.md.