Do we have an IO issue on storage node? #170
@benoit74 You mean 4 TB of ZIMs for the prod library, I guess?
The current prod library is 4.23 TiB.
If there is no obvious technical optimisation in view, this looks like the logical approach. But we should have more buffer and probably count on around 8 TB in at least RAID 5. How much would that cost?
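For context on the sizing question above, here is a rough sketch of the RAID 5 capacity arithmetic. The disk size and monthly price are placeholder assumptions for illustration, not quotes from any provider:

```python
# Rough RAID 5 sizing sketch for the ~8 TB target discussed above.
# Disk size and monthly price are placeholder assumptions.

def raid5_usable_tb(disk_count: int, disk_size_tb: float) -> float:
    """RAID 5 keeps one disk's worth of parity, so usable space is (N - 1) * disk size."""
    if disk_count < 3:
        raise ValueError("RAID 5 needs at least 3 disks")
    return (disk_count - 1) * disk_size_tb

DISK_SIZE_TB = 4.0           # assumed disk size
PRICE_PER_DISK_MONTH = 20.0  # assumed monthly price per disk, illustrative only

for disks in range(3, 6):
    usable = raid5_usable_tb(disks, DISK_SIZE_TB)
    cost = disks * PRICE_PER_DISK_MONTH
    print(f"{disks} x {DISK_SIZE_TB:.0f} TB in RAID 5 -> {usable:.0f} TB usable, ~{cost:.0f}/month")
```

With 4 TB disks, three disks already give roughly the 8 TB of usable space mentioned above; the actual cost depends entirely on the provider's pricing.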
As I said, this is only food for thought for now. Having thought about it a bit (I just had a shower 🤣), I think we have other tracks to follow:
Duplicate of #227, solved by moving the workload to another cloud provider.
Ah, I wanted to comment that we haven't enabled SSD, but there's #246 just for that :)
Global overview of the situation; only some food for thought for now.
dev-library consumes about 100 to 180 IOPS (read + write) and 10 to 25 MB/s (read + write).
dev-library-generator is quite fast (2-3 minutes) but consumes even more.
As a comparison, each library-data (prod, serving ZIMs) consumes 3-4 IOPS on average (read + write, with some peaks at 10 to 30) and about 1 MB/s (read + write, with some peaks at 4).
But rsyncd is even more intensive.
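As a side note, here is a minimal sketch of how per-disk IOPS and throughput figures like those above could be sampled on a host. It uses psutil, which is an assumption; the figures quoted in this issue come from whatever monitoring the cluster already runs:

```python
# Minimal sketch: sample per-disk IOPS and MB/s over a short window with psutil.
import time
import psutil

INTERVAL_S = 10  # sampling window in seconds

before = psutil.disk_io_counters(perdisk=True)
time.sleep(INTERVAL_S)
after = psutil.disk_io_counters(perdisk=True)

for disk, b in before.items():
    a = after.get(disk)
    if a is None:
        continue  # disk disappeared between samples
    iops = ((a.read_count - b.read_count) + (a.write_count - b.write_count)) / INTERVAL_S
    mbps = ((a.read_bytes - b.read_bytes) + (a.write_bytes - b.write_bytes)) / INTERVAL_S / 1e6
    print(f"{disk}: {iops:.0f} IOPS (R+W), {mbps:.1f} MB/s (R+W)")
```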
One idea from @rgaudin: should we move the prod library (the most time-sensitive application on this server) to a new server, with prod ZIMs mirrored from storage, where the service could run more quietly? (It would only need about 4G of ZIMs: no double copy, no dev ZIMs, no nightlies, ...)
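To illustrate the mirroring idea, a hypothetical rsync pull of only the prod ZIMs onto such a dedicated server could look like the sketch below. The host name, rsync module and destination path are made up for illustration; the real layout would come from the infrastructure:

```python
# Hypothetical sketch of the mirroring idea: pull only the prod ZIMs from the
# storage server onto a dedicated library server. Host and paths are illustrative.
import subprocess

SOURCE = "storage.example.org::download/zim/"  # assumed rsyncd module/path
DEST = "/data/zim/"                            # assumed local path on the new server

subprocess.run(
    [
        "rsync",
        "-a",                   # archive mode: recurse, preserve times and perms
        "--delete",             # drop ZIMs that were removed upstream
        "--prune-empty-dirs",   # do not keep directories that end up empty
        "--include=*/",         # recurse into directories
        "--include=*.zim",      # only mirror ZIM files (no nightlies, no dev builds)
        "--exclude=*",          # skip everything else
        SOURCE,
        DEST,
    ],
    check=True,
)
```

Run periodically (e.g. from cron) on the new server, this would keep a read-only copy of the prod library without the dev ZIMs, nightlies, or the double copy mentioned above.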