Memory Usage WebAPI #2358
WebAPI uses a lot of caching, but that caching isn't user-specific. So you might want to see what it looks like when, say, 10 users log in: memory shouldn't expand linearly with the number of users. Of course, as users fetch results there is the intermediate memory usage of materialising those results in the service tier, so 10 users doing things exactly simultaneously will cause spikes in memory usage, but Atlas isn't a high-transaction sort of application. As far as identifying memory consumption, it would take some profiling to see where memory is consumed. The service itself is stateless, so we shouldn't run into situations with memory leaks or the like (though of course, all software has bugs). WebAPI isn't a microservice architecture, so there's no 'offloading to other services' that you can do.
Thanks Chris. I guess my concern is partly the number of users, but also the number of different data sources we might have and their scale. In our use case I'm not exactly sure how many datasets we'll have at any given time, but we are likely to have multiple data sources for various different NHS organisations, and these tend to add up quickly in my experience. My concern would be around creating a solution that is overly hungry on any particular resource, whether memory, CPU, disk or otherwise, and not really scalable, and then trying to fix this once we're already at the limit rather than addressing it up front.

Thinking out loud here, as I've not yet reviewed the codebase to see how caching works: in theory, could we not abstract away the caching service? Those who want to could keep the existing caching methodology inside the WebAPI service, while those of us with more of an enterprise setup could use a caching server such as Redis (or similar) and offload that way. While this may seem like moving the problem, it potentially opens up a variety of caching strategies that could let us better control usage and future-proof a scalable solution. Could this be worth investigating as an idea for the next major version of WebAPI?
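The abstraction suggested above could be sketched as a small interface that WebAPI code depends on, with the current in-process behaviour as the default implementation and a Redis-backed one as a drop-in alternative. A minimal sketch, assuming nothing about WebAPI's actual internals (the `ResultCache` interface and class names here are hypothetical; in a Spring application the conventional route would be Spring's own cache abstraction, `org.springframework.cache.CacheManager`, swapping in a Redis-backed `CacheManager`):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical abstraction: service code depends on this interface
// rather than a concrete cache, so deployments can swap implementations.
interface ResultCache {
    void put(String key, String value);
    String get(String key); // returns null if the key is absent
}

// Default implementation: the current in-process, per-JVM behaviour.
class InMemoryResultCache implements ResultCache {
    private final Map<String, String> store = new ConcurrentHashMap<>();
    public void put(String key, String value) { store.put(key, value); }
    public String get(String key) { return store.get(key); }
}

// An enterprise deployment would provide a second implementation of
// ResultCache backed by Redis, moving cache memory out of the WebAPI heap.

public class CacheSketch {
    public static void main(String[] args) {
        ResultCache cache = new InMemoryResultCache();
        cache.put("cohort:42", "{...}");
        System.out.println(cache.get("cohort:42"));
    }
}
```

The design point is that the heavy memory consumer becomes pluggable: small installs keep the in-heap default, while larger ones externalise it without touching calling code.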
Caching is definitely something we would want to revisit as part of the WebAPI 3.x update.
When WebAPI is started on our Kubernetes cluster it is currently using 1168Mi of memory when idle, whereas Hades is using 75Mi and Atlas is using 4Mi. This makes WebAPI relatively expensive.
I've seen a recommendation on other threads for at least 2GB, and other recommendations suggesting up to 16GB of memory be allocated to the environment. That is fine in itself, but if the service is using 1GB when idle then I worry how much this could expand when in use: with a single user logged in and using the base source, it got up to 1937Mi.
Is there any way in which we could reduce the memory requirements of the service (especially when idle), or offload this requirement onto other services? Are there any magic settings which might help?
And what would the recommendation be for right-sizing this service for use at scale in a production environment with multiple data sources?
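On the "magic settings" question: WebAPI's footprint is largely the JVM heap, which is capped by standard JVM options rather than anything WebAPI-specific, e.g. `-Xmx2g`, or in containers `-XX:MaxRAMPercentage=75.0` on a recent JDK (both are standard HotSpot flags; the right values for WebAPI are an open question, not a recommendation here). A minimal sketch for verifying, from inside the pod, what heap ceiling the JVM actually picked up (the class name is illustrative):

```java
// Prints the heap ceiling the running JVM has adopted, useful for
// confirming that container memory limits or -Xmx settings took effect.
public class HeapInfo {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MiB");
    }
}
```

Note that `maxMemory()` reports only the heap; the JVM's total resident memory also includes metaspace, thread stacks, and native buffers, so the container limit needs headroom above `-Xmx`.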