Cache the block which most recently successfully allocated something #1284
Conversation
Otherwise, once the most recently allocated block was full, every allocation would need to scan the whole collection of blocks to find free space, which could take a while if lots of blocks had already been allocated but still contained free space left behind by earlier frees.
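For illustration, here's a minimal sketch of the caching idea, using hypothetical `MemoryBlock`/`Allocator` types rather than the actual VSG classes: the allocator first retries the block that satisfied the previous allocation, and only falls back to scanning every block (and then creating a new one) when that fast path fails.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Simplified stand-in for a pre-reserved block of memory. Bump allocation is
// used for brevity; a real allocator would track per-block free lists.
struct MemoryBlock
{
    std::vector<std::byte> storage = std::vector<std::byte>(64 * 1024);
    std::size_t offset = 0;

    void* allocate(std::size_t size)
    {
        if (offset + size > storage.size()) return nullptr; // block is full
        void* ptr = storage.data() + offset;
        offset += size;
        return ptr;
    }
};

class Allocator
{
public:
    void* allocate(std::size_t size)
    {
        // Fast path: retry the block that satisfied the previous allocation.
        if (_lastSuccess)
        {
            if (void* ptr = _lastSuccess->allocate(size)) return ptr;
        }

        // Slow path: scan every block, picking up free space in older blocks.
        for (auto& block : _blocks)
        {
            if (void* ptr = block->allocate(size))
            {
                _lastSuccess = block.get(); // cache for subsequent allocations
                return ptr;
            }
        }

        // Everything is full: create a new block and cache it
        // (assumes the request fits in a fresh block).
        _blocks.push_back(std::make_unique<MemoryBlock>());
        _lastSuccess = _blocks.back().get();
        return _lastSuccess->allocate(size);
    }

private:
    std::vector<std::unique_ptr<MemoryBlock>> _blocks;
    MemoryBlock* _lastSuccess = nullptr;
};
```

Without the `_lastSuccess` cache, every allocation made after the newest block filled up would pay for the full scan.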
ooo, that's a big improvement. I think this change might need some refinement, but broadly it looks like a sensible direction to follow. Once I've wrapped up my present dev work, I'll do a proper review.
Generally, I'd be in favour of using an off-the-shelf allocator instead of a custom one, so we could be relatively sure there weren't any undiscovered pathological cases for people to run into. However, I did a review of what's available, and the ones that natively supported something like the VSG affinity system (which they mostly referred to as multiple heaps) were either tied to a specific platform or didn't support allocating and freeing from one heap on different threads. Mimalloc has microsoft/mimalloc#925, where one of the maintainers states that they eventually intend to make this possible, so the situation may improve, but we're stuck with something custom in the short term.
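As a rough illustration of that requirement (this is not VSG's actual API, and the `Affinity`/`Heap` names are made up for the example): each affinity gets its own heap, and a pointer allocated from a heap on one thread must be freeable from any other thread.

```cpp
#include <cstdlib>
#include <mutex>
#include <thread>

// Hypothetical heap categories standing in for the VSG affinity system.
enum class Affinity { Objects, Nodes, Data };

class Heap
{
public:
    void* allocate(std::size_t size)
    {
        std::scoped_lock lock(_mutex);
        return std::malloc(size); // stand-in for per-heap block management
    }

    void deallocate(void* ptr)
    {
        std::scoped_lock lock(_mutex); // must be safe from any thread
        std::free(ptr);
    }

private:
    std::mutex _mutex;
};

int main()
{
    Heap heaps[3]; // one heap per affinity
    Heap& nodes = heaps[static_cast<int>(Affinity::Nodes)];

    void* p = nodes.allocate(64);                      // allocated on the main thread
    std::thread worker([&] { nodes.deallocate(p); });  // freed on another thread
    worker.join();
}
```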
I've altered the vsggroups example so it can detect the problems without this PR and show the benefit when this PR is added. That's available as vsg-dev/vsgExamples#318. Here are some raw terminal logs from some testing:

Original
With this PR
I'll collate the data to make it nicer to read...
The data from another set of runs (the first set was missing some lines because Windows 10 still ships PowerShell with PSReadLine 2.0 installed, which has a bug, fixed in 2018, that makes some lines disappear from the scrollback buffer with some terminals), formatted as a table so it's vaguely readable:

Original
New
In light of this, I can say that:

- The OS doing secret stuff under the hood should have been prevented by running the std allocator test first, as that should have meant there was enough memory that wasn't being used for disk cache or anything else that might need paging out.
Wow, that's a huge demonstration of the multi create/destroy performance bug. Kudos. I'm aiming to wrap up the PolytopeIntersector work today, and then I can do a full review/test.
I have done a code review, and the change as-is looks perfectly safe and straightforward, so I've gone ahead and merged it. I'm surprised I missed this small optimization of the allocation fallback code, as it seems so obvious now, but hey, gotta love open source and many eyeballs; even one extra pair can make a difference :-) I'll have a longer think about whether there's a further refinement we can make, but even if I come up with something, it won't make a massive further improvement like this fix did, as the main bottleneck has been resolved.
For the app I've tried this with: without this PR, loading a large scene takes 1:08 and reloading it takes 19:58; with this PR, initial loading takes 1:04 (so about the same) and reloading takes only 0:54, just 4.5% as long as without. I'm going to run some of the allocation benchmarks from the vsgExamples AllocatorRefactor branch, and probably tweak them so they also measure how much slower or faster reusing blocks is, as it's possible I've ended up making something worse.
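As a sketch of the kind of reuse measurement I have in mind (using `malloc` as a stand-in allocator and arbitrary sizes/counts, not the actual vsgExamples benchmark code): time a first round of fresh allocations, free everything, then time a second round that has to reuse the freed space.

```cpp
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Time one round of `count` allocations of `size` bytes each.
static double timeRound(std::size_t count, std::size_t size, std::vector<void*>& ptrs)
{
    auto start = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < count; ++i) ptrs.push_back(std::malloc(size));
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
}

int main()
{
    constexpr std::size_t count = 1'000'000;
    constexpr std::size_t size = 64; // arbitrary workload
    std::vector<void*> ptrs;
    ptrs.reserve(count);

    double fresh = timeRound(count, size, ptrs); // fresh allocations

    for (void* p : ptrs) std::free(p); // release everything
    ptrs.clear();

    double reuse = timeRound(count, size, ptrs); // allocations reusing freed space

    std::printf("fresh: %.3fs, reuse: %.3fs\n", fresh, reuse);
    for (void* p : ptrs) std::free(p);
}
```

Comparing the two timings should show whether the reuse path has regressed relative to allocating fresh blocks.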