srtp_stream_list_ctx_t_: standarize naming #688
Conversation
Unrelated to this PR, but you may want to consider changing the grow factor of the array to 1.5 (i.e. to …). Also, while at it, you may want to check for capacity overflow, i.e. …
@Lastique, nice comments. I'll make some modifications in this PR.
Actually, this condition is not quite correct. It should be something like this: …
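The exact snippets from these comments are not preserved in this transcript. Purely as a hypothetical sketch (the helper name and signature below are not from the PR), an overflow check for a factor-2 growth is usually phrased so that the test happens before the multiplication can silently wrap:

```c
#include <stdint.h>   /* SIZE_MAX */
#include <stddef.h>   /* size_t */

/* Hypothetical sketch only -- not the condition quoted in the comment above.
 * Unsigned multiplication wraps silently in C, so the overflow test has to
 * be done via division before the new byte count is computed. */
static int grow_would_overflow(size_t capacity, size_t entry_size)
{
    /* true if capacity * 2 * entry_size does not fit in a size_t */
    return entry_size != 0 && capacity > SIZE_MAX / 2 / entry_size;
}
```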
The PR is now ready for review.
(force-pushed from 978879d to 0497fc4)
I've rebased to a single commit in order to have a better git history.
I read the material at the referenced link, but I fail to see the problem. If a vector has 100 elements and it grows by even 1, it will have to allocate a new buffer. If calling … What am I missing?
A factor of 2 will never let you reuse previously used chunks. That will also contribute to memory fragmentation.
Sure, but those chunks will be returned, and the OS will (ideally coalesce those into a larger contiguous chunk and) give those chunks to other application logic that requests memory. If the OS does not coalesce memory, then those smaller, previously allocated chunks will remain fragmented and could not be given to a vector that is growing (even at 1.5*C), since all chunks will be smaller than the required size. If the OS does coalesce memory, sure: the vector could potentially be placed where it was before. In any complex system, though, it's likely that memory is going to be handed to something else anyway. If there is no coalescing happening, the 1.5*C allocations will actually produce more fragmentation.
This will at best only happen when the allocator has a whole unused page to free to the OS, which may not happen soon, depending on the vector sizes and the number of allocations. And that is assuming the allocator actually does release the page and not e.g. cache it for potential reuse in the future. Either way, reusing the memory within the application, without round tripping to the OS, is more efficient. The point of the 1.5 grow factor is that we allow for such reuse to happen, whereas with the factor of 2 we would not. It doesn't mean such reuse will happen, though.
@paulej I definitely cannot defend the 1.5 factor growth. Since that change is unrelated to the initial intention of this PR, we can go back to the initial factor of 2 and apply the name changes and the capacity overflow check.
I have no strong objection either way. I was just confused by the claim. Now that I understand it better, I do think that claim, while not wrong in a pure sense, is wrong in reality, since it doesn't take into account the fact that the system is doing lots of other things. It's more likely than not that previously freed memory is going to be consumed by something else. However, either works and I cannot prescribe what the best growth factor is. I suspect the best answer here is what works best for a typical, growing multi-party conference. I don't think there is a good answer, unless we know a priori what to expect.
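To make the reuse argument above concrete, here is a small standalone program (an illustration only, not libsrtp code) that tracks how much previously freed memory is available each time the buffer grows, assuming the freed chunks are adjacent and can be coalesced:

```c
/* Illustration of the growth-factor reuse argument: with factor 2, the
 * blocks freed so far never add up to the size of the next request (even
 * everything ever allocated, live block included, totals (2^n - 1)*C,
 * one C short of the 2^n*C being requested); with factor 1.5 the freed
 * total eventually exceeds the new request. */
#include <stdio.h>

int main(void) {
    const double factors[2] = { 2.0, 1.5 };

    for (int f = 0; f < 2; f++) {
        double live = 1.0;   /* currently allocated block, in units of C */
        double freed = 0.0;  /* total of all blocks freed so far, in units of C */

        printf("growth factor %.1f\n", factors[f]);
        for (int step = 1; step <= 8; step++) {
            /* the new buffer is requested while the old one is still live */
            double request = live * factors[f];
            printf("  step %d: request %6.2fC, freed so far %6.2fC -> %s\n",
                   step, request, freed,
                   freed >= request ? "freed space could hold it"
                                    : "freed space too small");
            freed += live;   /* the old buffer is released after the copy */
            live = request;
        }
    }
    return 0;
}
```

On this toy model the 1.5 factor first produces a reusable gap at the fifth growth, while the factor of 2 never does; whether a real allocator actually reuses that gap depends on coalescing, as discussed above.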
Use 'capacity' and 'size' in order to represent the allocated storage and the number of elements respectively. Check for capacity overflow.
(force-pushed from 0497fc4 to ebefcc9)
I have removed the 1.5 factor and restored the previous factor of 2. This PR now does two things: it standardizes the naming ('capacity' for the allocated storage, 'size' for the number of elements) and it adds a capacity overflow check.
As I understand from the comments, changing the growth factor deserves its own PR, with the corresponding rationale, data, etc.
use 'capacity' and 'size' in order to represent the allocated storage and the number of elements respectively.
As per @Lastique's comments here
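For context, the sketch below is a minimal illustration of the naming convention described in the commit message, with hypothetical names rather than the actual srtp_stream_list_ctx_t_ code: `capacity` counts the allocated slots, `size` counts the slots in use, and the storage grows by a factor of 2 when the two meet.

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical illustration of the 'capacity'/'size' naming scheme;
 * field and function names here are not the actual libsrtp ones. */
typedef struct {
    void **entries;    /* allocated storage */
    size_t capacity;   /* how many entries the storage can hold */
    size_t size;       /* how many entries are currently in use */
} stream_list_t;

/* Append an entry, growing the storage by a factor of 2 when it is full. */
static int stream_list_add(stream_list_t *list, void *entry)
{
    if (list->size == list->capacity) {
        /* a real implementation also checks that the multiplication below
         * cannot overflow, as in the earlier overflow-check sketch */
        size_t new_capacity = list->capacity ? list->capacity * 2 : 4;
        void **new_entries =
            realloc(list->entries, new_capacity * sizeof(*list->entries));
        if (new_entries == NULL)
            return -1;
        list->entries = new_entries;
        list->capacity = new_capacity;
    }
    list->entries[list->size++] = entry;
    return 0;
}
```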