
Consumers segfault if processing data too fast #14

Open

git-blame opened this issue Apr 25, 2022 · 5 comments

@git-blame

It seems that if producers/consumers are writing/reading (i.e., changing the contents of the buffer) very quickly, we get a situation where the pipe's data is "clobbered" somehow, resulting in a crash.

This doesn't seem to occur in the regular code because, due to #6, the pipe buffer grows continually. However, if you apply the patch from that issue, you may run into this problem.
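
For what it's worth, here is a minimal reproducer sketch (untested, assuming the public pipe.h API: pipe_new, pipe_producer_new, pipe_consumer_new, pipe_push, pipe_pop; names and sizes are illustrative). One producer thread pushes 4096-byte elements while the main thread pops as fast as it can, which should exercise the wrap-around path once the #6 patch lets the buffer shrink again:

    // Build sketch: gcc -pthread repro.c pipe.c -o repro
    #include <pthread.h>
    #include <string.h>
    #include "pipe.h"

    #define ELEM_SIZE  4096
    #define ITERATIONS 100000

    static void* producer_thread(void* arg)
    {
        pipe_producer_t* prod = arg;
        char elem[ELEM_SIZE];
        memset(elem, 0xAB, sizeof elem);

        for(int i = 0; i < ITERATIONS; ++i)
            pipe_push(prod, elem, 1);   // one ELEM_SIZE-byte element at a time

        pipe_producer_free(prod);       // closes the write end when done
        return NULL;
    }

    int main(void)
    {
        pipe_t* p = pipe_new(ELEM_SIZE, 0);            // unbounded pipe of 4096-byte elements
        pipe_producer_t* prod = pipe_producer_new(p);
        pipe_consumer_t* cons = pipe_consumer_new(p);
        pipe_free(p);                                  // handles keep the pipe alive

        pthread_t t;
        pthread_create(&t, NULL, producer_thread, prod);

        char elem[ELEM_SIZE];
        while(pipe_pop(cons, elem, 1))                 // returns 0 once closed and empty
            ;

        pthread_join(t, NULL);
        pipe_consumer_free(cons);
        return 0;
    }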

@git-blame
Author

Pipe maintains a circular buffer, where s.begin + elem_size points to the first element to pop. There is a sentinel of elem_size bytes, and s.bufend is the end of the buffer. When the pipe is wrapping around, it looks like this (from the source code comments):

 *     buffer        end                 begin                 bufend
 *       [============>                    >=====================]
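
Concretely, with elem_size = 4096: the sentinel occupies [begin, begin + elem_size), so if begin is within 4096 bytes of bufend, the right-hand segment [begin, bufend) holds only part of the sentinel and no payload, and everything left to pop lives in the wrapped left-hand segment starting at buffer.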

In the pop function, there is a potential unsigned underflow if the number of bytes between begin and bufend is less than elem_size. In other words, there is no data between begin and bufend, only (part of) the sentinel.

    // Copy either as many bytes as requested, or the available bytes in the RHS
    // of a wrapped buffer - whichever is smaller.
    {
        size_t first_bytes_to_copy = min(bytes_to_copy, (size_t)(s.bufend - s.begin - elem_size));

        target = offset_memcpy(target, s.begin + elem_size, first_bytes_to_copy);

When this happens, the subtraction wraps around to a huge value, so min() always evaluates to bytes_to_copy. If that is elem_size or larger (i.e., we are fetching one or more elements), we are actually reading past the end of the buffer. This can lead to garbage data or a segmentation fault. Here is a gdb session from a segfault:

(gdb) p elem_size
$30 = 4096
(gdb) p s.bufend - s.begin
$31 = 2048
(gdb) p s.bufend - s.begin - elem_size
$32 = 18446744073709549568
(gdb) p bytes_to_copy
$33 = 4096
(gdb) p first_bytes_to_copy
$34 = 4096
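
The wraparound is easy to demonstrate in isolation; a standalone sketch using the values from the gdb session above (variable names are illustrative):

    #include <stdio.h>

    int main(void)
    {
        size_t remaining = 2048;   // s.bufend - s.begin: only part of the sentinel left
        size_t elem_size = 4096;

        // size_t is unsigned, so 2048 - 4096 wraps around instead of going negative.
        size_t first = remaining - elem_size;
        printf("%zu\n", first);    // 18446744073709549568 on a 64-bit platform
        return 0;
    }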

@git-blame
Author

The fix is simply to make sure that s.bufend - s.begin > elem_size before doing the first copy.

--- a/pipe.c
+++ b/pipe.c
@@ -960,11 +960,14 @@ static inline snapshot_t pop_without_locking(snapshot_t s,
     assertume(s.begin != s.bufend);

     size_t elem_size = s.elem_size;
+    // from begin to bufend could just be the sentinel (or part of one)
+    size_t remaining = s.bufend - s.begin;

     // Copy either as many bytes as requested, or the available bytes in the RHS
     // of a wrapped buffer - whichever is smaller.
+    if(likely(remaining > elem_size))
     {
-        size_t first_bytes_to_copy = min(bytes_to_copy, (size_t)(s.bufend - s.begin - elem_size));
+        size_t first_bytes_to_copy = min(bytes_to_copy, (size_t)(remaining - elem_size));

         target = offset_memcpy(target, s.begin + elem_size, first_bytes_to_copy);
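
If I'm reading the surrounding code correctly, when the guard is false the region [begin, bufend) contains nothing but (part of) the sentinel, so the first copy can be skipped entirely and the subsequent copy from the wrapped part of the buffer picks up all of the requested bytes.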

@git-blame
Author

See also the commits in https://github.com/git-blame/pipe

@cgaebel
Owner

cgaebel commented Apr 26, 2022

If you send a PR, I'm happy to merge.

@rshalit2023

Hi, is this issue fixed?
