
Unexpected buffer sharing causes request body to be concatenated or replaced #917

Closed
stonelgh opened this issue Nov 19, 2024 · 6 comments · Fixed by #918 or #919

stonelgh commented Nov 19, 2024

In our usage, we found that files were being transferred to incorrect paths, or two files were being merged and transferred to the same location.

After investigation, we found that this issue can be reproduced, from v2.9.1 up to the latest v2.16.0, in the following scenario: SetContentLength(true) is enabled, a reader stream is set as the request body, and a retry occurs.

This issue can be observed with the following code snippet:

package resty

import (
	"bytes"
	"io"
	"net/http"
	"net/http/httptest"
	"sync"
	"testing"

	"github.com/go-resty/resty/v2"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestRestyClient(t *testing.T) {
	// Mock server returns 500 status code to cause client retries.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		b, err := io.ReadAll(r.Body)
		assert.NoError(t, err)
		if len(b) > 0 {
			// sometimes, the body is "testtest" instead of "test"
			require.Equal(t, "test", string(b), "body is concatenated due to erroneous buffer reuse!")
		}
		w.WriteHeader(http.StatusInternalServerError)
	}))

	client := resty.New().AddRetryCondition(
		func(r *resty.Response, err error) bool {
			return err != nil || r.StatusCode() > 499
		},
	).SetRetryCount(3)

	wg := sync.WaitGroup{}
	// Run tests concurrently to make the issue easy to observe.
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 10; j++ {
				buf := bytes.NewBufferString("test")
				// Trigger some retries
				resp, err := client.R().SetBody(buf).SetContentLength(true).Execute(http.MethodPost, srv.URL)
				assert.Nil(t, err)
				assert.Equal(t, http.StatusInternalServerError, resp.StatusCode())
				assert.Equal(t, "", string(resp.Body()))
			}
		}()
	}
	wg.Wait()
}

Similar or related:

Security: Don't put the same bytes.Buffer into sync.Pool twice #745
Remove setcontentlength from reader flow #882

@jeevatkm
Member

@stonelgh Thanks for reaching out. I will analyze the issue with the given code snippet and get back.

FYI -

Similar or related:

Security: Don't put the same bytes.Buffer into sync.Pool twice #745

This PR was merged long ago, so there is no buffer reuse.

Remove setcontentlength from reader flow #882

This PR is not related to Resty v2; it is for the upcoming Resty v3, as you can see from its labels.

@jeevatkm jeevatkm self-assigned this Nov 20, 2024
@jeevatkm jeevatkm added the bug label Nov 20, 2024
@jeevatkm
Member

@stonelgh I have successfully reproduced the issue at my end using the code snippet you provided. The buffer reset issue happens when the body is an io.Reader and the ContentLength property is set to true.


stonelgh commented Nov 20, 2024

@jeevatkm Commit 67cf80b doesn't actually fix the issue; it merely hides it by avoiding buffer reuse. The root cause of the issue is that the same buffer is put back into the pool each time the request is retried in the mentioned scenario. As a result, the same buffer may end up being used concurrently.
Sorry for my frankness.
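
To make the hazard concrete, here is a minimal standalone sketch (not Resty code) of what double-releasing the same *bytes.Buffer into a sync.Pool allows: two later Get calls can hand the very same buffer to two independent requests.

package main

import (
	"bytes"
	"fmt"
	"sync"
)

var bufPool = sync.Pool{New: func() interface{} { return new(bytes.Buffer) }}

func main() {
	buf := bufPool.Get().(*bytes.Buffer)

	// The bug pattern: the same buffer is returned to the pool twice,
	// e.g. once per retry attempt.
	bufPool.Put(buf)
	bufPool.Put(buf)

	// Two independent callers can now receive the same buffer and write to it
	// concurrently, which concatenates or replaces each other's request body.
	a := bufPool.Get().(*bytes.Buffer)
	b := bufPool.Get().(*bytes.Buffer)
	fmt.Println(a == b) // can print true
}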

@jeevatkm
Member

@stonelgh, I would like to ensure that I have not overlooked any important details. During my analysis of the issue, I observed that the buffer reset was not functioning correctly in this scenario. Therefore, I implemented logic to prevent the use of the buffer if it contains unread bytes. Instead, I create a new buffer to avoid potential sharing or concurrent issues.
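
For clarity, here is a rough sketch of that approach (the function body below is illustrative only and is not the exact change in commit 67cf80b):

package resty

import (
	"bytes"
	"sync"
)

var bufPool = sync.Pool{New: func() interface{} { return new(bytes.Buffer) }}

// acquireBuffer sketches the described mitigation: reuse a pooled buffer only
// when it holds no unread bytes; otherwise return a fresh buffer to avoid
// sharing it with another user.
func acquireBuffer() *bytes.Buffer {
	buf := bufPool.Get().(*bytes.Buffer)
	if buf.Len() == 0 {
		return buf
	}
	// The pooled buffer still has unread bytes, so it may still be referenced
	// elsewhere (e.g. by a retried request); do not hand it out again.
	return new(bytes.Buffer)
}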

Engaging in open and constructive discussions is beneficial.


stonelgh commented Nov 21, 2024

@jeevatkm When parseRequestBody is entered for the first time, it calls handleRequestBody because r.Body != nil evaluates to true. Inside handleRequestBody, it executes this branch (middleware.go:521):

	switch body := r.Body.(type) {
	case io.Reader:
		if c.setContentLength || r.setContentLength { // keep backward compatibility
			r.bodyBuf = acquireBuffer()
			if _, err := r.bodyBuf.ReadFrom(body); err != nil {
				return err
			}
			r.Body = nil
		}

It acquires a buffer from the pool, reads the body into it, stores the buffer in r.bodyBuf, and clears r.Body.
After the attempt is done, execution reaches transfer.go:385: t.BodyCloser.Close(), and the buffer is released back to the pool, but r.bodyBuf still points to it.

Once a retry is needed, the following attempts do not enter handleRequestBody again because r.Body is nil. When execution reaches client.go:1239: req.RawRequest.Body = newRequestBodyReleaser(req.RawRequest.Body, req.bodyBuf), req.bodyBuf is wrapped in a requestBodyReleaser.
When the attempt is done, execution reaches transfer.go:385: t.BodyCloser.Close(), and the buffer is released back to the pool again.
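
Roughly, the releaser behaves like this (a sketch reconstructed from the flow above, not copied from the source):

package resty

import (
	"bytes"
	"io"
	"sync"
)

var bufPool = sync.Pool{New: func() interface{} { return new(bytes.Buffer) }}

func releaseBuffer(buf *bytes.Buffer) {
	if buf != nil {
		buf.Reset()
		bufPool.Put(buf)
	}
}

// Close puts the wrapped buffer back into the pool, so if the same
// req.bodyBuf is wrapped again on the next retry, the same buffer is
// released to the pool once per attempt.
type requestBodyReleaser struct {
	io.ReadCloser
	bodyBuf *bytes.Buffer
}

func (rr *requestBodyReleaser) Close() error {
	err := rr.ReadCloser.Close()
	releaseBuffer(rr.bodyBuf)
	return err
}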

So in the retry situation the buffer is released back to the pool multiple times, and this is the root cause of the issue. To fix it, we should either not release the buffer back to the pool until the whole request is done, or acquire a new buffer from the pool and release that one buffer in each attempt.
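
Here is a sketch of the second option; the bodyBytes field and the prepareAttemptBody helper are hypothetical names used only to illustrate the idea:

package resty

import (
	"bytes"
	"sync"
)

var bufPool = sync.Pool{New: func() interface{} { return new(bytes.Buffer) }}

func acquireBuffer() *bytes.Buffer {
	return bufPool.Get().(*bytes.Buffer)
}

// Hypothetical shape for illustration only: the real Request has bodyBuf,
// while bodyBytes is an assumption made for this sketch.
type Request struct {
	bodyBuf   *bytes.Buffer
	bodyBytes []byte
}

// prepareAttemptBody would run before each attempt: every attempt gets its
// own pooled buffer, so the body releaser's Close returns exactly one buffer
// per attempt and never the same buffer twice.
func prepareAttemptBody(req *Request) {
	// Capture the original body bytes once, on the first attempt.
	if req.bodyBytes == nil && req.bodyBuf != nil {
		req.bodyBytes = append([]byte(nil), req.bodyBuf.Bytes()...)
	}
	buf := acquireBuffer()
	buf.Reset()
	buf.Write(req.bodyBytes)
	req.bodyBuf = buf
}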

@jeevatkm
Member

@stonelgh Thanks for the detailed walkthrough. Now I understand what you meant in your previous message. It seems I overlooked the requestBodyReleaser part.
I will get back with an appropriate fix.
