
fix(crawler): separate timeout per query
This aims to correctly apply the intent of the fix from #996.

It did not work correctly because `context.WithTimeout` was called once
per goroutine, but the resulting context was reused for multiple queries
within the `for p := range jobs` loop.

This meant that once the timeout expired for the first query, all subsequent
queries in that worker would fail immediately, since they were using the
same expired context.
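
To make the failure mode concrete, here is a minimal, runnable Go sketch of both patterns. `queryPeer`, `jobs`, and the durations are illustrative stand-ins, not the crawler's actual code: the shared-context worker exhausts its single timeout on the first query, while the per-query context gives every job a fresh budget.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// queryPeer stands in for the crawler's real query: it takes ~50ms
// and fails as soon as its context is done.
func queryPeer(ctx context.Context, p int) error {
	select {
	case <-time.After(50 * time.Millisecond): // simulated query latency
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	const queryTimeout = 80 * time.Millisecond
	jobs := []int{1, 2, 3}

	// Buggy pattern: one timeout for the whole worker. The first query
	// eats most of the 80ms budget, so every later query fails with
	// context.DeadlineExceeded.
	ctx, cancel := context.WithTimeout(context.Background(), queryTimeout)
	for _, p := range jobs {
		fmt.Printf("shared ctx, peer %d: %v\n", p, queryPeer(ctx, p))
	}
	cancel()

	// Fixed pattern: a fresh timeout per query, cancelled right after
	// each job (not deferred) so the timer is released immediately.
	for _, p := range jobs {
		qctx, qcancel := context.WithTimeout(context.Background(), queryTimeout)
		fmt.Printf("per-query ctx, peer %d: %v\n", p, queryPeer(qctx, p))
		qcancel()
	}
}
```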
lidel committed Nov 15, 2024
1 parent f671a50 commit 02e9c54
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions crawler/crawler.go
@@ -147,10 +147,10 @@ func (c *DefaultCrawler) Run(ctx context.Context, startingPeers []*peer.AddrInfo
 	for i := 0; i < c.parallelism; i++ {
 		go func() {
 			defer wg.Done()
-			ctx, cancel := context.WithTimeout(ctx, c.queryTimeout)
-			defer cancel()
 			for p := range jobs {
-				res := c.queryPeer(ctx, p)
+				qctx, cancel := context.WithTimeout(ctx, c.queryTimeout)
+				res := c.queryPeer(qctx, p)
+				cancel() // do not defer, cleanup after each job
 				results <- res
 			}
 		}()

Codecov (codecov/patch) warning on crawler/crawler.go#L151-L153: added lines #L151-L153 were not covered by tests.
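
Note the design choice in the fix: `cancel()` is called at the end of each iteration instead of being deferred. Deferred calls in Go run only when the enclosing function returns, so in a long-lived worker goroutine the per-query timers would otherwise pile up until the `jobs` channel closes.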
