From Sabra:
Hello Andrew,
I do not think you are spending any time on Wookie or cl-async, but I thought you might be interested in a small stress test I just did. I was running wrk (https://github.com/wg/wrk) on a separate box, across a lightly loaded local network, against a Wookie server.
I set up the simplest Wookie package I could think of: just a default route to a directory containing three jpg files (13k, 80k, and 133k), with directory listing allowed, and had Wookie serve the files while I increased the wrk threads and open connections until cl-async threw a streamish-broken-pipe error. Each test ran for 30 seconds. A rough sketch of the setup is below.
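Roughly, the server was along these lines (a minimal sketch only, assuming Wookie's directory-router plugin and its def-directory-route macro, with the jpgs in a local ./www directory; names and paths here are illustrative):

```lisp
;; Minimal sketch of the test server, assuming Wookie's bundled
;; directory-router plugin (def-directory-route) and a local ./www
;; directory holding the three jpgs; port 8081 matches the wrk runs below.
(ql:quickload '(:wookie :cl-async))

(defpackage :wookie-stress
  (:use :cl :wookie :wookie-plugin-export))
(in-package :wookie-stress)

(load-plugins)  ; loads the core plugins, including directory-router

;; Serve everything under ./www at the site root, directory listing enabled.
(def-directory-route "/" "./www")

(as:with-event-loop ()
  (start-server (make-instance 'listener :port 8081)))
```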
You will notice that the number of open connections before eventually triggering the pipe error is related to the size of the file being served. Although the table doesn't show this, when the pipe error was thrown it always came at the end of the wrk session, and wrk was typically not reporting any errors. I am guessing that something looks strange to cl-async when wrk closes the open connections at the end of the session, and that it has something to do with the size of the file being served. From the REPL, if I told the package to go back into the event loop, it would throw another pipe error.
Column 1 is the number of threads generated by wrk; the remaining columns give the highest number of connections wrk could keep open with that many threads before I triggered the broken-pipe error. Columns 2-5 are the results serving just the directory listing, just the 13k jpg, just the 80k jpg, and just the 133k jpg. Yes, I know the numbers look a bit strange; they were, however, repeatable in my tests.
| threads (wrk) | directory listing only (# connections) | 13k file (# connections) | 80k file (# connections) | 133k file (# connections) |
|---------------|-----------------------------------------|--------------------------|--------------------------|---------------------------|
| 1             | 270                                     | 60                       | 10                       | 8                         |
| 2             | 150                                     | 45                       | 15                       | 14                        |
| 4             | 190                                     | 90                       | 20                       | 10                        |
| 5             | 190                                     | 45                       | 14                       | 9                         |
| 6             | 220                                     | 70                       | 17                       | 20                        |
| 8             | 225                                     | 60                       | 12                       | 14                        |
| 12            | 210                                     | 55                       | 23                       | 20                        |
The transfer rates were generally around 36-39 MB/sec. The requests per sec were around 300 for the 133k file, 460 for the 80k file, 1350 for the 13k file and around 2000 for the directory listing.
Sample (Mist.jpg is the 133k file):
```
wrk -t12 -c20 -d30s http://192.168.1.15:8081/Mist.jpg
Running 30s test @ http://192.168.1.15:8081/Mist.jpg
  12 threads and 20 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    40.45ms    6.02ms  116.55ms   86.45%
    Req/Sec    24.70      5.16     39.00     50.90%
  8913 requests in 30.09s, 1.12GB read
Requests/sec:    296.22
Transfer/sec:     37.96MB
```
As a comparison, with nginx serving the same 133k file on the same boxes, the transfer rate was 48 MB/sec and requests per second were 377 at 12 threads and 200 open connections.
```
wrk -t12 -c200 -d30s http://192.168.1.15/MistOverLakex1000.jpg
Running 30s test @ http://192.168.1.15/MistOverLakex1000.jpg
  12 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   516.68ms  164.03ms    1.62s    71.24%
    Req/Sec    31.39     15.71     100.00    65.04%
  11056 requests in 30.03s, 1.39GB read
Requests/sec:    368.22
Transfer/sec:     47.26MB
```
If I lowered the number of open connections, the transfer rate and number of requests stayed roughly the same for Wookie but increased substantially for nginx, e.g.:
```
wrk -t12 -c24 -d30s http://192.168.1.15/MistOverLakex1000.jpg
Running 30s test @ http://192.168.1.15/MistOverLakex1000.jpg
  12 threads and 24 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    48.04ms   15.80ms  380.48ms   73.97%
    Req/Sec    41.61     10.37     80.00     65.42%
  14986 requests in 30.03s, 1.86GB read
Requests/sec:    499.06
Transfer/sec:     63.58MB
```
In any event, I don't know what is triggering the streamish-broken-pipe error in cl-async.
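If it helps to poke at it from the REPL, something like the sketch below traps the condition as it unwinds out of the event loop instead of landing in the debugger. This is only a rough idea, and it assumes cl-async exports the streamish-broken-pipe condition it names in the error:

```lisp
;; Rough diagnostic sketch, not a fix: catch the condition as it unwinds out
;; of the event loop so it can be printed and the loop restarted cleanly.
;; Assumes cl-async exports streamish-broken-pipe; adjust if it lives elsewhere.
(handler-case
    (as:with-event-loop ()
      (wookie:start-server (make-instance 'wookie:listener :port 8081)))
  (as:streamish-broken-pipe (c)
    (format t "~&broken pipe on loop exit: ~a~%" c)))
```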