Daemon closes response stream prematurely when reaching MaxBlocks limit #329
Steps to reproduce:

Start the daemon with a MaxBlocks limit.
Retrieve content that has more than 10 blocks. For example, the full XKCD Archives.

curl reports an error when the transfer is cut off. I am getting a similar error when making the HTTP request using Rust's ureq crate. What is the expected behaviour in this case?
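To make the failure mode concrete, here is a minimal Go client sketch of the kind of request involved. The daemon address, the placeholder CID, and the Accept header are assumptions about how Lassie's HTTP interface is called, not values taken from the report.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical daemon address and CID; both are placeholders.
	url := "http://127.0.0.1:62156/ipfs/bafy...cid"

	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		panic(err)
	}
	// Assumption: the daemon is asked for a CAR stream.
	req.Header.Set("Accept", "application/vnd.ipld.car")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The status is 200 because the headers were sent before the block
	// limit was hit; the abort only surfaces while reading the body.
	if _, err := io.Copy(io.Discard, resp.Body); err != nil {
		fmt.Println("read error:", err) // typically an "unexpected EOF"-style error
	}
}
```

The read error at the end is the client-side symptom described in this issue: the response looks successful until the body is consumed.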
Comments

What would you like the behaviour to be? If we close without error, then how would you know whether the DAG should be 10 blocks or more without doing the traversal locally? It's certainly an approach we could take if that was important.

"max blocks" was always a bit of a hack; I don't believe Saturn uses it anymore. The main problem is that because we traverse depth-first, the blocks you get are unlikely to be all that useful; you might be better off making a cancellation decision on the consumer side.

The reason it errors is that in the couple of places where we implement it, we surface hitting the limit as a traversal error. We could potentially catch that; I'm just not sure whether the result should be considered clean or not. Maybe it should be if you're explicitly opting in to the limit? It'd be interesting to know the use case here.
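For illustration, a hypothetical sketch of what "catching that" cleanly might look like; `ErrMaxBlocksReached` and `writeCAR` are invented names, not Lassie's actual identifiers:

```go
package carstream

import (
	"errors"
	"io"
)

// ErrMaxBlocksReached is an invented sentinel name for illustration;
// it is not Lassie's actual identifier.
var ErrMaxBlocksReached = errors.New("max blocks limit reached")

// writeCAR runs the traversal and treats hitting an explicitly
// requested block limit as a clean end of stream rather than a failure.
func writeCAR(w io.Writer, traverse func(io.Writer) error) error {
	if err := traverse(w); err != nil && !errors.Is(err, ErrMaxBlocksReached) {
		return err
	}
	return nil
}
```

Whether swallowing the error like this is appropriate presumably depends on whether the client explicitly opted in to the limit, as discussed above.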
Follow-up block requests from Saturn are still translated into requests with max blocks set, FWIW.
Thank you for explaining the reasoning that led to the current implementation. As I thought more about it, I saw how the current solution makes sense. It would be nice if the documentation described this behaviour, so that users know it's intentional and expected.

I discovered today that a similar (or the same) error is reported when the retrieval request times out after the response headers were sent. Lassie args:

Retrieval timing out:

When I run Lassie with the default config:
I opened a new GH issue; I feel the topic is different enough:
Yeah, I think this depends on how smart the clients must be. Your current approach is great because the client can be dumb; it does not need to understand the CAR format. The client can simply treat the response as an opaque stream and trust Lassie to verify correctness and completeness.

One option that comes to my mind is to leverage HTTP/2 trailers. It won't work for HTTP/1.1 clients. Also, developers writing client code must take care to check the trailer for errors, since the response status code will indicate success. IMO, it's safer to indicate errors via HTTP/2 trailers only when clients explicitly ask for this behaviour.

For us (Zinnia & Filecoin Station), we are fine with the current version, which aborts the response in a way that triggers a client-side read error.
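As a sketch of the trailer idea, assuming a Go server (Lassie's daemon is written in Go); the `X-Stream-Error` field name and the `streamBlocks` helper are invented for illustration:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// streamBlocks stands in for the CAR-streaming code; it is an invented
// helper, not part of Lassie.
func streamBlocks(w http.ResponseWriter) error {
	_, err := fmt.Fprint(w, "...car bytes...")
	return err
}

func handler(w http.ResponseWriter, r *http.Request) {
	// Announce the trailer before the first body write; X-Stream-Error
	// is an invented field name, not an agreed-upon convention.
	w.Header().Set("Trailer", "X-Stream-Error")
	w.Header().Set("Content-Type", "application/vnd.ipld.car")

	if err := streamBlocks(w); err != nil {
		// Header fields set after the body has started are sent as
		// trailers by net/http, provided they were announced above.
		w.Header().Set("X-Stream-Error", err.Error())
	}
}

func main() {
	http.HandleFunc("/ipfs/", handler)
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}
```

As noted above, the client still sees a 200 status with this scheme and has to check the trailer itself to learn that the stream failed.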
OK, so I'll close this for now, but for anyone else reading this who might have a different use case for which the dirty abort is not satisfactory, we could relitigate this and come up with a solution - perhaps an explicit opt-in of some kind.