Add custom future type, ResponseFuture, for HTTP responses #1834
base: main
ResponseFuture allows us to simplify user experience while preserving support for lazy/raw responses
```diff
@@ -62,5 +62,4 @@ features = [
     "reqwest_rustls",
     "hmac_rust",
     "hmac_openssl",
-    "xml",
```
This was just duplicated, you can see it further up if you expand the context
```diff
@@ -255,7 +254,7 @@ impl ClientCertificateCredential {
         return Err(http_response_from_body(rsp_status, &rsp_body).into_error());
     }

-    let response: AadTokenResponse = rsp.deserialize_body_into().await?;
+    let response: AadTokenResponse = rsp.into_body().json().await?;
```
Azure Identity doesn't use the full pipeline; it calls the HTTP client itself. It has always used the "Raw Response" and manually deserialized.
```rust
#[cfg(not(target_arch = "wasm32"))]
pub type PinnedStream = Pin<Box<dyn Stream<Item = crate::Result<Bytes>> + Send + Sync>>;
#[cfg(target_arch = "wasm32")]
pub type PinnedStream = Pin<Box<dyn Stream<Item = crate::Result<Bytes>>>>;
```
Moved these types here because they are useful across a few different modules in `http`. This does mean they've been promoted from `typespec_client_core::http::response::PinnedStream` to `typespec_client_core::http::PinnedStream`.
```rust
/// Represents a response where the body has not been downloaded yet, but can be downloaded and deserialized into a value of type `T`.
///
/// You get a [`LazyResponse`] by calling [`ResponseFuture::lazy()`] on a [`ResponseFuture`] you received from a service API.
```
I'm being utterly pedantic here, but there's something about the term "Lazy" that is bothering me because it feels somewhat pejorative.
My problem is that I'm not coming up with a better description of how to name it. Lazy has the advantage that it is succinct, whereas other descriptive terms (two step result?) are just way too wordy.
If it would help put you at ease, `Lazy` is used throughout the language: https://doc.rust-lang.org/std/cell/struct.LazyCell.html
Yeah, I largely agree @LarryOsterman, but it's definitely become a fairly industry-standard term for "a thing that isn't quite there yet". `DeferredResponse` could be another option, but I feel like `lazy()`/`LazyResponse<T>` is going to be the "expected" name, especially given @RickWinter's point about its existing use in Rust.
I'd rather we stick with `Lazy` as well. It's used all over, not only in Rust but in other languages as well. Besides, I'm just too lazy to think of a better term.
So be it. I'll put my pedantry back in its box.
Re-drafting this. Hitting some snags around dealing with requests with bodies. With the current changes, because the functions all return …

We should be able to do …

Not quite. Anything that implements …

Same, and it's frustrating to see it languish so long. Basically, the problem we have here is that creating a …
I'm going to keep poking and see if I can find a clean way to do this. I'm not a huge fan of every service client having to look like this:

```rust
async fn replace_item<T: Serialize>(
    &self,
    partition_key: impl Into<PartitionKey>,
    item_id: impl AsRef<str>,
    item: T,
    options: Option<ItemOptions>,
) -> azure_core::ResponseFuture<T> {
    // Create the response future to return the raw response body.
    // ResponseFuture::new expects an async closure returning Result<Response>,
    // so fallible and async operations can be done here.
    let raw_response_future = ResponseFuture::new(async {
        let url = self
            .container_url
            .with_path_segments(["docs", item_id.as_ref()]);
        let mut req = Request::new(url, azure_core::Method::Put);
        req.insert_headers(&partition_key.into())?;
        req.set_json(&item)?;
        self.pipeline
            .send(Context::new(), &mut req)
            .await
    });
    // Attach the JSON deserializer and return it.
    raw_response_future.json()
}
```

But if we're OK with that, I think we can work with something. Though when I did that, I started hitting lifetime issues around capturing the function parameters that might start to proliferate named lifetimes, which I'd really prefer not to do 😵. I'll keep working a bit on it. It's not blocking my Cosmos work, it's just motivated by my desire to get the 80% scenario down to a single `.await`.
Closes #1831

This is a long one, but I really wanted to try and lay out all the details of the change.

This PR introduces a new type, `ResponseFuture`, which is intended to be returned by service client APIs.
The `Response<T>` type has been simplified dramatically. It is now a plain-old data object containing the status code, headers, and "body" (which is of type `T`). The default value of `T` is `ResponseBody`, which is our existing type that wraps a "Pinned Stream" representing an asynchronous stream of bytes coming from the transport.

## How is `ResponseFuture` different to the user?

Our current service client methods return `impl Future<Output = Result<Response<T>, Error>>`. However, in the current code, `Response<T>` is itself a promise as well. It does not contain a fully downloaded and deserialized HTTP response body, only the promise of one, which is fulfilled when `deserialize_body` is called. By default, a `ResponseFuture<T>`, when directly `await`ed, will execute the request, receive the status code and headers, download the body, and deserialize it into a `T`. I believe this represents the "majority" of use cases, which means we can turn our current long chain of method calls into just one await.
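To make the before/after concrete, here is a self-contained sketch of the proposed shape. Everything here is a mock I wrote for illustration (including the toy deserializer and the tiny `block_on` executor standing in for an async runtime); none of these are the real `azure_core` types.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// Tiny single-future executor so the example runs without an async runtime.
struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}
fn block_on<F: Future>(fut: F) -> F::Output {
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(fut);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

// Mock of the proposed shape: one future that both runs the request
// and deserializes the body when awaited.
struct Response<T> {
    status: u16,
    body: T,
}
struct Secret {
    value: String,
}
struct ResponseFuture<T> {
    raw: Pin<Box<dyn Future<Output = Result<Response<String>, String>>>>,
    deserialize: Box<dyn Fn(String) -> Result<T, String>>,
}
impl<T> Future for ResponseFuture<T> {
    type Output = Result<T, String>;
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // Drive the raw request; once it resolves, apply the deserializer.
        match self.raw.as_mut().poll(cx) {
            Poll::Ready(Ok(resp)) => Poll::Ready((self.deserialize)(resp.body)),
            Poll::Ready(Err(e)) => Poll::Ready(Err(e)),
            Poll::Pending => Poll::Pending,
        }
    }
}

struct SecretClient;
impl SecretClient {
    // Proposed style: a non-async fn returning a ResponseFuture.
    fn get_secret(&self, name: &str) -> ResponseFuture<Secret> {
        let body = format!("value:{name}");
        ResponseFuture {
            raw: Box::pin(async move { Ok(Response { status: 200, body }) }),
            // Toy "deserializer"; the real design would attach serde here.
            deserialize: Box::new(|body| {
                let value = body
                    .strip_prefix("value:")
                    .ok_or_else(|| "parse error".to_string())?
                    .to_string();
                Ok(Secret { value })
            }),
        }
    }
}

fn demo() -> String {
    let client = SecretClient;
    // Old shape (roughly): client.get_secret(..).await?.deserialize_body().await?
    // New shape: one await yields the deserialized model directly.
    let secret = block_on(client.get_secret("db-password")).unwrap();
    secret.value
}

fn main() {
    println!("secret = {}", demo());
}
```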
In a situation where most users want to read the body (if there is one) into the default model provided by the service client, this is much more ergonomic. It also removes the need for the `Model` trait, since the service client gave the `ResponseFuture` a closure it could use to perform the deserialization.

## What about more advanced scenarios?
While I believe eagerly reading and deserializing the body is the majority case, it is not a good approach in all cases. Consider, for example, that Cosmos DB often returns the full JSON of a newly-created item. Often, the user doesn't need this since they just created the item (in fact, we even provide a request header to stop the server from sending the new item in the response). Or consider an API that returns a large payload, but also includes some response headers that the user may want to use to decide if it even wants to download that whole payload. Or any other case where the user wants to download the response headers but defer downloading the body until later. Until the `ResponseFuture` is `await`ed, it only represents the promise of a future response, and the user has the ability to modify its behavior using methods on `ResponseFuture`.

Calling `.lazy()` will change the `ResponseFuture` into an `impl Future<Output = LazyResponse<T>>`, which executes the request and processes the response headers, but does not process the body until `LazyResponse::into_body` is called.
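A sketch of the intent, using a synchronous stand-in (this `LazyResponse` is a mock I wrote for illustration; in the PR, `into_body` would itself be async):

```rust
// Mock shapes for illustration only; not the real azure_core types.
struct Headers {
    content_length: usize,
}

/// Headers and status are available immediately; the body is produced only
/// when into_body() is called (async in the real design, sync here).
struct LazyResponse<T> {
    status: u16,
    headers: Headers,
    fetch_body: Box<dyn FnOnce() -> T>,
}

impl<T> LazyResponse<T> {
    fn into_body(self) -> T {
        (self.fetch_body)()
    }
}

fn demo() -> Option<String> {
    let resp = LazyResponse {
        status: 200,
        headers: Headers { content_length: 11 },
        fetch_body: Box::new(|| "hello world".to_string()),
    };
    // Inspect headers first, and only pay for the body if it is small enough.
    if resp.status == 200 && resp.headers.content_length <= 1024 {
        Some(resp.into_body())
    } else {
        None
    }
}

fn main() {
    println!("{:?}", demo());
}
```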
Calling `.raw()` will change the `ResponseFuture` into an `impl Future<Output = Response<ResponseBody>>`, in effect removing the deserialization logic so that the returned value will be a `Response<ResponseBody>`. That gives the user the ability to download the body and deserialize however they want[^1], including deserializing it to a custom type.

## How does a service client create a `ResponseFuture`?

Rather than being
`async fn`s that return `Result<Response<T>>`, service client methods should be non-async `fn`s that return `ResponseFuture<T>`. The main pipeline's `send` method returns a `ResponseFuture<T>`, so for most client methods that are assembling a request, sending it, and returning the resulting `ResponseFuture<T>`, there's no need for the client method itself to be `async`. In the less common event that a client needs to perform follow-up work after a response is received that cannot be achieved using a custom `Policy`, we have `ResponseFuture::new`, which allows a caller to synthesize a `ResponseFuture` out of any other future that returns a `Result<Response>`.
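As a synchronous stand-in for the two creation paths described above (these are mock types I wrote purely for illustration; `new` mirrors `ResponseFuture::new` and `json` mirrors `ResponseFuture::json` in spirit, not in exact signature):

```rust
// Stand-ins for the shapes described above; none of this is real azure_core.
struct RawResponse {
    status: u16,
    body: String,
}

/// What the pipeline's `send` would return: the promise of a raw response.
struct RawResponseFuture {
    produce: Box<dyn FnOnce() -> Result<RawResponse, String>>,
}

/// A typed future: the raw promise plus a deserializer for the body.
struct TypedResponseFuture<T> {
    raw: RawResponseFuture,
    deserialize: Box<dyn FnOnce(String) -> Result<T, String>>,
}

impl RawResponseFuture {
    // Mirrors ResponseFuture::new — synthesize from any fallible operation.
    fn new(f: impl FnOnce() -> Result<RawResponse, String> + 'static) -> Self {
        RawResponseFuture { produce: Box::new(f) }
    }

    // Mirrors ResponseFuture::json — attach deserialization of the body.
    fn json<T>(
        self,
        de: impl FnOnce(String) -> Result<T, String> + 'static,
    ) -> TypedResponseFuture<T> {
        TypedResponseFuture { raw: self, deserialize: Box::new(de) }
    }
}

impl<T> TypedResponseFuture<T> {
    // Stand-in for `.await`: run the raw operation, then deserialize.
    fn run(self) -> Result<T, String> {
        let resp = (self.raw.produce)()?;
        (self.deserialize)(resp.body)
    }
}

fn demo() -> Result<u32, String> {
    let raw = RawResponseFuture::new(|| {
        Ok(RawResponse { status: 200, body: "42".into() })
    });
    raw.json(|b| b.trim().parse::<u32>().map_err(|e| e.to_string())).run()
}

fn main() {
    println!("{:?}", demo());
}
```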
`ResponseFuture<T>` represents the promise of a `Response<ResponseBody>` (a raw HTTP response) to come at a future time, and the ability to deserialize that raw body to a `T` when it does arrive. The `send` method on the pipeline returns `ResponseFuture<ResponseBody>` (i.e. a promise for a future `Response<ResponseBody>` representing a raw, unparsed HTTP body, with no deserialization logic). From there, service client methods can call `ResponseFuture::json` and `ResponseFuture::xml` to convert that "raw" `ResponseFuture` into one that deserializes the body into their target type.

## Boxes. Boxes Everywhere.
The main downside is the liberal use of boxing and dynamic dispatch. We `Box` the future that performs the request, and we `Box` a closure to deserialize the body, which itself returns another `Box`ed future representing the downloading and deserialization of the body. I did this because I wanted to avoid having to have per-operation types. If we want to optimize out the boxing, we have another option.

We would turn `ResponseFuture<T>` into a trait, intended to be used via `impl Trait` syntax, and use macros to have each service client return its own custom struct that implements `ResponseFuture<T>`. In that approach, the type the macro generates would be able to be fully static and avoid boxing altogether. I'm open to that idea, and returning custom futures is what the unofficial client does, so there's some prior art there. It's just a lot of custom types. Having said that, these types wouldn't have to be public: they could be defined in the method itself and returned as `impl Trait` types so that the user doesn't actually see them.
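A rough sketch of that alternative, with a hand-written per-operation future standing in for the macro-generated one (the names and the resolve-immediately behavior are mine, purely illustrative; a real generated future would drive the pipeline):

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// Minimal executor so the example runs without an async runtime.
struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}
fn block_on<F: Future>(fut: F) -> F::Output {
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(fut);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

// A per-operation future type: fully static, no boxed closures.
struct GetSecretFuture {
    name: String,
}
impl Future for GetSecretFuture {
    type Output = Result<String, String>;
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        // A generated future would poll the pipeline; this one resolves immediately.
        Poll::Ready(Ok(format!("secret for {}", self.name)))
    }
}

// The client method returns `impl Future`, so the concrete type stays hidden.
fn get_secret(name: &str) -> impl Future<Output = Result<String, String>> {
    GetSecretFuture { name: name.to_string() }
}

fn demo() -> String {
    block_on(get_secret("db-password")).unwrap()
}

fn main() {
    println!("{}", demo());
}
```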
## Ownership of `Context` and `Request`
Another call I made here was to change the `Pipeline::send` methods to take the context and request by value instead of taking references. That isn't strictly necessary, but it allows service clients to just call `Pipeline::send` and give it all the data that needs to be stored in the closure used by `ResponseFuture`. Without taking those values by value, the service client method would have to create an async closure manually, using `ResponseFuture::new`.

It felt reasonable to me for the pipeline to take those values. I recognize why policies don't receive them by value (since policies may be executed multiple times on the same Context/Request pair), but it seemed from my reading that `Context` and `Request` were not intended to be shared across multiple separate requests.
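A toy illustration of the by-value signature (mock types I wrote; the only point is that `send` can move `Context` and `Request` into whatever it stores, with no borrowed lifetimes for the caller to name):

```rust
// Mock types for illustration; not the real azure_core types.
struct Context;
struct Request {
    url: String,
}
struct Pipeline;

impl Pipeline {
    // By-value parameters: send can move ctx and request into the future it
    // eventually returns (here, just a String), without borrowing from the caller.
    fn send(&self, _ctx: Context, request: Request) -> String {
        format!("sent {}", request.url)
    }
}

fn demo() -> String {
    let pipeline = Pipeline;
    pipeline.send(Context, Request { url: "/docs/item1".into() })
}

fn main() {
    println!("{}", demo());
}
```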
## Footnotes

[^1]: In this case, the user is expected to call `ResponseBody::json()` or `ResponseBody::xml()` manually. I wasn't a fan of that when it was in the primary use-case, but I'm OK with it when the user has explicitly decided to handle processing the response themselves.