A common issue with GraphQL servers is how their resolvers query their data source, which can result in a large number of unnecessary database queries or HTTP requests. Say you wanted to list the cults that people are in:
```graphql
query {
  persons {
    id
    name
    cult {
      id
      name
    }
  }
}
```
The queries executed against a SQL database would be:
```sql
SELECT id, name, cult_id FROM persons;
SELECT id, name FROM cults WHERE id = 1;
SELECT id, name FROM cults WHERE id = 1;
SELECT id, name FROM cults WHERE id = 1;
SELECT id, name FROM cults WHERE id = 1;
SELECT id, name FROM cults WHERE id = 2;
SELECT id, name FROM cults WHERE id = 2;
SELECT id, name FROM cults WHERE id = 2;
-- ...
```
Once the list of persons has been returned, a separate query is run to find the cult of each person: one query for the list, plus one query per row (the classic N+1 problem). You can see how this quickly becomes a problem.
A common solution to this is to introduce a dataloader. This can be done with Juniper using the crate cksac/dataloader-rs, which provides two kinds of dataloaders: cached and non-cached.

DataLoader provides a memoization cache: after `.load()` is called once with a given key, the resulting value is cached to eliminate redundant loads.

> DataLoader caching does not replace Redis, Memcache, or any other shared application-level cache. DataLoader is first and foremost a data loading mechanism, and its cache only serves the purpose of not repeatedly loading the same data in the context of a single request to your application. (read more)
!FILENAME Cargo.toml

```toml
[dependencies]
actix-identity = "0.4.0-beta.4"
actix-rt = "1.0"
actix-web = { version = "2.0", features = [] }
juniper = { git = "https://github.com/graphql-rust/juniper" }
futures = "0.3"
postgres = "0.15.2"
dataloader = "0.12.0"
async-trait = "0.1.30"
```
```rust
// use dataloader::cached::Loader;
use async_trait::async_trait;
use dataloader::non_cached::Loader;
use dataloader::BatchFn;
use postgres::{Connection, TlsMode};
use std::collections::HashMap;
use std::env;

pub fn get_db_conn() -> Connection {
    let pg_connection_string = env::var("DATABASE_URI").expect("need a db uri");
    println!("Connecting to {}", pg_connection_string);
    let conn = Connection::connect(&pg_connection_string[..], TlsMode::None).unwrap();
    println!("Connection is fine");
    conn
}

#[derive(Debug, Clone)]
pub struct Cult {
    pub id: i32,
    pub name: String,
}

pub fn get_cult_by_ids(hashmap: &mut HashMap<i32, Cult>, ids: Vec<i32>) {
    let conn = get_db_conn();
    for row in &conn
        .query("SELECT id, name FROM cults WHERE id = ANY($1)", &[&ids])
        .unwrap()
    {
        let cult = Cult {
            id: row.get(0),
            name: row.get(1),
        };
        hashmap.insert(cult.id, cult);
    }
}

pub struct CultBatcher;

#[async_trait]
impl BatchFn<i32, Cult> for CultBatcher {
    // A hashmap is used, as we need to return a value that maps each original key to a Cult.
    async fn load(&self, keys: &[i32]) -> HashMap<i32, Cult> {
        println!("load cult batch {:?}", keys);
        let mut cult_hashmap = HashMap::new();
        get_cult_by_ids(&mut cult_hashmap, keys.to_vec());
        cult_hashmap
    }
}

pub type CultLoader = Loader<i32, Cult, CultBatcher>;

// To create a new loader
pub fn get_loader() -> CultLoader {
    Loader::new(CultBatcher)
        // Usually a DataLoader will coalesce all individual loads which occur
        // within a single frame of execution before calling your batch function
        // with all requested keys. However, sometimes this behavior is not
        // desirable or optimal: perhaps you expect requests to be spread out
        // over a few subsequent ticks.
        // See: https://github.com/cksac/dataloader-rs/issues/12
        // More info: https://github.com/graphql/dataloader#batch-scheduling
        // A larger yield count will allow more requests to be appended to a
        // batch, but will wait longer before the actual load.
        .with_yield_count(100)
}

#[juniper::graphql_object(Context = Context)]
impl Cult {
    // your resolvers

    // To call the dataloader
    pub async fn cult_by_id(ctx: &Context, id: i32) -> Cult {
        ctx.cult_loader.load(id).await
    }
}
```
Once created, a dataloader has the async functions `.load()` and `.load_many()`. In the above example, `cult_loader.load(id: i32).await` returns a `Cult`. If we had used `cult_loader.load_many(Vec<i32>).await`, it would have returned a `Vec<Cult>`.
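As a sketch of how this is typically wired up (assuming the `CultLoader`, `Context`, and `Cult` from this chapter; the `Person` type below is hypothetical and mirrors the query at the top of the page), a field resolver hands each `cult_id` to the loader and lets it coalesce the lookups into one batch:

```rust
// Sketch only: `Person` is not part of the example code above.
#[derive(Debug, Clone)]
pub struct Person {
    pub id: i32,
    pub name: String,
    pub cult_id: i32,
}

#[juniper::graphql_object(Context = Context)]
impl Person {
    fn id(&self) -> i32 {
        self.id
    }

    fn name(&self) -> &str {
        &self.name
    }

    // Every person resolved in the same execution asks the loader for its cult;
    // the loader batches those ids into a single `get_cult_by_ids` call.
    async fn cult(&self, ctx: &Context) -> Cult {
        ctx.cult_loader.load(self.cult_id).await
    }
}
```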
Dataloaders should be created per request, to avoid the risk of bugs where one user is able to load cached or batched data belonging to another user, or data from outside of their authenticated scope. Creating dataloaders within individual resolvers, on the other hand, will prevent batching from occurring and will nullify the benefits of the dataloader.
For example, when you declare your context:
```rust
use juniper;

#[derive(Clone)]
pub struct Context {
    pub cult_loader: CultLoader,
}

impl juniper::Context for Context {}

impl Context {
    pub fn new(cult_loader: CultLoader) -> Self {
        Self { cult_loader }
    }
}
```
Your GraphQL handler (note: instantiating the context here keeps it per-request):
```rust
pub async fn graphql(
    st: web::Data<Arc<Schema>>,
    data: web::Json<GraphQLRequest>,
) -> Result<HttpResponse, Error> {
    // Context setup
    let cult_loader = get_loader();
    let ctx = Context::new(cult_loader);

    // Execute
    let res = data.execute(&st, &ctx).await;
    let json = serde_json::to_string(&res).map_err(error::ErrorInternalServerError)?;

    Ok(HttpResponse::Ok()
        .content_type("application/json")
        .body(json))
}
```
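To tie this together, here is a minimal sketch of the actix-web wiring, assuming actix-web 2.0 as in the `Cargo.toml` above; `create_schema()` is a hypothetical helper that builds your Juniper `RootNode`, and the route path and bind address are arbitrary:

```rust
use std::sync::Arc;

use actix_web::{web, App, HttpServer};

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    // `create_schema()` is assumed to build the Juniper schema (RootNode).
    let schema = Arc::new(create_schema());

    HttpServer::new(move || {
        App::new()
            // Shared schema, extracted in the handler as web::Data<Arc<Schema>>.
            .data(schema.clone())
            .route("/graphql", web::post().to(graphql))
    })
    .bind("127.0.0.1:8080")?
    .run()
    .await
}
```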
For a full example using dataloaders and context, check out jayy-lmao/rust-graphql-docker.
There are two ways that a client can submit a null argument or field in a query.
They can use a null literal:

```graphql
{
  field(arg: null)
}
```
Or they can simply omit the argument:

```graphql
{
  field
}
```
The former is an explicit null and the latter is an implicit null.

There are some situations where it's useful to know which one the user provided.

For example, let's say your business logic has a function that allows users to perform a "patch" operation on themselves. Let's say your users can optionally have favorite and least favorite numbers, and the input for that might look like this:
```rust
/// Updates user attributes. Fields that are `None` are left as-is.
pub struct UserPatch {
    /// If `Some`, updates the user's favorite number.
    pub favorite_number: Option<Option<i32>>,

    /// If `Some`, updates the user's least favorite number.
    pub least_favorite_number: Option<Option<i32>>,
}

# fn main() {}
```
To set a user's favorite number to 7, you would set `favorite_number` to `Some(Some(7))`. In GraphQL, that might look like this:

```graphql
mutation { patchUser(patch: { favoriteNumber: 7 }) }
```
To unset the user's favorite number, you would set `favorite_number` to `Some(None)`. In GraphQL, that might look like this:

```graphql
mutation { patchUser(patch: { favoriteNumber: null }) }
```
If you want to leave the user's favorite number alone, you would set it to `None`. In GraphQL, that might look like this:

```graphql
mutation { patchUser(patch: {}) }
```
The last two cases rely on being able to distinguish between explicit and implicit null.

In Juniper, this can be done using the `Nullable` type:
```rust
# extern crate juniper;
use juniper::{FieldResult, Nullable};

#[derive(juniper::GraphQLInputObject)]
struct UserPatchInput {
    pub favorite_number: Nullable<i32>,
    pub least_favorite_number: Nullable<i32>,
}

impl Into<UserPatch> for UserPatchInput {
    fn into(self) -> UserPatch {
        UserPatch {
            // The `explicit` function transforms the `Nullable` into an
            // `Option<Option<T>>` as expected by the business logic layer.
            favorite_number: self.favorite_number.explicit(),
            least_favorite_number: self.least_favorite_number.explicit(),
        }
    }
}

# pub struct UserPatch {
#     pub favorite_number: Option<Option<i32>>,
#     pub least_favorite_number: Option<Option<i32>>,
# }

# struct Session;
# impl Session {
#     fn patch_user(&self, _patch: UserPatch) -> FieldResult<()> { Ok(()) }
# }

struct Context {
    session: Session,
}
impl juniper::Context for Context {}

struct Mutation;

#[juniper::graphql_object(context = Context)]
impl Mutation {
    fn patch_user(ctx: &Context, patch: UserPatchInput) -> FieldResult<bool> {
        ctx.session.patch_user(patch.into())?;
        Ok(true)
    }
}
# fn main() {}
```
This type functions much like `Option`, but has two empty variants so you can distinguish between implicit and explicit null.
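To make the mapping concrete, here is a small sketch of how the three client inputs above translate through `explicit()`. The variant names (`Some`, `ExplicitNull`, `ImplicitNull`) are an assumption about `Nullable`'s API; only `explicit()` appears in the example above:

```rust
# extern crate juniper;
use juniper::Nullable;

fn main() {
    // favoriteNumber: 7  ->  update the value to 7
    assert_eq!(Nullable::Some(7).explicit(), Some(Some(7)));

    // favoriteNumber: null  ->  unset the value
    assert_eq!(Nullable::ExplicitNull.explicit(), Some(None::<i32>));

    // field omitted  ->  leave the value as-is
    assert_eq!(Nullable::ImplicitNull.explicit(), None::<Option<i32>>);
}
```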
The chapters below cover some more advanced scenarios.
GraphQL defines a special built-in top-level field called `__schema`. Querying for this field allows one to introspect the schema at runtime to see what queries and mutations the GraphQL server supports.
Because introspection queries are just regular GraphQL queries, Juniper supports them natively. For example, to get the names of all the supported types, one could execute the following query against Juniper:
```graphql
{
  __schema {
    types {
      name
    }
  }
}
```
Many client libraries and tools in the GraphQL ecosystem require a complete representation of the server schema. Often this representation is in JSON and referred to as `schema.json`. A complete representation of the schema can be produced by issuing a specially crafted introspection query.

Juniper provides a convenience function to introspect the entire schema. The result can then be converted to JSON for use with tools and libraries such as graphql-client:
```rust
# #![allow(unused_variables)]
# extern crate juniper;
# extern crate serde_json;
use juniper::{
    graphql_object, EmptyMutation, EmptySubscription, FieldResult,
    GraphQLObject, IntrospectionFormat,
};

// Define our schema.

#[derive(GraphQLObject)]
struct Example {
    id: String,
}

struct Context;
impl juniper::Context for Context {}

struct Query;

#[graphql_object(context = Context)]
impl Query {
    fn example(id: String) -> FieldResult<Example> {
        unimplemented!()
    }
}

type Schema = juniper::RootNode<
    'static,
    Query,
    EmptyMutation<Context>,
    EmptySubscription<Context>
>;

fn main() {
    // Create a context object.
    let ctx = Context {};

    // Run the built-in introspection query.
    let (res, _errors) = juniper::introspect(
        &Schema::new(Query, EmptyMutation::new(), EmptySubscription::new()),
        &ctx,
        IntrospectionFormat::default(),
    ).unwrap();

    // Convert introspection result to json.
    let json_result = serde_json::to_string_pretty(&res);
    assert!(json_result.is_ok());
}
```
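If a tool expects the conventional `schema.json` file on disk, the serialized result can simply be written out. A minimal sketch (the file name and path are just the common convention, not something Juniper mandates):

```rust
use std::fs;

// Writes the pretty-printed introspection JSON (as produced above) to disk.
fn write_schema_json(json: &str) -> std::io::Result<()> {
    fs::write("schema.json", json)
}
```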
The GraphQL standard generally assumes there will be one server request for each client operation you want to perform (such as a query or mutation). This is conceptually simple but has the potential to be inefficient.
Some client libraries such as apollo-link-batch-http have added the ability to batch operations in a single HTTP request to save network round-trips and potentially increase performance. There are some tradeoffs that should be considered before batching requests.

Juniper's server integration crates support multiple operations in a single HTTP request using JSON arrays. This makes them compatible with client libraries that support batch operations without any special configuration.

Server integration crates maintained by others are not required to support batch requests. Batch requests aren't part of the official GraphQL specification.

Assuming an integration supports batch requests, for the following GraphQL query:
```graphql
{
  hero {
    name
  }
}
```
The JSON data to POST to the server for an individual request would be:

```json
{
  "query": "{hero{name}}"
}
```
And the response would be of the form:

```json
{
  "data": {
    "hero": {
      "name": "R2-D2"
    }
  }
}
```
If you wanted to run the same query twice in a single HTTP request, the batched JSON data to POST to the server would be:

```json
[
  {
    "query": "{hero{name}}"
  },
  {
    "query": "{hero{name}}"
  }
]
```
And the response would be of the form:

```json
[
  {
    "data": {
      "hero": {
        "name": "R2-D2"
      }
    }
  },
  {
    "data": {
      "hero": {
        "name": "R2-D2"
      }
    }
  }
]
```
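Since the payloads are plain JSON, a Rust client can assemble a batch with `serde_json` alone. A small sketch (how you POST it, e.g. with reqwest or awc, is out of scope here):

```rust
use serde_json::json;

fn main() {
    // The same batched payload shown above, built programmatically.
    let batch = json!([
        { "query": "{hero{name}}" },
        { "query": "{hero{name}}" }
    ]);

    // A server supporting batching replies with a JSON array containing one
    // response object per operation in the batch.
    println!("{}", serde_json::to_string_pretty(&batch).unwrap());
}
```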
Up until now, we've only looked at mapping structs to GraphQL objects. However, any Rust type can be mapped into a GraphQL object. In this chapter, we'll look at enums, but traits will work too - they don't have to be mapped into GraphQL interfaces.
Using `Result`-like enums can be a useful way of reporting e.g. validation errors from a mutation:
```rust
# extern crate juniper;
# use juniper::{graphql_object, GraphQLObject};
# #[derive(juniper::GraphQLObject)] struct User { name: String }
#
#[derive(GraphQLObject)]
struct ValidationError {
    field: String,
    message: String,
}

# #[allow(dead_code)]
enum SignUpResult {
    Ok(User),
    Error(Vec<ValidationError>),
}

#[graphql_object]
impl SignUpResult {
    fn user(&self) -> Option<&User> {
        match *self {
            SignUpResult::Ok(ref user) => Some(user),
            SignUpResult::Error(_) => None,
        }
    }

    fn error(&self) -> Option<&Vec<ValidationError>> {
        match *self {
            SignUpResult::Ok(_) => None,
            SignUpResult::Error(ref errors) => Some(errors),
        }
    }
}
#
# fn main() {}
```
Here, we use an enum to decide whether a user's input data was valid or not, and it could be used as the result of e.g. a sign up mutation.
While this is an example of how you could use something other than a struct to represent a GraphQL object, it's also an example of how you could implement error handling for "expected" errors - errors like validation errors. There are no hard rules on how to represent errors in GraphQL, but there are some comments from one of the authors of GraphQL on how they intended "hard" field errors to be used, and how to model expected errors.
Yet another point where GraphQL and Rust differ is in how generics work. In Rust, almost any type could be generic - that is, take type parameters. In GraphQL, there are only two generic types: lists and non-nullables.
This poses a restriction on what you can expose in GraphQL from Rust: no generic structs can be exposed - all type parameters must be bound. For example, you cannot make e.g. `Result<T, E>` into a GraphQL type, but you can make e.g. `Result<User, String>` into a GraphQL type.
Let's make a slightly more compact but generic implementation of the last chapter:
```rust
# extern crate juniper;
# #[derive(juniper::GraphQLObject)] struct User { name: String }
# #[derive(juniper::GraphQLObject)] struct ForumPost { title: String }

#[derive(juniper::GraphQLObject)]
struct ValidationError {
    field: String,
    message: String,
}

# #[allow(dead_code)]
struct MutationResult<T>(Result<T, Vec<ValidationError>>);

#[juniper::graphql_object(
    name = "UserResult",
)]
impl MutationResult<User> {
    fn user(&self) -> Option<&User> {
        self.0.as_ref().ok()
    }

    fn error(&self) -> Option<&Vec<ValidationError>> {
        self.0.as_ref().err()
    }
}

#[juniper::graphql_object(
    name = "ForumPostResult",
)]
impl MutationResult<ForumPost> {
    fn forum_post(&self) -> Option<&ForumPost> {
        self.0.as_ref().ok()
    }

    fn error(&self) -> Option<&Vec<ValidationError>> {
        self.0.as_ref().err()
    }
}

# fn main() {}
```
Here, we've made a wrapper around `Result` and exposed some concrete instantiations of `Result<T, E>` as distinct GraphQL objects. The reason we needed the wrapper is Rust's rules for when you can derive a trait - in this case, both `Result` and Juniper's internal GraphQL trait are from third-party sources.
Because we're using generics, we also need to specify a name for our instantiated types. Even if Juniper could figure out the name, `MutationResult<User>` wouldn't be a valid GraphQL type name.
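As a usage sketch (the `sign_up` mutation below is hypothetical and only reuses the wrapper defined above), a mutation can then return one of the named instantiations directly:

```rust
# extern crate juniper;
# #[derive(juniper::GraphQLObject)] struct User { name: String }
# #[derive(juniper::GraphQLObject)] struct ValidationError { field: String, message: String }
# struct MutationResult<T>(Result<T, Vec<ValidationError>>);
# #[juniper::graphql_object(name = "UserResult")]
# impl MutationResult<User> {
#     fn user(&self) -> Option<&User> { self.0.as_ref().ok() }
#     fn error(&self) -> Option<&Vec<ValidationError>> { self.0.as_ref().err() }
# }
struct Mutation;

#[juniper::graphql_object]
impl Mutation {
    // Hypothetical sign-up mutation: returns either the created user or
    // validation errors, wrapped in the generic MutationResult from above.
    fn sign_up(name: String) -> MutationResult<User> {
        if name.is_empty() {
            MutationResult(Err(vec![ValidationError {
                field: "name".into(),
                message: "must not be empty".into(),
            }]))
        } else {
            MutationResult(Ok(User { name }))
        }
    }
}
# fn main() {}
```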
GraphQL subscriptions are a way to push data from the server to clients requesting real-time messages from the server. Subscriptions are similar to queries in that they specify a set of fields to be delivered to the client, but instead of immediately returning a single answer, a result is sent every time a particular event happens on the server.
In order to execute subscriptions you need a coordinator (that spawns connections) and a GraphQL object that can be resolved into a stream, elements of which will then be returned to the end user. The `juniper_subscriptions` crate provides a default connection implementation. Currently subscriptions are only supported on the `master` branch. Add the following to your `Cargo.toml`:
```toml
[dependencies]
juniper = { git = "https://github.com/graphql-rust/juniper", branch = "master" }
juniper_subscriptions = { git = "https://github.com/graphql-rust/juniper", branch = "master" }
```
The `Subscription` is just a GraphQL object, similar to the query root and mutations object that you defined for the operations in your [Schema][Schema]. For subscriptions, all fields/operations should be async and should return a Stream.

This example shows a subscription operation that returns two events, the strings `Hello` and `World!`, sequentially:
```rust
# use juniper::{graphql_object, graphql_subscription, FieldError};
# use futures::Stream;
# use std::pin::Pin;
#
# #[derive(Clone)]
# pub struct Database;
# impl juniper::Context for Database {}
#
# pub struct Query;
# #[graphql_object(context = Database)]
# impl Query {
#     fn hello_world() -> &'static str {
#         "Hello World!"
#     }
# }
pub struct Subscription;

type StringStream = Pin<Box<dyn Stream<Item = Result<String, FieldError>> + Send>>;

#[graphql_subscription(context = Database)]
impl Subscription {
    async fn hello_world() -> StringStream {
        let stream = futures::stream::iter(vec![
            Ok(String::from("Hello")),
            Ok(String::from("World!")),
        ]);
        Box::pin(stream)
    }
}
#
# fn main() {}
```
Subscriptions require a bit more resources than regular queries and provide a great vector for DoS attacks. This can bring down a server easily if not handled correctly. The [SubscriptionCoordinator][SubscriptionCoordinator] trait provides coordination logic to enable functionality like DoS attack mitigation and resource limits.
The [SubscriptionCoordinator][SubscriptionCoordinator] contains the schema and can keep track of opened connections, handle subscription start and end, and maintain a global subscription id for each subscription. Each time a connection is established, the [SubscriptionCoordinator][SubscriptionCoordinator] spawns a [SubscriptionConnection][SubscriptionConnection]. The [SubscriptionConnection][SubscriptionConnection] handles a single connection, providing resolver logic for a client stream as well as reconnection and shutdown logic.
While you can implement [SubscriptionCoordinator][SubscriptionCoordinator] yourself, Juniper contains a simple and generic implementation called [Coordinator][Coordinator]. The `subscribe` operation returns a [Future][Future] with an `Item` value of a `Result<Connection, GraphQLError>`, where [Connection][Connection] is a `Stream` of values returned by the operation and [GraphQLError][GraphQLError] is the error when the subscription fails.
```rust
# #![allow(dead_code)]
# extern crate futures;
# extern crate juniper;
# extern crate juniper_subscriptions;
# extern crate serde_json;
# extern crate tokio;
# use juniper::{
#     http::GraphQLRequest,
#     graphql_object, graphql_subscription,
#     DefaultScalarValue, EmptyMutation, FieldError,
#     RootNode, SubscriptionCoordinator,
# };
# use juniper_subscriptions::Coordinator;
# use futures::{Stream, StreamExt};
# use std::pin::Pin;
#
# #[derive(Clone)]
# pub struct Database;
#
# impl juniper::Context for Database {}
#
# impl Database {
#     fn new() -> Self {
#         Self {}
#     }
# }
#
# pub struct Query;
#
# #[graphql_object(context = Database)]
# impl Query {
#     fn hello_world() -> &'static str {
#         "Hello World!"
#     }
# }
#
# pub struct Subscription;
#
# type StringStream = Pin<Box<dyn Stream<Item = Result<String, FieldError>> + Send>>;
#
# #[graphql_subscription(context = Database)]
# impl Subscription {
#     async fn hello_world() -> StringStream {
#         let stream =
#             futures::stream::iter(vec![Ok(String::from("Hello")), Ok(String::from("World!"))]);
#         Box::pin(stream)
#     }
# }
type Schema = RootNode<'static, Query, EmptyMutation<Database>, Subscription>;

fn schema() -> Schema {
    Schema::new(Query {}, EmptyMutation::new(), Subscription {})
}

async fn run_subscription() {
    let schema = schema();
    let coordinator = Coordinator::new(schema);
    let req: GraphQLRequest<DefaultScalarValue> = serde_json::from_str(
        r#"{
            "query": "subscription { helloWorld }"
        }"#,
    )
    .unwrap();
    let ctx = Database::new();
    let mut conn = coordinator.subscribe(&req, &ctx).await.unwrap();
    while let Some(result) = conn.next().await {
        println!("{}", serde_json::to_string(&result).unwrap());
    }
}
#
# fn main() {}
```
Currently there is an example of subscriptions with [warp][warp], but it is still in an alpha state. GraphQL over [WS][WS] is not fully supported yet and is non-standard.
[Coordinator]: https://docs.rs/juniper_subscriptions/0.15.0/struct.Coordinator.html
[SubscriptionCoordinator]: https://docs.rs/juniper_subscriptions/0.15.0/trait.SubscriptionCoordinator.html
[Connection]: https://docs.rs/juniper_subscriptions/0.15.0/struct.Connection.html
[SubscriptionConnection]: https://docs.rs/juniper_subscriptions/0.15.0/trait.SubscriptionConnection.html
[Future]: https://docs.rs/futures/0.3.4/futures/future/trait.Future.html
[warp]: https://github.com/graphql-rust/juniper/tree/master/juniper_warp
[WS]: https://github.com/apollographql/subscriptions-transport-ws/blob/master/PROTOCOL.md
[GraphQLError]: https://docs.rs/juniper/0.14.2/juniper/enum.GraphQLError.html
[Schema]: ../schema/schemas_and_mutations.md