Grafast V1: solving GraphQL's execution woes
Grafast V1.0.0 is live!
A radical new approach to GraphQL execution: Grafast’s breadth-first, batched, plan-based execution model integrates deeply with your business logic, making previously challenging optimizations straightforward whilst eliminating the costs and failure modes of traditional resolvers.
Many don’t realise just how costly traditional GraphQL execution can be until it’s too late. The GraphQL specification outlines a reference execution algorithm that focusses on being clear and instructive rather than efficient, but implementations may use any solution so long as the perceived result is equivalent. GraphQL.js — the GraphQL reference implementation — follows the reference execution algorithm nearly verbatim, passing the problems of the depth-first “resolver” pattern down to consumers.
Grafast is a radical alternative execution algorithm focussing on efficiency throughout the stack. Instead of the myopic resolver model of GraphQL.js, which has limited coordination with business logic, Grafast takes a holistic approach by planning the full request and optimizing business logic integration before executing it, whilst being sure to keep the perceived result equivalent for spec compliance.
When an operation is seen for the first time, planning begins: each field indicates its data needs and planned execution steps via synchronous “plan resolvers”. The resulting plan is then deduplicated and optimized so that the minimal number of steps, each backed by a batched function, is scheduled for execution. This final plan can be re-used for compatible future requests.
When executing the plan, the number of calls to these functions is independent of the size of the various lists, no matter how deeply they are nested, leading to predictable and scalable execution.
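To make that property concrete, here is a hypothetical sketch (the function names and shapes are invented for illustration; this is not Grafast’s internal API) of how breadth-first, batched execution keeps call counts flat regardless of list sizes:

```typescript
// Hypothetical sketch of breadth-first, batched execution: each step's
// function runs once per layer with all inputs batched together, so the
// number of calls stays flat no matter how large the lists get.
let batchedCalls = 0;

async function getProducts(): Promise<number[]> {
  batchedCalls++; // one call yields every product
  return Array.from({ length: 250 }, (_, i) => i);
}

async function getVariantsByProductIds(productIds: number[]): Promise<number[][]> {
  batchedCalls++; // one call yields the variants for all 250 products at once
  return productIds.map((id) =>
    Array.from({ length: 250 }, (_, i) => id * 1000 + i),
  );
}

async function main() {
  const products = await getProducts();
  const variantLists = await getVariantsByProductIds(products);
  const totalVariants = variantLists.reduce((sum, list) => sum + list.length, 0);
  return { batchedCalls, totalVariants };
}

const done = main();
done.then((r) => console.log(r)); // { batchedCalls: 2, totalVariants: 62500 }
```

Two asynchronous calls cover 62,500 nested records; adding more products or variants changes the batch sizes, not the call count.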
Greater efficiency throughout the stack
Grafast isn't just about making GraphQL execution itself more efficient; it aims to reduce computation costs and latency across your entire backend stack. No more backend over- and under-fetching. Eliminate the N+1 problem, reduce round-trips, and cut serialization/deserialization costs by fetching less data from your datastores.
You can also say goodbye to exponentially increasing resolver and telemetry calls in nested lists. Grafast even reduces event loop tick counts, eliminates per-request AST traversal, and produces the JSON payload directly without intermediate objects.
Backwards compatible
Grafast has a resolver compatibility layer, meaning many existing GraphQL.js projects can move directly to Grafast. The requirements for existing schemas to be compatible with Grafast are outlined in the using Grafast with an existing GraphQL.js project documentation. We hope to expand this compatibility to cover more esoteric resolvers over time.
An introduction to plan resolvers
Shopify recently shared a post on their journey to faster breadth-first GraphQL execution that goes into detail on these costs, how they impact Shopify at scale, and what they’re doing about it, acknowledging inspiration they took from Grafast.
When you have a list with 250 items, each containing a nested list of 250 items, and each of those with 10 fields, that’s already well over half a million resolver calls per request! If you’re wrapping resolver calls with telemetry hooks, authentication checks, or other logic then although the logic itself may seem efficient, calling it half a million times per request is likely to start having a measurable impact on your main runloop!
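For a rough sense of scale, the arithmetic behind that claim works out as follows (using the 250 × 250 × 10 shape from the example above):

```typescript
// Back-of-the-envelope resolver call count for a 250-item list of 250-item
// lists, with 10 leaf fields on each inner item (figures from the example above).
const outerItems = 250;
const innerItems = 250;
const leafFields = 10;

const resolverCalls =
  1 + // the outer list field itself
  outerItems + // the inner list field, once per outer item
  outerItems * innerItems * leafFields; // the leaf fields, once per inner item

console.log(resolverCalls); // 625251 — "well over half a million"
```

And that is before counting any wrapping logic (telemetry, auth checks) applied per resolver call.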
With Grafast, instead of writing traditional resolvers you write plan resolvers. Plan resolvers describe the dataflow of your operations, indicating which pieces of business logic will be called and determining what is needed for and from each of them, all without actually calling them. This synchronous process outlines the steps necessary to execute your query.
These steps form a tree we call the “execution plan”; for a query such as:
{
  products(first: 3) {
    nodes {
      variants(first: 2) {
        nodes {
          id
        }
      }
    }
  }
}
You might build an execution plan such as:
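One plausible shape for that plan (an illustrative sketch using the step names from the discussion that follows; the exact tree Grafast builds may differ):

```
Query
└─ LoadMany(GetProducts)                    (async, batched)
   └─ each product
      ├─ Get(id)                            (sync)
      └─ LoadMany(GetVariantsByProductId)   (async, batched)
         └─ each variant
            └─ Get(id)                      (sync)
```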
Grafast can derive a lot from this plan.
Firstly, it can immediately spot the two asynchronous calls: LoadMany(GetProducts) and LoadMany(GetVariantsByProductId). Since everything else is synchronous, these are the two steps to wrap with observability. Half a million observability calls in GraphQL.js become just 2 in Grafast... roughly 5 orders of magnitude fewer!
Secondly, Grafast can see exactly what attributes are necessary. Rather than fetching all products and all their data from the underlying datastores, the business logic can now be told just to fetch the id. That’s a massive decrease in the amount of data sent over the wire on the backend, but more importantly it’s less data to serialize/deserialize on our infrastructure, reducing our compute costs and latency.
Batched by design
Grafast uses batch execution, so it calls once into your business logic to get the products, and once to get all of the variants across all of the products. Only two calls into your business logic, only two promises created. This significantly reduces the overhead of garbage collection and micro-ticks when compared to traditional GraphQL.js techniques such as DataLoader. There are no per-item promises in Grafast, unlike with DataLoader, since the entire execution algorithm is batched.
The Grafast plan resolvers to achieve this are straightforward, designed to be written and read by humans:
import { connection, get, loadMany, makeGrafastSchema } from "grafast";

export const schema = makeGrafastSchema({
  typeDefs: /* GraphQL */ `
    type Query {
      products(first: Int): ProductsConnection
    }
    type Product {
      variants(first: Int): VariantConnection
    }
    type Variant {
      id: ID
    }
    # Connection types omitted for brevity
  `,
  plans: {
    Query: {
      products(_, fieldArgs) {
        const $products = loadMany(null, batchLoadProducts);
        return connection($products, { fieldArgs });
      },
    },
    Product: {
      variants($product, fieldArgs) {
        const $productId = get($product, "id");
        const $variants = loadMany($productId, batchLoadVariantsByProductId);
        return connection($variants, { fieldArgs });
      },
    },
  },
});
The batchLoadProducts and batchLoadVariantsByProductId functions can be the exact same functions that you would use with a DataLoader — batch loading functions that receive a list of inputs and must return a list of associated outputs. Unlike in GraphQL.js, you don’t need to create DataLoaders for each of these functions and add them to the GraphQL context; Grafast takes care of the batching automatically since it understands the full lifecycle of the request.
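As an illustration of that contract, here is an in-memory sketch (the data and names are hypothetical; a real loader would query a datastore): given a list of inputs, return the outputs in the same order, with null for inputs that have no match.

```typescript
// In-memory sketch of a DataLoader-compatible batch loading function.
// The data and names here are hypothetical; a real loader would hit a datastore.
interface Product {
  id: string;
  name: string;
}

const productTable: Product[] = [
  { id: "p1", name: "Widget" },
  { id: "p2", name: "Gadget" },
];

// Given a list of ids, return the matching products in the same order,
// using null for ids with no match — the contract DataLoader also expects.
async function batchLoadProductsById(
  ids: readonly string[],
): Promise<(Product | null)[]> {
  const byId = new Map(productTable.map((p) => [p.id, p]));
  return ids.map((id) => byId.get(id) ?? null);
}
```

Because the output order mirrors the input order, the caller can correlate each result back to the input that requested it without any extra bookkeeping.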
Easy to optimize
To further optimize, the loader callback can be wrapped in an object that indicates its supported features to Grafast. For example, support for limit/offset pagination can be indicated via paginationSupport: { offset: true }:
export const batchLoadVariantsByProductId = {
  name: "batchLoadVariantsByProductId",
  paginationSupport: { offset: true },
  async load(productIds, info) {
    const {
      attributes,
      params: { limit, offset },
    } = info;
    // Implementing business logic is left as an exercise to the reader
    const variantsByProductId = await getVariantsFromBusinessLogic({
      productIds,
      limit,
      offset,
      attributes,
    });
    // Correlate the results for each product ID.
    return productIds.map((productId) => variantsByProductId[productId]);
  },
};
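The getVariantsFromBusinessLogic call above is deliberately left abstract. Purely to illustrate the shape of the result it is expected to produce — a map from each product id to its (paginated) list of variants — here is a hypothetical in-memory stand-in:

```typescript
// Hypothetical in-memory stand-in for the business logic above, illustrating
// the expected result shape: each product id maps to its (paginated) variants.
interface Variant {
  id: string;
  productId: string;
}

const variantTable: Variant[] = [
  { id: "v1", productId: "p1" },
  { id: "v2", productId: "p1" },
  { id: "v3", productId: "p1" },
  { id: "v4", productId: "p2" },
];

async function getVariantsFromBusinessLogic(opts: {
  productIds: readonly string[];
  limit?: number;
  offset?: number;
  attributes?: readonly string[];
}): Promise<Record<string, Variant[]>> {
  const { productIds, limit = Infinity, offset = 0 } = opts;
  const result: Record<string, Variant[]> = {};
  for (const productId of productIds) {
    // Apply limit/offset per product, mirroring paginationSupport: { offset: true }.
    result[productId] = variantTable
      .filter((v) => v.productId === productId)
      .slice(offset, offset + limit);
  }
  return result;
}
```

A real implementation would push the filtering and pagination down into the datastore query rather than slicing in memory.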
Easy to customize
But it doesn’t stop there... loadMany() and connection() are just generic built-in steps designed to work with any datasource; you can build your own custom steps with not much more effort than creating a DataLoader instance. Custom steps can enable improved DX and more advanced optimizations (think: deduplication, eager loading, one-time preparation, caching and more).
Embrace efficient execution
Stop papering over the cracks in GraphQL execution with DataLoader, lookahead, and dodgy ResolveInfo hacks; instead embrace a new paradigm with Grafast. Start by reading an overview of Grafast or digging into loadMany()’s advantages over DataLoader.
Thank you
Grafast is crowd-funded open-source software, relying on sponsorship from individuals and companies to keep advancing.
We want to extend our sincere thanks to every person, company and organization that has sponsored us throughout the lifetime of the Grafast project, and extend a special thanks to the Featured Sponsors who helped support many of the key efforts during the Grafast and PostGraphile V5 development effort: The Guild, Dovetail, Steelhead, Surge, Netflix, StoryScript, Chad Furman, Fanatics, Enzuzo and Accenture.
If your employer benefits from Grafast, or the wider Graphile suite, please encourage them to fund our work. Our software saves companies substantial time and money by reducing development effort and running costs. Sponsorship is an investment in your product’s future, ensuring the foundations of your software stack remain secure and reliable for years to come.
Find out more about sponsorship at graphile.org/sponsor

