Data model and resolvers¶
Almost everything in the Data model section remains valid for the GraphQL integration, with a few differences.
GraphQL-specific data model¶
Enum¶
Enum members are represented in the schema using their name instead of their value. This is more consistent with the way GraphQL represents enumerations.
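For illustration, a minimal sketch (the Color enum below is invented for the example):
from enum import Enum

from graphql import print_schema

from apischema.graphql import graphql_schema


class Color(Enum):
    RED = "#ff0000"
    GREEN = "#00ff00"


def color() -> Color:
    ...


schema = graphql_schema(query=[color])
# The printed schema should contain the member names, not the values:
# enum Color {
#   RED
#   GREEN
# }
print(print_schema(schema))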
TypedDict¶
TypedDict is not supported as an output type (see the FAQ).
Union¶
Unions are only supported between output object types, which means dataclass and NamedTuple (and conversions/dataclass model).
There are two exceptions which can always be used in Union:
- None/Optional: types are non-null (marked with an exclamation mark ! in the GraphQL schema) by default; Optional types however result in regular GraphQL types (without !).
- apischema.UndefinedType: it is simply ignored. It is useful in resolvers, see the following section.
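A small sketch of a supported union (Cat and Dog are invented for the example):
from dataclasses import dataclass
from typing import Union

from graphql import print_schema

from apischema.graphql import graphql_schema


@dataclass
class Cat:
    meow: str


@dataclass
class Dog:
    bark: str


def pet() -> Union[Cat, Dog, None]:
    ...


schema = graphql_schema(query=[pet])
# The printed schema should contain a union of Cat and Dog;
# the pet field itself is nullable because of None
print(print_schema(schema))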
Non-null¶
Types are assumed to be non-null by default, as in Python typing. Nullable types are obtained using typing.Optional (or typing.Union with a None argument).
Note
There is one exception: when a resolver parameter default value is not serializable (and thus cannot be included in the schema), the parameter type is set as nullable to make the parameter non-required. For example, parameters that are not Optional but have Undefined as default value will be marked as nullable. This only affects the schema; the default value is still used at execution.
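A minimal sketch of the default behavior described above (Foo is invented for the example):
from dataclasses import dataclass
from typing import Optional

from graphql import print_schema

from apischema.graphql import graphql_schema


@dataclass
class Foo:
    bar: int  # non-null by default, i.e. Int!
    baz: Optional[int]  # nullable, i.e. Int


def foo() -> Foo | None:
    ...


schema = graphql_schema(query=[foo])
# print_schema(schema) should contain "bar: Int!" and "baz: Int"
print(print_schema(schema))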
Undefined¶
In output, Undefined is converted to None; so in the schema, Union[T, UndefinedType] will be nullable.
In input, fields become nullable when Undefined is their default value.
Interfaces¶
Interfaces are simply classes marked with the apischema.graphql.interface decorator. An object type implements an interface when its class inherits from an interface-marked class, or when it has flattened fields of an interface-marked dataclass.
from dataclasses import dataclass

from graphql import print_schema

from apischema.graphql import graphql_schema, interface


@interface
@dataclass
class Bar:
    bar: int


@dataclass
class Foo(Bar):
    baz: str


def foo() -> Foo | None:
    ...


schema = graphql_schema(query=[foo])
schema_str = """\
type Query {
  foo: Foo
}

type Foo implements Bar {
  bar: Int!
  baz: String!
}

interface Bar {
  bar: Int!
}
"""
assert print_schema(schema) == schema_str
Resolvers¶
All dataclass/NamedTuple fields (except skipped ones) are resolved with their alias in the GraphQL schema.
Custom resolvers can also be added by marking methods with the apischema.graphql.resolver decorator; resolvers share a common interface with apischema.serialized, with a few differences.
Methods can be synchronous or asynchronous (defined with async def or annotated with a typing.Awaitable return type).
Resolver parameters are included in the schema with their type and their default value.
from dataclasses import dataclass

from graphql import print_schema

from apischema.graphql import graphql_schema, resolver


@dataclass
class Bar:
    baz: int


@dataclass
class Foo:
    @resolver
    async def bar(self, arg: int = 0) -> Bar:
        ...


async def foo() -> Foo | None:
    ...


schema = graphql_schema(query=[foo])
schema_str = """\
type Query {
  foo: Foo
}

type Foo {
  bar(arg: Int! = 0): Bar!
}

type Bar {
  baz: Int!
}
"""
assert print_schema(schema) == schema_str
GraphQLResolveInfo parameter¶
Resolvers can have an additional parameter of type graphql.GraphQLResolveInfo (or Optional[graphql.GraphQLResolveInfo]), which is automatically injected when the resolver is executed in the context of a GraphQL request. This parameter contains information about the current GraphQL request being executed.
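A minimal sketch (Foo and its bar resolver are invented for the example):
from dataclasses import dataclass

import graphql

from apischema.graphql import graphql_schema, resolver


@dataclass
class Foo:
    @resolver
    def bar(self, info: graphql.GraphQLResolveInfo) -> str:
        # The info parameter is injected at execution and does not
        # appear as a field argument in the schema
        return info.field_name


def foo() -> Foo:
    return Foo()


schema = graphql_schema(query=[foo])
assert graphql.graphql_sync(schema, "{foo{bar}}").data == {"foo": {"bar": "bar"}}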
Undefined parameter default — null vs. undefined¶
Undefined can be used as the default value of resolver parameters. It can be used to distinguish a null input from an absent/undefined input: a null value results in a None argument, while no value at all uses the default value, i.e. Undefined.
from graphql import graphql_sync

from apischema import Undefined, UndefinedType
from apischema.graphql import graphql_schema


def arg_is_absent(arg: int | UndefinedType | None = Undefined) -> bool:
    return arg is Undefined


schema = graphql_schema(query=[arg_is_absent])
assert graphql_sync(schema, "{argIsAbsent(arg: null)}").data == {"argIsAbsent": False}
assert graphql_sync(schema, "{argIsAbsent}").data == {"argIsAbsent": True}
Error handling¶
Errors occurring in resolvers can be caught in a dedicated error handler registered with the error_handler parameter. This function receives the exception, the object, the info and the kwargs of the failing resolver; it can return a new value or raise the current or another exception. It can, for example, be used to log errors without aborting the whole serialization.
The resulting serialization type will be a Union of the normal type and the error handling type; if the error handler always raises, use the typing.NoReturn annotation.
error_handler=None corresponds to a default handler which only returns None; the exception is thus discarded and the resolver type becomes Optional.
The error handler is only executed by the apischema serialization process; it's not added to the function itself, so the function can still be called normally and raise an exception in the rest of your code.
Error handlers can be synchronous or asynchronous.
from dataclasses import dataclass
from logging import getLogger
from typing import Any

import graphql
from graphql.utilities import print_schema

from apischema.graphql import graphql_schema, resolver

logger = getLogger(__name__)


def log_error(
    error: Exception, obj: Any, info: graphql.GraphQLResolveInfo, **kwargs
) -> None:
    logger.error(
        "Resolve error in %s", ".".join(map(str, info.path.as_list())), exc_info=error
    )
    return None


@dataclass
class Foo:
    @resolver(error_handler=log_error)
    def bar(self) -> int:
        raise RuntimeError("Bar error")

    @resolver
    def baz(self) -> int:
        raise RuntimeError("Baz error")


def foo(info: graphql.GraphQLResolveInfo) -> Foo:
    return Foo()


schema = graphql_schema(query=[foo])
# Notice that bar is Int while baz is Int!
schema_str = """\
type Query {
  foo: Foo!
}

type Foo {
  bar: Int
  baz: Int!
}
"""
assert print_schema(schema) == schema_str
# Logs "Resolve error in foo.bar", no error raised
assert graphql.graphql_sync(schema, "{foo{bar}}").data == {"foo": {"bar": None}}
# Error is raised
assert graphql.graphql_sync(schema, "{foo{baz}}").errors[0].message == "Baz error"
Parameters metadata¶
Resolver parameters can have metadata like dataclass fields. They can be passed using typing.Annotated.
from dataclasses import dataclass
from typing import Annotated

from graphql.utilities import print_schema

from apischema import alias, schema
from apischema.graphql import graphql_schema, resolver


@dataclass
class Foo:
    @resolver
    def bar(
        self, param: Annotated[int, alias("arg") | schema(description="argument")]
    ) -> int:
        return param


def foo() -> Foo:
    return Foo()


schema_ = graphql_schema(query=[foo])
schema_str = '''\
type Query {
  foo: Foo!
}

type Foo {
  bar(
    """argument"""
    arg: Int!
  ): Int!
}
'''
assert print_schema(schema_) == schema_str
Note
Metadata can also be passed with the parameters_metadata parameter; it takes a mapping with parameter names as keys and the mapped metadata as values.
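For instance, a sketch of the previous example rewritten with parameters_metadata, assuming it is passed to the resolver decorator as the note describes:
from dataclasses import dataclass

from apischema import alias, schema
from apischema.graphql import graphql_schema, resolver


@dataclass
class Foo:
    # Same metadata as the Annotated version above
    @resolver(
        parameters_metadata={"param": alias("arg") | schema(description="argument")}
    )
    def bar(self, param: int) -> int:
        return param


def foo() -> Foo:
    return Foo()


schema_ = graphql_schema(query=[foo])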
Parameters base schema¶
Following the example of the type/field/method base schema, resolver parameters also support a base schema definition.
import inspect
from dataclasses import dataclass
from typing import Any, Callable

import docstring_parser
from graphql.utilities import print_schema

from apischema import schema, settings
from apischema.graphql import graphql_schema, resolver
from apischema.schemas import Schema


@dataclass
class Foo:
    @resolver
    def bar(self, arg: str) -> int:
        """bar method

        :param arg: arg parameter
        """
        ...


def method_base_schema(tp: Any, method: Callable, alias: str) -> Schema | None:
    return schema(description=docstring_parser.parse(method.__doc__).short_description)


def parameter_base_schema(
    method: Callable, parameter: inspect.Parameter, alias: str
) -> Schema | None:
    for doc_param in docstring_parser.parse(method.__doc__).params:
        if doc_param.arg_name == parameter.name:
            return schema(description=doc_param.description)
    return None


settings.base_schema.method = method_base_schema
settings.base_schema.parameter = parameter_base_schema


def foo() -> Foo:
    ...


schema_ = graphql_schema(query=[foo])
schema_str = '''\
type Query {
  foo: Foo!
}

type Foo {
  """bar method"""
  bar(
    """arg parameter"""
    arg: String!
  ): Int!
}
'''
assert print_schema(schema_) == schema_str
ID type¶
GraphQL ID has no precise specification and is defined according to API needs; it can be a UUID or an ObjectId, etc.
apischema.graphql.graphql_schema has a parameter id_types which can be used to define which types will be marked as ID in the generated schema. The parameter value can be either a collection of types (each type will then be mapped to the ID scalar), or a predicate returning whether the given type must be marked as ID.
from dataclasses import dataclass
from uuid import UUID

from graphql import print_schema

from apischema.graphql import graphql_schema


@dataclass
class Foo:
    bar: UUID


def foo() -> Foo | None:
    ...


# id_types={UUID} is equivalent to id_types=lambda t: t in {UUID}
schema = graphql_schema(query=[foo], id_types={UUID})
schema_str = """\
type Query {
  foo: Foo
}

type Foo {
  bar: ID!
}
"""
assert print_schema(schema) == schema_str
Note
ID type could also be identified using typing.Annotated and a predicate looking into the annotations.
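For instance, a possible sketch, assuming the id_types predicate receives the annotated type unchanged (ID_MARKER is a hypothetical marker, not part of apischema):
from dataclasses import dataclass
from typing import Annotated, get_args, get_origin

from apischema.graphql import graphql_schema

# Hypothetical marker placed in Annotated metadata to tag ID types
ID_MARKER = object()


def is_id(tp) -> bool:
    # Look for the marker in the Annotated metadata
    return get_origin(tp) is Annotated and ID_MARKER in get_args(tp)[1:]


@dataclass
class Foo:
    bar: Annotated[str, ID_MARKER]


def foo() -> Foo | None:
    ...


schema = graphql_schema(query=[foo], id_types=is_id)
# bar should then appear as ID! in the schema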
apischema also provides a simple ID type with apischema.graphql.ID. It is just defined as a NewType of string, so you can use it when you want to manipulate raw ID strings in your resolvers.
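A minimal sketch of its use (Foo is invented for the example):
from dataclasses import dataclass

from apischema.graphql import ID, graphql_schema


@dataclass
class Foo:
    id: ID  # ID is a NewType of str, so the raw string is kept as-is


def foo() -> Foo | None:
    return Foo(ID("foo-1"))


schema = graphql_schema(query=[foo])
# id should be typed as ID! in the schema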
ID encoding¶
ID encoding can be directly controlled with the id_encoding parameter of graphql_schema. A common practice is to use base64 encoding for ID.
from base64 import b64decode, b64encode
from dataclasses import dataclass
from uuid import UUID

from graphql import graphql_sync

from apischema.graphql import graphql_schema


@dataclass
class Foo:
    id: UUID


def foo() -> Foo | None:
    return Foo(UUID("58c88e87-5769-4723-8974-f9ec5007a38b"))


schema = graphql_schema(
    query=[foo],
    id_types={UUID},
    id_encoding=(
        lambda s: b64decode(s).decode(),
        lambda s: b64encode(s.encode()).decode(),
    ),
)

assert graphql_sync(schema, "{foo{id}}").data == {
    "foo": {"id": "NThjODhlODctNTc2OS00NzIzLTg5NzQtZjllYzUwMDdhMzhi"}
}
Note
You can also use relay.base64_encoding (see next section).
Note
ID serialization (respectively deserialization) is applied after apischema conversions (respectively before apischema conversions): in the example, the UUID is already converted into a string before being passed to the id_serializer.
If you use base64 encoding and an ID type which is converted by apischema to a base64 string, you will get a double-encoded base64 string.
Tagged unions¶
Important
This feature has a provisional status, as the concerned GraphQL RFC is not finalized.
apischema provides an apischema.tagged_unions.TaggedUnion base class which helps to implement the tagged union pattern. Its fields must be typed using the apischema.tagged_unions.Tagged generic type.
from dataclasses import dataclass

from pytest import raises

from apischema import Undefined, ValidationError, alias, deserialize, schema, serialize
from apischema.tagged_unions import Tagged, TaggedUnion, get_tagged


@dataclass
class Bar:
    field: str


class Foo(TaggedUnion):
    bar: Tagged[Bar]
    # Tagged can have metadata like a dataclass field
    i: Tagged[int] = Tagged(alias("baz") | schema(min=0))


# Instantiate using class fields
tagged_bar = Foo.bar(Bar("value"))
# You can also use the default constructor, but it's not type-checked
assert tagged_bar == Foo(bar=Bar("value"))
# All fields that are not tagged are Undefined
assert tagged_bar.bar is not Undefined and tagged_bar.i is Undefined
# get_tagged allows retrieving the tag and its value
# (but the value is not type-checked)
assert get_tagged(tagged_bar) == ("bar", Bar("value"))

# (De)serialization works as expected
assert deserialize(Foo, {"bar": {"field": "value"}}) == tagged_bar
assert serialize(Foo, tagged_bar) == {"bar": {"field": "value"}}

with raises(ValidationError) as err:
    deserialize(Foo, {"unknown": 42})
assert err.value.errors == [{"loc": ["unknown"], "msg": "unexpected property"}]

with raises(ValidationError) as err:
    deserialize(Foo, {"bar": {"field": "value"}, "baz": 0})
assert err.value.errors == [
    {"loc": [], "msg": "property count greater than 1 (maxProperties)"}
]
JSON schema¶
Tagged union JSON schemas use minProperties: 1 and maxProperties: 1.
from dataclasses import dataclass

from apischema.json_schema import deserialization_schema, serialization_schema
from apischema.tagged_unions import Tagged, TaggedUnion


@dataclass
class Bar:
    field: str


class Foo(TaggedUnion):
    bar: Tagged[Bar]
    baz: Tagged[int]


assert (
    deserialization_schema(Foo)
    == serialization_schema(Foo)
    == {
        "$schema": "http://json-schema.org/draft/2020-12/schema#",
        "type": "object",
        "properties": {
            "bar": {
                "type": "object",
                "properties": {"field": {"type": "string"}},
                "required": ["field"],
                "additionalProperties": False,
            },
            "baz": {"type": "integer"},
        },
        "additionalProperties": False,
        "minProperties": 1,
        "maxProperties": 1,
    }
)
GraphQL schema¶
As tagged unions are not (yet?) part of the GraphQL spec, they are just implemented as normal (input) object types with nullable fields. An error is raised if several tags are passed in input.
from dataclasses import dataclass

from graphql import graphql_sync
from graphql.utilities import print_schema

from apischema.graphql import graphql_schema
from apischema.tagged_unions import Tagged, TaggedUnion


@dataclass
class Bar:
    field: str


class Foo(TaggedUnion):
    bar: Tagged[Bar]
    baz: Tagged[int]


def query(foo: Foo) -> Foo:
    return foo


schema = graphql_schema(query=[query])
schema_str = """\
type Query {
  query(foo: FooInput!): Foo!
}

type Foo {
  bar: Bar
  baz: Int
}

type Bar {
  field: String!
}

input FooInput {
  bar: BarInput
  baz: Int
}

input BarInput {
  field: String!
}
"""
assert print_schema(schema) == schema_str

query_str = """
{
  query(foo: {bar: {field: "value"}}) {
    bar {
      field
    }
    baz
  }
}
"""
assert graphql_sync(schema, query_str).data == {
    "query": {"bar": {"field": "value"}, "baz": None}
}
FAQ¶
Why is TypedDict not supported as an output type?¶
First, TypedDict subclasses are not real classes, so they cannot be used to check types at runtime. A runtime check is however required to disambiguate unions/interfaces. A hack could be done to solve this issue, but there is another one which cannot be hacked: the TypedDict inheritance hierarchy is lost at runtime, so TypedDict doesn't play nicely with the interface concept.
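A quick illustration of the runtime limitation, using only the standard library (Foo is invented for the example):
from typing import TypedDict


class Foo(TypedDict):
    bar: int


foo = Foo(bar=0)
# TypedDict subclasses are not usable for runtime checks:
# isinstance raises TypeError, so unions cannot be disambiguated
try:
    isinstance(foo, Foo)
except TypeError:
    pass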