
Overview

Apischema

Makes your life easier when it comes to Python APIs.

JSON (de)serialization and schema generation through Python typing, with a spoonful of sugar.

Install

pip install apischema

It requires only Python 3.6+ (plus the official dataclasses backport for Python 3.6 only).

Why another library?

This library fulfills the following goals:

  • stay as close as possible to the standard library (dataclasses, typing, etc.) in order to be as accessible as possible; as a consequence, no plugins are needed for editors, linters, etc.;
  • be additive and tunable: work with user-defined types (ORM models, etc.) as well as types from third-party libraries; handling new types like bson.ObjectId requires no PR and no subclassing;
  • avoid dynamic constructs like referring to attributes by their string names.

No known alternative achieves that.

Note

In fact, Apischema is adaptable enough to support "rival" libraries in a few dozen lines of code (see the conversions section).
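
As a taste of that feature, here is a minimal sketch of adding support for a foreign type such as bson.ObjectId. It assumes that the deserializer and serializer decorators of the conversions feature register the decorated functions as converters based on their type annotations; check the conversions section for the exact API of your Apischema version.

from bson import ObjectId  # third-party type, unknown to Apischema

from apischema import deserializer, serializer


@deserializer
def object_id_from_hex(hex_string: str) -> ObjectId:
    # Deserialize an ObjectId from its 24-character hexadecimal representation
    return ObjectId(hex_string)


@serializer
def object_id_to_hex(object_id: ObjectId) -> str:
    # Serialize an ObjectId back to its hexadecimal representation
    return str(object_id)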

Example

from dataclasses import dataclass, field
from uuid import UUID, uuid4

from pytest import raises

from apischema import ValidationError, deserialize, serialize

# Define a schema with standard dataclasses
from apischema.json_schema import deserialization_schema


@dataclass
class Resource:
    id: UUID
    name: str
    tags: set[str] = field(default_factory=set)  # use typing.Set[str] on Python < 3.9


# Get some data
uuid = uuid4()
data = {"id": str(uuid), "name": "wyfo", "tags": ["some_tag"]}
# Deserialize data
resource = deserialize(Resource, data)
assert resource == Resource(uuid, "wyfo", {"some_tag"})
# Serialize objects
assert serialize(resource) == data
# Validate during deserialization
with raises(ValidationError) as err:  # pytest check exception is raised
    deserialize(Resource, {"id": "42", "name": "wyfo"})
assert serialize(err.value) == [  # ValidationError is serializable
    {"loc": ["id"], "err": ["badly formed hexadecimal UUID string"]}
]
# Generate JSON Schema
assert deserialization_schema(Resource) == {
    "$schema": "http://json-schema.org/draft/2019-09/schema#",
    "type": "object",
    "properties": {
        "id": {"type": "string", "format": "uuid"},
        "name": {"type": "string"},
        "tags": {"type": "array", "items": {"type": "string"}, "uniqueItems": True},
    },
    "required": ["id", "name"],
    "additionalProperties": False,
}

Apischema works out of the box with your data model.

Note

This example and the following ones use pytest because they are actually run as tests in the library CI.

FAQ

I already have my data model defined with SQLAlchemy/ORM tables; will I have to duplicate my code, writing one dataclass per table?

Why would you have to duplicate anything? Apischema can "work with user-defined types as well as types from third-party libraries". As a teaser of the conversion feature: you can add a default serialization for all your tables, register different serializers that you can select according to your API endpoint, or both.

So SQLAlchemy is supported? What about other libraries?

No; in fact, no third-party library is supported, not even SQLAlchemy. This is a deliberate choice to keep the library as small and generic as possible, supporting only the standard library (with types like datetime and UUID). However, the library is flexible enough for you to code the support you need yourself with, hopefully, minimal effort. Adding support through additional small plugin libraries is of course not excluded. Feedback about the best way to do this is welcome.
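
As a rough illustration, hand-rolled SQLAlchemy support could look like the sketch below: a serializer converts a mapped class into a plain dataclass that Apischema already understands. The model, the view dataclass, and the converter are illustrative, and the registration relies on the same assumed serializer decorator behavior as the ObjectId sketch above.

from dataclasses import dataclass

from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

from apischema import serializer

Base = declarative_base()


class UserTable(Base):  # a typical ORM-mapped class
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)


@dataclass
class UserView:  # plain dataclass that Apischema handles natively
    id: int
    name: str


@serializer
def user_to_view(user: UserTable) -> UserView:
    # Registered as the default serialization for UserTable instances
    return UserView(id=user.id, name=user.name)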

I need more accurate validation than "ensure this is an integer and not a string"; can I do that?

See the validation section. You can use standard JSON schema validation keywords (maxItems, pattern, etc.), which will be embedded in your schema, or add custom Python validators for any class, field, or NewType you want.
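
For a rough idea of both flavors, here is a sketch combining a JSON schema constraint attached to a NewType with a custom validator method. The keyword arguments of the schema helper and the exact validator semantics may differ between Apischema versions, so refer to the validation section for the authoritative API.

from dataclasses import dataclass, field
from typing import NewType

from apischema import schema, validator

# JSON schema constraints attached to a NewType, embedded in the generated schema
Tag = NewType("Tag", str)
schema(min_len=3, pattern=r"^\w*$")(Tag)


@dataclass
class TaggedResource:
    name: str
    tags: set[Tag] = field(default_factory=set)

    @validator
    def no_reserved_tag(self):
        # Custom Python validation, run during deserialization
        if Tag("internal") in self.tags:
            raise ValueError("'internal' is a reserved tag")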

Let's start the Apischema tour.