JSV - JSON Schema Validation library for Elixir, with support for 2020-12

Hello!

I would like to present to you my work from the last couple of weeks: an up-to-date JSON Schema validation library.

TL;DR: Repo link at the end of the post

History

I started to work on this when I needed to ensure that a bunch of JSON schemas were properly implemented. Validating a piece of data with a JSON schema is easy, but how can you know that your schema is correct? How do you validate the schema itself?

It turns out that the draft/2020-12 meta-schema, like its predecessors, is a schema that can validate other schemas, and it can validate itself.

I just needed a standard JSON schema validator that could follow all the rules of that specification.

So I started to work on that. I thought it would be a quick job with the help of the JSON Schema Test Suite. Just generate all the tests and make sure they are all green!

Oh boy! That specification is something… I could quickly write the main part of all the rules, but there were a lot of cases that required special handling all along the validation process.

Soon enough I had something that would work for my use case, but I was hooked. I wanted to make it work and implement the whole spec, as a personal challenge, cycling between “this is so much fun” and “argh! why is it so convoluted!? let’s rewrite everything again…”

And once it was done I thought I could share that with the community, because in the end it works pretty well.

Quick overview

The library works in two phases: first the “build”, which converts schemas into a more workable data structure, and second the “validation”.

Validation is straightforward once the build is correct; it works like in any other library: take the data, apply a validation function, and return an ok/error tuple. So basically, with a schema like {"type": "integer"}, the code boils down to a tiny wrapper around is_integer/1.
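For illustration, here is a hypothetical sketch of what such a validator boils down to (illustrative only, not JSV’s actual internals):

# Hypothetical sketch: a validator for {"type": "integer"} is
# essentially a guarded type check returning an ok/error tuple.
validate_integer = fn
  data when is_integer(data) -> {:ok, data}
  data -> {:error, {:type_mismatch, :integer, data}}
end

validate_integer.(42)    # => {:ok, 42}
validate_integer.("42")  # => {:error, {:type_mismatch, :integer, "42"}}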

The build took a lot more time to write, especially to handle $dynamicRef and its $dynamicAnchor counterpart. I understand why the spec is done that way, but honestly I am not sure the JSON Schema group is going in the right direction. The unevaluated keywords (unevaluatedProperties and unevaluatedItems) were a huge headache too.

Also, there is now the concept of vocabularies, which JSV follows. The capabilities of a JSON schema are defined by the vocabularies declared in the meta-schema. For instance, with this schema:

{"$schema": "https://example.com/meta-schema", "type": "integer"}

The “type” keyword will only work if the https://example.com/meta-schema resource declares a $vocabulary that the implementation knows.
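For reference, here is roughly what that meta-schema resource would need to contain (shown as raw Elixir data; the URIs are the standard 2020-12 vocabulary identifiers, the rest of the content is illustrative):

%{
  "$schema" => "https://json-schema.org/draft/2020-12/schema",
  "$id" => "https://example.com/meta-schema",
  "$vocabulary" => %{
    # The validation vocabulary is the one that defines "type".
    "https://json-schema.org/draft/2020-12/vocab/core" => true,
    "https://json-schema.org/draft/2020-12/vocab/validation" => true
  }
}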

For now, JSV only knows about the vocabulary in https://json-schema.org/draft/2020-12/schema, with special fallbacks for draft-7. Future versions of the library will add support for custom vocabularies.

So, to build a schema JSV will:

  • Fetch the meta-schema and check the vocabulary to pick what validator implementations it will use.
  • Resolve the schema, meaning it will download all the references, recursively.
  • Build all the validators of all those schemas (not the meta-schema). This will lead to some duplication but it’s fast, and it can easily be done at compile time (see the sketch after this list).
  • Finally, extract all the used validators, including anchors and dynamic anchors, and wrap all of this into a “root” schema, an internal representation of the original JSON schema.
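Since the whole build is deterministic, it can run inside a module attribute at compile time. A minimal sketch (the schema and module name are illustrative; the resolver option is shown later in this post):

defmodule MyApp.AgeSchema do
  # The root is built once, at compile time, and embedded in the
  # compiled module; validation calls then reuse it for free.
  @root JSV.build!(
          %{type: :integer, minimum: 0},
          resolver: {JSV.Resolver.BuiltIn, allowed_prefixes: ["https://json-schema.org/"]}
        )

  def validate(data), do: JSV.validate(data, @root)
end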

What is supported

  • All features from draft 2020-12 except content validation.
  • All features from draft 7 except content validation.
  • Custom vocabularies are not (yet) supported.
  • Format validation: a default implementation is provided.
  • Custom format validation.
  • Compile-time builds.
  • Custom resolvers.

Other drafts are not supported, notably draft 2019-09. Draft 4 will work if you bother to change every id to $id.

A note on resolvers

If you read carefully, you probably paused when I wrote that the library will download all meta-schemas and referenced schemas recursively. This is indeed not quite true: the library will not fetch anything from the web on its own, for security reasons. You can write your own resolver or use the built-in one with a whitelist of URL prefixes. In any case, you will have to explicitly declare a resolver to do so.

This is well explained in the README. At least I hope so.

Basic usage

This is just a copy-pasta from the README:

# 1. Define a schema
schema = %{
  type: :object,
  properties: %{
    name: %{type: :string}
  },
  required: [:name]
}

# 2. Define a resolver
resolver = {JSV.Resolver.BuiltIn, allowed_prefixes: ["https://json-schema.org/"]}

# 3. Build the schema
root = JSV.build!(schema, resolver: resolver)

# 4. Validate the data
case JSV.validate(%{"name" => "Alice"}, root) do
  {:ok, data} ->
    {:ok, data}

  {:error, validation_error} ->
    # Errors can be cast to JSON validator output to return them
    # to the producer of the invalid data
    {:error, JSON.encode!(JSV.normalize_error(validation_error))}
end

Implementation notes

  • For the validation of email addresses, the mail_address library can be pulled in, optionally. It seems to work well.

  • For other formats such as uri, uri-reference, iri, iri-reference, uri-template, json-pointer and relative-json-pointer I used the abnf_parsec library. It is optional as well. Fallback support for uri and uri-reference is provided.

    I am not satisfied with this implementation so far. First, because the official RFCs that the JSON Schema specification points to give the ABNF grammars in a way that makes you doubt your copy-and-paste skills. Second, because I get some false negatives. I will need to study ABNF a little bit more. Thumbs up to the author though, that library saved me a lot of time.

  • The built-in resolver requires a proper JSON implementation. If you are running Elixir 1.17 or below, you will need Jason or Poison. This is generally not a problem.

  • I have not solved the float problem with bigints. The JSON Schema specification allows treating 1000000000000000000000000.0 as an integer, but trunc(1000000000000000000000000.0) equals 999999999999999983222784. By the time the value enters my code it is too late: the JSON parsers will already have converted that number to 1.0e24.
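    The precision loss is easy to reproduce in IEx:

    iex> trunc(1.0e24)
    999999999999999983222784
    iex> 1.0e24 == 1_000_000_000_000_000_000_000_000
    false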

The future

This library is already useful to me. If it turns out to be used by many of you, I think it will be worth making it even better. There are a couple of ways to do so:

  • Create some benchmarks to give people better information when choosing a library. I have not paid much attention to performance so far; it seems to be fast enough for my needs. The 5200 tests that build a schema and validate some data run in less than 3 seconds without concurrency.
  • Support custom vocabularies and vocabulary overrides. I know this can be useful since I have been storing additional data in JSON schemas in several projects already. This will allow implementing content validation (contentMediaType, contentEncoding, contentSchema). I would also like the library to return errors or warnings if some keywords in the schema were not used during the build.
  • Implement a test harness for Bowtie, though I am not sure it is widely used; I only found out about it the other day.
  • Write more documentation. The API docs need some work, and a guide for custom vocabularies will be needed because, unfortunately, one has to understand the internals of the library in order to build a vocabulary that reports errors correctly.
  • Support for deserialization into Elixir structs.

Use it today :slight_smile:

Thank you for reading!

If you would like to give it a try, I’d be glad to get some feedback from you!

Github repo

Hex.pm page

Happy new year to all alchemists around here!

13 Likes

This is neat! Have you considered a third “use case” where a schema could be used to generate cases in property testing?

Admittedly, I’m not sure of the intricacies of the spec, but a previous now-defunct company had an internal library in Elixir for JSON schemas with property testing. The plan had been to open source it but the company shuttered before that happened. I remember getting it to respect recursive references accurately and without bugs was a pain.

Is that something you would imagine would be useful/possible/appropriate as part of this library? If so, would you be open to an attempt at it? (assuming you didn’t want to do it yourself)

1 Like

Hey @felix-starman

I don’t think that a validation library and that feature would benefit from being tied together.

I think it would be best to implement the recursive and dynamic resolving separately, and build the data generator around that.

JSV schemas, once built, are represented as {module, arg} tuples; it would be hard to use this representation to generate data from scratch.

So I am not sure it would be appropriate or even possible, but if you want to try it we could integrate it, sure, or have a separate jsv_<...> package!

1 Like

This is why JSON parsers need to be callback-based, like SAX parsers for XML.
I believe the new JSON parser that ships with OTP does it that way.

Essentially you want the parser to crawl through the JSON emitting the indexes of the elements within that JSON. Then your handler can implement callbacks that produce a structure. This gives the caller complete control of how to cast the elements within the JSON into whatever the caller needs. Once you have that most of the time the casting does the validation, except for when validation requires referencing multiple things in the JSON together. For that you can create validation functions.

1 Like

Yes, the new JSON implementation does accept callback decoders to handle each token of the JSON input.

So with JSV it would be possible to override the schema vocabulary keywords like type or multipleOf to work with JSON data decoded into custom structs instead of bare float values.
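For instance, here is a minimal sketch (assuming OTP 27+ for the :json module and the Decimal package) that overrides only the float decoder, so big decimal literals keep their exact textual value instead of collapsing to 1.0e24:

# Only the float decoder is overridden; all other tokens use the defaults.
decoders = %{float: &Decimal.new/1}

{value, _acc, ""} = :json.decode("1000000000000000000000000.0", :ok, decoders)
# value is Decimal.new("1000000000000000000000000.0"), not 1.0e24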

Version 0.3.0 has been released.

  • Together with @princemaple, we improved the ABNF Parsec library with support for UTF-8 character ranges, so JSV now correctly validates Internationalized Resource Identifiers and IRI-references.
  • A default resolver for the meta-schemas is available; there is no need to specify an HTTP resolver just to download and cache the https://json-schema.org/draft/2020-12/schema schema.
2 Likes

Version 0.4.0 has been released.

  • It is now possible to define a struct based on a schema.

    defmodule MyApp.UserSchema do
      require JSV
    
      JSV.defschema(%{
        type: :object,
        properties: %{
          name: %{type: :string, default: ""},
          age: %{type: :integer, default: 0}
        }
      })
    end    
    
  • It is now possible to use a module name to reference a subschema:

    defmodule MyApp.RoleSchema do
      def schema do
        %{enum: ["admin","author","subscriber"]}
      end
    end
    
    schema = %{
      properties: %{
        # Reference a module that uses defschema/1
        user: MyApp.UserSchema,
        # Reference a module that simply exports schema/0
        role: MyApp.RoleSchema
      }
    }
    
    root = JSV.build!(schema)
    
    JSV.validate(%{"user" => %{"name" => "Alice"}, "role" => "admin"}, root)
    # returns:
    {:ok, %{"user" => %MyApp.UserSchema{age: 0, name: "Alice"}, "role" => "admin"}}
    
5 Likes

Version 0.5.0 has been released.

The main addition of this version is the ability to load schemas from local files or directories. A new helper module can be used to create a JSV.Resolver for schemas referenced in $schema, $ref or $dynamicRef, or just to fetch them easily.

Example:

defmodule MyApp.LocalResolver do
  use JSV.Resolver.Local, source: [
    "priv/schemas",
    "priv/messaging/schemas",
    "priv/special.schema.json"
  ]
end

Now this resolver can be used to build validation roots:

schema = %{"$ref": "myapp:some-local-schema"}
root = JSV.build!(schema, resolver: MyApp.LocalResolver)

The built-in resolvers will be used even if not passed in the :resolver option.

A raw schema can be pulled from the resolver module:

{:ok, schema} = MyApp.LocalResolver.resolve("myapp:some-local-schema")

Changes (from the full changelog)

  • Added JSV.Resolver.Local to resolve disk-stored schemas
  • Special error format for additionalProperties:false
  • Provide correct schemaLocation in all errors
  • Added defschema_for to use different modules for schema and struct (not yet documented)
  • Provide ordered JSON encoding with native JSON modules

That’s all for now :slight_smile:

2 Likes

Hey!

I made some breaking changes in version 0.6 to support the new cast functions feature.

Cast functions allow defining a jsv-cast entry in a JSON schema to call a custom function during validation and return a different value than what is in the raw data.

For instance with a module defined like this:

defmodule MyApp.Schemas.Cast do
  import JSV

  defcast to_uppercase(data) do
    {:ok, String.upcase(data)}
  end
end

And the following schema:

schema = %{
  type: :string,
  "jsv-cast": ["Elixir.MyApp.Schemas.Cast", "to_uppercase"]
}

The cast function is automatically called upon validation:

root = JSV.build!(schema)
{:ok, "HELLO"} = JSV.validate("hello", root)

I made a defcast macro because I do not want schemas to be able to call any [module, function] pair directly, for obvious security reasons. Besides that, it’s just a normal function that could also be defined this way:

defcast "to_uppercase", :my_secret_function

defp my_secret_function(data) do
  {:ok, String.upcase(data)}
end

This new feature includes some breaking changes because the capability of defining a struct along with a schema is now based on cast functions as well.

More changes from the full changelog

  • Resolvers do not need to normalize schemas anymore
  • Added support to override existing vocabularies
  • Schema definition helpers do not enforce a Schema struct anymore
  • Provide a generic JSON normalizer for data and schemas
  • Allow resolvers to mark schemas as normalized

I think JSV is now ready to support OpenAPI specification tooling, but I’m hesitant to go for it. So many things to do.

Anyway, have a wonderful week!

4 Likes

Hello :slight_smile:

Tiny release today with support for Decimal. JSV will validate Decimal.new("1.001") against schemas like {"type": "number"}.
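A quick sketch:

root = JSV.build!(%{type: :number})
{:ok, _decimal} = JSV.validate(Decimal.new("1.001"), root)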

Another change is that the mail_address dependency is no longer used; we now use abnf_parsec for the email format too.

Cheers!

changelog

2 Likes

Hello,

I’ve just released a new version for JSV (a JSON Schema Validation library).

While working on support for OpenAPI 3.1 I needed to be able to use generic JSON documents as a repository for schemas. For instance, in an OpenAPI specification JSON document, the “API spec” itself is not a JSON schema, but it contains schemas. There is no need to transform the whole document into a schema that would validate nothing (as there are no schema keywords like type or properties at the root level).

So it’s now possible to have a document and only build one or several parts of it into the “validation root” that JSV uses for validation. This is not documented for now, as I am still waiting to see if it covers all my needs. But to implement it I had to make some breaking changes, and I’m releasing those now; better sooner than later.

Breaking changes

For regular usage of the library this should not impact your workflow.

  • [breaking] Defschema does not automatically define $id anymore
  • [breaking] The error normalizer will now sort errors by instanceLocation
  • [breaking] Changed caster tag of defschema to 0
  • [breaking] Changed order of arguments for Normalizer.normalize/3

Full Changelog.

Incoming changes

In a future release I also want to sunset the composition API for schemas. This API looks like this:

alias JSV.Schema

%Schema{}
|> Schema.object()
|> Schema.properties(%{
  age: Schema.integer(description: "The age"),
  my_prop: name_schema
})
|> Schema.required([...])

While this looks nice, I feel it’s pretty useless. If you already know what’s going to be in the schema you can just declare it that way:

%Schema{
  type: :object,
  properties: %{
    age: Schema.integer(description: "The age"),
    my_prop: name_schema
  },
  required: [...]
}

This does not prevent you from having dynamic values like the name_schema variable.

The problem with that API is that the functions accept an optional first argument: the base to merge onto. For instance with Schema.integer/1 above, instead of piping in, I just passed the base directly, and it’s explicit.

But if you want to define a schema with string_to_atom_enum/2 and a description, for instance, you need to do this:

%{
  properties: %{
    my_enum:
      JSV.Schema.string_to_atom_enum(
        %{
          description: "Some description"
        },
        [:aaa, :bbb, :ccc]
      )
  }
}

Whereas what you would expect to see is something like this:

%{
  properties: %{
    my_enum: JSV.Schema.string_to_atom_enum(
      [:aaa, :bbb, :ccc],
      description: "Some description"
    )
  }
}

That is, the first argument corresponds to the function name (in this case, an enumeration of atoms), followed by optional overrides as the last argument.

So I plan to move all those functions into a “composition API” module, and maybe deprecate them. JSON schemas are data, not code; it should always be possible to know the keys we want in there from the beginning. For the rare cases where we cannot, I’ll keep the merge function anyway.

And then replace those functions with new helpers that take overrides as the last argument.

But I’d like to have your opinion on this :slight_smile:

Thank you

6 Likes

Hello,

I just released a new version of JSV.

As mentioned in the previous version thread, I wanted to change the functional API used to define schemas, mostly because it was kind of useless for that type of data.

Basically I have deprecated building schemas like this…

object() 
|> properties(foo: integer()) 
|> required(:foo)

…because there was no additional value in that syntax, only a performance cost.

New helpers are available. There are fewer of them for now, and they are not composable (not pipeable), but they are more readable given that the extra attributes (like description: "foo...") are always the last argument.

I believe this will lead to simpler code, given that schemas are static data 99% of the time.
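For example, a sketch of the new style (assuming the helpers are imported, e.g. with use JSV.Schema):

%{
  type: :object,
  # Extra attributes such as :description always come last.
  properties: %{foo: integer(description: "foo...")},
  required: [:foo]
}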

More information in the docs: API Changes in JSV 0.9 — jsv v0.10.0

I’m also happy to see that the Vaux library uses JSV for attribute validation in HTML components, which is quite cool!

[0.9.0] - 2025-07-05

:rocket: Features

  • Provide a schema representing normalized validation errors
  • Deprecated the schema composition API in favor of presets

:bug: Bug Fixes

  • Emit a build error with empty oneOf/allOf/anyOf
  • Reset errors when using a detached validator
  • Ensure casts are applied after all validations
  • Revert default normalized error to atoms
6 Likes

Hello, sorry to be spammy but I love working on JSV, my JSON Schema Validator library.

This is a small release with three important things:

  • Module-based schemas now need to export a json_schema/0 function instead of a schema/0 function. This makes more sense since we are starting to export JSON schemas from other modules, like Ecto schema modules. A generic schema/0 function is confusing in that case. Codebases using the old callback will still work, but a warning will be emitted.

    Example:

    defmodule ItemModule do
      def json_schema do
        %{type: :string}
      end
    end
    
    schema = %{type: :array, items: ItemModule} 
    
    data = ["foo", "bar"]
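
    # Build and validate using the module-based items schema:
    root = JSV.build!(schema)
    {:ok, ["foo", "bar"]} = JSV.validate(data, root)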
    

    Modules using JSV.defschema/1 will automatically export the new function as well as the old one, with a deprecation warning.

  • The defschema macro now supports passing the properties as a list directly.
    So instead of this:

    defmodule MyApp.UserSchema do
      use JSV.Schema
    
      defschema %{
        type: :object,
        properties: %{
          name: %{type: :string},
          age: %{type: :integer, default: 0}
        }
      }
    end
    

    You can do this:

    defmodule MyApp.UserSchema do
      use JSV.Schema
    
      defschema name: %{type: :string},
                age: %{type: :integer, default: 0}
    end
    

    And because use JSV.Schema imports the new schema definition helpers, it can be as short as this:

    defmodule MyApp.UserSchema do
      use JSV.Schema
    
      defschema name: string(),
                age: integer(default: 0)
    end
    
  • Finally, I’ve added the defschema/3 macro that works like defschema/1 but also defines a module:

    defmodule MyApp.Schemas do
      use JSV.Schema
    
      defschema User, 
        name: string(),
        age: integer(default: 0)
    
      defschema Admin,
        """
        With a schema description and @moduledoc
        """,
        user: User,
        privileges: array_of(string())
    end
    

    I think it’s nice to have this when defining response schemas directly in controllers or message queue consumers, à la Pydantic.

And that’s it :slight_smile: Thanks for reading!

[0.10.0] - 2025-07-10

:rocket: Features

  • Define and expect schema modules to export json_schema/0 instead of schema/0
  • Allow to call defschema with a list of properties
  • Added the defschema/3 macro to define schemas as submodules

:bug: Bug Fixes

  • Ensure defschema with keyword syntax supports module-based properties
9 Likes

That’s a commitment to quality


1 Like

Haha!

The truth is that I generate many tests from the official JSON schema test suite. All generated tests are in 3 versions:

  • one with raw schemas (string keys)
  • one with atom keys, most of the time using the JSV.Schema struct (only const is not in the struct so I use bare maps with atoms)
  • one using Decimal structs instead of numbers in the test values.

And as I use two suites, Draft 7 and 2020-12, you get a huge number of tests!

3 Likes

heh.

at first, I tried to believe the tests were built by hand. then, I asked re: jsv:

got some useful hints from your work. thanks

3 Likes

Looking at your files, it appears those are LLM-generated, right? I think it did a pretty good job (though it should have stopped at some point in that test guide; at the end it’s making things up). What did you use?

I tried to use Exdantic with JSV but we have an incompatibility: JSV expects raw data with binary keys while Exdantic seems to expect atom keys only:

Mix.install([
  {:jsv, "~> 0.10"},
  {:exdantic, "~> 0.0.2"}
])

defmodule UserSchema do
  use Exdantic, define_struct: true

  schema "User account information" do
    field :name, :string do
      required()
      min_length(2)
      description("User's full name")
    end

    field :email, :string do
      required()
      format(~r/^[^\s@]+@[^\s@]+\.[^\s@]+$/)
      description("Primary email address")
    end

    field :age, :integer do
      optional()
      gt(0)
      lt(150)
      description("User's age in years")
    end

    field :active, :boolean do
      default(true)
      description("Whether the account is active")
    end

    # Cross-field validation
    model_validator(:validate_adult_email)

    # Computed field derived from other fields
    computed_field(:display_name, :string, :generate_display_name)

    config do
      title("User Schema")
      strict(true)
    end
  end

  def validate_adult_email(input) do
    if input.age && input.age >= 18 && String.contains?(input.email, "example.com") do
      {:error, "Adult users cannot use example.com emails"}
    else
      {:ok, input}
    end
  end

  def generate_display_name(input) do
    display =
      if input.age do
        "#{input.name} (#{input.age})"
      else
        input.name
      end

    {:ok, display}
  end

  use JSV.Schema

  def json_schema do
    JSV.Schema.with_cast([__MODULE__, :from_jsv])
  end

  defcast from_jsv(data) do
    validate(data)
  end

  def format_error("from_jsv", [exdantic_error | _], _) do
    Exdantic.Error.format(exdantic_error)
  end
end

root = JSV.build!(UserSchema) |> dbg()
data = %{"name" => "alice", "email" => "foo@bar.com"}

JSV.validate!(data, root)


1 Like

I just used Claude Code. Yeah, Claude likes being “helpful”, whether hallucinating ideas or running git reset --hard after you ask it to not lose uncommitted work! (!!)

Thanks for looking at that integration doc with JSV. It was just brainstorming, really. Will look when time allows.

1 Like

Haven’t given exdantic (previously a fork of elixact) enough attention. It’s been on the shelf for weeks, awaiting integration.

Thanks for looking at this. You should be able to just disable strict mode.
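For the schema in the snippet above, that would mean (assuming strict/1 also accepts false; otherwise simply drop the line):

config do
  title("User Schema")
  strict(false)
end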

It might be good for me to deprecate strict mode entirely from exdantic, as it’s not clear yet if it’s needed. Maybe? IDK.

$ elixir strictModeDeprecation/strict_vs_nonstrict_analysis.exs
=== EXDANTIC STRICT vs NON-STRICT BEHAVIOR ANALYSIS ===

STRICT - Valid atom keys:
  Input: %{name: "Alice", email: "alice@example.com", age: 30}
  ✅ SUCCESS: %StrictSchema{age: 30, email: "alice@example.com", name: "Alice"}

NON-STRICT - Valid atom keys:
  Input: %{name: "Alice", email: "alice@example.com", age: 30}
  ✅ SUCCESS: %NonStrictSchema{age: 30, email: "alice@example.com", name: "Alice"}

---
STRICT - Valid string keys:
  Input: %{"age" => 25, "email" => "bob@example.com", "name" => "Bob"}
  ❌ ERROR: [%Exdantic.Error{path: [], code: :additional_properties, message: "unknown fields: [\"age\", \"email\", \"name\"]"}]

NON-STRICT - Valid string keys:
  Input: %{"age" => 25, "email" => "bob@example.com", "name" => "Bob"}
  ✅ SUCCESS: %NonStrictSchema{age: 25, email: "bob@example.com", name: "Bob"}

---
STRICT - Mixed keys:
  Input: %{"age" => 35, "email" => "charlie@example.com", "name" => "Charlie"}
  ❌ ERROR: [%Exdantic.Error{path: [], code: :additional_properties, message: "unknown fields: [\"age\", \"email\", \"name\"]"}]

NON-STRICT - Mixed keys:
  Input: %{"age" => 35, "email" => "charlie@example.com", "name" => "Charlie"}
  ✅ SUCCESS: %NonStrictSchema{age: 35, email: "charlie@example.com", name: "Charlie"}

---
STRICT - Extra fields (atom keys):
  Input: %{active: true, name: "David", email: "david@example.com", age: 40, role: "admin"}
  ❌ ERROR: [%Exdantic.Error{path: [], code: :additional_properties, message: "unknown fields: [:active, :role]"}]

NON-STRICT - Extra fields (atom keys):
  Input: %{active: true, name: "David", email: "david@example.com", age: 40, role: "admin"}
  ✅ SUCCESS: %NonStrictSchema{age: 40, email: "david@example.com", name: "David"}

---
STRICT - Extra fields (string keys):
  Input: %{"age" => 28, "created_at" => "2024-01-01", "email" => "eve@example.com", "name" => "Eve", "role" => "user"}
  ❌ ERROR: [%Exdantic.Error{path: [], code: :additional_properties, message: "unknown fields: [\"age\", \"created_at\", \"email\", \"name\", \"role\"]"}]

NON-STRICT - Extra fields (string keys):
  Input: %{"age" => 28, "created_at" => "2024-01-01", "email" => "eve@example.com", "name" => "Eve", "role" => "user"}
  ✅ SUCCESS: %NonStrictSchema{age: 28, email: "eve@example.com", name: "Eve"}

---
STRICT - Missing required field:
  Input: %{name: "Frank"}
  ❌ ERROR: [%Exdantic.Error{path: [:email], code: :required, message: "field is required"}]

NON-STRICT - Missing required field:
  Input: %{name: "Frank"}
  ❌ ERROR: [%Exdantic.Error{path: [:email], code: :required, message: "field is required"}]

---
STRICT - Invalid email:
  Input: %{name: "Grace", email: "invalid-email", age: 32}
  ❌ ERROR: [%Exdantic.Error{path: [:email], code: :format, message: "failed format constraint"}]

NON-STRICT - Invalid email:
  Input: %{name: "Grace", email: "invalid-email", age: 32}
  ❌ ERROR: [%Exdantic.Error{path: [:email], code: :format, message: "failed format constraint"}]

---
STRICT - Nested extra data:
  Input: %{"age" => 45, "email" => "henry@example.com", "metadata" => %{"source" => "api", "version" => "v2"}, "name" => "Henry", "profile" => %{"bio" => "Software engineer", "skills" => ["elixir", "rust"]}}
  ❌ ERROR: [%Exdantic.Error{path: [], code: :additional_properties, message: "unknown fields: [\"age\", \"email\", \"metadata\", \"name\", \"profile\"]"}]

NON-STRICT - Nested extra data:
  Input: %{"age" => 45, "email" => "henry@example.com", "metadata" => %{"source" => "api", "version" => "v2"}, "name" => "Henry", "profile" => %{"bio" => "Software engineer", "skills" => ["elixir", "rust"]}}
  ✅ SUCCESS: %NonStrictSchema{age: 45, email: "henry@example.com", name: "Henry"}

---
=== REAL-WORLD JSON API SCENARIO ===

Typical JSON from API:
{
  "name": "API User",
  "email": "user@api.com",
  "age": 29,
  "id": "12345",
  "created_at": "2024-01-15T10:30:00Z",
  "updated_at": "2024-01-15T10:30:00Z",
  "metadata": {
    "source": "registration",
    "ip_address": "192.168.1.1"
  }
}

Parsed JSON: %{"age" => 29, "created_at" => "2024-01-15T10:30:00Z", "email" => "user@api.com", "id" => "12345", "metadata" => %{"ip_address" => "192.168.1.1", "source" => "registration"}, "name" => "API User", "updated_at" => "2024-01-15T10:30:00Z"}

STRICT - Real API JSON:
  Input: %{"age" => 29, "created_at" => "2024-01-15T10:30:00Z", "email" => "user@api.com", "id" => "12345", "metadata" => %{"ip_address" => "192.168.1.1", "source" => "registration"}, "name" => "API User", "updated_at" => "2024-01-15T10:30:00Z"}
  ❌ ERROR: [%Exdantic.Error{path: [], code: :additional_properties, message: "unknown fields: [\"age\", \"created_at\", \"email\", \"id\", \"metadata\", \"name\", \"updated_at\"]"}]

NON-STRICT - Real API JSON:
  Input: %{"age" => 29, "created_at" => "2024-01-15T10:30:00Z", "email" => "user@api.com", "id" => "12345", "metadata" => %{"ip_address" => "192.168.1.1", "source" => "registration"}, "name" => "API User", "updated_at" => "2024-01-15T10:30:00Z"}
  ✅ SUCCESS: %NonStrictSchema{age: 29, email: "user@api.com", name: "API User"}

=== PERFORMANCE IMPLICATIONS ===

Performance (1000 validations):
  Strict mode (valid data): 1.59ms
  Non-strict mode (valid data): 1.26ms
  Strict mode (extra fields): 5.49ms
  Non-strict mode (extra fields): 1.56ms

=== KEY INSIGHTS ===
1. Strict mode ONLY works with atom keys in practice
2. Non-strict mode handles both atom and string keys gracefully
3. Real-world JSON APIs always have extra fields
4. String keys are the norm for external data
5. Strict mode limits interoperability significantly