Ash 3.0 Teasers!

In terms of providing a low-level query / bridging that gap, it is potentially possible. We can explore it, but it likely wouldn’t be in the near future.

As for needing to create a calculation when sorting on a related field, a feature was added recently that allows you to do this without calculations.

|> Ash.Query.sort([:foo, Ash.Query.expr_sort(resource_a.score)])
4 Likes

We could potentially add support for this in the API extensions so that they can allow sorting on fields of to_one relationships.

Ash 3.0 Teaser #3: Better Defaults, Less Surprises, Part 1

I meant to make this post a few days ago, but I got a bit busy handling a heisenbug that a user found when using aggregates in policies :bug:. That is all sorted now so I can focus on making progress on 3.0 again! :partying_face:

You may also notice that, from here on out, we will be replacing the term api with domain. See teaser #2 for more :slight_smile:

A big part of the purpose of 3.0 is changing default behaviors. Ash 2.0 had a lot of “permissive” defaults that could make things extremely quick to get started but could easily bite you later down the road. Ultimately, with Ash, we care more about “year five” than “day one”, so we don’t really want to make design choices that will be a foot-gun later down the road for the sake of easy initial adoption.

There are a lot of these changes, so this will be a two parter!

domain.authorization.authorize now defaults to :by_default

The original default was :when_requested, which would trigger authorization when an actor option was provided or when authorize?: true was set. This made it very easy to accidentally forget to authorize an action invocation. :by_default always sets authorize?: true (which has no effect if you are not using authorizers), unless you explicitly pass authorize?: false.

We’ve known for a very long time that this was not the ideal default behavior, but it was a significant breaking change and so had to wait for 3.0. You can revert to the old behavior (but should eventually update) by setting the value in each domain module back to :when_requested.
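
For reference, here is a minimal sketch of what reverting looks like in a domain module (assuming a MyApp.Accounts domain like the ones used in later examples):

defmodule MyApp.Accounts do
  use Ash.Domain

  authorization do
    # revert to the 2.0 behavior for this domain while you migrate;
    # remove this block (or set :by_default) to get the new default
    authorize :when_requested
  end

  resources do
    resource MyApp.Accounts.User
  end
end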

unknown action inputs now produce errors

When passing parameters to actions, we would previously ignore unknown parameters. This made it very easy to misspell an input and not realize it, or otherwise believe that an input was being used when it wasn’t.
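
As a rough sketch of the difference (the resource, action, and input names here are hypothetical):

# 2.0: the misspelled :titel key is silently ignored
# 3.0: this produces an error pointing at the unknown input :titel
MyApp.Blog.Post
|> Ash.Changeset.for_create(:create, %{titel: "Hello world"})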

Bulk update/destroy strategy defaults to :atomic

When calling a bulk update/destroy, it may not be possible to perform it atomically (i.e. as one single UPDATE query plus after hooks). You can specify a list of allowed strategies when calling a bulk action. The strategies available are:

  • atomic - Must be doable as a single operation at the data layer (plus after action/after batch logic)
  • atomic_batches - Can be a series of atomic operations (as above). This can be great for massive inputs where you want to do them in batches. What we do is stream the provided query or list into batches, and update each batch by primary key as a single atomic operation.
  • stream - We stream the records and run the update logic for each record, one at a time.

Ash will use the “best” strategy that it can (i.e. the first usable one in the list above, starting from the top). In 2.0, the default for the strategy option is [:atomic, :atomic_batches, :stream], allowing all three by default.

In 3.0 the default for the strategy option is [:atomic]. This is in line with making defaults that steer you toward the safest option. If an action cannot be done atomically, you will be told why, and you can adjust the strategy option or modify the action.
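
For illustration, here is a hedged sketch of widening the allowed strategies on a single call (the :archive action and the filter are hypothetical):

# allow Ash to fall back to batched or streamed updates when a fully
# atomic update is not possible for this action
MyApp.Blog.Post
|> Ash.Query.filter(author_id == ^author_id)
|> Ash.bulk_update!(:archive, %{}, strategy: [:atomic, :atomic_batches, :stream])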

require_atomic? on update/destroy actions defaults to true

When writing actions, we want them to be concurrency safe. What this means is that, by default, no update/destroy action will be performed non-atomically (in this context, to be done atomically means that all changes, validations, and attribute updates can be expressed as a single operation at the data layer). You will get a warning at compile time if your action is known not to be doable atomically, and an error at runtime if you attempt to run it anyway.

For example, what might happen is something like this:

update :update do
  # I add an anonymous function change
  change fn changeset, _ -> 
    Ash.Changeset.change_attribute(changeset, :attribute, :value)
  end
end

I get a warning at compile time, like this:

warning: [Resource]
 actions -> update:
  update cannot be done atomically, because the changes `[Ash.Resource.Change.Function]` cannot be done atomically

So I adjust the action to use a builtin change that has an atomic implementation. I could also extract the logic into its own module that uses Ash.Resource.Change and implements the atomic/2 callback.

update :update do
  # use the builtin change, which has an atomic implementation
  change set_attribute(:attribute, :value)
end

And now we’re good! I can also add require_atomic? false to the action if I know that the changes on this action are safe to run non-atomically.

update :update do
  require_atomic? false
  
  # the anonymous function change is fine here, since we have opted out of requiring atomicity
  change fn changeset, _ -> 
    Ash.Changeset.change_attribute(changeset, :attribute, :value)
  end
end

Closing

This one was a lot, and there are more to come! Keep in mind that not all breaking changes will be included in these teasers, but the goal is to include all major/significant changes. Can’t wait for 3.0 to get out there.

Teaser #4: Ash 3.0 Teasers! - #28 by zachdaniel

22 Likes

This is a great candidate for an example in a guide!

I didn’t know this was possible

Hey @zachdaniel I have a greenfield project and I’m keen to try out Ash v3. I’m happy to deal with a bit of early adopter pain (and even contribute bug reports/docs/fixes if it’s helpful).

What’s the best path to take for now? Just follow the v2 guides on hexdocs and wait for the v3 rc to switch things over? Or start from the v3 branch?

Right now it’s really just not possible, and won’t be for about a week or two. Some breaking internal changes break essentially all of our associated packages. But very soon :slight_smile:

2 Likes

No worries, looking forward to it :slight_smile:

Ash 3.0 Teaser #4: Better Defaults, Less Surprises, Part 2

3.0 is coming along very well! Got a few more updates in the same vein of better defaults and less surprises!

%Ash.NotLoaded{} for unselected values

When you run an Ash.Query or an Ash.Changeset that has a select applied, anything that isn’t selected currently gets a nil value. This can be very confusing and often leads to nontrivial bugs. In Ash 3.0, you will instead get %Ash.NotLoaded{}, allowing you to distinguish between values that are actually nil and values that have just not been loaded.
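
A quick sketch of the difference (the resource and fields are hypothetical):

post =
  MyApp.Blog.Post
  |> Ash.Query.select([:title])
  |> Ash.read_one!()

post.body
# 2.0: nil, indistinguishable from a value that is actually nil
# 3.0: %Ash.NotLoaded{}, because :body was not selected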

Actions no longer accept all public, writable attributes by default

Thanks to @sevenseacat for bringing this to our attention originally and illustrating just how risky this can be!

In Ash 2.0, actions automatically accept all public writable attributes. This makes it very easy to accidentally include an attribute in your actions, especially when adding a new attribute. For instance:

actions do
  defaults [:create, :read]
end

If you add an attribute to the above resource, you may not realize that it is now accepted in that create action by default.

In Ash 3.0, all actions accept nothing by default. You can adopt the old behavior in your resource with

actions do
  default_accept :*
end

This will help prevent potentially leaking new attributes.

A small quality-of-life improvement that results from this is that you no longer need to specify attribute_writable?: true on your belongs_to relationships to modify their source attribute. Making those attributes modifiable now requires including them in the accept list (or using accept :*), so it is no longer implicit.
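
For illustration, a rough sketch of the explicit style (the attribute and relationship names are hypothetical):

actions do
  defaults [:read]

  create :create do
    # only these inputs are accepted; :author_id is the source attribute of a
    # belongs_to :author relationship, with no attribute_writable? option needed
    accept [:title, :body, :author_id]
  end
end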

private?: true is now public?: false, and public?: false is now the default

In Ash 2.0, all fields default to private?: false.

Public attributes, relationships, calculations and aggregates are meant to be exposed over public interfaces. Defaulting every field to public makes it very easy to add a new field and not realize that you’ve added it to your GraphQL or JSON API, etc.

In Ash 3.0, this option has been renamed to its inverse, public?. Additionally, it now defaults to false. Where you may have seen this:

attributes do
  attribute :first_name, :string
  attribute :last_name, :string
  attribute :super_secret, :string, private?: true
end

you will now see

attributes do
  attribute :first_name, :string, public?: true
  attribute :last_name, :string, public?: true
  attribute :super_secret, :string
end

As you can see, this may often be more verbose, as many resources have more public fields than private fields. But it is also much safer in general. It is much better to have an experience of “oh, how come X isn’t showing in my public interface” than “oh, we’re showing some data over our API that we didn’t intend to show”. Oftentimes we have to make trade-offs for the sake of security and safety, and this is one of those cases.

Custom Expressions

This isn’t on theme, as it’s a new feature as opposed to a better default, but I wanted to spice things up :slight_smile:. Custom expressions will allow you to extend Ash’s expression syntax. Since an example is worth a thousand words:

  defmodule MyApp.Expressions.LevenshteinDistance do
    use Ash.CustomExpression,
      name: :levenshtein_distance,
      arguments: [
        [:string, :string]
      ]

    def expression(AshPostgres.DataLayer, [left, right]) do
      expr(fragment("levenshtein(?, ?)", left, right))
    end

    # It is good practice to always define an expression for `Ash.DataLayer.Simple`,
    # as that is what Ash will use to run your custom expression in Elixir.
    # This allows us to completely avoid communicating with the database in some cases.

    def expression(data_layer, [left, right]) when data_layer in [
      Ash.DataLayer.Ets,
      Ash.DataLayer.Simple
    ] do
      expr(fragment(&levenshtein/2, left, right))
    end

    # always define this fallback clause as well
    def expression(_data_layer, _args), do: :unknown

    defp levenshtein(left, right) do
      # ... an Elixir implementation of levenshtein distance ...
    end
  end

With the above custom expression defined, I can configure it like so:

config :ash, :custom_expressions, [MyApp.Expressions.LevenshteinDistance]

And I can then use it in expressions:

Ash.Query.filter(User, levenshtein_distance(full_name, ^search) < 5)

This will also allow libraries and other packages to provide cross-data-layer expressions for you to use with their custom values and expressions.

That’s all!

That’s all I have for you today. Thanks to everyone following along :slight_smile: We are on track to have a release candidate this month, ready for the adventurous folks to give it a spin :partying_face:

Teaser #5: Ash 3.0 Teasers! - #30 by zachdaniel

22 Likes

The expressions stuff is so coooool.

4 Likes

Ash 3.0 Teaser #5: Model your domain, derive the rest.

Code Interfaces on domain modules

When building APIs with Ash extensions like AshGraphql and AshJsonApi, you are combining resource actions into a single interaction point. The code_interface tooling is a way to define a similar interaction point for your code. Historically, though, the general pattern encouraged defining functions that were called on each resource module. For example:

MyApp.Accounts.User.register_with_password(email, password)

This pattern often encourages "reaching in" to individual resources in your domain, which can make refactoring difficult. For API extensions this is less of an issue, because each action is exposed in such a way that its implementation and backing resource can be changed transparently to the caller (in some cases this is easier than in others).

However, in Elixir we’re used to encapsulating interfaces in a module designed for that purpose. To that end, in Ash 3.0 we now support specifying code interfaces on the domain, and in general we encourage this over defining code interfaces on the resource. Here is what it looks like:

defmodule MyApp.Accounts do
  use Ash.Domain

  resources do
    resource MyApp.Accounts.User do
      define :register_user, action: :register_with_password, args: [:username, :password]
    end
  end
end

With this definition, you’d have

MyApp.Accounts.register_user("username", "password")

This allows for defining the code-level interface for your action in one central place, and emphasizes the role of your domains as a central element for a given group of resources.

Policies on the domain

Authorization is an example of a cross-cutting concern that we often want to apply in broad strokes across our application (not always, but sometimes). To this end, you can now specify policies on the domain directly. When calling a resource action, the policies from the relevant domain are included at the beginning of the list of policies for that resource. Then, authorization proceeds as normal.

The reason the domain policies go first is because of bypass policies. This allows you to do things like declare an admin bypass once in each domain instead of in each resource, or to define a policy like "deny inactive users" in the same way. For example:

defmodule MyApp.Accounts do
  use Ash.Domain,
    # note that it goes in `extensions`, not `authorizers`.
    extensions: [Ash.Policy.Authorizer]

  resources do
    resource MyApp.Accounts.User
  end

  policies do
    bypass always() do
      authorize_if actor_attribute_equals(:admin, true)
    end

    policy actor_attribute_equals(:active, false) do
      forbid_if always()
    end
  end
end

With this in place, any action calls to any resources using this domain will include these policies ahead of their own :partying_face:

The beginning of a long-term focus on DX & docs

During the remainder of the time before the 3.0 release, and while it is in release candidacy, we will be focusing on DX improvements and documentation. Some of these improvements actually come from the 2.0 release of another one of our packages, spark, which Ash 3.0 has been upgraded to support. Keep in mind that you have to be using ElixirLS (as our custom autocomplete extension is based on elixir_sense) to get the benefits of it.

Autocomplete of options passed to use Ash.Resource

This actually applies to any spark DSL, but for Ash users it will most notably show up when calling use Ash.Resource.

Autocomplete of options for functions in Ash and generated code interface functions


Additionally, code interface documentation has been updated to include any argument and accepted attribute descriptions. Altogether, this should drastically help with discoverability of what your code interface offers, and give you extremely high-quality, well-documented functions for interacting with your resources.

Conclusion

We’re getting closer and closer to the release candidate of Ash, but we’re already looking past that point. The experience that users have with Ash is extremely important to me, and I’m ecstatic that it is finally the right time to shift my efforts to these very important areas.

Until next time!

Teaser #6: Ash 3.0 Teasers! - #36 by zachdaniel

20 Likes

One suggestion regarding the accept change.

Since we now need to be explicit, if your resource has a bunch of attributes, the actions can easily become very verbose, since you will need to explicitly add the list of attributes the action accepts.

One thing that could help with this situation is an option to create an explicit except list, if the user so desires.

For example, let’s say I have the attributes a, b, c, …, z in my resource and I want to create an action that will accept all attributes except c and f. It would be nice if we could do something like this:

accept {:all_except, [:c, :f]}

Granted, this will automatically make that action accept new attributes by default when they are added, but IMO that is OK since we are being explicit here instead of the old behavior where it was implicit.

This can be done with a module attribute as well.

@all_attributes [:foo, :bar, :baz]

...

accept @all_attributes -- [:bar, :baz]
1 Like

We had an option called reject that did this, but that is being removed in 3.0 for the sake of clarity.

1 Like

I have a question about this: I’m trying this out in Ash 2.20.3 (using api instead), and indeed it works when removing define_for. Another place I’d expect this to work is avoiding the api option for cross-domain/api relationships, but if I remove the api option there, Ash complains about it being missing.

Is this intended, a bug or an Ash 2.x limitation?

In 3.0 this will not be necessary.

1 Like

Ash 3.0 Teaser #6: A cherry on top

This will be the final teaser before the 3.0 release candidates come out!

Ash.ToTenant

A common case is to have a tenant represented by a resource, like %Organization{} or %Tenant{}, while the tenant itself is ultimately identified by a simple value, like a string or an integer. Because of this, there is often code that looks like this:

Ash.Changeset.for_update(record, :update, %{}, tenant: "org_#{org.id}")

There is also complexity when you have a mix of multitenancy strategies, for example if one resource uses schema-based multitenancy and the rest use attribute-based multitenancy. The Ash.ToTenant protocol simplifies this, allowing you to use the same tenant value everywhere while deriving a different underlying tenant per resource. Here is an example of how you might use it:

# in Organization resource

defimpl Ash.ToTenant do
  def to_tenant(%MyApp.Accounts.Organization{id: id}, resource) do
    if Ash.Resource.Info.data_layer(resource) == AshPostgres.DataLayer &&
         Ash.Resource.Info.multitenancy_strategy(resource) == :context do
      "org_#{id}"
    else
      id
    end
  end
end
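
With that implementation in place, you can pass the organization struct itself as the tenant anywhere a tenant is accepted. A rough usage sketch (the Post resource and the organization variable are hypothetical):

# derives "org_<id>" for context (schema) based resources, and the raw id otherwise
MyApp.Blog.Post
|> Ash.Query.set_tenant(organization)
|> Ash.read!()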

Sensitive Calculations & Aggregates

In the same way that you could specify attributes as sensitive?: true, you can now specify calculations and aggregates as sensitive. These will be redacted when inspecting records, and will also be redacted when inspecting filters inside of queries.
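
A minimal sketch of what marking these as sensitive might look like (the calculation and aggregate themselves are made up, and the exact option placement is assumed):

calculations do
  calculate :full_name, :string, expr(first_name <> " " <> last_name) do
    # redacted when inspecting records and query filters
    sensitive? true
  end
end

aggregates do
  count :failed_login_count, :failed_logins do
    sensitive? true
  end
end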

Code interfaces support atomic & bulk actions

In 2.0, you need to look up a record before you can update or destroy it, unless you change your code to use YourApi.bulk_update or YourApi.bulk_destroy (in 3.0, Ash.bulk_update and Ash.bulk_destroy). This can be quite verbose. For example, let’s say you have the id of a thing, and you want to update it. The most idiomatic way would have been something like this:

MyApp.Blog.Post
|> Ash.get!(id)
|> Post.archive!()

Or alternatively, you could have opted not to use the code interface, and used Ash.bulk_update. For example:

MyApp.Blog.Post
|> Ash.Query.filter(id == ^id)
|> Ash.bulk_update(:archive, %{....})

But then you don’t get to use your nicely defined code interface, which fills the role of a context function in Phoenix.

In Ash 3.0, code interfaces have been updated to support bulk operations, which makes cases like the above much more seamless!

For updates/destroys, you can pass identifiers, queries, and lists/streams of inputs directly instead of a record or a changeset. From here on, we’ll also be using code interface functions defined on our domain instead of our resources, which is the recommended way in 3.0. See previous teasers for more.

Update/Destroy examples:

# If the action can be done atomically (i.e. without looking up the record), it will be.
# Otherwise, we will look up the record and update it.
MyApp.Blog.archive_post!(post.id)

# queries can be provided, which will return an `Ash.BulkResult`
Post
|> Ash.Query.filter(author_id == ^author_id)
|> MyApp.Blog.archive_post!()
# => %Ash.BulkResult{}

# lists of records can also be provided, also returning an `Ash.BulkResult`

[%Post{}, %Post{}]
|> MyApp.Blog.archive_post!()
# => %Ash.BulkResult{}

Create

For creates, we detect if the input is a list (and not a keyword list), and opt into bulk create behavior:

# no need for the additional `define ..., bulk?: true`
Blog.create_post!([%{...inputs}, %{...inputs}])

EDIT: Below was the original section on bulk creates. Feel free to ignore it, but I’m leaving it for posterity. @vonagam made a good point in a discussion on discord: there is no reason we can’t just detect a list of inputs in the input argument and use that as a bulk create. This allows us to not need the bulk? true option, and creates can function the same as the others, adapting their behavior based on the input (as shown above).

Creates don’t take a first argument like updates/destroys do, and so a bulk create must be explicitly defined. For example:

# in the domain

resource Post do
  define :create_posts do
    action :create
    bulk? true
  end
end

And it can be used like so:

Blog.create_posts!([%{...inputs}, %{...inputs}])

Streaming reads

You can also now ask read action code interfaces to return streams. Keep in mind that Ash streams are not (yet) based on data-layer-native streams; rather, they use your action’s pagination functionality (preferring keyset over offset pagination). Only the raising version (!) supports the stream?: true option, because streams can only raise errors, not return them.

For example:

Blog.active_posts!(stream?: true) 
# => returns a lazily enumerable stream
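
Since the result is lazily enumerable, it composes with Stream and Enum as usual; for example:

# pages are fetched from the data layer lazily, as the stream is consumed
Blog.active_posts!(stream?: true)
|> Stream.map(& &1.title)
|> Enum.take(10)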

The light at the end of the tunnel

3.0 is very close, and I’m so excited! Thanks again to everyone who has been a part of it. For those adventurous folks, the release candidates will be out soon for you to have a play with :rocket:

22 Likes

Quick question about this. Is the idea, in the long term, to only allow calling the resource’s actions from the code interface in the domain modules, or will Ash always support both ways?

Personally I never liked that approach, either with Phoenix contexts or now with Ash domains. In my experience, as soon as you have a more "complex" domain/context, that module starts to become a mess and it gets hard to find what you want. So, for me at least, I would like to keep using code interfaces in resources if possible :slight_smile:

3 Likes

They will both be supported. I believe there are separate cases where both are desirable, and that it is also subject to personal taste. The only thing changing is that our recommended approach is to start with functions on the domain module, for users who aren’t sure what they want yet.

3 Likes