Internal & External REST APIs with Phoenix: Patterns, Pitfalls, and Your Best Lessons

I’m in the early stages of designing a new Phoenix-based API service and I’m curious about people’s experiences as both a developer and a consumer (if there’s a great non-Phoenix API you’ve used, feel free to talk about it).

At my current company we’ve slowly grown to ~40 services across ~15 teams, and it’s become pretty obvious that a big chunk of our developer friction comes from how we do internal APIs.

We’ve never really had a clear API strategy. It’s been a bit of a slow poison.

The service I’m talking about doesn’t exist yet. Rather than proceed with my own biases, I’d love to know how different orgs have approached and solved these problems for internal and public APIs.

An example of the problem
A very common pattern today: to do “action X” you have to call 3–4 GET endpoints across different services, then a POST to glue it together. What I’d like is something closer to:

GET /x/options

POST /x

So that there is a small, obvious surface area and the complexity sits behind a clear boundary.
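As a sketch, the Phoenix routes for that narrow surface might look like this (the module and controller names are hypothetical, and the controller would hide the fan-out to other services behind the boundary):

```elixir
# router.ex: a small, obvious surface for "action X".
scope "/x", MyAppWeb do
  pipe_through :api

  # Everything a client needs in order to build the POST body.
  get "/options", XController, :options

  # Performs action X in one call; the cross-service glue
  # lives inside the controller/context, not in the caller.
  post "/", XController, :create
end
```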

Rolling this out will be non-trivial, so if we’re going to push ahead, we might as well improve DX too.
I’m mostly looking at improving:

  • auth & access control
  • local dev (seeding, mocking, not needing 8 services running)
  • discovery & docs (probably OpenAPI)
  • contracts & testing
  • handling multiple environments and lots of change

Scope:

  • REST/RPC APIs, not GraphQL.

What I’m looking for

If you’ve worked on APIs that felt great (or awful), I’d love to hear:

Phoenix-specific stuff:

  • What parts of Phoenix actually made your APIs better?
    (router, Plug, contexts, generators, testing tools, anything you leaned into or avoided?)
General DX stories (any stack):

  • What made discovery and documentation not suck?
  • Which APIs were a joy to consume, and why?
  • What worked well for local development (seeding, mocking, “I don’t need the whole company running”)?
  • How did you manage multiple environments and a high rate of change (i.e. keep an API gateway from becoming a key point of contention)?
  • Was enforcing OpenAPI/Swagger worth it?
  • For non-trivial cases, how did you approach contract testing?

Any bit of feedback would be incredibly useful; even a quick “we did X and it really worked / really hurt” helps a lot.

I have my own ideas and some early drafts, but I’m deliberately not leading with them because I’m hoping to hear about approaches and experiences I wouldn’t think of on my own. Once there’s a bit of input, I’m happy to share where I’m leaning.

4 Likes

First, you should decide if you’re building a REST or RPC API because REST/RPC is not a thing. JSON over HTTP is RPC and not REST. If you really mean REST, with HATEOAS and all, you should first decide on a specification. I’m a fan of HAL. It’s pretty simple and easy to implement as a thin DSL over Phoenix views. If you go with HAL, you can use CURIEs as documentation and aid for automated clients.
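For a feel of what HAL looks like as a thin layer over Phoenix views, here is a rough sketch (module name, fields, and link relations are illustrative; HAL puts hypermedia under `_links` and uses CURIEs to point at documentation):

```elixir
# A minimal HAL-style payload rendered from a Phoenix JSON view.
defmodule MyAppWeb.OrderJSON do
  def show(%{order: order}) do
    %{
      id: order.id,
      status: order.status,
      _links: %{
        self: %{href: "/orders/#{order.id}"},
        # CURIEs let link relations double as documentation pointers:
        # "doc:cancel" resolves to /docs/rels/cancel.
        curies: [%{name: "doc", href: "/docs/rels/{rel}", templated: true}],
        "doc:cancel": %{href: "/orders/#{order.id}/cancel"}
      }
    }
  end
end
```

The client discovers the next possible actions (like cancel) from the response itself instead of hard-coding URLs.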

As a general suggestion, take a look at RESTful Web APIs. It’s a short but insightful book which will clear up any misconceptions.

3 Likes

Have a look at Ash Json API.

3 Likes

Apologies, I should have been a little more specific on this part. It is not decided one way or the other; at this point I am open to both, to get as much to read into as possible.

I actually mostly wanted to pre-empt that we do not want GraphQL. We are leaning toward more defined APIs and less toward composability. I know there’s more to it, and there are great solutions, but we’ve found it is really easy to quietly bleed in bad technical debt, even with its great DX for front-end engineers.

Interesting, I’ve never really looked at HAL before. If I had to summarise, the primary benefit is that by design you can embed actions as links indicating the next options for interaction?
The main thing counting against it would be how widely known it is?

Thanks, I will give this a read!

I have a similar problem, managing a lot of services that need to communicate with each other. I prefer REST with OpenAPI for documentation, contracts, mocks, etc. The tooling around OpenAPI is pretty good.

For context, I work in a monorepo setup, so it’s pretty easy to create the contracts between the services. I create a shared package that contains the API request and response structures, which both the consumer and the provider depend on. This way, the contract between the services is based on the shared package. I’m using my own library for that, which generates the types and schemas. Inside the application, I use the types defined in the shared package, and my library converts them to JSON Schema, which I use to generate the OpenAPI documentation. Since I’m in a monorepo, if I change the shared package, the CI pipeline triggers the tests for all the services that depend on it. You can have a similar setup with separate repos; you just need to discover which services depend on the shared package and trigger their tests when it changes.
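The shared contract package can be as simple as a module of structs that both sides depend on (all names here are made up for illustration; the poster’s own library additionally derives schemas from these):

```elixir
# contracts/accounts.ex: shared request/response shapes.
# Both the provider and the consumer apps depend on this package,
# so changing it triggers both sides' tests in CI.
defmodule Contracts.Accounts.CreateUserRequest do
  @enforce_keys [:email]
  defstruct [:email, :name]
end

defmodule Contracts.Accounts.CreateUserResponse do
  @enforce_keys [:id, :email]
  defstruct [:id, :email, :name]
end
```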

As for access control, I haven’t touched this part yet, but I’m thinking of implementing a shared package (like a plug) that helps authorize requests using JWT tokens. Each service will have its own secret to sign the tokens, and the shared package will help verify them.
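A sketch of what such a shared plug could look like, assuming the Joken library for JWT verification (module names and the config key are hypothetical; each service would supply its own secret):

```elixir
defmodule SharedAuth.VerifyJWT do
  @moduledoc "Shared plug: verifies a Bearer JWT signed with the service's secret."
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    signer = Joken.Signer.create("HS256", secret())

    with ["Bearer " <> token] <- get_req_header(conn, "authorization"),
         {:ok, claims} <- Joken.verify(token, signer) do
      # Downstream plugs/controllers can authorize based on the claims.
      assign(conn, :claims, claims)
    else
      _ -> conn |> send_resp(401, "unauthorized") |> halt()
    end
  end

  # Each service configures its own signing secret.
  defp secret, do: Application.fetch_env!(:shared_auth, :jwt_secret)
end
```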

For local development, I never run the whole stack. I create client libraries based on the contract shared package; this way I can mock the services I’m not working on, since the contract is well defined. The client libraries are built with Req, and I can mock things using Req.Test.
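For anyone who hasn’t used it, Req.Test lets you stub a whole downstream service in-process, roughly like this (service and field names are invented; the stub returns the shape the contract package defines):

```elixir
# Stub the downstream service: any request routed to AccountsService
# gets this canned JSON response instead of hitting the network.
Req.Test.stub(AccountsService, fn conn ->
  Req.Test.json(conn, %{"id" => 1, "email" => "a@example.com"})
end)

# The client is built with the plug option pointing at the stub,
# so tests and local dev never need the real service running.
req = Req.new(plug: {Req.Test, AccountsService}, base_url: "http://accounts.internal")
{:ok, resp} = Req.request(req, url: "/users/1")
resp.body["email"]
# => "a@example.com"
```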

You also mentioned “gluing” multiple requests into a single one. This can be an architecture or organizational problem. You can separate your services by domain, and each domain can abstract away the complexity of multiple services behind a single API. Each domain can have its own API gateway, and the caller only interacts with the high-level API while the complexity is handled behind the gateway.

2 Likes

In an ideal world I would prefer something along these lines. However, with our unrelated micro front-end initiatives, we have found multi-package bumps a big pain when testing in environments like sandbox and staging: in our infra setup you’d need to bump the package, then have the consuming application bump and compile the change, and then deploy to sandbox.
That has really hurt our developer experience. Realistically, like you mention below, we would split into domains, but I would likely lean towards a mono-gateway to avoid the package challenges; I would need to verify the developer experience of testing downstream and upstream changes to see whether it is preferable.
I also consider multiple domain gateways an option, but that risks enabling the same problems as before, with extra steps.

How many developers are actively working on your monorepo? Has there been any friction with your approach? I do like the consideration of JSON Schema and strong typing.

While not necessarily relevant for this topic, we currently have four methods of authentication:

  • Service to service → API keys
  • Front end → Cookie or JWT (depending on which front end)
  • Admin → OAuth

One of the big causes of our API proliferation is that in many codebases each auth method has a different controller, and the separation isn’t as clean as it should be, so there’s a lot of repetition of logic, while the APIs remain inaccessible for the other cases.
I am kind of curious how people manage multiple auth methods, and how to enable them for specific endpoints/resources.

On the last point, it is absolutely an architecture issue mixed with organisation/Conway’s law.
How it came to be could be a therapy session on its own, but the short version is that we split domains and services but didn’t invest enough time specifying how to keep applications decoupled and what a good API should look like.
We’re going to be fixing those problems, and it will be a bonus to solve a lot of productivity constraints while we’re at it.

1 Like

Are there good public examples to see what a codebase looks like with it?

I’m assuming you are using it with the rest of the Ash ecosystem?
Looking at the docs, it seems to leverage domain modelling quite well. Is it well suited for an API-only gateway, separated by domains, that would primarily be doing stitching and possibly state management?
I know I intuitively associate Ash resources with DB relationships (which is incorrect, from what I’ve heard), but I’m curious how good a fit it is for a gateway?

Ash Examples

Start at the main website ash-hq.org.

From there, you can find their discord and documentation. Also there is an Ash channel on this forum.

Yes!

IMO yes it’s well suited for API-centric domain modeling, and simplifies certain aspects like authentication and access control.

I think the fit would be good but YMMV - have a look and let us know!