What’s your worst experience with a REST API?

I have heard people talking about GraphQL APIs and other alternatives to a standard REST API, so that got me thinking: what is the worst experience you people have had with a REST API?

Also, what is the best you’ve ever had?

While I was doing my master’s (many years ago), our university forced us to work with a “successful startup”. The term “successful” is used rather benevolently here, since they were just people with friends at the university who were looking for fresh bodies to join the meat grinder they called a company.

We had to use their REST API for our project and … it was horrible. The API itself made little to no sense and half the calls didn’t even work. They even admitted the API was the product of some guy putting in extra hours at home, so I am not surprised. Never again.

On the other hand, I have also studied and used GitHub’s API, which was a rather nice experience.

In the end I think that REST vs GraphQL experiences will largely depend on the people who made them - you can also have terrible GraphQL APIs, just as you can have really nice REST ones.

3 Likes

In my personal experience, mobile app development (or single-page client application development) tends to be very annoying with REST APIs. REST APIs often force client application developers to perform many HTTP calls, possibly with several levels of dependency, in order to fetch enough data to render a screen. Example: to render a product page, fetch the product information, but also seller information, related products, available offers, user data, recommendations, user comments, etc. From the client application’s point of view, it would be much more efficient and clean to perform only one request to get all the information necessary to build each specific screen. That does not map well to a resource-oriented API.

To make things worse, the changing needs of client applications often result in API versioning: one cannot force all users to switch instantaneously to the newer version of the mobile app, hence the old and the new API versions both have to be maintained indefinitely, to the great frustration of API developers.

There is a tension between the different requirements of client application developers and REST API developers. I have seen these kinds of issues straining the relationship between backend and mobile teams a number of times.

GraphQL definitely simplifies this a lot from the point of view of the client application developer: the server simply defines what can be fetched, while the client chooses what to fetch, in one single request. This approach often also alleviates the problems that come with API versioning, which can be especially annoying in big REST APIs.
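For example, a single GraphQL query for a hypothetical product screen (all field names here are made up for illustration, not from a real API) could replace the whole chain of REST calls described above:

# One request for everything the product screen needs (hypothetical schema).
{
  product(id: "42") {
    title
    price
    seller {
      name
      rating
    }
    relatedProducts(first: 5) {
      title
    }
    comments(first: 10) {
      author
      body
    }
  }
}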

Another common solution is to implement a “backend for frontend” that orchestrates all the HTTP requests and exposes the combined result in fewer endpoints tailor-made for the client application. That layer has to manage dependencies between requests and error handling, and often also transform the result. GraphQL can be used for this too, operating on top of a REST API.

Aspects where GraphQL struggles and REST shines, in my opinion, are caching (REST is very much designed for that, and can leverage cache-control headers and client-side caching too), pagination (possible with GraphQL of course, but slightly annoying because it requires wrapping each paginated list of results), and performing updates (they can be done with GraphQL mutations, but in this territory REST is often simpler). With GraphQL it is also harder to protect against DoS attacks that craft very complex queries: with REST one can more easily assess the complexity of each separate endpoint, and rate-limit them differently.
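To make the pagination point concrete: the common approach in GraphQL is the Relay-style connection pattern, where every paginated list is wrapped in edges plus page info - this is the “wrapping” mentioned above. The movies field is just an illustrative placeholder:

# Cursor-based pagination with the connection pattern (hypothetical schema):
# the actual results sit under edges/node, with pageInfo for fetching the next page.
{
  movies(first: 20, after: "someOpaqueCursor") {
    edges {
      node {
        title
      }
    }
    pageInfo {
      endCursor
      hasNextPage
    }
  }
}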

My personal preference is to use REST for external APIs, and GraphQL for read-only internal APIs used by the client application. For writes, I usually still prefer the REST way. I also prefer REST for very long paginated lists of flat simple results, where caching can be handled much more easily.

10 Likes

REST APIs often force client application developers to perform many HTTP calls.

There is no restriction on publishing resources that aggregate other resources to reduce the number of requests that you have to make. Exposing these aggregates would be in line with consumer-driven contracts. And by factoring out these “aggregates” into separate APIs you end up with the BFFs which you mention.
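For instance, a screen-oriented aggregate can live alongside the fine-grained resources (the endpoint and payload below are purely hypothetical):

GET /product-pages/42

{
  "product": { "id": "42", "title": "Hello Product" },
  "seller": { "id": "7", "name": "Jane Doe", "rating": 4.8 },
  "related_products": [
    { "id": "43", "title": "Another Product" }
  ]
}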

Of course, if you aren’t in control of the API you consume, you are at the mercy of what the provider is willing to give you.

hence the old and the new API versions both have to be maintained indefinitely, to the great frustration of API developers.

As far as I can tell, GraphQL APIs can suffer from the same sort of versioning problems.

I have seen these kinds of issues straining the relationship between backend and mobile teams a number of times.

Seems to be the same type of chasm that has historically been associated with Object vs. Relational.

This approach often also alleviates the problems that come with API versioning

How? I’m not convinced this is at all true for GraphQL.

  • I’m not denying the short term gain of being able to specify against a schema exactly what you want to get back.
  • I’m also not denying that there are some ways in which a schema can be evolved without breaking existing queries.

But as soon as some major refactoring (in the general sense) needs to take place which moves types around in the relationship graph, you are going to need a new version. The issue is that the client becomes coupled to those parts of the schema that need to be traversed to get to the data it actually wants. I wouldn’t describe a schema as a narrow API because the client may need to know about intermediate types it doesn’t really care about.

In OO there is the Law of Demeter; similarly, the more relations you have to traverse to get to your data, the more fragile (and coupled) your query becomes in the face of a changing schema.
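To illustrate with a made-up schema (not from any real API): the client below only wants the director’s name, yet its query is coupled to every intermediate type it has to traverse.

# The client only cares about "name", but it is coupled to the hypothetical
# Production and Crew types; restructuring either of them breaks this query.
{
  movie(id: "1") {
    production {
      crew {
        director {
          name
        }
      }
    }
  }
}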

It’s really only the BFF acting as an Anti-Corruption Layer that can protect the front end from API changes - but the BFF still has to adapt to the original API. And as a BFF would have an API that is optimized for a particular client, there would be very little benefit to using GraphQL - because the BFF API can focus on exactly what the client needs, a for-purpose REST API works just as well (and in many cases is simpler).

Designing a schema that can serve the needs of multiple client communities is a non-trivial task.

To some degree I think that “bad” APIs, REST or GraphQL, are a result of too much focus on technology and tools, while not expending enough effort on trying to understand the underlying abstractions that the technology is based on.

  • For example, to me Swagger seems to focus too much on “pretty URLs” rather than on resource design (see Stefan Tilkov’s talk “REST: I don’t Think it Means What You Think it Does”).
  • With GraphQL, most resources tend to focus on tooling and technology but don’t really get into the right way to design a schema, or how to design a schema for extensibility/maintainability (potentially this - still waiting; talk by the author).

Then there are cases where GraphQL APIs (or REST APIs) simply expose the underlying data model - coupling the front end all the way down to the back end data model.

7 Likes

I like the concept of REST, but with current tooling and conventions it takes too many resources to get it to a good level, compared to GraphQL. There are too many gotchas, no consensus on some details, missing tools, etc.

  • Many REST API server frameworks/libraries make it too easy (or even seem to encourage) to expose the storage model as resources in APIs.
  • A REST API returns a default set of attributes, but the server does not know which ones are actually being used => this makes it very difficult to evolve the API interface or introduce backward-incompatible data model changes.

You can avoid this problem by:

  • Making API resources high-level, not tied to how you store the data (e.g. database tables and columns)
  • Minimizing the attributes returned by default, and letting clients explicitly ask for more fields

… which can be done more easily with GraphQL.
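As a rough sketch of that second point (field names borrowed from the movie example used later in this thread): each screen asks only for the fields it renders, so the server can see per-field usage and knows what is safe to change.

# List screen: only titles.
query MovieList {
  movies {
    title
  }
}

# Detail screen: a few more fields, still only what it renders.
query MovieDetail {
  movies {
    title
    director {
      name
    }
  }
}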

Also, there is some additional work and there are some challenges in using and implementing REST APIs. Nothing is technically “impossible”, but the status quo does not give a good experience to either side.

… which is already pretty well covered by the current GraphQL tools.

I believe we need a good REST API “spec” and tools around it, rather than just another REST API “guide”.

2 Likes

I’d buy that for the problem of “too many attributes”, but how does GraphQL help us create the right abstraction rather than just exposing the storage model? It seems like GraphQL could be even more prone to just mapping the storage layer, especially with tools like Prisma and Hasura and similar that explicitly do exactly that.

I personally don’t map out my storage model; rather, I map out ‘actions’. I treat it very much like RPC instead: call ‘functions’ and get back just the data that I want and nothing extra. I consider treating it like function calls (REST too) far more reliable in the long run.
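As a rough sketch of what that can look like in GraphQL terms (names invented for illustration): the operation is a verb, and the selection set is exactly the data the caller wants back, nothing more.

# An action-oriented call rather than a storage-model update (hypothetical schema).
mutation {
  publishMovie(id: "1") {
    status
  }
}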

3 Likes

It depends on the technology being used in the API server and database, because fetching all this data in one call can easily take more time than if it is done by the client in several parallel calls, especially when expensive queries are involved.

So while this penalty can be avoided by using Elixir wisely, with its concurrent and parallel execution capabilities, the same will not be true for some other programming languages.

Right, many GraphQL libraries support mapping the input model directly to GraphQL objects.

I should have said it this way:

  • GraphQL makes you think more about the representation at a higher level, not the data source, than a REST API does
  • GraphQL makes it easier to define a higher-level structure upfront and keep it as the API and the data source evolve

See this example:

I’m building an API for movie lists. Based on an existing data source (say, a CSV file), I created this API:

GET /movies/1

{
  "id": "1",
  "title": "Hello Movie",
  "director_name": "John Doe"
}

Then later… I need to introduce the director as a model!

GET /movies/1

{
  "id": "1",
  "title": "Hello Movie",
  "director_name": "John Doe", // for backward compatibility
  "director_id": "2"
}

GET /directors/2

{
  "id": "2",
  "name": "John Doe"
}

And later I found that a person can be a director or an actor! Hm… :thinking: :exploding_head:


I know, it should have started with a nested attribute to avoid the messy evolution.

{
  "id": "1",
  "title": "Hello Movie",
  "director": {
    "name": "John Doe"
    // then.. later I can add more attributes for this!
  }
}

However… unfortunately it’s common to use one-level resources in REST APIs - using prefixes to “group” attributes that should actually have been extracted as “objects”. I think this is largely because calling something “a resource” nudges the REST API designer into treating it as “a single object” from the domain model. Also, some conventions/specs make it hard to do that without making everything an “identifiable resource” (… having its own URI).

For example, JSON:API allows nested attributes under attributes - but to be a “resource” under relationships, a resource must have an ID, so you cannot make a “virtual” resource, which would be useful for evolution as your app grows. See this:

From

{
  "data": {
    "type": "movie",
    "id": "1",
    "attributes": {
      "title": "Hello Movie",
      "director": {
        "name": "John Doe"
      }
    }
  }
}

To

{
  "data": {
    "type": "movie",
    "id": "1",
    "attributes": {
      "title": "Hello Movie"
    },
    "relationships": {
      "director": {
        "data": {
          "type": "person",
          "id": "2"
        }
      }
    }
  },
  "included": [
    {
      "type": "person",
      "id": "2",
      "attributes": {
        "name": "John Doe"
      }
    }
  ]
}

This is a dramatic change on the client side, unfortunately.

(you can verify them with https://jsonapi-validator.herokuapp.com/)


So… how is GraphQL “better” at guiding “better” API design? GraphQL by nature encourages nested objects, since that’s the only way to connect related objects. It nudges people to split out types instead of sticking prefixed attributes in the “parent” object.

From

type Movie {
  id: ID!
  title: String!
  director: Director!
}

type Director {
  name: String!
}

type Query {
  movies: [Movie!]!
}

To

type Movie {
  id: ID!
  title: String!
  director: Person!
}

type Person {
  id: ID!
  name: String!
}

type Query {
  movies: [Movie!]!
}

And one query works for both cases:

{
  movies {
    title
    director {
      name
    }
  }
}

GraphQL is not a silver bullet and has its own challenges. However, I think writing a good GraphQL schema is much easier than writing a decent REST API spec (not considering the implementation part).

2 Likes

By using an API-first design approach, with tools like RAML or OpenAPI, you are forced to think ahead in the design of your API and avoid lots of pitfalls. If, before you start coding, you share your full API specification with whoever will consume it, then you will receive feedback, changes will need to be made to the specification, and you rinse and repeat until everyone is in consensus on the spec. Only then is it time to start coding. But even after you start coding you will still find areas to improve, so you will need to stop coding and go back to the cycle of changing the spec, sharing it, receiving feedback, and refining the spec until everyone is in consensus again.

This does not solve all the issues, but it improves the quality of your API enormously, and may make it last long enough to survive without the need for a v2.

Undisturbed REST was the book that greatly improved the way I build APIs nowadays.

Just to note that I am not saying that GraphQL should not be used… Bad APIs, more often than not, are just the result of developers who do not take enough time to think about the problem, and instead just rush to the keyboard to start coding :wink:

Well… that is just sad, lazy design. I’m not sure people who would do that would do much better just because they use a different tool.

Pick any REST API and you’ll see such problems in different ways. E.g. the GitHub API v3 organization resource has plan as an object attribute, while many other count/stats or policy-related attributes sit right under the organization resource.

Also… sometimes we don’t know what will change - or we may underestimate the size or impact of changes (compared to the initial cost). There are many reasons other than being “lazy” to end up with such APIs.

I agree that this makes overall design much much better.

However…

  • There is a fundamental tension:
    • An API is more about “consuming”, not “exposing”
    • A REST API aims for “reusable”, consistent resources, which should form a “generic” interface.
  • RAML / OpenAPI is just a tool to convey the API spec, so it helps communication between interested parties… but it does not provide any guidance on which way is “better”.
  • Everyone needs to understand the REST API philosophy, but it will still end up with different outputs anyway.

For example, it reminds me of debates about code style. Luckily we can reformat code easily… but API style is not like that. E.g. changing JSON:API into some different spec requires a huge amount of dev work.


After working with many APIs (on both the server and client side), I prefer to pick an existing sub-optimal but clear, opinionated set of rules with great tooling support rather than creating my own rules which perfectly cover all my problems. I really wish there were such a thing for REST APIs.

2 Likes

The worst that I have seen is one that takes a string and returns a string.

The strings are internally converted into XML/JSON, the work is done, and a result is returned.

Reimplementing these can require understanding the entire system behind the API.

1 Like

Thanks @peerreynders for sharing your well-argued view. I might have explained myself poorly though: my point was not to argue that REST does not work in some contexts. The original question was about personal bad experiences with REST APIs, and my example was that of separate client application development and API development teams, and the friction created by diverging goals.

There is no doubt that it is possible to structure a REST API in a way that serves the needs of a client application perfectly. My point is more about the relative effort of building an API that serves the needs of a client application. Not only factoring in coding effort for the API developers, but also the communication and design effort with client application developers, chances of misunderstanding and frustration, etc.

The relevant difference between REST and GraphQL in this case is not whether an aggregate resource is possible or not, but rather about who makes which decision about the aggregate. In the specific case of a client application (think a mobile app) fetching data from an internal API, one typically has the situation that:

  • The API developers know best how to produce data
  • The client application developers know best what data is needed where

GraphQL, in this case, leaves these decisions to the side that has the most knowledge. A REST API must either expose an aggregate “decided” by the API, or expose multiple small endpoints, leaving the choice of what to fetch to the client developers, but also the burden of orchestrating a graph of dependent queries. No doubt that a very good team can design the perfect REST API too. It’s just more effort in this case, especially if client and API developers are different people/teams.

True. I say that GraphQL alleviates the API versioning problem, but I am not arguing that the versioning problem is completely removed. My view is that in this specific case GraphQL leads to fewer decisions that have to be made cross-team, and the higher flexibility on the client application side can lead to a lower chance of having to introduce backward-incompatible changes (but again, depending on team, situation, trade-offs, etc.).
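One concrete mechanism that helps here, as a sketch building on the movie schema from earlier in the thread: because the server can observe which fields clients actually select, a field can be marked with GraphQL’s built-in @deprecated directive and only removed once usage drops, instead of cutting a whole new API version.

type Movie {
  id: ID!
  title: String!
  director: Person!
  directorName: String! @deprecated(reason: "Use director { name } instead")
}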

Finally, I am definitely not arguing against REST APIs in general. REST is a well-proven way of structuring APIs, and I hope I made it clear how I think it makes other problems simpler (caching, updates, etc.). Mine is a personal experience, and applies to the specific situation of separate API and client application teams.

1 Like

Not only factoring in coding effort for the API developers, but also the communication and design effort with client application developers, chances of misunderstanding and frustration, etc.

One thing that comes across in Marc-André Giroux’s talk is that communication between developers on the integrator and provider sides is key in order to evolve an effective schema. So regardless of what technology is employed, communication difficulties can only have a negative impact on the quality of the API.

but rather about who makes which decision about the aggregate.

The provider team working in a vacuum without input from the consumers isn’t likely to result in an effective API, regardless of technology.

GraphQL, in this case, leaves these decisions to the side that has the most knowledge.

The provider side should have the most knowledge about their domain. That doesn’t necessarily imply that they fully comprehend the needs of their consumers.

My view is that in this specific case GraphQL leads to fewer decisions that have to be made cross-team

The anecdotal evidence I’ve come across over the years is that these types of problems are not best solved with technology but with cross-functional teams - i.e. the same people who consume the API also design it.


Marc-André Giroux published an introductory chapter of his upcoming book.

His hypothesis is that REST APIs ran into trouble because of OSFA (one size fits all) design. In the ideal case a REST API would just represent the domain in terms of resources, but the consequence was that the API would be equally inconvenient for most consumers. Adding more resources to accommodate different consumers’ needs only serves to make the API convoluted.

In response to the problem, SoundCloud started to segregate these resources into separate BFFs.

I find the approach that Netflix took to be the most interesting.

They arranged for consumers to deploy server-side client adapters. That way the general API can be purely domain-oriented, while the server-side client adapter can focus on the needs of one particular client, managing lean payloads with an optimal shape for the client on the other side of the network.

Marc-André Giroux views client-side GraphQL as a “client-side BFF”. So just like a normal BFF is coupled to the general server-side API, client-side GraphQL is coupled to the provider schema.

In order to provide this sort of flexibility on the client side, I think that GraphQL has increased the end-to-end accidental complexity.

While a “client-side BFF” can work, I have to wonder whether, with the emergence of JS budgets to accommodate the anticipated increase in consumption of the web via lower-end smart devices, there is going to be a push for even leaner client-to-server interactions - where the content and shape of the data is customized to fit a specific client’s needs in order to minimize the amount of code and processing time that is needed on the client.

1 Like

If you make your API too fine-grained, then REST will necessitate too many calls. We can also send nested and related entities in REST. GraphQL mostly provides the data needed by the client, with all requests posted to a single URL and the request data in the payload. It hinders readability if the queries are kept in a different file; REST URLs are more readable. People can make both worse. I worked on a project 7 years ago with somebody who was new to all this REST stuff. He had a URL for updating every field of an entity (more readable, but too fine-grained, and it violated REST itself).

1 Like