REST API design: cap the `limit`?

Hi there,

tl;dr: Why do paginated REST API list calls have a hard-capped limit parameter? What’s the benefit? Would you do it?

This question isn’t really Elixir specific but I’m asking it here because:

  • I find that this is an exceptionally bright community that I hope enjoys a general design discussion or two
  • Well, our backend is written in Elixir :slight_smile:

At my company TalkJS, a pluggable chat service, we expose a REST API for our customers to use. We try to make it as boring and predictable as possible, and in general we look a lot at Stripe for good ideas to steal - we sort of perceive them as the grand masters of API design.

But there’s one thing that we see many REST APIs do that we don’t understand. Most LIST calls (i.e. GET calls on a list resource) have some sort of pagination, typically with a limit parameter and then either an offset or startingAfter parameter to fetch the next set of results. So far, so good. However, most APIs we’ve seen out in the wild put a hard cap on the limit at a relatively low number, e.g. 10 or 100. This means that for many use cases, you’re forcing your customer to write code that fetches multiple result sets sequentially, deals with potential inconsistencies in the returned data (in case data changed between requests), and so on.

What is the benefit of this? If a customer needs 10000 items, why is it better for us if they do 100 requests for 100 items each, rather than a single request for 10000 items?

I totally understand why one would default the limit to a low value. But why not allow the customer to specify a high value, or even infinity, if they want to fetch everything there is?

The only reason I’ve been able to cook up is to make fetching many items deliberately hard for customers, so that they’ll not needlessly consume resources and only fetch a lot of data when they really really need to. That’s sensible, but at the same time it’s also a bit hostile to those customers who really do need to fetch more than a little bit, isn’t it?

What confuses me even more is that Stripe, who hard-cap the limit at 100 items, have an “auto-pagination” feature in their official client SDKs which effectively removes this problem from customer code - doing the “send 100 little requests for 100 items each” dance fully transparently to customer code. If they have a feature like that, which auto-hammers their backend in full frenzy, why not just allow users to set limit to 10000? What’s the difference?
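For what it’s worth, an auto-paginator like Stripe’s is essentially just a cursor loop. A minimal sketch in Python - `fetch_page` is a hypothetical stand-in for the real HTTP call, and the `data`/`has_more`/`starting_after` shape mirrors Stripe-style responses:

```python
# Auto-pagination sketch. `fetch_page` stands in for the real HTTP call;
# a Stripe-style API returns a page of items plus a `has_more` flag, and
# the next request passes the last item's id as `starting_after`.

def auto_paginate(fetch_page, limit=100):
    """Yield every item by repeatedly fetching pages of `limit` items."""
    cursor = None
    while True:
        page = fetch_page(limit=limit, starting_after=cursor)
        yield from page["data"]
        if not page["has_more"]:
            break
        cursor = page["data"][-1]["id"]


# Fake backend over an in-memory list, to show the loop in action
# (item index == item id, so the cursor math is trivial here).
ITEMS = [{"id": i} for i in range(250)]

def fake_fetch(limit, starting_after):
    start = 0 if starting_after is None else starting_after + 1
    chunk = ITEMS[start:start + limit]
    return {"data": chunk, "has_more": start + limit < len(ITEMS)}

all_items = list(auto_paginate(fake_fetch, limit=100))
print(len(all_items))  # 250 - fetched in 3 requests of <= 100 items
```

So from the customer’s point of view the cap really does vanish - which is exactly why it’s puzzling that the cap exists at all.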

Any genius insights would be wildly appreciated :slight_smile: Thanks!

1 Like

The purpose of imposing the limit is to stop clients from killing your service. More to the point, listing resources is often much more than reading data from the DB and serializing it to JSON (if an API caps the limit below 100, you can start suspecting that the devs on the other side have some performance issues).
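Enforcing the cap is typically a one-liner at the edge of the API. A sketch (the names `DEFAULT_LIMIT`/`MAX_LIMIT` and the function are made up for illustration):

```python
DEFAULT_LIMIT = 10
MAX_LIMIT = 100

def clamp_limit(raw):
    """Parse a client-supplied `limit` query param: fall back to the
    default when missing/invalid, and never exceed the hard cap."""
    try:
        limit = int(raw)
    except (TypeError, ValueError):
        return DEFAULT_LIMIT
    return max(1, min(limit, MAX_LIMIT))

print(clamp_limit(None))     # 10  (missing param -> default)
print(clamp_limit("25"))     # 25
print(clamp_limit("10000"))  # 100 (hard cap)
```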

The other benefit of a hard limit is that you can easily set up monitoring and alerting. That’s really essential for any public API. Otherwise how would you set up an alert threshold if you can query for any number of items?

2 Likes

Part of it is just keeping an upper bound on the memory use of that API service. 99%+ of API controllers read all the rows out of the DB into memory and then serialize to JSON before sending the response. A hard upper limit on the number of items evens out the memory usage of your service. Now, if you got fancy and used Repo.stream to transform to JSON and stream out to the caller, that would also let you limit your max memory use, but it has its own tradeoffs. In reality, you also have to manage the timeouts on your DB connections.
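The memory point can be made concrete: building the whole response string holds every row in memory at once, while streaming the JSON array out element by element keeps usage roughly constant. A generator-based sketch in Python (in Elixir the analogue would be Repo.stream feeding a chunked response):

```python
import json

def stream_json_array(rows):
    """Yield a JSON array one element at a time instead of building
    the whole response string in memory."""
    yield "["
    first = True
    for row in rows:
        if not first:
            yield ","
        yield json.dumps(row)
        first = False
    yield "]"

# Each chunk can be flushed to the socket as soon as it's produced;
# joining them here just demonstrates the output is valid JSON.
chunks = stream_json_array({"id": i} for i in range(3))
body = "".join(chunks)
print(body)
```

The tradeoff mentioned above still applies: the DB cursor now stays open for the duration of the response, so slow clients tie up connections and you have to manage timeouts.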

2 Likes

I don’t know much about Stripe, but there is a good alternative to REST APIs in Elixir: GraphQL. With Relay Modern, it makes for a really good pagination system. Here is a sample query, with some parameters I can use:

  {
    viewer: graphql`
      fragment gamesList_viewer on Viewer
      @argumentDefinitions(
        count: {type: "Int", defaultValue: 30}
        cursor: {type: "String"}
        filter: {type: "GameFilter", defaultValue: {}}
        order: {type: "SortOrder", defaultValue: ASC}
      ) {
        games(
          first: $count
          after: $cursor
          filter: $filter
          order: $order
        ) @connection(key: "gamesList_games") {
          edges {
            node {
              id,
              ...gameItem_game
            }
          },
          pageInfo {
            hasPreviousPage
            hasNextPage
            startCursor
            endCursor
          }
        }
      }
    `
  },

The client can request exactly what it needs, with 1 call :slight_smile:

1 Like

Thanks everyone! The thing about limiting memory usage and timeouts makes a lot of sense.

This also means that we can use the memory usage / timeout stats for the calls in question to guide how high the cap should be. That’s a much more “engineering” approach to things than just looking at what others do, so thanks for that. :slight_smile: