In fact we’re demoing a product on Monday that’s using our subscription implementation. We hope to have it packaged up later next week!
Awesome! I’ve been looking forward to it!
My plan for the future is to implement a REST layer over a GraphQL layer.
The writer’s statements are also criticized in the comments below the article itself.
Nice read about the BFF; I’m just reading a link from that article: http://samnewman.io/patterns/architectural/bff/. Thanks.
From my point of view, GraphQL is especially interesting when you need a query language between your front end and your backend; a good example of that would be a really versatile dataviz frontend. You do not need resources, you need a way to express queries against a backend, and a way for that backend to translate them to its underlying storage.
You do not want to tie the query engine/translation to any particular thing on the front end, so GraphQL helps you separate the two.
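To illustrate the dataviz scenario: with GraphQL, each chart on the page can ask the same endpoint for exactly the shape of data it needs, and the backend translates that into its own storage queries. A hypothetical query (schema, field names, and arguments are all made up for illustration):

```graphql
# One chart asks only for what it needs; another chart on the same
# page can send a completely different shape to the same endpoint.
query SalesByRegion($from: Date!, $to: Date!) {
  sales(from: $from, to: $to) {
    region
    total
    byMonth {
      month
      total
    }
  }
}
```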
On the other hand, if you are publishing posts that go up on a newspaper, or are exposing mostly “static” data, REST makes far more sense.
Also, while we are at it, I would like to remind everyone that we have yet to see a good HATEOAS/REST implementation that is efficient in terms of network usage.
Is this in Elixir? If so, (assuming your project is private) is there any chance you could throw together a small example showing how you do this? It sounds really interesting but I’m having a hard time visualizing it.
In Elixir with Absinthe, yeah. It is a big private project, but it is not hard to do: you just make the necessary Absinthe ‘run’ calls.
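Since the project is private, here is a minimal sketch of what a REST-over-GraphQL controller could look like; the schema module, query document, and field names are my own assumptions, not the poster’s code. The only real API used is Absinthe.run/3, which executes a GraphQL document against a schema in-process:

```elixir
# Hypothetical sketch: a REST endpoint with a fixed response shape,
# resolved internally by running a GraphQL document through Absinthe.
# MyApp.Schema and the post fields are illustrative assumptions.
defmodule MyAppWeb.PostController do
  use MyAppWeb, :controller

  # GET /api/posts/:id
  def show(conn, %{"id" => id}) do
    query = """
    query ($id: ID!) {
      post(id: $id) { id title body }
    }
    """

    case Absinthe.run(query, MyApp.Schema, variables: %{"id" => id}) do
      {:ok, %{data: %{"post" => post}}} when not is_nil(post) ->
        json(conn, post)

      {:ok, %{errors: errors}} ->
        conn |> put_status(:unprocessable_entity) |> json(%{errors: errors})

      _ ->
        send_resp(conn, :not_found, "")
    end
  end
end
```

The nice part of this layering is that the REST layer stays a thin, cache-friendly facade while all data-fetching logic lives once, in the GraphQL resolvers.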
In this case I’m more inclined to lay a lot of the blame on misbehaving client implementations, or, more to the point, on API tooling that just plasters a “RESTful” marketing sticker over what is essentially still just “Plain Old JSON over HTTP CRUD RPC”.
While REST is essentially independent of media type (as long as HATEOAS is viable), I believe there was a reason why Leonard Richardson chose XML rather than JSON in RESTful Web Services and RESTful Web APIs: people are more inclined to process XML as plain text, which lets you use screen-scraping tactics to pay attention only to the bits you are actually interested in, without becoming dependent on the remainder of the representation.
For example, if you are just navigating through a resource, just find the one link element that has the agreed-upon rel attribute value and then navigate to the next resource identified by the URI in the href attribute. Essentially the client implementation was expected to browse through the resources (starting from the root URI) just like a user would browse through the HTML as rendered in a web browser. Of course that type of code isn’t easily cranked out by a wizard or code generator, so many APIs just stuck with “JSON over HTTP CRUD RPC” under the RESTful flag.
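As a concrete (entirely made-up) illustration of that link-following style, a representation might carry something like:

```xml
<!-- Hypothetical resource representation; element names are illustrative -->
<post>
  <title>GraphQL is not a silver bullet</title>
  <link rel="next" href="https://api.example.com/posts/43"/>
  <link rel="author" href="https://api.example.com/people/7"/>
</post>
```

A client that only understands the agreed-upon rel="next" value grabs that one href and ignores everything else, so the server remains free to evolve the rest of the representation.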
IIRC there was also no prohibition against creating “aggregate resources” that could save the client a few resource loads; it’s a concession that makes sense within the consumer-driven contract dynamic. The guideline I remember is that the “aggregate representations” simply have to contain links using the resources’ canonical URIs.
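A hypothetical aggregate representation following that guideline might look like this (names and URIs invented for illustration): the embedded items spare the client extra round trips, but each still points back to its canonical resource.

```xml
<!-- One GET instead of several; each embedded item keeps its canonical link -->
<dashboard>
  <post>
    <title>GraphQL is not a silver bullet</title>
    <link rel="canonical" href="https://api.example.com/posts/43"/>
  </post>
  <author>
    <name>Jane Doe</name>
    <link rel="canonical" href="https://api.example.com/people/7"/>
  </author>
</dashboard>
```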
Then there is finally the whole “PUT, GET, POST, DELETE from the uniform interface is simply CRUD” nonsense. For example, p. 230 of RESTful Web Services (PDF, free) describes how to expose transactions as resources, i.e. resources can be used to represent concepts that are usually not CRUD-able in a CRUD world.
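From memory, the book’s example exposes a money transfer as a resource of its own; the interaction goes roughly along these lines (paths, payloads, and status codes here are a sketch from memory, not quoted from the book):

```
# 1. Begin the transaction by creating a transaction resource
POST /transactions/account-transfers HTTP/1.1
→ 201 Created
→ Location: /transactions/account-transfers/11a5

# 2. Stage the changes by PUTting new state onto resources scoped to it
PUT /transactions/account-transfers/11a5/accounts/checking/11 HTTP/1.1
balance=150

PUT /transactions/account-transfers/11a5/accounts/savings/55 HTTP/1.1
balance=250

# 3. Commit (or roll back) by updating the transaction resource itself
PUT /transactions/account-transfers/11a5 HTTP/1.1
committed=true
→ 200 OK
```

The point is that “begin”, “commit”, and “rollback” never appear as verbs; they are all plain state transfers on resources.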
Yes, you can make REST apps more intelligent, but it is harder to balance what data you need against how many requests it takes to get that data, especially if you have many different clients…
Yeah, this is essentially what I see in the wild and am at times guilty of myself. I think, but I’m not yet sure, that if you’re going to be doing RPC over HTTP anyway, GraphQL has some nice features for that use case.
Steve Vinoski: Mythbusting Remote Procedure Calls • GOTO Aarhus 2012, Oct 1
may be of interest before getting too firmly entrenched with the RPC mindset again.
Does developer convenience really trump correctness, scalability, performance, separation of concerns, extensibility, and accidental complexity?
I look back over the history of RPC-oriented distributed computing approaches and wonder why we’ve pursued them for so long even though we’ve been aware of their fundamental flaws for many years. The answer, I believe, is that we’ve chosen convenience over correctness — we’ve treated distribution as something we can jam underneath our general-purpose programming language abstractions, rather than as something that to be done correctly requires entirely different abstractions and approaches.
I talk about issues related to mapping the artifacts from one middleware system into another. This topic gets a lot of attention today because those working with Web Services are just now discovering how thorny some middleware mapping problems can get.
I’ll file that under “No Silver Bullet”.
And yes please, stop with the RPC.
This includes an Elixir section! We hope to have subscription content up for it this week!
If you peeps think we need a wiki for GraphQL, please feel free to create one.