The SAM (State-Action-Model) pattern

I did not claim at all what you are now attributing to me, so please do not try to cast me as a stooge. I don’t buy the isomorphic paradigm, and I don’t want to spend too much time formulating arguments, so I’ll just copy a link:
https://www.jayway.com/2016/05/23/6-reasons-isomorphic-web-apps-not-silver-bullet-youre-looking/
For me, the first reason named in the article is enough: I do not want to be obliged to use node.js and JS on the backend.
Edit: I also found this, which is worth a read: http://www.slideshare.net/GustafKotte/simpler-web-architectures-now-at-the-frontend-2016

Stefan,

The reality is that it always depends; everything is a matter of trade-offs once you step past the fallacies.

So again: Isomorphic JavaScript used to seem a distant dream, but SAM makes it trivial to move elements of the pattern (Actions, Model, and State) between the client and the server, at any point in the implementation. Please note that this is not a binary choice; as little as a single action can be moved to the server.
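For readers unfamiliar with SAM, here is a minimal sketch of the loop in plain JavaScript. The vocabulary (`actions`, `model.present`, `state.render`) follows the pattern; the counter example itself is invented for illustration. The point is that each piece is a plain function with no framework coupling, so any of them can run on the client or behind an API call on the server.

```js
// Minimal SAM loop: an action proposes data, the model accepts (or rejects)
// the proposal, and the state function renders the new control state.
const model = {
  counter: 0,
  present(proposal) {
    // the model alone decides whether to accept the proposal
    if (Number.isInteger(proposal.increment)) {
      model.counter += proposal.increment;
    }
    state.render(model);
  }
};

const actions = {
  // actions compute a proposal and present it to the model;
  // this function could live on the client or on the server
  increment(data) {
    model.present({ increment: data.step || 1 });
  }
};

const state = {
  render(model) {
    // a real app would compute the view and the next-action predicate here
    console.log('counter is now', model.counter);
  }
};

actions.increment({ step: 1 }); // -> "counter is now 1"
```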

It’s easy to debunk the arguments behind this article:

  • “The web server can only be written in JavaScript”: most organizations would see this as a big plus, since developers can move (nearly) freely between front-end and back-end, not to mention integration if you see something like Node as a terrific middleware.

  • “Client-side JavaScript has access to an in-memory state machine that allows for a fine-grained level of interaction, with a very low latency (sub-milliseconds). The same is not true for the HTTP state machine, which is driven by link clicks and form submissions.”: this is possibly the biggest fallacy in the argument against Isomorphic JavaScript, since for most (business) applications each user event translates into some form of API call. Who could argue that the HTTP state machine is not prevalent in front-end architecture? I would argue further that trying to ignore that simple fact, as most web frameworks do, is a major faux pas and the source of much pain when developers need to stitch pristine web frameworks together with API calls.

  • “Blocking vs non-blocking data flow”: obviously this guy has not heard of WebSockets, which give the browser a persistent channel the server can push to at any time (see the sketch after this list).

  • “The infrastructure then needs a way to make sure the smart leaf’s context doesn’t get lost”: obviously this guy has never heard of single-state-tree architectures, and the SAM pattern in particular.

  • “Mobile devices freeze during parsing of JavaScript”: not much to add here.

  • “Isomorphic Web Apps is an approach that demands a high level of development skill.”: with SAM, the additional skillset required is exactly zero.
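To make the WebSocket point above concrete, here is a minimal sketch using the popular `ws` npm package; the port and messages are made up for illustration. Once the socket is open, the server can push data whenever it likes, so the data flow is no longer gated on a blocking request/response cycle.

```js
// Minimal server push over WebSocket, using the `ws` npm package.
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (socket) => {
  // respond to client messages...
  socket.on('message', (msg) => {
    socket.send('echo: ' + msg);
  });
  // ...or push without being asked: non-blocking data flow
  const timer = setInterval(() => socket.send(Date.now().toString()), 1000);
  socket.on('close', () => clearInterval(timer));
});
```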

A post has been edited and two others removed. Please do not make the thread personal - stick to the topic of discussion and feel free to rebut any poor arguments or put your point across without making personal remarks. Thanks.

5 Likes

I’ll respond only to your first point (that the web server can only be written in JavaScript, etc.); I have other things to do. I don’t see node.js as terrific middleware, quite the contrary. For those who are new to the discussion:
“Node.js’ heart of async I/O is built on a horrible concurrency model, and you will experience why JS is not a language you can trust for building business logic (error handling, etc.). See www.youtube.com/watch?v=q8wueg2hswA”
JavaScript was simply a poor choice for building Node. Read here why Ryan Dahl chose JavaScript:
http://bostinno.streetwise.co/2011/01/31/node-js-interview-4-questions-with-creator-ryan-dahl/ What you can trust (and what has proven to be very reliable) is the actor-based concurrency model. Here is more on the differences between the models: https://joearms.github.io/2013/04/02/Red-and-Green-Callbacks.html
At least on the server we have some choice of languages; we are not obliged to use node.js. There is an Erlang web server, Cowboy. There are Haskell web servers like Yesod, and they make the difference from node.js a selling point:

“Asynchronous made easy
The Haskell runtime is asynchronous automatically. Instead of dealing with callbacks, you get to write normal code. By utilizing light-weight green threads and event-based system calls, your code automatically becomes non-blocking, without the pain.” (http://www.yesodweb.com/)
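To illustrate what “instead of dealing with callbacks, you get to write normal code” means in Node terms, compare the two styles below. The file names are invented, and the second version assumes a small hand-rolled promise wrapper, since promisified fs did not ship with Node out of the box at the time.

```js
const fs = require('fs');

// Callback style: each step nests inside the previous one.
fs.readFile('a.txt', 'utf8', (err, a) => {
  if (err) return console.error(err);
  fs.readFile('b.txt', 'utf8', (err, b) => {
    if (err) return console.error(err);
    console.log(a + b);
  });
});

// "Normal-looking" code: the same logic with a small promise wrapper.
const readFile = (name) =>
  new Promise((resolve, reject) =>
    fs.readFile(name, 'utf8', (err, data) => (err ? reject(err) : resolve(data))));

readFile('a.txt')
  .then((a) => readFile('b.txt').then((b) => console.log(a + b)))
  .catch(console.error);
```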

OK. The last point you make is that no additional skillset is needed. What exactly you are pointing at is not clear to me. In any case, you were (in my opinion) not successful in the React (see https://github.com/reactjs/redux/issues/1528) and Elm (google it yourself, please) user groups when you tried to convince people there of your ideas. They seemed hindered and irritated, at the very least by the jargon you use; see all the reactions you got there. Isn’t it time to scratch your head a bit? If it is all so simple, why use all this jargon and such generally unclear texts and answers? See, for example, the Armstrong link copied in this message: that is clear language, aimed at readers like, probably, most of us.

I think it reasonable to accept that there is a multitude of options and no single “right way”; for any given project there may be several viable approaches (although sometimes there might not be much choice). There is no objective way to prove something in general terms just by referencing some authority figure; by the same token, one could point to Bryan Cantrill (creator of DTrace, Solaris core developer, CTO of Joyent) and his positive opinion of Node as definitive proof that it’s the best thing since sliced bread :). In reality it’s all project specific.

1 Like

Partly agreed. I think the choice of web server / architecture is often not very well informed. And Node is quite hyped; Cowboy is not. Who does not want to ride the waves of success? Netflix, Airbnb, together with the big boys, all that applause, champagne, bubbles :wink: etc.! Benchmarks can help. I did not search for long, but here is an example (from 2011, so quite old): http://www.ostinelli.net/a-comparison-between-misultin-mochiweb-cowboy-nodejs-and-tornadoweb/
But there are more parameters that are hard to measure.

The core value proposition of JavaScript on the server is JSON, because pretty much all you do is “mediate” requests and responses with vanilla business logic (which covers about 99% of use cases in IT). You can use libraries such as Strummer for validation (which is far superior to any class structure). Combine that with a convergence of skills between front-end, back-end, and integration, and I don’t see why any IT organization would choose anything else. One of my clients tried WSO2, then Spring Boot, then Node. Pretty much every developer went crazy about Node. Projects were delivered in the time it would take to do a PoC with WSO2 or Spring Boot.
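As a concrete illustration of that “mediation” style, here is a minimal sketch using only Node’s core http module; the payload fields and the business rule are invented for the example.

```js
// A plain Node endpoint: parse JSON in, apply vanilla business logic,
// send JSON out. No framework required.
const http = require('http');

http.createServer((req, res) => {
  let body = '';
  req.on('data', (chunk) => { body += chunk; });
  req.on('end', () => {
    let order;
    try {
      order = JSON.parse(body);
    } catch (e) {
      res.writeHead(400, { 'Content-Type': 'application/json' });
      return res.end(JSON.stringify({ error: 'invalid JSON' }));
    }
    // vanilla business logic: validate, then enrich the payload
    if (typeof order.quantity !== 'number' || order.quantity <= 0 ||
        typeof order.unitPrice !== 'number') {
      res.writeHead(422, { 'Content-Type': 'application/json' });
      return res.end(JSON.stringify({ error: 'quantity and unitPrice must be numbers' }));
    }
    const response = Object.assign({}, order, {
      total: order.quantity * order.unitPrice
    });
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(response));
  });
}).listen(3000);
```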

Add AWS Lambda on top of that, and the value is even more compelling. I just finished a project for a key customer-facing API. Not a glitch. The Lambda makes 6 queries to DynamoDB (including fetching 100+ records), with quite a bit of business logic to prepare the response, and the median response time is 161 ms (including the network round trip). The DynamoDB data set is over 20M records.
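For readers who have not used the combination: a Lambda of the kind described might look roughly like the sketch below, using the aws-sdk DocumentClient. The table name, key names, and event shape are all invented here.

```js
// Hypothetical Lambda handler: query DynamoDB, then shape a JSON response.
const AWS = require('aws-sdk');
const db = new AWS.DynamoDB.DocumentClient();

exports.handler = (event, context, callback) => {
  db.query({
    TableName: 'customer-orders',               // invented table name
    KeyConditionExpression: 'customerId = :id',
    ExpressionAttributeValues: { ':id': event.customerId },
    Limit: 100                                  // fetch up to 100 records
  }, (err, data) => {
    if (err) return callback(err);
    // business logic to prepare the response would go here
    callback(null, { count: data.Count, orders: data.Items });
  });
};
```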

Please bear in mind that most IT organizations (~90%) operate under the 10 TPS threshold, even in the top F50.

When you combine all these advantages, I don’t see why most IT organizations would adopt anything else. I am happy to evaluate something better, but it would be hard to match this kind of value.

Could you provide a source for this amazing statistic :slight_smile:, especially for the F50 :)?

  • Because people have needs beyond what node.js can offer, from soft real-time to concurrency to HA and distribution
  • Because running anything critical on AWS is a very questionable idea in the first place :slight_smile:
  • Because setting up Amazon API Gateway for anything beyond the very trivial is a time-consuming nightmare
  • Because the pricing of Lambda is n times higher than for normal instances, which are already overpriced

1 Like

Those are personal stats … :-), but they are real. We might push to 20 TPS by the end of the decade, but you’d be surprised once you step off the unicorn paths. Traffic does not always equate to revenue, and vice versa.

You’d also be surprised at what’s going on with AWS and IT right now, worldwide. I am assuming the same is true of Azure.

Setting up API Gateway + Lambda is trivial when you use claudia.js, literally trivial.
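To give a sense of how little is involved, here is a sketch using claudia’s companion claudia-api-builder package; the route and message are invented. A single `claudia create` command then provisions both the Lambda and the API Gateway endpoint.

```js
// api.js: one route, deployable with
//   claudia create --region us-east-1 --api-module api
const ApiBuilder = require('claudia-api-builder');
const api = new ApiBuilder();

// claudia-api-builder serializes the returned value as a JSON response
api.get('/hello', () => ({ message: 'hello from Lambda' }));

module.exports = api;
```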

Citing Adrian Cockcroft, Lambda is 99% cheaper than EC2 (and I’ll second that). For the record: $0.20 per million requests (at 100 ms per request).
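As a rough back-of-the-envelope, using the public 2016 price points ($0.20 per million requests, $0.00001667 per GB-second) and assuming a 128 MB function running 100 ms per invocation, even a sustained load at the 10 TPS figure discussed above comes out to roughly $10 a month, before the free tier:

```js
// Rough Lambda cost at a sustained 10 TPS (2016 list prices, free tier ignored)
const tps = 10;
const requests = tps * 60 * 60 * 24 * 30;            // ~25.9M requests/month
const requestCost = (requests / 1e6) * 0.20;         // ~$5.18
const gbSeconds = requests * 0.1 * (128 / 1024);     // ~324,000 GB-seconds
const computeCost = gbSeconds * 0.00001667;          // ~$5.40
console.log('$' + (requestCost + computeCost).toFixed(2) + '/month'); // ~$10.58
```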

I am not here to sell Node; I just provide feedback based on where I see my clients heading. I work mostly with G200 companies. Node.js/Angular 2/AWS is a true game changer for them in terms of level of effort (LoE); coming from JEE/JBoss and then Spring (Boot), this is the first time I have seen a stack have such a dramatic impact on cost. At the same time, this is the first time I have seen such broad excitement in the IT dev community.

That being said, I am not disputing that some companies, such as eBay and Netflix, have widely different problems and need different solutions.

Of the G200 we have only worked with Ford and JPM; all our other clients are way smaller, and we have never had a project with less than a few hundred TPS. More commonly it’s in the 1000s, and that’s still very far from the core apps they run. As fun as black boxes are, Lambda at 128 MB is priced at about a 30% premium over an equivalent EC2 load, with about 1/3 the RAM. So if your sustained load is smaller than a few small instances, Lambda is OK value; otherwise it’s horrible value. On AWS as a whole: a lot of people doing something very risky does not make it a smart move. The decision algorithm for whether you should run critical apps on AWS is fairly simple: ask yourself, “Is my company called Netflix?” If the answer is NO, then you should not run anything critical on AWS :).

It looks like our domains do not intersect; that’s fine. It is clear that if you have high constant volumes, Lambda would not be a good fit, but again, who has constant high volumes of requests? Do you really think the largest insurance company or health-care payer in the country has to deal with more than 10 TPS?

Are you kidding? A much smaller insurance company like Auto-Owners runs multiple high-end mainframes at a very decent utilization rate. The workload for something like AIG is a few datacenters.

Again, we’ll have to agree to disagree. We simply may not work with the same clients/projects. I am envious.

All the places I’ve worked have had significantly more than 10 TPS, even during slow periods. I’ve never seen that either.

1 Like

Again, I am envious; each time I reach the sizing phase, my projects stay below this magic 10 TPS number :frowning:

Heh, even the last forum I ran was well over 10 TPS. The last personal database I made (a statistical recording thing) handled almost a thousand TPS. I am not really sure where this 10 TPS number comes from. :slight_smile:

Do not hesitate to send your < 10 TPS projects my way; I’ll take them…

The only ones I have are on my personal computer; anything deployed gets much bigger than 10 TPS very quickly. ^.^

Time to learn Elixir then.

Does 20 TPS mean 20,000 transactions per second? That’s not self-evident to everyone in a group like this; at least not to me.

1 Like