Some short info about an emerging market where elixir based products could shine:
A challenger appears! - https://gomix.com
(checking out the Forrester report - more later)
The return of Dreamweaver and other WYSIWYG tools. Nothing really new here. I am not sure what Elixir brings here.
It is possible to write these low-code environments with Elixir/Phoenix on the backend. In such an environment you use not only WYSIWYG form editors, but also rules and workflow editors. See e.g. Do we need agile software development?. Such an environment supports agility. For some reasons see the Forrester report (link in my first mail). The combination of WYSIWYG editing and app delivery is not so old, as far as I know. BPMN 2.0 and DMN 1.1 are relatively new as well.
Oh, I know. From my POV of having had to deal with custom HMIs for automata and industrial stuff, I know that type of thing well. They are not really recent. And I would not say they emphasize agility. But I think we will have to agree to disagree.
I do not see a reasoned rejection, Diana. Say, for example, you change a rule in a workflow. Normally you have to change some code for that and deliver it. With a low-code environment like the one you could have seen in the picture, you change the rule in the DMN editor (no coder needed; moreover, DMN 1.1, as an industry standard, is portable to other interpreters), save the XML, and voilà, it works instantly in production. I clearly see an enhancement in agility.
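To make the idea concrete, here is a toy sketch (element names and the `evaluate` helper are invented for illustration; real DMN 1.1 decision tables are far richer): the rule lives in an XML document that an interpreter reads at runtime, so editing and saving the document changes behavior without redeploying any code.

```python
import xml.etree.ElementTree as ET

# Invented, much-simplified "rule" document; real DMN 1.1 XML is far richer.
RULE_XML = """
<rule name="discount">
  <when field="order_total" op="gte" value="100"/>
  <then result="0.10"/>
</rule>
"""

# The comparison operators the interpreter understands.
OPS = {"gte": lambda a, b: a >= b, "lt": lambda a, b: a < b}

def evaluate(rule_xml, facts):
    """Interpret the rule document at runtime; no code is generated."""
    rule = ET.fromstring(rule_xml)
    when = rule.find("when")
    test = OPS[when.get("op")]
    if test(facts[when.get("field")], float(when.get("value"))):
        return float(rule.find("then").get("result"))
    return 0.0

print(evaluate(RULE_XML, {"order_total": 120.0}))  # 0.1
print(evaluate(RULE_XML, {"order_total": 50.0}))   # 0.0
```

Change the threshold in `RULE_XML` from 100 to 200, save, and the very next evaluation uses the new rule; that is the agility claim in miniature.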
We’re all humans, we have the privilege of being illogical and unreasonable when we want to
I haven’t read the article yet myself, but part of my day job is developing ETL flows in a “low code” manner… We design data flows graphically using drag-and-drop components that are then connected in useful ways. So I do agree it’s often a nice way to work.
The limitations, however, seem to reveal themselves when projects and teams grow, as at that point you often end up wanting to see the exact changes introduced by a release, or for that matter sometimes merge conflicting changes.
In our original, application-native toolchain, this was a very tedious task that required exporting a job folder from prod, exporting your work on the same folder in dev, and diffing the two XML files. It was rarely done; more often you’d just have someone sit with you and review the changes graphically, leaving them no choice but to trust that you went through all the changes with them.
We’ve ended up moving to a Git repository, and I’ve added some support for more human-friendly diffing by showing changes in a YAML representation of the XML data. Still, I would much rather have eg. a DSL with round-trip support, so if I want to I can code using the DSL, see diffs in terms of the DSL, AND collaborate with someone who is using only the graphical tools.
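For the curious, the kind of human-friendly diffing I mean can be sketched roughly like this (a minimal toy, not our actual tooling; the element names and the `to_yamlish` helper are made up): flatten the XML into indented, line-oriented text so that line-based tools like `git diff` show meaningful per-line changes.

```python
import xml.etree.ElementTree as ET

def to_yamlish(elem, indent=0):
    """Render an XML element as indented, line-oriented text so that
    line-based tools like `git diff` show meaningful changes."""
    pad = "  " * indent
    lines = [f"{pad}{elem.tag}:"]
    for key in sorted(elem.attrib):           # stable attribute order for stable diffs
        lines.append(f"{pad}  {key}: {elem.attrib[key]}")
    if elem.text and elem.text.strip():
        lines.append(f"{pad}  _text: {elem.text.strip()}")
    for child in elem:
        lines.extend(to_yamlish(child, indent + 1))
    return lines

# Invented example job; real exports are of course far bigger.
job = '<job name="load_sales"><step type="extract" source="db"/><step type="load" target="dw"/></job>'
print("\n".join(to_yamlish(ET.fromstring(job))))
```

Sorting attributes matters: two exports that only differ in attribute order would otherwise produce a noisy diff.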
Of course, YMMV… I’m sure there are low-code systems with better development workflow support.
Thanks for pointing me at that. I’m using an opensource bpmn2.0 modeler (see https://github.com/bpmn-io/bpmn-js-examples), there seems to be a diff solution also, which you can see running here: http://demo.bpmn.io/diff
So if you want a rejection, my rejection is built around my experience and personal thinking about all these nice GUIs. They have a couple of problems for me that are deeply linked to the general problem of CS and programming.
- By definition, it limits what you can do. That is not a big problem in itself, because limitations are most of the time quite great. But it limits your workflow to only what the original designer took into account, which runs against the core problem of programming: we have the biggest known field of problems and interacting systems in the history of engineering.
- It generates code. That does not seem like a big problem, until the day you have to debug. Debugging is what takes most of a programmer's time, for the same reason stated above. By generating that code, you are hampering your debugging, mainly because humans do not think like your generator at all. This is the same argument that exists against unikernels and a lot of other things.
- It limits refactoring. As @jwarlander posted right above, when your need for refactoring grows, you run even harder into the previous problem of having that limited toolset. We use a tool like that for ETL at work too, and most of the refactoring time is spent trying to find a way to make it work with the tool.
In general, when you combine all the points stated before, you discover that the biggest problem is that you are trying to shove something that is continuously changing and has millions of degrees of freedom into something that is limited.
I would add that the lifetime of that type of standard is quite limited (30 years for the longest-lived one), while software is far harder to kill. So we will have to modify, debug, and deal with those artifacts long after the “generators” are dead.
Finally, I am not saying that they are a complete loss. If you have a small project, need to get up to speed fast, and expect it to be a throwaway that will never be touched again, go for it.
But I spend all my days at work, every day, having to update stuff that was built with that type of tool 5 to 15 years ago, or debugging CMS stuff. And I can tell you: it may have been great at the time, but you pay in terms of maintenance cost. A lot.
PS: another problem, though more a human one: it pushes tools into the hands of people who have no idea what they are doing. While that works out fine for cars, the millions of degrees of freedom of a CS system mean it has not worked out so far.
That’s vague to me. But if you don’t care to make it clear that’s ok.
Sorry, that first post was a mistake; I edited it to post my full thought.
To follow up, I advise reading that rant from Dijkstra. I do not agree with everything, but the part about novelty and the big problems we encounter resonates with me.
I still remember my old days with NetBeans (drag-and-drop Swing), Dreamweaver, and some commercial PHP code generators. Code generators are nice at the beginning, but maintaining the result is like going through hell after hell.
I’ll think a bit about your reaction and the contents of the Dijkstra (love his writings) link. In the meantime a quick reply on point 2:
It does not generate code. Maybe some tools do, but what I wrote, and many others, do not. It interprets the models at runtime. The pros contrasted with code generation: https://www.mendix.com/blog/the-power-of-mendix/.
On point 3: you can refactor a model as well. Change the model, save, and again it works immediately in production. This can be done much quicker than refactoring the same functionality in code.
On the PS: you can use authorization. Only users who know what they are doing may edit.
It’s not about generators. I agree those are not the way to go.
Except it is generators. Otherwise, all you have is a mind map or some diagrams. Which are super useful, mind you.
I do not understand your reply. My code reads the XMLs and interprets them at runtime. No code is generated.
- You generate the XML.
- Interpreting at runtime does not mean there is nothing running. It is even worse in light of debugging: any stack trace would be full of impossible-to-understand stuff. And in the end your runtime will generate some code to run depending on your XML.
@DianaOlympos I do believe that @StefanHoutzager's approach is the right one. If the XML is a 1:1 representation of the graphical information that a user entered, this cannot really be considered a compilation step. When interpreting it, some code is indeed run depending on the contents of the XML documents. But the difference between generating (compiling) and interpreting here is that with a buggy compiler, the resulting program will always contain the bug, while with a buggy interpreter, the program will only exhibit the bug until the interpreter is updated.
Why it would be harder to show useful stack-trace information for an interpreter than for a compiler is something I do not understand: as there is no code transformation going on when interpreting, all information available in the original source can be used to show where stuff went wrong.
I wonder how programs like Game Maker, Scirra Construct, Clickteam Fusion, and RPG Maker fit into this comparison. While focused solely on creating (usually two-dimensional) games, these programs have been time-tested and seem to be very extensible. They are not only able to create projects of reduced scope; for instance, Undertale was created using Game Maker.
I believe most of these programs convert the user-specified graphical program flow into some kind of intermediate representation which is then ‘compiled’ in one of possibly several back-ends.
The XML, yes, of course, but that is not code, that is configuration. If you get your interpreter code correct, you have less debugging work than writing extra code each time you add functionality. Generic code is of course harder to debug, and harder to write. My XML interpreters definitely do not generate any code. While the code is running, different execution paths may be followed, but that is not the same as code generation.
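A minimal sketch of the distinction I mean (all names invented for illustration, not my actual system): the interpreter is ordinary hand-written, debuggable code with a fixed dispatch table; the XML merely selects which execution path runs, and nothing is ever generated.

```python
import xml.etree.ElementTree as ET

# Invented workflow document; in reality the XML comes from a graphical editor.
FLOW_XML = '<flow><step action="validate"/><step action="notify"/></flow>'

# The interpreter's fixed vocabulary: hand-written functions the XML can
# refer to, but never extend or rewrite.
ACTIONS = {
    "validate": lambda log: log.append("validated"),
    "notify":   lambda log: log.append("notified"),
}

def run(flow_xml):
    """Walk the steps and dispatch; no code is generated at any point."""
    log = []
    for step in ET.fromstring(flow_xml).iter("step"):
        ACTIONS[step.get("action")](log)
    return log

print(run(FLOW_XML))  # ['validated', 'notified']
```

Note the consequence for debugging: a stack trace here points into `run` and the `ACTIONS` functions, code a human wrote, rather than into generated output.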