IMO the problem with low code is that the hard part of solving most problems usually isn't how you write or execute the solution.
The hard part is avoiding weird edge-case logic errors, race conditions, all the things good programmers avoid by having an intuition for complex systems with state spread around. This is what you pay software people for, not for the ability to crank out text in languages that look cryptic to commoners.
Now I do think that such low code environments can provide benefits in specific situations (e.g. when you use it as a highly flexible "config file" within the framework of another system that catches all those forbidden states).
Low code is using post-it notes to run your project. What "low-code" means is "someone else's code" and we already have tons of that, in the form of open source libraries and frameworks that avoid implementing tons of logic that is more or less idiomatic.
This also includes SaaS solutions, where I am not running any code at all. You are, and I'm paying you for it. All I get is a palette of decisions I can make to alter how the product runs. Just like a YAML-based solution.
It's more snake oil, of an exact vintage that we already had in the late 1980's. And if you don't know what we called it then, it's because it was such a titanic flop that only computer history buffs know. An evolutionary dead end.
Being able to model a system graphically and run that graphical model is invaluable when working with non-programmer domain experts. And as often happens when drawing pictures of things, we may see new ways to organize processes that are big improvements over the formerly-obvious way they were built.
Another great use is allowing low-tech users to build their own low-code models. With a small bit of education, they can solve their own problems. And as some problems are short-term situations, that low-code solution may suffice until the problem no longer exists. If the problem needs a long-term professional solution, a developer can take the working low-code model and reimplement it in code.
“Low code modules” is why everyone thinks Jira is hot garbage.
It gets set up by a manager who makes a bunch of choices that sound good but are wrong. There was just a post on Reddit the other day claiming that if a developer sets up Jira, it’s a nice tool.
It even has pretty pictures you can use for lifecycle definitions. That doesn’t help. We have spent years considering the consequences of decisions that sound nice but aren’t. Even if we only write five lines of code, that’s what you’re paying for. “Low-code” is “without programmers” and good luck with that.
It’s amazing how long you can unproductively stare at a piece of config that results in nothing happening when something was supposed to happen. Declarative code can be hell on beginners. It depends so much more than imperative or functional code on how clear the documentation is. And that was never our strong suit.
The other problem with low code is that the visual abstraction can often make it harder to grok. I've had the unfortunate experience of spending hours trying to click through different nodes of a moderately complex Salesforce flow to piece together WTH it's doing.
It's also impossible to diff two revisions of a flow.
If the same solution were written in code using high level libraries it would be a few hundred lines.
I wish these low-code solutions would build on top of a traditional text-based language and allow you to flip back and forth.
I’ve spent a good deal of time looking at code with other people, and one of the things I have seen first hand is how blind we are for a species with front facing eyeballs.
I spent a good chunk of my childhood doing word searches and “find seven differences” puzzles. And a good chunk of college playing text based games with a friend who would run around so fast that the screen was a blur. Somehow he could still read it, and I eventually learned to barely keep up with him. I use that skill today every time I’m tailing logs.
He wasn’t “unusual”. I was unusual; he was a genuine freak of nature. Almost nobody spots bugs as fast as I do, and getting them to see a line I’ve described verbally is a chore (line numbers are always on when I pair now), and I am far from infallible.
So when people tell me we are going to have non technical people look at a picture and determine what’s wrong with it, or two pictures and tell me the difference, well I am extremely skeptical. Every time graphical programming comes up, I assert that visual diff tools are the main thing blocking them from being feasible, and that assertion is generally well received.
Today there are a couple companies who might eventually get there, but they are focused on browser testing. I think that means commodification will have to happen first, as there’s not much money in dev tools.
I tried to do that with Salesforce flows, back when I did work in Salesforce. I started off with Scheme macros, got as far as variables and constants but it quickly got too complicated.
Right before losing interest I was exploring using BabelJS to parse JavaScript into ASTs, then generate Flow XML from an AST. I think this approach would have more legs.
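That pipeline can be sketched with stdlib tooling. Here's a minimal analogue in Python (using its `ast` module in place of BabelJS, and with made-up element names standing in for the real Salesforce Flow XML schema):

```python
# Sketch of the "parse a high-level language, emit flow XML" idea.
# The comment above used BabelJS on JavaScript; this analogue uses
# Python's stdlib ast module, and the element names are hypothetical
# stand-ins, not the real Flow XML schema.
import ast
import xml.etree.ElementTree as ET

def source_to_flow_xml(source: str) -> str:
    tree = ast.parse(source)
    flow = ET.Element("flow")
    for node in tree.body:
        if isinstance(node, ast.Assign) and isinstance(node.targets[0], ast.Name):
            # Map simple assignments to <assignment> nodes
            el = ET.SubElement(flow, "assignment")
            el.set("name", node.targets[0].id)
            el.set("value", ast.unparse(node.value))
        elif isinstance(node, ast.If):
            # Map if-statements to <decision> nodes
            el = ET.SubElement(flow, "decision")
            el.set("condition", ast.unparse(node.test))
    return ET.tostring(flow, encoding="unicode")

print(source_to_flow_xml("x = 1\nif x > 0:\n    y = 2"))
```

A real implementation would of course need to recurse into bodies and handle far more node types; this only shows the shape of the AST-to-XML translation.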
I hope the next evolution in software development includes caring about what your stack traces look like. Pattern based development is especially noisy that way. By the time you find a value going wrong you’ve forgotten why you needed to know that.
Lately I've been seeing "A Philosophy of Software Design" by John Ousterhout making waves with more experienced developers, and I definitely think it's going in the right direction of having richer abstractions and focus on DX rather than forcing the atomization of source code.
The term low code is horrible. It's overused, misused and probably created by a marketing team to sell ads.
There are a wide variety of platforms. Some are very open and honest about their core use cases (forms, CRUD apps, automations). I work for an open-source platform, Budibase, and they're pretty open about their use cases.
Others promise the world (SaaS apps, social networks) which are outside of their capabilities and probably not a great investment imo.
Spreadsheets are still dominant, but they're also the reason why many users turn to a low code platform.
It was 2009, I think, when I found myself implementing a rudimentary Lisp reader in PHP, because while the client had proved fully to understand the concept of list syntax, the client also had proved wholly and with basaltic resolve unwilling to be parted from beloved Excel. Exporting to CSV for upload and processing was just about thinkable, and the only meaningful concession I managed to extract.
(The use of Lisp lists to represent tree-structured data wasn't a concession. See, you can generate those with Excel formulas...)
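A reader like that really is rudimentary. Here's a sketch of the idea in Python (the original was PHP, and this handles only bare symbols and nesting, nothing more):

```python
# A minimal s-expression reader of the sort described above, sketched
# in Python rather than the original PHP. It turns text like
# "(a (b c) d)" - the kind of string an Excel formula can concatenate -
# into nested Python lists.
def read_sexpr(text: str):
    # Pad parens with spaces so split() tokenizes them separately
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()

    def parse(pos: int):
        if tokens[pos] == "(":
            items, pos = [], pos + 1
            while tokens[pos] != ")":
                item, pos = parse(pos)
                items.append(item)
            return items, pos + 1  # skip the ")"
        return tokens[pos], pos + 1  # a bare atom

    result, _ = parse(0)
    return result

print(read_sexpr("(invoice (line qty 2) (line qty 5))"))
# -> ['invoice', ['line', 'qty', '2'], ['line', 'qty', '5']]
```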
I’ve avoided hearing war stories about modernizing excel based companies for ten years. Academically I knew it must still be happening, but emotionally I’d convinced myself everything was daisies and the world had moved on.
Next it’ll be extracting applications from notebooks, I suspect.
I mean, that story's about to be 15 years old, but I take your point.
When I'm asked to prototype something that seems to pose a real danger of the next step being "okay let's deploy this", I do it in Emacs with org-babel. That's more than enough of a programming environment to validate an idea, and like a notebook each code block's results become part of the document, so it's very observable - but thanks in no small part to Emacs's reputation as alien space magic for weirdos, nobody's ever been mad enough to consider trying to jam it in production as-is.
> Low-code enables domain experts to become citizen developers.
I have yet to see a single "citizen developer" successfully building stuff using low-code/no-code solutions.*
> At the same time, low-code platforms should also strive to make pro-developers (professionals with an education or career in software development) more productive.
This is mostly what I see.
Occasionally, I see real developers sticking with low-code/no-code solutions far beyond what they should ever be used for, to the detriment of the product.
What I do see a lot of is people with a long history in the business aspect being brought in as program managers, QA testers, UI designers, etc. They are generally very successful in those areas.
*VB6 and Visual FoxPro I have seen business people building production apps with. You may not consider those to be "real" low-code/no-code solutions, but... I've seen business people teaching themselves those to build their own stuff.
> I have yet to see a single "citizen developer" successfully building stuff using low-code/no-code solutions.*
I've seen plenty, in billing reconciliation, marketing, fraud detection, etc. in major telcos. I've also trained non-programmers to do this, with success (even adding enough Python to create custom nodes for scenarios where no prebuilt block is suitable).
Also, plenty (millions?) of citizen developers have built complex "applications" in Excel.
I concur; people can go a long way with IFTTT, Tableau, Blockly, and services like Octoparse.
For the more creative, there's Cycling '74 Max and Flash back in the day, and I'm sure there are many more low code tools out there that have been stretched beyond their original domain.
Not sure if this counts, but many successful games have been made with GameMaker-style tools, which I think would be classified as "low-code". E.g. Undertale.
Excel sheets and now Airtable bases are another way to build apps I see pretty often. The problem is when these things outgrow their original platforms there's a huge step in effort up from "simplify the formula/query" to "ingest all the data into a database then write a view on top". My own wish for low-code tools is to cross this chasm.
Productionisation of these things can be a chore, but it's always doable. And you have the benefit of a working model to base your "real" solution on.
I don't think this chasm will ever really be bridged without real developers.
However, a lot of problems are ephemeral, and the short-term low-code solution is adequate until the problem no longer exists; or it's small enough that the low-code solution can meet the need indefinitely.
When Oracle bought Sun Microsystems they got OpenOffice people, and they could have tried to build such a thing on top of the light version of Oracle. That’s when I learned the complaints about Larry were right. Larry only cares about Larry, money, and the Church of the Database. Fixing spreadsheets is a little too egalitarian.
Exactly the problem with low code tools. As soon as you have a small deviation from the framework (be it an issue or just a wonky requirement) you are stuck.
And we face situations like this all the time as developers building full-code solutions. We get stuck because the PoC which went into production can't scale, and we buckle down and build an appropriate V2.
You're essentially arguing that MVPs are a bad idea, but evidence suggests the contrary.
I'd certainly suggest building an MVP with a low-code platform isn't a good use of resources unless your actual devs are all flat out doing other stuff and there are spare resources on the product team capable of putting something together with such a tool.
V2 is often a surprise to management and many of us have a lot of scar tissue originating from these sagas. We are not eager for more repeats. Sunk cost fallacy begets cheapness, not frugality.
I would say “code generator” here, but there's only been one domain where I've seen respectable code being generated, and that's CSS. Before SCSS and Less, I'd never met one that didn't horrify or even terrify me. Haven't met many since, either.
But in theory a code generator could be evacuated to version control and forked. Of course you have no commit history, which impoverishes the value of the code, but you have something.
Just looking at code that QA or Ops people write makes me unbearably nervous. Look at all of these load bearing posters everywhere.
And those people have been thinking about software their entire career. Domain experts being embedded in Agile teams is not so they can keep an eye on us, it’s so we can keep an eye on each other.
I've long been interested in low-code, because it's one of those things where the benefits seem obvious, but become less obvious the more you think about it.
In some cases the benefits are plain to see. For instance, if you needed to represent a squiggly line in SVG, the natural movement of a user's hand guiding a digital pencil is going to be far superior to writing out each x/y position in a vector path.
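To make that concrete, even a tiny freehand squiggle turns into a wall of coordinates in SVG path syntax that nobody would want to type by hand (the coordinates below are invented for illustration):

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 40">
  <!-- One short "squiggle", hand-written as cubic Bezier commands -->
  <path d="M 5 20 C 15 5, 25 35, 35 20 S 55 5, 65 20 S 85 35, 95 20"
        fill="none" stroke="black"/>
</svg>
```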
But other use cases aren't so clear, and in fact, many tasks could be made far more efficient with a textual interface.
I think the reason people assume that code is time-consuming or complex is that historically it's been used to solve complex problems that non-technical users don't understand. In other words, they don't understand the problem domain. My theory is that if they understood the problem domain, they could understand any formal syntax dedicated to that domain.
As an example, the following pseudo-code:
    begin dishwashing:
        gather(plates)
        sink.load(plates)
        each plate(rinse)
        each plate(dishwasher.load)
        dishwasher.on()
    end dishwashing
Everyone understands the problem domain (how to wash dishes), so even though someone may be unfamiliar with historical programming conventions and syntax, they should be able to grasp what the code does.
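The pseudo-code above translates almost line-for-line into a runnable program. Here's a sketch in Python; all the names (Sink, Dishwasher, rinse) are invented for illustration, and the point is only that the structure mirrors how a non-programmer already thinks about the chore:

```python
# The dishwashing pseudo-code, translated nearly line-for-line into
# runnable Python. Everything here is a toy model for illustration.
def rinse(plate: str) -> str:
    return f"rinsed {plate}"

class Sink:
    def __init__(self):
        self.contents = []
    def load(self, plates):
        self.contents.extend(plates)

class Dishwasher:
    def __init__(self):
        self.contents = []
        self.running = False
    def load(self, plate):
        self.contents.append(plate)
    def on(self):
        self.running = True

plates = ["plate1", "plate2"]        # gather(plates)
sink = Sink()
sink.load(plates)                    # sink.load(plates)
rinsed = [rinse(p) for p in plates]  # each plate(rinse)
dishwasher = Dishwasher()
for p in rinsed:                     # each plate(dishwasher.load)
    dishwasher.load(p)
dishwasher.on()                      # dishwasher.on()
print(dishwasher.contents, dishwasher.running)
```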
It's something I'm exploring with this language I've been building called Matry: https://matry.design.
The key problem that creates "low" or "no" code products is a pattern that shows up in a lot of technical fields: non-technical people run into the first hurdle and assume it represents the majority of the difficulty. In software, this is source-code syntax. Somebody who's never written a line of code in their life sees source code, can't read it straight away, and assumes "we pay the software developers lots of money because they can read the cryptic runes", then goes on to pitch some "revolutionary solution" that eliminates this very minor hurdle, assuming it will make the whole thing easy.
You see the same thing in discourse about science and math: "if we could just eliminate all the jargon and syntax, anybody could understand a scientific paper". In all cases they're wrong, of course: the syntax and jargon are such negligible hurdles that by the time someone is technically capable at the actual problem, they come as second nature. The non-technical folk just never reach that level of understanding.
Then this solution gets sold to other non-technical people, who, falling into the same trap, integrate it into their workflows.
Interesting to note that scientific-paper "trivialization" amounts to replacing jargon words with their meanings, i.e. opening up abstractions and making the text larger and larger. Every scientific concept or abstraction needs other abstractions to explain it, which themselves need trivializing, because an abstraction is an elliptical, compressed way of saying something else (every abstraction is a tree of abstractions).
With this operation you lose the possibility of even reading the paper, for obvious length reasons. What you lose in complexity is gained in thickness.
The problem is that low/no code doesn't consist of the same operation. The abstractions it proposes are meant to build a bridge between our natural language and programming languages, just as a UI is a bridge between our behavior and the execution of some script or function. In this case we are proposing to trivialize software development not by unpacking the abstractions down to the machine code they express, but by inventing, on top of these abstractions, another kind of abstraction that is understandable to the non-technical person reading the code (high-level languages vs. assembly perform a similar operation).
The difference with the science paper case is that here, complexity of abstractions leads to a better understanding of the text.
The problem is that with low/no code you lose the ability to cross the bridge back (because you can't analyze the abstractions into their atoms), while the scientific folks can.
C is powerful because it lets you stay in the position of someone who understands computers and assembly. Higher-level (interpreted) languages already deal, in a better way, with the problem low code hopes to solve: you don't need to understand the C implementation of language X to use it, and you still have a lot of power and freedom to develop software, eliminating the overhead of dealing with memory, etc.
I fail to see low code
as anything other than a subset of some interpreted language X that lets you write in plain English (but the complexity of software doesn't lie in its expression; it lies in its conceptualization),
OR
as anything other than a GUI that lets you write software (or a UI) by dragging and dropping things on the screen, which is the worst option I can imagine, because again you'd have the same problem of having to technify yourself and read documentation to understand what is, in fact, a weird, suboptimal programming language.
Tangentially I want to mention that rinsing dishes before running the dishwasher is a known anti-pattern. The first washing cycle is exactly that - a rinsing cycle. I know this is off-topic but saving water is always a good idea.
In my experience visual tools like RoseRT can quickly turn into an obstacle. I think it's super important to not only know what the non-code solutions may be good at but also keep the solution limited or at least be prepared to go back to code if the problem domain is complex enough.
I'm always rubbed the wrong way by folks showing up with a "well, actually" over this point about dishwashers because this advice only holds in general if you're using a recent model, which not everyone can afford -- in fact, probably most people can't, so it becomes a question of accessibility. True, it would reduce your costs over the long run to use something more efficient -- but one of the insidious realities of being poor is that you can't afford to think long-term. If it's between paying my rent this month, and maybe saving an extra $15/month over the long term, sorry, but I'm going to make sure there's a roof over my head.
So if you're like me and have always lived in rentals with older model dishwashers for much of your life, and find yourself going crazy reading advice like this all over the internet despite making sure the filter trap is clean and taking full advantage of dishwasher pre-rinse cycles and even adjusting the temperature on your water heater and experimenting with different detergents, don't feel too bad -- dishwashers become less efficient over time for the most part, and hand-rinsing beforehand becomes a necessity. So go ahead and hand-rinse beforehand if you need to. It's not an anti-pattern unless you have a newer model.
Even with a fairly recent model I’ve occasionally had food debris from un-pre-rinsed dishes redeposited (and firmly so, with a heated dry cycle) on other dishes.
I've been washing dishes by hand my whole life and just recently got a dishwasher. When washing by hand pre-rinsing and soaking is essential so I kept doing that even after getting the machine, only recently discovering that I am wasting water this way.
Soaking is definitely necessary for many sorts of baked-on food remnants. But rinsing is mostly unnecessary unless you're expecting to leave the dishes in the machine long enough before running it that food particles dry out and get stuck. In that case, just run a quick rinse cycle; it'll almost certainly use less water than doing it by hand.
> Tangentially I want to mention that rinsing dishes before running the dishwasher is a known anti-pattern. The first washing cycle is exactly that - a rinsing cycle.
I haven't been fortunate enough to use a dishwasher that can reliably remove all particles from a dish. And I hate getting a "clean" dish out which still has some small bits of food stuck - now especially well stuck having gone through the heat drying cycle.
Some dishes need a human rinse/scrub before going into the dishwasher. The dishwasher is really more of a bacteria reducer.
Not according to the condescending maintenance guy who was replacing my broken dishwasher at the apartment I lived at years ago. You should have been there for THAT lecture.
    begin dishwashing:
        gather(plates)
        sink.load(plates)
        each plate(rinse)
        each plate(dishwasher.load)
        dishwasher.on()
    end dishwashing
I guess this is the problem with "low code". An actual dishwasher program would maybe look something like:
    /* Setup some arcane registers and timers */
    ...
    init_program (&setup); while (state == WASHING);
    // enable_pump (78); while (state == PUMPING); // 62 too fast - Fred 2007
    // enable_pump (81); while (state == PUMPING); // For Peru market - Kim 2011
    enable_pump (1); enable_pump (1); enable_pump (1); // Relay maybe stuck tickle it
    enable_pump (11); while (state == PUMPING); // ???
    enable_pump (99); while (state == PUMPING);
    ...
But obviously without the comments. Sooner or later many programs need complex state machines, and some order at the language level to keep our sanity.
Low code or no code is imho not the holy grail for solving business IT problems. Low-code and MDA-like tools have been promoted since the 1980s under several names, e.g.:
Low code tooling
No code tooling
Model Driven Engineering (MDE)
Model Driven Design (MDD)
4GL tools
Low code tools are not strong on versioning or on dealing with multiple parallel development tracks and teams simultaneously. Most tools are in fact based on a big-design-upfront paradigm, like an overall data model. Versioning of the models, metadata, and data of all created and generated software assets is poorly supported, if supported at all. Common practices seen in mature agile software development using microservice paradigms and advanced distributed version control systems are often lacking in the new low-code MDA family of tools. In large companies it is not uncommon to encounter models with hundreds (or even thousands) of entities/classes. Models of that size not only don't help with developing software faster and more cost-efficiently but even have an adverse effect.
Low code tools have a strong focus on initial productivity gain. But a continuous fast changing business context with changing requirements requires an approach and toolset that is suited for giving long term productivity benefits.
> Low code tools have a strong focus on initial productivity gain. But a continuous fast changing business context with changing requirements requires an approach and toolset that is suited for giving long term productivity benefits.
The best "low code" tools absolutely meet this requirement and best of all, don't _feel_ like programming to the users. Spreadsheets are the most obvious example.
I have been doing a lot of work with AWS StepFunctions lately, everything from replacing cron jobs to implementing HTMX backends. I think StepFunctions is an interesting case study (I'm not really familiar with other offerings, other than spending some time with the block-based programming like Scratch and MakeCode).
To build solutions I have to use the Amazon States Language, which is a learning curve and, being as charitable as I can, a royal pain in the ass. Ultimately I end up with a JSON file that is a "giant, flexible config file" for their runtime.
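For readers who haven't seen it, a minimal Amazon States Language document looks roughly like this (the state names and Lambda ARN below are invented for illustration):

```json
{
  "Comment": "A minimal two-state machine: call a Lambda, then finish",
  "StartAt": "FetchData",
  "States": {
    "FetchData": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:fetch",
      "Next": "Done"
    },
    "Done": {
      "Type": "Succeed"
    }
  }
}
```

Real workflows layer on choice states, retries, catches, and parallel branches, which is where the learning curve lives.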
On the plus side, solutions using it are very nearly zero maintenance. No runtime updates, no package updates, no manual scaling, etc.
Another plus, it's zero cost when not in use. No VMs I have to pay for hourly or monthly.
The downsides (for me) are obvious: it's difficult to learn; it's very restrictive, and solutions often end up needing some aspect of more flexible services like Lambda or Fargate (containers), which ends up adding cost and maintenance; and it's proprietary, so there is nothing I can reuse elsewhere (no other company supports ASL as far as I know).
Overall, though, I love it. Why? I despise having to choose between unpatched systems and the drudgery of constant patching. With StepFunctions I don't have to choose.
I was so stoked to build stuff this way; I had the same sentiments about the learning curve, but overall it seemed like an amazing tool and pretty fun.
The problem is that the people around me thought it was too difficult, and couldn't see long term. So we implemented a low-code solution, and now everything is in there and it's a mess/nightmare. I hate my work now, and everything we build is tightly coupled to this spaghetti platform that will inevitably decide to raise its prices on us, and we will have no recourse.
Job hunting has been tough too, because very few places have done this, so they ask "what have you been working on?" and I'm basically setting record times for ending interviews if I tell the truth.
I'm currently using a low-code tool for prototyping an app [1] together with a team that has no professional software engineers (although there are researchers capable of programming in Python/R).
I found the tool super easy to use and we managed to have a functional app in one afternoon, but when they tried to use it themselves it wasn't as intuitive – the tool works best if you already possess some intuition for the relational model, CRUD patterns, etc. Without that intuition, the other members had a hard time understanding how to structure the data for the app, and started trying to "hack" it to make the app work the way they expected instead of working with the abstractions available.
Is there an open source Low-Code system that works?
Where I work we currently have 2 low code tools for automating internal business processes.
Now a new executive has appeared and his answer to two of these systems is to get a third. This time it will be MS PowerApps.
It seems that low-code is all high cost. The licencing costs for these platforms equal 1-2 developers per year for our organisation of roughly 800 people.
No doubt making web applications that work with SAML integration is harder than it could be, but the low-code tools' lack of versioning and poor support for automated testing make them a hard sell, especially given the licencing costs.
Even a good web-based GUI for drawing UIs would be useful. But cost really is a factor.
Visual Basic was a low code tool that did enable non-coders to create applications that were useful. It appears that the current crop of low-code tools are not near VB's ease of use or utility.
You should try windmill.dev; semi-technical folks can use the web editor and the built-in versioning, while more senior folks can keep their IDE and version directly from GitHub/GitLab.
Budibase is pretty good and helps you build simple web based solutions that can integrate with a wide range of data sources. Definitely worth checking out.
I remember, after coding in C++ for 5 years and then doing Java, thinking Java was like low code: it was so easy because of the memory management and GC.
I am considering/designing/PoCing implementing a low-code (visual programming like Unreal) system for a specialized ETL (algorithmic) system. Both E and L would be nodes and the target audience mostly uses math in the T.
Great advantage seems to be an ability to schedule jobs based on algorithm needs, as everything is known in the graph. I also think we can give some guarantees like infinite loop prevention by analysing the graph.
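The "analyse the graph" guarantee mentioned above is cheap to get: if the node graph must be acyclic, a topological sort detects any cycle before scheduling. A sketch in Python, assuming the graph is stored as a plain adjacency dict (that representation is my assumption, not part of the original design):

```python
# Cycle detection via Kahn's algorithm: if a topological sort cannot
# consume every node, the graph contains a cycle - exactly the
# "infinite loop" case a low-code ETL editor could reject up front.
from collections import deque

def has_cycle(graph: dict[str, list[str]]) -> bool:
    # Count incoming edges for every node
    indegree = {node: 0 for node in graph}
    for deps in graph.values():
        for dep in deps:
            indegree[dep] += 1
    # Start from nodes with no incoming edges
    queue = deque(n for n, d in indegree.items() if d == 0)
    seen = 0
    while queue:
        node = queue.popleft()
        seen += 1
        for dep in graph[node]:
            indegree[dep] -= 1
            if indegree[dep] == 0:
                queue.append(dep)
    return seen != len(graph)  # nodes left over => cycle

print(has_cycle({"extract": ["transform"], "transform": ["load"], "load": []}))  # False
print(has_cycle({"a": ["b"], "b": ["a"]}))  # True
```

The same pass also yields an execution order for the scheduler, so the validity check and the job schedule come from one traversal.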
An alternative I see is a textual DSL which may be even more work to implement and user discovery becomes harder.
I’d love to discuss this with others; it is hard for me to determine whether this is the way to go with so many opinions in either camp.
ETL is one of the places where no-code/low-code solutions and visual programming systems have been successful despite the amount of negative information out there. They’re a sort of silent success story in the ETL world, since: 1: it’s not glamorous work, so there’s not a lot to brag about; 2: it’s often solved by just paying for some proprietary vendor solution, since they will already have done the hard work of getting the E and L parts done, and plenty of companies have built compelling sales pitches about how simple the T will be with their drag-and-drop design tools; and 3: once the ETL pipeline is done, a few revisions is usually all it gets in most workplaces, and then it runs in the background silently until someone questions the cost of running it, or asks where the data in some system is coming from, or why they get some emailed reports.
The ETL niche is a good fit for the kind of approach you’re describing, so I don’t see any reason not to build at least a prototype to see how the end users like the design … beyond the obvious complexity of getting a visual node based editor and data format up and going (and there’s some open source out there that makes that part not as much work as it would have been 5-10 years ago)
"Low-code is about creating instructions for a computer to execute or interpret. These instructions form a computer program, typically in a domain-specific language (DSL). For instance, low-code is often based on search-based program synthesis, and synthesis usually targets a DSL carefully crafted for the purpose."
If that's the definition (and I agree that it is) then 90% of my code is low code, as that's how much is XML, XSLT, and SQL. In C# .NET 8, another 5% is Linq which is a DSL that I consider low-code.
I have friends who work for a company that sells a million dollar low-code data platform. Business is good.
The problem with low-code is the same as with outsourcing code on UpWork:
Security
You get something that works. But how do you know there is no way to:
1: Alter the code from the outside
2: Access parts of the DB one is not supposed to access
Those can only be avoided by a programmer who knows about all the weird edge cases that might lead to 1 or 2 and how to write a complex system in a way that is structured to make bugs unlikely.
(2) is not a particular issue with low-code, it's just a problem for everyone. I'm in a "high-code" environment and they still managed to fuck up data access controls. You have to bring on people (or give them authority) who understand how to secure your data. Afterwards, high-code or low-code you're going to be ok (maybe not great, but ok).
I wonder if there is potential to combine generative AI with low code approaches, where the non-coder would work with the AI on a higher order description of the problem (data, states, events) and the AI could generate the MVP code. It kind of seems that is the relationship we human coders have at the moment with non-coders. Maybe all those UML diagrams can finally be used as inputs.
Low code has been a thing in the ETL data integration space for a long time. Anecdotally, the most consternation I have experienced lately has been supporting buggy low-code implementations, which seem to be getting worse, not better, over the last 20 years or so.
Sure but if it's just ChatGPT spitting out code that the users don't really understand I can't see it being a workable solution either.
What I'm thinking is that low-code implies something like natural language pseudo code that LLM tech is able to accurately interpret and turn into executable code. Of course the "accurately" part is still something of an issue, but usually with a few rounds of "no that didn't work" or "that's not what I meant" you can likely get what you actually intended.
I feel like any code created by GPT can also be interpreted by GPT. Use a prompt like "Explain like I'm 5". Also DiagramGPT, and generating documentation from code in general.
Perhaps at some point you can screenshot the lowcode and paste the image into GPT for it to interpret, but will they build for that use-case? The former exists today.
As soon as you drop into customized complexity, you should be building new low-code components with actual code.
Which I think is the main reason low-code is popular in the API-to-API SaaS space: actions are limited and well-defined.
It gets set up by a manager who makes a bunch of choices that sound good but are wrong. There was just a post on Reddit the other day claiming that if a developer sets up Jira, it’s a nice tool.
It even has pretty pictures you can use for lifecycle definitions. That doesn’t help. We have spent years considering the consequences of decisions that sound nice but aren’t. Even if we only write five lines of code, that’s what you’re paying for. “Low-code” is “without programmers” and good luck with that.
That works when you're SAP. Probably not for anyone else.
It's amazing how many scenarios this covers.
In the exact same way that IaaS abstracts away things a lot of people don't need to care about.
There are many things that low-code should never be used for.
But there are a lot of things it can be. E.g. IFTTT type stuff
It's also impossible to diff two revisions of one.
If the same solution were written in code using high level libraries it would be a few hundred lines.
I wish these low code solutions would build on top a traditional text based language and allow you to flip back and forth.
I spent a good chunk of my childhood doing word searches and “find seven differences” puzzles. And a good chunk of college playing text based games with a friend who would run around so fast that the screen was a blur. Somehow he could still read it, and I eventually learned to barely keep up with him. I use that skill today every time I’m tailing logs.
He wasn’t “unusual”. I was unusual; he was a genuine freak of nature. Almost nobody spots bugs as fast as I do, and getting them to see a line I’ve described verbally is a chore (line numbers are always on when I pair now), and I am far from infallible.
So when people tell me we are going to have non technical people look at a picture and determine what’s wrong with it, or two pictures and tell me the difference, well I am extremely skeptical. Every time graphical programming comes up, I assert that visual diff tools are the main thing blocking them from being feasible, and that assertion is generally well received.
Today there are a couple companies who might eventually get there, but they are focused on browser testing. I think that means commodification will have to happen first, as there’s not much money in dev tools.
Right before losing interest I was exploring using BabelJS to parse JavaScript into ASTs, then generate Flow XML from the AST. I think this approach would have more legs.
I’m currently suffering through nested serializers and business objects that have too much custom metaprogramming.
What’s worse: some of those would have been trivial when written in a more direct manner and would benefit from no-code tools.
Lately I've been seeing "A Philosophy of Software Design" by John Ousterhout making waves with more experienced developers, and I definitely think it's going in the right direction of having richer abstractions and focus on DX rather than forcing the atomization of source code.
I'm surprised TFA implies spreadsheets aren't still dominant.
There are a wide variety of platforms. Some are very open and honest about their core use cases (forms, CRUD apps, automations). I work for an open source platform, Budibase, and they're pretty open with regard to their use cases.
Others promise the world (SaaS apps, social networks) which are outside of their capabilities and probably not a great investment imo.
Spreadsheets are still dominant, but they're also the reason why many users turn to a low code platform.
... which will almost always trigger Greenspun's tenth rule
https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule
> Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
(The use of Lisp lists to represent tree-structured data wasn't a concession. See, you can generate those with Excel formulas...)
I’ve avoided hearing war stories about modernizing excel based companies for ten years. Academically I knew it must still be happening, but emotionally I’d convinced myself everything was daisies and the world had moved on.
Next it’ll be extracting applications from notebooks, I suspect.
When I'm asked to prototype something that seems to pose a real danger of the next step being "okay let's deploy this", I do it in Emacs with org-babel. That's more than enough of a programming environment to validate an idea, and like a notebook each code block's results become part of the document, so it's very observable - but thanks in no small part to Emacs's reputation as alien space magic for weirdos, nobody's ever been mad enough to consider trying to jam it in production as-is.
I have yet to see a single "citizen developer" successfully building stuff using low-code/no-code solutions.*
> At the same time, low-code platforms should also strive to make pro-developers (professionals with an education or career in software development) more productive.
This is mostly what I see.
Occasionally, I see real developers sticking with the low-code/no-code solutions far beyond what it should ever be used for, to the detriment of the product.
What I do see a lot of is people with a long history in the business aspect being brought in as program managers, QA testers, UI designers, etc. They are generally very successful in those areas.
*VB6 and Visual FoxPro I have seen business people building production apps with. You may not consider those to be "real" low-code/no-code solutions, but... I've seen business people teaching themselves those to build their own stuff.
I've seen plenty, in billing reconciliation, marketing, fraud detection, etc. in major telcos. I've also trained non-programmers to do this, with success (even adding enough Python to create custom nodes for scenarios where no prebuilt block is suitable).
Also, plenty (millions?) of citizen developers have built complex "applications" in Excel.
For the more creative, there's Cycling '74 Max and Flash back in the day, and I'm sure there are many more low code tools out there that have been stretched beyond their original domain.
I don't think this chasm will ever really be bridged without real developers.
However, a lot of problems are ephemeral, and the short term low-code solution is adequate until the problem no longer exists; or it's small enough that the low-code solution can meet the need indefinitely.
As soon as something went wrong though—watch out.
You're essentially arguing that MVPs are a bad idea, but evidence suggests the contrary.
Except I am not arguing that at all!
I am arguing that there are reasons low/no code tools have not been more successful.
But in theory a code generator's output could be exported to version control and forked. Of course you have no commit history, which impoverishes the value of the code, but you have something.
And those people have been thinking about software their entire career. Domain experts being embedded in Agile teams is not so they can keep an eye on us, it’s so we can keep an eye on each other.
Tell HN: Enterprises spend 10x more to build no-code solutions than coded ones - https://news.ycombinator.com/item?id=37689282 - Sept 2023 (178 comments)
But the current article is quite substantive so we won't treat it as a follow-up the way we normally probably would (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...).
In some cases the benefits are plain to see. For instance, if you needed to represent a squiggly line in SVG, the natural movement of a user's hand guiding a digital pencil is going to be far superior to writing out each x/y position in a vector path.
But other use cases aren't so clear, and in fact, many tasks could be made far more efficient with a textual interface.
I think the reason people assume that code is time-consuming or complex, is because historically it's been used to solve complex problems that non-technical users don't understand. So in other words, they don't understand the problem domain. My theory is that if they understood the problem domain, then they could understand any formal syntax dedicated to the domain.
As an example, the following pseudo-code:
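(The original snippet did not survive extraction; the following is a hypothetical reconstruction of the kind of pseudo-code the comment describes, written as plain Python. All names are illustrative.)

```python
# Washing dishes as code: familiar domain, unfamiliar syntax.
def wash_dishes(dishes):
    clean = []
    for dish in dishes:
        dish["scraped"] = True        # scrape off leftover food
        if dish.get("greasy"):
            dish["soaked"] = True     # greasy dishes get a soak first
        dish["scrubbed"] = True
        dish["rinsed"] = True
        clean.append(dish)
    return clean

washed = wash_dishes([{"name": "plate"}, {"name": "pan", "greasy": True}])
```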
Everyone understands the problem domain (how to wash dishes), so even though someone may be unfamiliar with historical programming conventions and syntax, they should be able to grasp what the code does. It's something I'm exploring with this language I've been building called Matry: https://matry.design.
You see the same thing in discourse about science and math: "If we could just eliminate all the jargon and syntax, anybody could understand a scientific paper". In all cases, they're wrong, of course, since the syntax and jargon are such negligible hurdles that by the time someone is technically capable enough at the actual problem they'll come as second nature, but the non-technical folk don't ever reach that level of understanding.
Then this solution gets sold to other non-technical people, who, falling into the same trap, integrate it into their workflows.
With this operation you lose the ability to even read the paper, for obvious reasons of length: what you shed in complexity you gain back in sheer bulk.
The problem is that low/no code isn't doing the same thing; the abstractions it proposes are meant to build a bridge between our natural language and programming languages, just as a UI is a bridge between our behavior and the execution of some script/function. In this case, we are proposing to trivialize software development not by unpacking the abstractions down to the underlying machine code (explaining them), but by inventing, on top of these abstractions, another kind of abstraction that is understandable to the non-technical person reading the code (high-level languages vs. assembly perform a similar operation). The difference with the science-paper case is that there, the complexity of the abstractions leads to a better understanding of the text.
Problem is that with low/no code, you lack the ability to cross the bridge back (because you cannot analyze the abstractions into their atoms), while the scientific folks can.
C is powerful because it lets you stay in the position of someone who understands computers and assembly. Higher-level (interpreted) languages deal with the problem low code hopes to solve in a better way: you don't need to understand the C implementation of language X to use it, and you still have a lot of power and freedom to develop software, without the overhead of dealing with memory management and so on.
I fail to see low code as anything other than a subset of some interpreted language X that lets you write in plain English (but the complexity of software doesn't lie in its expression, rather in its conceptualization), or as a GUI that lets you write software (or UI) by dragging and dropping stuff on the screen, which is the worst option I can imagine, because again you would have the same problem of having to technify yourself and read documentation to understand what is, in fact, a weird suboptimal programming language.
In my experience visual tools like RoseRT can quickly turn into an obstacle. I think it's super important to not only know what the non-code solutions may be good at but also keep the solution limited or at least be prepared to go back to code if the problem domain is complex enough.
So if you're like me and have always lived in rentals with older model dishwashers for much of your life, and find yourself going crazy reading advice like this all over the internet despite making sure the filter trap is clean and taking full advantage of dishwasher pre-rinse cycles and even adjusting the temperature on your water heater and experimenting with different detergents, don't feel too bad -- dishwashers become less efficient over time for the most part, and hand-rinsing beforehand becomes a necessity. So go ahead and hand-rinse beforehand if you need to. It's not an anti-pattern unless you have a newer model.
aka Sam Vimes "Boots" theory
https://en.m.wikipedia.org/wiki/Boots_theory
Thanks for the tip about the old dishwashers.
I haven't been fortunate enough to use a dishwasher that can reliably remove all particles from a dish. And I hate getting a "clean" dish out which still has some small bits of food stuck - now especially well stuck having gone through the heat drying cycle.
Some dishes need a human rinse/scrub before going into the dishwasher. The dishwasher is really more of a bacteria reducer.
Low-code tooling, no-code tooling, Model Driven Engineering (MDE), Model Driven Design (MDD), 4GL tools.
Low code tools are not strong on versioning or on dealing with multiple parallel development tracks and teams simultaneously. Most tools are in fact based on a big-design-up-front paradigm, like an overall data model. Versioning of the models, metadata and data of all created and generated software assets is poorly supported, if supported at all. Common practices seen in mature agile software development using microservice paradigms and advanced distributed version control systems are often lacking in the new low-code MDA family of tools. In large companies it is not uncommon to encounter models with hundreds (or even thousands) of entities/classes. Models of that kind not only don't help with developing software faster and more cost-efficiently but actually have an adverse effect.
Low code tools have a strong focus on initial productivity gains. But a continuously and fast changing business context with changing requirements calls for an approach and toolset suited to long-term productivity benefits.
The best "low code" tools absolutely meet this requirement and best of all, don't _feel_ like programming to the users. Spreadsheets are the most obvious example.
To build solutions I have to use the Amazon States Language, which is a learning curve and being as charitable as I can a royal pain in the ass. Ultimately I end up with a JSON file that is a "giant, flexible config file" for their runtime.
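For illustration, a minimal state machine in that "giant config file" style (the state names and the `$.size` input path are made up; the field names follow ASL conventions):

```json
{
  "Comment": "Hypothetical minimal ASL definition: route input by size",
  "StartAt": "CheckInput",
  "States": {
    "CheckInput": {
      "Type": "Choice",
      "Choices": [
        { "Variable": "$.size", "NumericGreaterThan": 100, "Next": "Reject" }
      ],
      "Default": "Accept"
    },
    "Accept": { "Type": "Succeed" },
    "Reject": { "Type": "Fail", "Error": "TooLarge" }
  }
}
```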
On the plus side, solutions using it are very nearly zero maintenance. No runtime updates, no package updates, no manual scaling, etc.
Another plus, it's zero cost when not in use. No VMs I have to pay for hourly or monthly.
The downsides (for me) are obvious: it's difficult to learn; it's very restrictive, and solutions often end up needing some aspect of more flexible services like Lambda or Fargate (containers), which end up adding cost and maintenance; and it's proprietary, so there is nothing I can reuse elsewhere (no other company supports ASL as far as I know).
Overall, though, I love it. Why? I despise having to choose between unpatched systems and the drudgery of constant patching. With Step Functions I don't have to choose.
The problem is the people around me thought it was too difficult, and couldn't see long term. So we implemented a low code solution and now everything is in there and it's a mess/nightmare. I hate my work now, and everything we build is tightly coupled to this spaghetti platform that will inevitably decide to raise its prices on us, and we will have no recourse.
Job hunting has been tough too, because very few places have done this, so they ask "what have you been working on?" and I'm basically setting record times for ending interviews if I tell the truth.
I found the tool super easy to use and we managed to have a functional app in one afternoon, but when they tried to use it themselves it wasn't as intuitive – the tool works best if you already possess some intuition of the relational model, CRUD patterns, etc. Without this intuition the other members were having a hard time understanding how to better structure the data for the app, and started trying to "hack" it to make the app work the way they expected instead of working with the abstractions available.
[1] http://appsheet.com
Where I work we currently have 2 low code tools for automating internal business processes.
Now a new executive has appeared and his answer to two of these systems is to get a third. This time it will be MS PowerApps.
It seems that low-code is all high cost. Licensing these platforms costs our organisation of roughly 800 people the equivalent of 1-2 developers per year.
No doubt making web applications that work with SAML integration is harder than it could be, but the low-code tools' lack of versioning and support for automated testing is a real problem, especially given the licensing costs.
Even a good web based GUI for drawing UIs would be useful. But cost really is a factor.
Visual Basic was a low code tool that did enable non-coders to create applications that were useful. It appears that the current crop of low-code tools are not near VB's ease of use or utility.
The UI part is drag-n-drop;
A great advantage seems to be the ability to schedule jobs based on the algorithm's needs, since everything is known in the graph. I also think we can give some guarantees, like infinite-loop prevention, by analysing the graph. An alternative I see is a textual DSL, which may be even more work to implement, and user discovery becomes harder.
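That loop-prevention guarantee could come from ordinary cycle detection. A minimal sketch, assuming the flow is stored as a plain adjacency mapping (all names here are illustrative):

```python
def has_cycle(graph):
    """Return True if the directed graph {node: [successors]} contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2              # unvisited, on current path, finished
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for succ in graph.get(node, []):
            if color.get(succ, WHITE) == GRAY:
                return True                   # back edge: reached a node on our own path
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[node] == WHITE and visit(node) for node in graph)
```

A graph editor could run this on every save and refuse to schedule any flow where it returns True.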
I’d love to discuss this with others; it is hard for me to determine whether this is the way to go, with so many opinions in either camp.
The ETL niche is a good fit for the kind of approach you’re describing, so I don’t see any reason not to build at least a prototype to see how the end users like the design … beyond the obvious complexity of getting a visual node based editor and data format up and going (and there’s some open source out there that makes that part not as much work as it would have been 5-10 years ago)
If that's the definition (and I agree that it is) then 90% of my code is low code, as that's how much is XML, XSLT, and SQL. In C# .NET 8, another 5% is LINQ, which is a DSL that I consider low-code.
I have friends who work for a company that sells a million dollar low-code data platform. Business is good.
Security
You get something that works. But how do you know there is no way to:
1: Alter the code from the outside
2: Access parts of the DB one is not supposed to access
Those can only be avoided by a programmer who knows about all the weird edge cases that might lead to 1 or 2 and how to write a complex system in a way that is structured to make bugs unlikely.
Then you sit there like an idiot dragging blocks around when you could have just asked GPT to bust it out in code in seconds.
They're so bad for source control and documentation, too.