In Norway, one of the regional health authorities has recently replaced its old software. The replacement, Helseplattformen (The Health Platform, based on Epic Systems’ software), has been making headlines since before its introduction. Doctors and nurses warned early on about poor usability and apparent instabilities that would lead to data loss and lower-quality treatment. Their warnings were ignored, but have since become a reality. You can follow the news about the system at https://helvetesplattformen.no (“the hell platform”)
While there is certainly such a thing as bad software, a lot of old software actually has its merits, and its users will often tell you about them if you ask. In my experience developing a desktop app for a point-of-sale/back-office system, the users were often very satisfied with the old software we were replacing. They liked how fast they got things done with it: For instance, the old software had keyboard shortcuts for everything. Its layout was more information dense.
> the users were often very satisfied with the old software we were replacing. They liked how fast they got things done with it: For instance, the old software had keyboard shortcuts for everything. Its layout was more information dense.
I can say that I am also satisfied with many old programs for these reasons, and sometimes other reasons too. When writing new software, I generally try to design it like that too, with keyboard for everything (or nearly everything), and with dense layouts.
The issue is that so many different companies and vendors make claims like this, far beyond what is actually delivered to the end user behind the keyboard, that the mere words have pretty much lost all credibility with end users.
For example, I’ve personally seen cashiers over 60 at a small pharmacy, with arthritis, trembling hands, and everything, flawlessly input over 100 complex commands per minute on a 30+ year old machine with a software design probably from the 70s.
With 0 bugs, 0 human perceptible delays, and not even a single misinput.
Other than the obvious — keyboard shortcuts — this also heavily relies on being able to input ahead of time, before the next screen is even drawn. Older systems buffer inputs and apply them correctly to the interface elements as they become visible.
The modern web does none of this, which is why users have to wait for each screen before inputting the next command… if it has a keyboard shortcut. It probably doesn’t.
Not quite true. I worked at Epic previously. As the culmination of a 10-year migration from VB to Electron, the OS-level input buffer was discarded in mid-2023, since Electron doesn’t handle it the same way. The primary reason is that VB was pretty much synchronous, while web technologies don’t act the same way: keeping your keyboard input while navigating to a different site would be weird.
We added an input handler to queue inputs so that sequences of shortcuts and keypresses could be used.
Additionally, the internal framework we had allowed for shortcuts, and we tried to replicate as much as we could shortcut-wise (as well as functionality-wise). Almost everything should have a shortcut or a way to navigate to it via just the keyboard — they had put in a lot of effort to ensure accessibility so that they could win the VA contract that went to Cerner (pre-Oracle acquisition)
> The modern web does none of this, which is why users have to wait for each screen before inputting the next command…
Or wait further. Many web screens appear in a state that is initially nonresponsive, becoming responsive invisibly after an unknown and variable period of time taken for code to load.
Yeah, and the worst part is that nobody, not even quite serious vendors selling stuff with all sorts of bells and whistles, does any sort of benchmarking on all these various parameters. (That I’ve seen.)
So even if people wanted to believe and buy such a product, they have no way to substantively compare who is more honest.
The United States Government is replacing the open source, internally written medical records system, https://en.wikipedia.org/wiki/VistA, with Epic. [edit] Or is it Cerner?
I'm sure it's going to be way way way better.
[edits: Cerner, not Epic?; added another way to emphasize how much better it'll be.]
Do you have any links for this? Tragic that people got killed, but even more so if the same software was at the core. And I sure hope it was cancelled, if that was the case.
There’s a link to the OIG reports themselves. It’s almost certainly not the software itself, but the way it was configured and rolled out. Cerner is the market leader for EHRs, so I highly doubt it’s intrinsically flawed; just so configurable it’s easy to cut yourself on the edges.
Sweden had a series of disasters like that in recent years. A few months ago one region tried to switch from their old healthcare system to something delivered by Oracle, but quickly had to roll back to the old system.
In 2021 an expensive system for schools in Stockholm was so bad that some parents got together and wrote an open source app to not have to use the bad official UI.
A lot of modern day UX designers would be aghast at the sheer quantity of data and speed at which a user can handle it IFF the UI is adequately designed.
PoS software really plateaued in the 90s. The most common flows were so simple that an average user could hold a mental map of the program and always know exactly how to get where they needed to be. Pros could fly through ten levels of menus in literally half a second to access some obscure report.
In modern UI we've totally given up and just give the user a search box on every screen. Instead of a clean indexed list of menu items you can hit a key to access, we have to fumble around with tooltips.
I don't have the time for it, but if I were going to make a new web front-end framework, it would be focused on this sort of expert-level interface.
The key thing would be some sort of input-buffering system, where you can have lots of things bound to keys, and the framework will actually buffer things. Generally web browsers and modern UIs in general throw away input on reloads and such, on the generally-correct but very novice-biased theory that you didn't mean to be inputting into a screen you can't see yet. If you undo that and allow buffering keystrokes across loading actions you could start recovering some of that expert-level speed, where you can be using screens that aren't even loaded yet.
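The core of such a buffering layer can be sketched in a few lines of TypeScript. This is a minimal sketch only; the class and method names are invented for illustration, not any real framework's API:

```typescript
// A keypress buffer that queues input while a screen is loading and
// replays it, in order, once the screen can accept input again.
type KeyEvent = { key: string };

class InputBuffer {
  private queue: KeyEvent[] = [];
  private ready = false;

  constructor(private dispatch: (e: KeyEvent) => void) {}

  // Called for every raw key press, whether or not the screen is ready.
  onKey(e: KeyEvent): void {
    if (this.ready) this.dispatch(e);
    else this.queue.push(e); // buffer instead of discarding
  }

  // Called when the next screen finishes loading: flush in FIFO order.
  screenReady(): void {
    this.ready = true;
    for (const e of this.queue.splice(0)) this.dispatch(e);
  }

  // Called when navigation starts: go back to buffering.
  screenLoading(): void {
    this.ready = false;
  }
}
```

A framework would hook `onKey` up to the global keydown handler, call `screenLoading()` when navigation starts, and `screenReady()` once the next screen's widgets can accept input, so that type-ahead lands where the user expected it to.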
"Expert-focused" UIs don't have to be bound to the terminal. That was all an accidental thing.
Expert-focused UIs are nearly dead because all our UI frameworks actively block them. That is also not a fundamental fact of computing and can be undone.
I propose not stopping at input-buffering. Keep going, how fast could we get the REPL times if that were the KPI?
I doubt we'd be doing full page redraws and slinging json of all things. I think what you'd end up with is something that looked a little more like a game engine than react. That would be a fun project to work on.
A REPL might be an option (you do have JavaScript sitting right there), but that’s not generally the sort of “expert” I’m thinking of.
I certainly wouldn't stop at input buffering. Another I'd want is some sort of systematic keypress discoverability. You still probably want a conventional GUI at least mostly available but using the GUI ought to be constantly teaching the user how to do whatever it is they just did with the keyboard instead. This might as well get wrapped into the framework, to standardize it and to make it easy to use.
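One way that teaching loop could work, sketched in TypeScript (all names here are invented for illustration): every GUI action registers its key binding once, the framework dispatches keys through the registry, and using the mouse returns a hint the UI can flash to teach the shortcut.

```typescript
// A shortcut registry that doubles as a discoverability/teaching aid.
type Action = { id: string; key: string; run: () => void };

class ShortcutRegistry {
  private byKey = new Map<string, Action>();
  private byId = new Map<string, Action>();

  register(a: Action): void {
    this.byKey.set(a.key, a);
    this.byId.set(a.id, a);
  }

  // Keyboard path: dispatch directly. Returns true if a binding existed.
  handleKey(key: string): boolean {
    const a = this.byKey.get(key);
    if (a) a.run();
    return a !== undefined;
  }

  // Mouse path: run the action, but also return the hint text the UI
  // should show so the user learns the keyboard route for next time.
  handleClick(id: string): string | undefined {
    const a = this.byId.get(id);
    if (!a) return undefined;
    a.run();
    return `Tip: press ${a.key} next time`;
  }
}
```

Because every action flows through one registry, a standard "what can I press here?" overlay falls out for free: just list the registered bindings for the current screen.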
It's worth recalling that in the early days, GUIs always had underlined letters showing what the keyboard shortcuts are. That's been lost now, I suppose because it looked messy or confused people or something. Some tools show you the underlines only once you push the modifier key that can activate them.
Unfortunately, one of the many shortcomings of the web as an application and GUI platform is (a) no menu bars, (b) no context menus, (c) no accelerator key framework, (d) keyboard focus is often nowhere in particular or somewhere stupid that will do something bad if you touch the keys.
For me, the most obvious example of input-buffering in modern tools is in the terminal, where I'll often type in subsequent command(s) while the first is still executing. Sure, it's possible to make a mistake and have to Ctrl+C the current process to stop the botched command from executing—but for "safe" commands like `cd`, it's a super convenient feature.
(The other tool that comes to mind is Emacs, but that's appreciably less mainstream than the command line in general.)
Thanks for this. I've a passing interest in those older expert-level interfaces and input buffering wasn't on my radar but is completely vital. I remember relying heavily on that functionality back in the day and yet it's non-existent in most modern interfaces.
This is bang on. I am shocked by how poor UX really is these days in most modern applications.
I worked at an old grocery store running the front office as a teenager through college maaaany years ago, and everything was a keyboard shortcut, non-touchscreen driven interface from probably the early 90’s. It took some time to get used to and train others on, but I could fly through those interfaces in a way a modern UX simply does not allow by focusing on the lowest common denominator for usability instead of any attempt to cater to ‘experts’: aka the people who have to use those interfaces day in and day out.
Sure, the modern interfaces look great (usually) and in the ideal state anyone can pick it up and use it without much instruction. But there’s no attempt to focus on the poor soul who has to use it in their day to day and just wants to get it done as quickly as possible.
Hardware, too. I worked in broadcast television in the 1990s and we used these very large Sony three-quarter inch videotape decks for editing. Two of them paired together and a monitor.
Once I learned how to edit using the buttons and the scroll wheels, I could fly through putting together a news segment with all kinds of cuts, fades and overdubs. I remember thinking, I couldn’t believe how fast my fingers were flying.
The closest thing I can compare it to is learning how to play a guitar; after a certain point muscle memory takes over.
Every time I need to copy paste an email address from someone’s name in outlook and I have to wait a whole second for an on-hover tooltip to appear with the address and the “copy to clipboard” button I want to die.
I would use the desktop client but it doesn’t have an interface to hyperlink to files in SharePoint.
>A lot of modern day UX designers would be aghast at the sheer quantity of data and speed at which a user can handle it IFF the UI is adequately designed.
Would they even know? There was a time when there was such a thing as domain expertise and when usability studies were an integral facet of many SDLCs. Those days seem long gone now. The industry seems far more content to haphazardly adapt OTS solutions or create something that's "pretty" or "modern" at the expense of being functional.
BS, in the food service industry quickness is better for prompt service. When I waited tables I could click ahead of the UI to get what I needed and the POS would catch up.
It needs to be easy and quick. Doesn’t matter if they are pros or people doing college jobs.
What he means is that GUIs won over TUIs because they're intuitive and learnable. They might be slower to use (mostly; I'm faster in IntelliJ than vim) but that is counterbalanced by the lower training costs.
In industries with high staff turnover, it's probably better to have slower workers but with much lower training costs.
There is no inherent conflict between GUIs and TUI-like speed. It’s the web tech stack and cloud-based apps, and piles upon piles of leaky abstractions in other environments, that have made things slow. Open Microsoft Word or Excel from 25 years ago, and it will be incredibly fast to operate.
Sure, I agree, but getting fast at a mainframe-style UI takes a lot longer, and until you’ve mastered the commands you’ll be slower. GUIs let people self-teach, which is valuable.
Web tech definitely makes it harder to give a really snappy experience.
I think it's because GP mentioned "PoS software" which is often used by cashiers, but is not limited to cashiers. Anywhere a user might swipe or tap their credit card could be considered a PoS.
As a related aside, I recently visited Microcenter where they run their own PCI-compliant PoS system that's all text based. The staff all knew how to zip through various text menus to apply discounts, warranties, etc.
They are certainly paid and treated that way, but only because corps really want to buy their own bullshit.
There have never been enough highschoolers to fulfill all the """not real""" jobs that people keep insisting shouldn't pay enough to live. Hell, you used to be able to live on flipping burgers. In plenty of modern and democratic countries you actually still can live while flipping burgers.
I actually worked at Epic for a while. By EMR software standards Epic is very good overall. The database it is built on, InterSystems Caché, is one of the best in the industry. They employ thousands of QA testers. But as a very complex software system it can be implemented very badly. I wonder if there is a language barrier.
Interesting. Does Epic come with any pre-built UI for its products? Some of the bad UX complaints have been explained away as bad reuse of interfaces adapted to the realities of US private health care (which is very different from Norway’s public health system).
I do not think a language barrier is at play; Norwegians and the other Scandinavians are among the most fluent speakers of English as a second language in Europe, surpassed perhaps only by the Dutch.
Epic runs most US hospitals. It's a monopoly with significant lock-in and control over its customers. Also, we do manage to get things done. They will too.
I have a friend who works on COBOL for banking. He said he saw someone working at a bank with a fancy new GUI, painfully poking around and trying to get something done. "Ah forget it," they said, closed the modern GUI and went to the old terminal interface. In a couple of seconds, it was done.
I also worked on an ancient terminal interface for a complex service business. It was amazing. Everything was instant, and after a short learning curve, we had incredible power at our fingertips. If I had to do that job on a modern web interface with 4-5 seconds of spinners on every page, productivity would plummet! How often do I stand in front of a desk while someone ekes through a more modern system, wondering what is taking so long?
It's damn near a crime against humanity that so many talented people are forced to deal with garbage software. Given the orders of magnitude hardware advances of the last few decades, we should be that much faster and more capable. But that's not what we see at all - it takes a special kind of incompetence to make software do even less while increasing the resource burn. You really have to try to do that bad. And we're trying hard. I weep for our industry.
Microsoft Windows 3.1. I remember the moment very well.
We went from a focused instantaneous UI in DOS to a paradigm that most people cannot handle...multitasking.
Some have mentioned density but DOS business applications weren't terribly dense. There was only so much screen area and we had conventions for how to use it. The apps would flow between screens much like modern Windows UI or mobile apps. The move to Windows presented users with a slow dense mess. My grandmother could set the screen on fire using Lotus 123 keyboard commands and bang out financial statements all day long. I was maintaining Lotus on her machine for decades after the Windows switch so she could use her keyboard and get shit done.
Quicken was awesome on DOS. We ran our bookkeeping business on Lotus 123 and Quicken for years (1980s-1990s). Productivity crashed after that and never recovered.
This. COBOL is actually quite performant given what it has to deal with. To answer the OP’s question: when you have 2M lines of COBOL that processes thousands of medical records or financial transactions per second, is critically important to the business, and essentially never fails, there is absolutely no incentive to ever replace that code. It doesn’t matter if it’s “out of date” or in an uncool language.
Sometimes asking, "When will we..." is just a turn of phrase, but sometimes it's a serious misframing of the issue. There is no such thing as "we" that meaningfully acts or makes decisions. Thinking about "we" as if it's real because the individuals in "we" are capable of choosing is the fallacy of composition.
If you want collective outcomes that are different than the sum of uncoordinated individual action, you have to design them. Don't talk about "we." Who specifically is doing what, and for what purpose, and why would they do that, when they could just not?
Answering those questions often shows you that the problem isn't what you thought—because the mental model of a "we" that does things is so harmful.
Or you end up with a plan to solve the problem, so win-win.
This also applies quite well to "they", with the added bonus that the speaker doesn't even assume fractional responsibility. This is the root of a lot of moralist / ideological "solutions" to issues: e.g. "the problem with teen pregnancy is that teens are choosing to have sex, they should be more chaste!"
It neatly sidelines all of the systemic factors that actually produce the outcome they're looking to change.
Fine - for each industry "X", when will a CEO of a company who is given oligopolistic control over software which is deeply entrenched in a stranglehold over "X" decide to fund desperately needed improvements to said software?
Presumably the CEO would believe that improving the quality of their company's products will lead to increased profits/revenue.
The tools that the world runs best on were built over a period of man-years that were not mythical either.
Plus if you go back far enough, you hit a point where often the key people involved were 10x more suitable than the average or below-average candidates who would be most likely to come under consideration today.
To have the most realistic chance of success, you just have to allocate 10x the resources you think it would take at first blush. Especially the amount of calendar time before deployment. Simple ;)
Plus if the world in question is really already running without the replacement tool, when are you willing for the world to come to a halt during a pitstop for the proven tools to be retired while the new replacement is inaugurated?
Otherwise, the most talented and productive tool-building team in the world would further need a much more-rare capability that was not even required of the original builders, the knack for servicing airplane engines in flight.
I’m not sure I buy the argument that tools necessarily need to be fixed.
> It feels like we created these tools when software became a thing. Then, we forgot them.
Sometimes the best tool is the hammer you already have. A lowly, simple, reliable hammer. I know software isn’t the only field where we like to chase the new shiny thing and call it “modern”. But, sometimes it really seems like that feeling is deeply embedded in programmer culture.
I’m not against building new tools, but they have to solve legitimate problems. And an old GUI from the 90s doesn’t seem like a legitimate problem for me. Have you ever seen how quickly a trained worker can move around on a TUI from the 80s?
Scaling? Data interoperability? Data models and new use cases? This is not an exhaustive list, but these all seem like good reasons to revamp tools. But, if the problem is attaching one piece of wood to another with a bit of metal… just use the hammer.
"how quickly a trained worker can move around on a TUI from the 80s". I worked at Ticketek many, many years ago and witnessed the move from dumb terminals running a TUI, to a new browser based system. The old timers were seriously pissed and I could totally understand why. You could navigate the old TUI entirely via a keyboard and they had memorised all of the shortcuts: want to buy a ticket to the latest show? That's 6 key presses. These people could smash out the sales at a phenomenal rate.
The new app was browser-based, looked really pretty and basically forced you to use the mouse. I think we lost most of the TUI-guns, but the replacement staff were cheap and the training was simple. Not sure if it was an economic success, but that's progress.
I dunno .. a lot of medical software like the one referred to in the article really was (is?) written on a shoestring budget by crap programmers. The last one I set up in the 90's used an MS Jet database backend on a CIFS networked drive ... as you can imagine, with more than one or two clients the thing was constantly freezing up due to CIFS clients issuing oplock breaks etc. while trying to get exclusive access to the Jet database.
Right. I agree with the author on principle, but introducing 'modern' tools to an existing industry, even the act of replacing 'antiquated' software, must be met with skepticism [0] and fear.
This is still funny and true about the software profession at many levels, but specifically about electronic voting, I recently listened to this podcast about Venezuela's last election [0], which highlights how computer voting can be done so safely that people were able to (clandestinely!) use the system to double-check the election results.
Why is there a sudden increase of low quality articles on HN?
Claiming the tools are bad without any examples of specific tools the author actually thinks are bad; and of course, any rationale leading to such a conclusion is nonexistent. This is basically content-farm quality.
Hospital software is not updated on a frequent basis because it doesn't generate more money. That's why hospitals, bakeries, hotels, and many other businesses don't have top-tier UI/UX professionals the way YouTube, iOS, Netflix, Facebook, TikTok, and many many other websites, systems, companies, etc. do.
Simply put, because it would not make money! We live in a world of attention economy, where the more time people spend on a screen, the more money companies make.
Another barrier to replacement is that the existing software embodies a ton of domain-specific knowledge and handles many arcane exceptions. To replace it you would have to be an expert in that domain, which is a huge blocker for all but a few players.
I disagree, many businesses that put their software in maintenance mode (fix/upgrade on breakage) will be losing money in the long run.
Consider a hospital: many statistics can be collected to provide insights and make immediate decisions, faster algorithms have been discovered (along with solutions to problems we couldn't solve back then), the UI/UX can always be improved for productivity, etc. All of that makes money.
Software customers aren't, and shouldn't be, one-time shoppers; there's always room for improvement and new needs pop up all the time.
None of that really makes money for a hospital. Most of what hospitals do is direct, hands-on patient care. Software improvements can at best deliver some small cost savings or slight reductions in clinical errors. And many hospitals are non-profit or government run, so there's not even a direct management incentive to improve financial performance.
Do you know what makes money for a hospital? Buying a new MRI machine. They can directly charge customers for scans. In budget planning cycles any proposed IT upgrades have to compete against stuff like that.
The McMaster-Carr website is about as close as you can get to the peak functionality of classic terminal systems: point of sale, hotel, flight booking, etc.
The only thing annoying about this sort of blog post is that of course we will get a million posts pointing out that the old UI is faster, exposes information, and is more compatible with an expert keyboard driven workflow.
And this is all completely correct. UIs have gotten worse and there’s no hope in sight.
But it is repetitive. And kinda annoying that every individual programmer understands and acknowledges this, but the industry continues this inane march “forward.”
> every individual programmer understands and acknowledges this
I think you underestimate how many programmers nowadays either have never experienced the UX of an “expert keyboard driven workflow” for any significant length of time, or dismiss it as dispensable.
Independent of the question whether new tools are really better (they sometimes are, but often aren't):
Fixing some very special such tool in a specialized industry is basically my job (in my case the replacement is better in basically any sense).
The problem is: while the users like the improvements that I implement for them, management (who know next to nothing about software development) has a strong tendency to keep me on a short leash, believing there is far more important stuff I should do than improving the users' lives and workflows, considering that a luxury it does not want to afford.
So, I can tell you when the (bad) tools that run the world will become fixed: when such managers die off. :-(
Sleek isn’t what matters in most domains. Fast is more important, though it suffers from questions of which definition of fast and how you measure it. In my experience working on a lot of enterprise systems, what users really want is software that fits the domain and is predictable and consistent.
Keep in mind that when the CrowdStrike debacle happened, the only airlines still operating were the ones that still relied on Windows 3.1 and 95 to run their systems. In 2024.
That doesn't mean that they were safer or right keeping that running. Just that things are more complex than saying "it is outdated"
In hospitals and healthcare, new software is designed around accounting and billing requirements. The result is that the new software makes the putative primary function, the delivery of healthcare, much, much worse.
The cause of this is that for the managers who commission and pay for the systems and to the developers who create the systems, the primary healthcare function is irrelevant.
To the managers, it is about billing insurance companies and monitoring productivity. They then move on to another management gig at another corporate.
To the software developers it is about getting paid for the software contract and burnishing their CV. They then move on to another software gig at another corporate.
Neither the software developers nor the managers have any interest in or understanding of the healthcare problem domain. Since this does not affect getting paid, nobody cares about this situation.
That's partially true, but employees at health IT software vendors often have long tenures. Some of the key developers at Epic have been there for decades and understand the domain quite well. It's just a fundamentally hard problem to solve because they have to balance so many conflicting priorities.
The software is also highly configurable, so many of the usability and workflow problems that customers experience are self inflicted. The leaders at every provider organization wrongly believe that their way of doing things is the best so they incur high costs in customizing the software to match their processes instead of just changing their processes to match industry best practices.
Perhaps because many of such systems are privately owned and managed systems run by faceless corporate overlords who are ok paying some low wage for a human to work with such outdated systems, and less ok paying for some other humans to build a new system?
New systems and rewrites also require you to reconsider bloat and how workflows should function, when upper management just wants you to recreate things the exact same way, obviating any potential benefits achieved...
Governmental systems will be subject to complex regulations and specifications etc
Very difficult to address
No thanks, we don't need fancy "Fluent UI", "Material You", "scalable" web-based UIs for serious, professional software. The examples given by the author are exactly peak GUI; what came after is the gamification of it.
The problem with schools and libraries is that network effects meant that there were lots of people who could dork with Windows until it sort of worked. With Linux, those people were much fewer and further between.
Appearances can be deceiving. Using graphical elements from the late 90s or early 2000s isn't indicative of poor quality or stagnant design. As some mentioned, modern design trends can actually be much worse. Software that looks (and is) new is more of a risk in these areas because reliability and consistency are important. New software comes with new bugs. It often starts by oversimplifying and requires kludgy additions to fill in the gaps.
On the other hand, a lot of ERP software is poorly designed regardless of appearances. Most of it looks like forms tied directly to an SAP system or SQL database. Field requirements are unnecessarily strict: limited minimum and maximum lengths for names, the need for single-word first and last names, making all fields required, or having some fields allow selecting only one of a handful of values. These interfaces are thin veneers over a table in a database whose requirements are built on a set of management reporting functions rather than on how front-line staff collect and use the data.
Digital forms and databases offer a lot of benefits. Digital data can be copied and transmitted instantly. It can be replicated and backed up with ease. It takes up far less space. It can be searched and retrieved instantaneously. It doesn't degrade over time. It's not subject to the quality of one's writing skills or pen. And it allows for easy batch processing and reporting.
But current system designs throw away what paper forms and filing systems used to give us. Paper forms have few limitations on when and how the data is filled in. You can leave fields blank, cross them out, write in the margins or even attach arbitrary other papers to it. It can be stamped, stapled, and clipped to other stuff. You can put them on desks, in boxes, in folders, or fold them up and put them in your pocket. Current digital designs lack all of this flexibility, instead insisting on rigid requirements, workflows, and compartmentalization.
We can have the best of both worlds. You can suggest that data be entered with specific categories while letting users put in whatever makes sense to them. You can let fields be left blank. You can allow changes to be made in any order at any time. You can allow room for notes and attachments. You can remove arbitrary limits on length and format.
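As a rough illustration of that paper-like flexibility, a record model could be sketched like this in TypeScript (all type and field names here are hypothetical):

```typescript
// A form model with paper-like flexibility: every field is optional,
// categories are suggested rather than enforced, and arbitrary notes
// and attachments can ride along, as with a real paper form.
type Field = {
  name: string;
  suggested?: string[]; // suggested categories, never an enforced enum
  value?: string;       // blank is always allowed
};

type FormRecord = {
  fields: Field[];
  notes: string[];       // margin notes, in any order, at any time
  attachments: string[]; // references to arbitrary attached documents
};

// Accept whatever the user wrote, even for fields nobody anticipated.
function setField(rec: FormRecord, name: string, value: string): void {
  const existing = rec.fields.find((f) => f.name === name);
  if (existing) existing.value = value;
  else rec.fields.push({ name, value });
}
```

The point of the sketch is what it does not do: no required-field check, no format validation, no fixed ordering, so the digital form stops throwing away the flexibility the paper one had.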
What makes humans so good at using tools is our ability to adapt them to any situation. A sharp rock can be a knife or a scraper or a hammer. A spear can be used to stab but it can also be a makeshift flag pole, lever, a spit, or a way to pull things out of a narrow hole. We can turn them over and play with them to see what they can do and how we might use them. Most modern software and digital systems don't allow for this kind of exploration and adaptation. But they could.
There are many industries in industrial design that deal with making machines that are intuitive, safe, flexible, and repairable. They also streamline common processes but account for the need to bypass process sometimes. The software industry could learn a lot from them.
A lot of the issues with our current software systems are because of optimizing for the wrong people and the wrong needs while ignoring or forgetting existing design knowledge. We can fix this, but we need to think differently about who and what we are designing these systems for.
ERP platforms that both run the world AND have modern UX do exist. Take a look at Fusion, which covers all kinds of things, from supply chain management and risk management to oil and gas industry specific stuff, banking specific stuff, you name it.
It's a bunch of web apps, basically. So the trite answer is, "when you go apply for a job at Oracle or SAP and upgrade some old screens to modern standards".
If you want software to be higher quality in general then you need to work on frameworks. Either GUI frameworks, or stuff that helps people get things right at a deeper level without a big lift.
I've been doing this sort of thing for quite a few years now. As one example, a big issue identified in this thread is GUIs that are slow and throw away keyboard input vs the often more productive (but harder to learn) TUIs of yesteryear. You can get both easily enough, but it means going outside the web to use a GUI toolkit designed for the desktop; JavaFX would be a good choice. Although you can make these sorts of UIs using the web, it'd require a ton of sketchily maintained React libraries and the like. What you'll find if you try this is that although the desktop GUI toolkits themselves are alright, distribution is much harder than on the web. The tooling all sucks and the experience is painful, especially if you don't want to enforce an OS requirement on your users.
So after I identified that problem some years ago, I sat down and wrote a tool that makes desktop/CLI app distribution way easier:
Now it's as easy to ship a desktop app as a web app (more or less, once you have bought signing keys). Want three levels of nested menus that can be navigated with a few keypresses? Want a nested table view with easily resized and reordered columns? Want an ultra-low-latency UI, all with modern languages and capabilities? Well, now you can do it and deployment won't kill you. That's a little bit of boulder-pushing progress towards balancing "functional" and "sleek", for you.
This pattern also opens up a bunch of other simplifications. For instance, you can use your database as an app server. Just connect to it directly using JDBC drivers or similar, and give every user a DB account. Use the DB's security mechanisms (views, stored procedures etc) to implement access control and now you don't need to develop and run a web server layer. Just run queries and map the results directly into your UI toolkit. This works better if you are using a feature rich database. I've tried it with Postgres and the results were unsatisfactory, I think it'd work better with something like Oracle or MS SQL Server. And you wouldn't build a social network that way. But in an enterprise context it can be a nice simplification and enable much lower latency UI than in a typical web app.
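As a toy illustration of the views-as-access-control idea, here is a sketch using Python's sqlite3 (which has no per-user accounts; in Oracle, MS SQL Server, or Postgres you would create real DB users and GRANT them access to the view instead):

```python
import sqlite3

# In-memory database standing in for the enterprise DB.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE employees (
    id INTEGER PRIMARY KEY, name TEXT, salary INTEGER, ssn TEXT)""")
conn.execute("INSERT INTO employees VALUES (1, 'Ada', 120000, '000-00-0000')")

# Access control lives in the database: this view is all that a
# front-desk role would ever be granted.
conn.execute("CREATE VIEW employee_directory AS SELECT id, name FROM employees")

# The 'app server' is just a query; rows map straight into a UI table.
rows = conn.execute("SELECT id, name FROM employee_directory").fetchall()
print(rows)  # [(1, 'Ada')] -- salary and ssn never leave the DB
```

There is no web tier to build or operate here: the sensitive columns simply aren't reachable through the view, and the result set feeds the UI toolkit directly.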
It is a horrible mess. The systems we use have no way to safely delegate control over some portion of a computer to a given task. We have to assume that things work as intended and just blindly trust code we didn't write.
Imagine if electric power happened the same way. We'd be hiring electricians every time we wanted to "install" a new device, like a light or fan. It would require an inspection by the local power plant, and a state inspector. We wouldn't have outlets or fuses. Houses burning down would be commonplace.
It was the same, but worse, with steam power. Horrific accidents were an occupational hazard.
I've got a bit of a problem with the example of hospital management software. Given the complexity of the healthcare system, particularly in the US, why expect the software to be "high-quality"? (Whatever that means.)
Agreed on the airline management example though. It's insane how patchy that all is.
The US hospital system could do far better simplifying its billing procedures rather than attempting to implement equally complex software to line pockets on both sides.
I have worked with passenger rail transport. Knowing the quirks of that domain, I am quite impressed with air travel software. Anecdotally, I have rarely heard of cases of «Computer says no» in air travel, which are abundant in most other domains.
I was referring to the competing GDS platforms that airlines like Southwest opted not to use. I was trying to be charitable to the author here, because I am partial to how the terminal booking interface just works and allows for almost universal interoperability. You just have to learn how to use it. Of course, there is still greater functionality when using the airline's website directly (dynamic pricing, seat choice), which is "patchy".
One could see it as patchy: when you access an airline's website, before you can do anything useful, you have to load scripts that do who knows what, from 20 third-party domains that have nothing to do with you searching or booking a flight. Basically a cobbled-together, slow-as-molasses pile of scripts that probably barely work at all.
They aren’t referring to the client side of this question. Airline websites generally have the quality of any generic corporate IT project…
I assume what the parent was referring to is backend systems like Sabre that coordinate travel and ticketing between airlines, travel agents, etc. Sabre is a truly ancient system by today’s standards, with origins in the mainframes of the 1960s. Systems like this have actually started to limit what airlines can do and how many flights they can manage.
Those user-side scripts have nothing to do with booking the flight. The reservation software is running in a data center somewhere, quite possibly on a mainframe they've been trying to retire for 30 years.
> Some have access to fast software that looks sleek. Others rely on tools that look like they were created in the early 2000s and left frozen in time.
In my own experience, software that "looks sleek" usually means "unmaintainable pile of Angular/React/flavor-of-the-month garbage".
Don't judge software based on its appearance. Judge it based on its utility, usability, and reliability.
It's pretty clear from healthcare outcomes and economics, especially the share spent on IT, that software and IT have been a hindrance to the healthcare industry. We should concede that some domains, like voting, are better served by paper records.
While there is certainly such a thing as bad software, a lot of old software actually has its merits, and its users will often tell you about them if you ask. In my experience, developing a desktop app for a point of sales/backoffice, the users were often very satisfied with the old software we were replacing. They liked how fast they got things done with it: For instance, the old software had keyboard shortcuts for everything. Its layout was more information dense.
I can say that I am also satisfied with many old programs for these reasons, and sometimes other reasons too. When writing new software, I generally try to design it like that too, with keyboard for everything (or nearly everything), and with dense layouts.
For example, I’ve personally seen >60 year old cashiers at a small pharmacy, with arthritis, trembling hands, and everything, flawlessly input over 100 complex commands per minute on a 30+ year old machine with a software design probably from the 70s.
With 0 bugs, 0 human perceptible delays, and not even a single misinput.
The modern web does none of this, which is why users have to wait for each screen before inputting the next command… if it has a keyboard shortcut. It probably doesn’t.
We added an input handler to queue inputs so that sequences of shortcuts and keypresses could be used.
Additionally, the internal framework we had allowed for shortcuts, and we tried to replicate as much as we could, shortcut-wise (as well as functionality-wise). Almost everything should have a shortcut or a way to navigate to it via keyboard alone; they had put in a lot of effort to ensure accessibility so that they could get the VA contract that went to Cerner (pre-Oracle acquisition).
Depends. It is not the place of the underlying tech to impose such limits.
You must be one of the few (or any) companies that did this.
This definitely doesn't work in the general case of SPA web apps.
Or wait further. Many web screens appear in a state that is initially nonresponsive, becoming responsive invisibly after an unknown and variable period of time taken for code to load.
So even if people wanted to believe and buy so and so product, they have no way to substantively compare who is more honest.
I'm sure it's going to be way way way better.
[edits: cerner not epic?; added another way to emphasize how much better it'll be.]
https://www.military.com/daily-news/2025/01/02/va-sets-sight...
There’s a link to the OIG reports themselves. It’s almost certainly not the software itself, but the way it was configured and rolled out. Cerner is the market leader for EHRs, so I highly doubt it’s intrinsically flawed; just so configurable it’s easy to cut yourself on the edges.
https://www.theregister.com/2024/11/27/oracle_cerner_project...
In 2021 an expensive system for schools in Stockholm was so bad that some parents got together and wrote an open source app to not have to use the bad official UI.
https://www.wired.com/story/sweden-stockholm-school-app-open...
PoS software really plateaued in the 90s. The most common flows were so simple that an average user could hold a mental map of the program and always know exactly how to get where they needed to be. Pros can fly through ten levels of menus in literally half a second to access some obscure report.
In modern UI we've totally given up and just give the user a search box on every screen. Instead of a clean indexed list of menu items you can hit a key to access, we have to fumble around with tooltips.
The key thing would be some sort of input-buffering system, where you can have lots of things bound to keys, and the framework will actually buffer things. Generally web browsers and modern UIs in general throw away input on reloads and such, on the generally-correct but very novice-biased theory that you didn't mean to be inputting into a screen you can't see yet. If you undo that and allow buffering keystrokes across loading actions you could start recovering some of that expert-level speed, where you can be using screens that aren't even loaded yet.
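A sketch of what such input buffering could look like, in plain Python with hypothetical class and key names (a real toolkit would hook this into its event loop):

```python
from collections import deque

class KeyBuffer:
    """Buffers keystrokes while a screen is loading instead of
    discarding them, so an expert can type ahead of the UI."""
    def __init__(self):
        self.pending = deque()
        self.handler = None  # set once the screen finishes loading

    def key_pressed(self, key):
        if self.handler is not None:
            self.handler(key)          # screen is live: dispatch directly
        else:
            self.pending.append(key)   # loading: don't throw input away

    def screen_loaded(self, handler):
        # Replay everything typed during the load, in order.
        self.handler = handler
        while self.pending:
            handler(self.pending.popleft())

buf = KeyBuffer()
seen = []
buf.key_pressed("F4")           # typed before the screen even exists
buf.key_pressed("Enter")
buf.screen_loaded(seen.append)  # screen arrives; queued keys replay
buf.key_pressed("Esc")          # live input now goes straight through
print(seen)  # ['F4', 'Enter', 'Esc']
```

The framework-level change is small: instead of dropping events that arrive before a screen is interactive, queue them and replay on readiness, which is exactly what the old terminal systems effectively did.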
"Expert-focused" UIs don't have to be bound to the terminal. That was all an accidental thing.
Expert-focused UIs are nearly dead because all our UI frameworks actively block them. That is also not a fundamental fact of computing and can be undone.
I doubt we'd be doing full page redraws and slinging json of all things. I think what you'd end up with is something that looked a little more like a game engine than react. That would be a fun project to work on.
I certainly wouldn't stop at input buffering. Another I'd want is some sort of systematic keypress discoverability. You still probably want a conventional GUI at least mostly available but using the GUI ought to be constantly teaching the user how to do whatever it is they just did with the keyboard instead. This might as well get wrapped into the framework, to standardize it and to make it easy to use.
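One hedged sketch of systematic keypress discoverability: a registry (hypothetical names, plain Python) where every action carries its shortcut, so menu labels always show the binding and mouse use returns a teaching hint:

```python
class ActionRegistry:
    """Every action is registered with a shortcut; menus and
    post-click hints teach the keyboard path automatically."""
    def __init__(self):
        self.actions = {}

    def register(self, name, shortcut, fn):
        self.actions[name] = (shortcut, fn)

    def menu_label(self, name):
        shortcut, _ = self.actions[name]
        return f"{name}\t{shortcut}"      # binding is always visible

    def click(self, name):
        # The mouse path still works, but nudges toward the keyboard.
        shortcut, fn = self.actions[name]
        fn()
        return f"Tip: next time press {shortcut}"

reg = ActionRegistry()
reg.register("Save", "Ctrl+S", lambda: None)
print(reg.menu_label("Save"))  # Save	Ctrl+S
print(reg.click("Save"))       # Tip: next time press Ctrl+S
```

Baking this into the framework, rather than leaving it to each app, is what would standardize the teaching behavior.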
Unfortunately, one of the many shortcomings of the web as an application and GUI platform is (a) no menu bars, (b) no context menus, (c) no accelerator key framework, (d) keyboard focus is often nowhere in particular or somewhere stupid that will do something bad if you touch the keys.
(The other tool that comes to mind is Emacs, but that's appreciably less mainstream than the command line in general.)
I worked at an old grocery store running the front office as a teenager through college maaaany years ago, and everything was a keyboard shortcut, non-touchscreen driven interface from probably the early 90’s. It took some time to get used to and train others on, but I could fly through those interfaces in a way a modern UX simply does not allow by focusing on the lowest common denominator for usability instead of any attempt to cater to ‘experts’: aka the people who have to use those interfaces day in and day out.
Sure, the modern interfaces look great (usually) and in the ideal state anyone can pick it up and use it without much instruction. But there’s no attempt to focus on the poor soul who has to use it in their day to day and just wants to get it done as quickly as possible.
Once I learned how to edit using the buttons and the scroll wheels, I could fly through putting together a news segment with all kinds of cuts, fades and overdubs. I remember thinking, I couldn’t believe how fast my fingers were flying.
The closest thing I can compare it to is learning how to play a guitar; after a certain point muscle memory takes over.
You’re lucky to get any tooltips nowadays.
I would use the desktop client, but it doesn’t have an interface to hyperlink to files in SharePoint.
Would they even know? There was a time when there was such a thing as domain expertise, and when usability studies were an integral facet of many SDLCs. Those days seem long gone now. The industry seems far more content to haphazardly adapt OTS solutions or create something that's "pretty" or "modern" at the expense of being functional.
Key point. We are now in a UI idiocracy.
It needs to be easy and quick. Doesn’t matter if they are pros or people doing college jobs.
In industries with high staff turnover, it's probably better to have slower workers but with much lower training costs.
Web tech definitely makes it harder to give a really snappy experience.
As a related aside, I recently visited Microcenter where they run their own PCI-compliant PoS system that's all text based. The staff all knew how to zip through various text menus to apply discounts, warranties, etc.
There have never been enough highschoolers to fulfill all the """not real""" jobs that people keep insisting shouldn't pay enough to live. Hell, you used to be able to live on flipping burgers. In plenty of modern and democratic countries you actually still can live while flipping burgers.
I do not think a language barrier is at play; Norwegians and the other Scandinavians are among the most fluent speakers of English as a second language in Europe, surpassed perhaps only by the Dutch.
Huh. Any etymological connection between "helvetes" and "Helvetii"/"Helvetia"?
I also worked on an ancient terminal interface for a complex service business. It was amazing. Everything was instant, and after a short learning curve, we had incredible power at our fingertips. If I had to do that job on a modern web interface with 4-5 seconds of spinners on every page, productivity would plummet! How often do I stand in front of a desk while someone inches through a more modern system, wondering what is taking so long?
I believe it was Ted Nelson who stated, as a commandment: "A computer must never make a human wait."
We went from a focused instantaneous UI in DOS to a paradigm that most people cannot handle...multitasking.
Some have mentioned density but DOS business applications weren't terribly dense. There was only so much screen area and we had conventions for how to use it. The apps would flow between screens much like modern Windows UI or mobile apps. The move to Windows presented users with a slow dense mess. My grandmother could set the screen on fire using Lotus 123 keyboard commands and bang out financial statements all day long. I was maintaining Lotus on her machine for decades after the Windows switch so she could use her keyboard and get shit done.
Quicken was awesome on DOS. We ran our bookkeeping business on Lotus 123 and Quicken for years (1980s-1990s). Productivity crashed after that and never recovered.
If you want collective outcomes that are different than the sum of uncoordinated individual action, you have to design them. Don't talk about "we." Who specifically is doing what, and for what purpose, and why would they do that, when they could just not?
Answering those questions often shows you that the problem isn't what you thought—because the mental model of a "we" that does things is so harmful.
Or you end up with a plan to solve the problem, so win-win.
When combined, you get a fundamental metaproblem:
1. You can't solve a problem you don't understand.
2. Moralizing is more satisfying than understanding.
3. This is a problem, which can't be solved if people choose to moralize instead of understanding.
Presumably the CEO would believe that improving the quality of their company's products will lead to increased profits/revenue.
The tools that the world runs best on were built over a period of man-years that were not mythical either.
Plus if you go back far enough, you hit a point where often the key people involved were 10x more suitable than the average or below-average candidates who would be most likely to come under consideration today.
To have the most realistic chance of success, you just have to allocate 10x the resources that you think it would take on first blush. Especially the amount of calendar time before deployment. Simple ;)
Plus if the world in question is really already running without the replacement tool, when are you willing for the world to come to a halt during a pitstop for the proven tools to be retired while the new replacement is inaugurated?
Otherwise, the most talented and productive tool-building team in the world would further need a much more-rare capability that was not even required of the original builders, the knack for servicing airplane engines in flight.
I figure that would be when.
(by him I am referring to the public identity of the process that generated the message above)
> It feels like we created these tools when software became a thing. Then, we forgot them.
Sometimes the best tool is the hammer you already have. A lowly, simple, reliable hammer. I know software isn’t the only field where we like to chase the new shiny thing and call it “modern”. But, sometimes it really seems like that feeling is deeply embedded in programmer culture.
I’m not against building new tools, but they have to solve legitimate problems. And an old GUI from the 90s doesn’t seem like a legitimate problem to me. Have you ever seen how quickly a trained worker can move around on a TUI from the 80s?
Scaling? Data interoperability? Data models and new use cases? This is not an exhaustive list, but these all seem like good reasons to revamp tools. But, if the problem is attaching one piece of wood to another with a bit of metal… just use the hammer.
The new app was browser-based, looked really pretty and basically forced you to use the mouse. I think we lost most of the TUI-guns, but the replacement staff were cheap and the training was simple. Not sure if it was an economic success, but that's progress.
No doubt because the newer stuff is even worse.
[0] https://xkcd.com/2030/
--
1: https://www.thisamericanlife.org/848/transcript
Claiming the tools are bad, without any examples of specific tools that the author actually thinks are bad; and of course any rationale leading to such a conclusion is nonexistent. This is basically content-farm quality.
Low-quality punctuation too! :)
The hospital software is not updated on a frequent basis because updates don't generate more money. That's why hospitals, bakeries, hotels, and many other businesses don't have top-tier UI/UX professionals the way YouTube, iOS, Netflix, Facebook, TikTok, and many many other websites, systems, and companies do.
Simply put, it would not make money! We live in a world of attention economy, where the more time people spend on screen, the more money companies make.
It's no secret at all.
Consider a hospital: many statistics can be collected to provide insights and make immediate decisions; faster algorithms, and new ones for problems we couldn't solve back then, have been discovered; the UI/UX can always be improved for productivity; etc. All of that makes money.
Software customers aren't, and shouldn't be, one-time shoppers; there's always room for improvement and new needs pop up all the time.
Do you know what makes money for a hospital? Buying a new MRI machine. They can directly charge customers for scans. In budget planning cycles any proposed IT upgrades have to compete against stuff like that.
And this is all completely correct. UIs have gotten worse and there’s no hope in sight.
But it is repetitive. And kinda annoying that every individual programmer understands and acknowledges this, but the industry continues this inane march “forward.”
I think you underestimate how many programmers nowadays either have never experienced the UX of an “expert keyboard driven workflow” for any significant length of time, or dismiss it as dispensable.
Fixing some very special such tool in a specialized industry is basically my job (in my case, the replacement is better in basically any sense).
The problem is: while the users like the improvements that I implement for them, management (who know next to nothing about software development) has a strong tendency to keep me on a short leash, believing there is far more important stuff that I should do than improving the users' lives and workflows, considering this a luxury it does not want to afford.
So, I can tell you when the (bad) tools that run the world will become fixed: when such managers die off. :-(
Sleek isn’t what matters in most domains. Fast is more important, but it suffers from questions of which definition of fast and how you measure it. In my experience working on a lot of enterprise systems, what users really want is software that fits the domain and is predictable and consistent.
Everything else is gravy.
That doesn't mean that they were safer or right keeping that running. Just that things are more complex than saying "it is outdated"
The cause of this is that, for the managers who commission and pay for the systems and for the developers who create them, the primary healthcare function is irrelevant.
To the managers, it is about billing insurance companies and monitoring productivity. They then move on to another management gig at another corporate.
To the software developers it is about getting paid for the software contract and burnishing their CV. They then move on to another software gig at another corporate.
Neither the software developers nor the managers have any interest in or understanding of the healthcare problem domain. Since this does not affect getting paid, nobody cares about this situation.
https://www-nrk-no.translate.goog/mr/helseplattformen-i-hels...
https://www-nrk-no.translate.goog/trondelag/steinkjer-vraker...
https://www-helse--mr-no.translate.goog/fag-og-forsking/samh...
The software is also highly configurable, so many of the usability and workflow problems that customers experience are self inflicted. The leaders at every provider organization wrongly believe that their way of doing things is the best so they incur high costs in customizing the software to match their processes instead of just changing their processes to match industry best practices.
New systems and rewrites also require you to reconsider bloat and how workflows should function, when upper management just wants you to recreate things the exact same way, obviating any potential benefits achieved...
Governmental systems will be subject to complex regulations, specifications, etc. Very difficult to address.
The problem with schools and libraries is that network effects meant that there were lots of people who could dork with Windows until it sort of worked. With Linux, those people were much fewer and further between.
On the other hand, a lot of ERP software is poorly designed regardless of its appearance. Most of it looks like forms directly tied to an SAP system or SQL database. Field requirements are unnecessarily strict: arbitrary minimum and maximum lengths for names, requiring single-word first and last names, making all fields required, or having some fields only allow selecting one of a handful of values. These interfaces are thin veneers over a table in a database whose requirements are built on a set of management reporting functions rather than on how the front-line staff collect and use the data.
https://www.oracle.com/erp/financial-close-product-tour/
https://www.oracle.com/erp/risk-management/#tour
http://hydraulic.dev
Opted not to use /until recently/
See: https://viewfromthewing.com/airlines-are-running-out-of-flig...
I mean apart from classic roguelikes and vim.
What can I download and use right now on linux?