>Also, I think today we’re kind of overburdened by choice. I mean, I just had Fortran. I don’t think we even had shell scripts. We just had batch files so you could run things, a compiler, and Fortran. And assembler possibly, if you really needed it. So there wasn’t this agony of choice. Being a young programmer today must be awful—you can choose 20 different programming languages, dozens of frameworks and operating systems, and you’re paralyzed by choice. There was no paralysis of choice then. You just start doing it because the decision as to which language and things is just made—there’s no thinking about what you should do, you just go and do it.
For context this book is copyrighted 2009 so this interview is more than a decade old, and I'm sure many things have changed since then.
To add to that, you could learn to read/write a serial port and poll a mouse to find out where the pointer was and whether a button had been clicked, and that was cutting edge. At that point you were doing things that much commercial software didn't even do yet.
Just a few simple I/O things. All the rest was whatever logic you coded up.
You ran it and it either worked or didn't, and if it didn't you knew that was a bug in your code. Code that you knew because you'd written it.
No stack of components, no frameworks and libraries and dependency manager configurations and VMs and container configurations and network connections and other programs that might interact with it. It was just you and your code.
And sure, the IDEs today are technically better. But they're so much more complex that you could spend a lifetime studying them and still not understand all their functions. Turbo Pascal's IDE, on the other hand, was much simpler. You could easily grok it entirely within a week's normal usage.
So without all the cognitive overhead of a modern code ecosystem, you could just focus fully on solving the problem, doing the logic, figuring out the best way to do it.
Nowadays you spend most of your time figuring out all the tools and dependencies and configurations and systems and how to glue them all together and get them actually working together properly. There's relatively little time left for the actual problem you're trying to solve and the actual code that you're writing. Actually doing the thing that achieves the objective is kind of just an afterthought done in your spare time when you're not busy babysitting the ecosystem.
[1] That Upton Sinclair quote about salary.
The choice of languages is wider, but normally it's the same choice of two, C or assembly.
You can attach more interesting peripherals.
I reckon that because of the constraints it's a really good learning environment.
And building something wonderful of one's own was within reach because of the things you point out. Of course context matters here, having nothing to compare it to, but every learning point seemed like magic to me.
Out of necessity, learning how the hardware worked and relating that to software was such a big part of the culture. Books and magazines wrote about CPU architecture, address and data busses, video programming etc.
Being into electronics at the time I constructed external address decoders and data line drivers (7400 series) to make lights and relays turn on as well as being able to sample and store an external voltage in memory via homemade R2R ADCs.
Years later, I was writing control and logging software in TopSpeed Modula-2 running on DOS as part of my postgrad work. I'd arrive, somewhat stressed, in the lab after a fairly lengthy two-train commute, then relax to the sound of the hard disk as the PC booted up. Then it was me, the machine, a single language and a couple of RS232 serial ports for I/O. Bliss!
This is because you lived in the microcomputer world back then. Mainframe and minicomputer programmers lived in a very different world, and had many of the same problems: OS, permissions, concurrency, other users on the same machine, scalar vs vector processors, etc.
That "No" (sort of) went out the window as soon as you were running a BBS on that thing. :)
However, everything feels vastly more complicated. My friends and I would put together little toy websites with PHP or Rails in a span of weeks and everyone thought they were awesome. Now I see young people spending months to get the basics up and running in their React front ends just to be able to think independently of hand-holding tutorials for the most basic operations.
Even business software felt simpler. The scope was smaller and you didn’t have to set up complicated cloud services architectures to accomplish everything.
I won’t say the old ways were better, because the modern tools do have their place. However, it’s easy to look back with rose-tinted glasses on the vastly simpler business requirements and lower expectations that allowed us to get away with really simple things.
I enjoy working with teams on complex projects using modern tools and frameworks, but I admit I do have a lot of nostalgia for the days past when a single programmer could understand and handle entire systems by themselves because the scope and requirements were just so much simpler.
Frontend devs who were around before the advent of the major web frameworks, and worked with the simplicity of a JS script plus the DOM (or perhaps jQuery as a somewhat transparent wrapper), benefited from seeing the evolution of these frameworks, understanding the motivations behind the problems they solve, and knowing what DOM operations must be going on behind the curtain of these libraries. Approaching it today not from the ground up but from the high level down is, imo, responsible for a lot of junior web devs having a surprising lack of knowledge of basic website features. Some student web devs, probably a minority, may get conditioned to reach for a library for every problem they encounter, until the kludge of libraries starts to cause bugs in and of itself, or they reach a problem that no library solves for them. I feel this is a particularly bad outcome because the web is uniquely accessible to aspiring developers. You can achieve a ton just piggybacking off the browser, the DOM and its API, the developer tools in the browser, etc. But not if you are convinced or otherwise forced to approach it only from the other side: running before you crawl, or trying to set up a webpack config before you even understand script loading.
Be right back, writing a new design doc for a messaging service and protocol spec to go with it that I can use to pad my next review cycle.
About half the time you can’t, because it’s not actually a form and they forgot to add a handler for Enter.
When doing a take home exercise the candidate is desperate to figure out what you are judging on. The more you make that explicit the better your outcomes will be.
Right now your process biases for enter handler adders. Is that your intent?
When I give candidates prompts / questions / scenarios, I try to include some specific instructions, but also leave some details out. The idea is to come up with a prompt that junior developers can still work with by going head-on, but which has some open-ended nature to it, to see how people respond to things which are open-ended or even ambiguous. It's not a trick or trap; candidates are explicitly told what's up, and that I'm evaluating not only their ability to solve a problem, but to define the problem in the first place.
If you want to evaluate a candidate's ability to cut through ambiguity or manage sprawling scope, evaluate that and only that in a specific exercise. Don't just build a shoddy coding test and then rationalize its weaknesses by saying that good candidates will succeed despite the flaws of the test. That's both disrespectful and un-rigorous.
Junior developers get near-real-time feedback from senior developers who are supervising them. Senior developers have to be able to give that feedback, have to be able to anticipate user needs, and have to be able to run long-term projects where you may have to work for days, weeks, or months before receiving critical pieces of feedback.
> If you want to evaluate a candidate's ability to cut through ambiguity or manage sprawling scope, evaluate that and only that in a specific exercise.
This is a common mistake that I see inexperienced interviewers make. Trying to throw more detailed and specific exercises at candidates is a fool's errand at best, and at worst it means that you're putting candidates through additional tests (and your acceptance rate will suffer).
The main problem with ambiguity is that it appears unexpectedly. If you give someone an ambiguous problem and say, "Tell me what is ambiguous about this problem," then you're not testing what you want to know. What you actually want to know is whether candidates can recognize ambiguous problems without being prompted to recognize them--and the reason for this is that ambiguous problems are extremely common in real-world scenarios.
An ambiguous problem is not a trick or a trap. It is explicitly part of the interview process, and interviewees are given guidance that the problems they are given may not be precisely defined.
> Don't just build a shoddy coding test and then rationalize its weaknesses by saying that good candidates will succeed despite the flaws of the test.
Why do you say that the coding test is shoddy?
My observation is that a large percentage of candidates will succeed at coding tests if you give them an ambiguous prompt. In practice, they will either ask questions to resolve the ambiguity, or just pick a way to resolve the ambiguity for the purposes of the test. This matches real-world scenarios--you are often going to encounter ambiguous or incomplete problems in the real world.
If you want a precisely-specified coding problem, then go to Hacker Rank or Project Euler or something like that, or join a competitive programming team.
Have you tested this assertion with data? Because I’ve built interview pipelines several times now, and the data I collected showed the exact opposite. The more specific a test was for the trait you wanted to select for, the better the results you’d get across all metrics, interviewers, and candidates. It’s almost my defining characteristic of good selection criteria after two decades of interviewing.
My personal experience is that people new to interviewing are the ones who think that making individual interviews more precise will improve the process; in practice, improvements to the overall interview process don’t come from improving how good individual interviews are.
But, the format is not really the point. The point is to have a specific thing you are trying to discern from your filter and to focus your efforts on making that the only thing you are judging on.
Sometimes it makes sense to do something else instead, but if so, you should handle it in a sane way and actually do that something else. Not just suppress it.
Desktop frameworks “committed” input on enter and most often focused the next field, so you could skip them by enter enter enter. This worked correctly since FoxPro/TurboVision/Norton times. Only Ctrl-Enter would press a “default” button out of order.
But web has its own ways as usual.
Browsers have a standard, default, expected behavior. Sometimes, in rare cases, it makes sense to break that in order to do something else instead. But you shouldn't just silently break it for no reason other than to confuse the user.
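For reference, the default in question here is implicit form submission: a plain HTML form submits on Enter with no JavaScript at all. A minimal sketch (the action URL and field names are made up):

```html
<form action="/search" method="get">
  <!-- Pressing Enter in this text field triggers the form's default submission -->
  <input name="q" type="text">
  <button type="submit">Go</button>
</form>
```

Breaking this requires actively intercepting the event; leaving it alone is the zero-effort path.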
[0] https://www.theworldsworstwebsiteever.com/
So perhaps that isn’t the best litmus test? I wonder if it’s written down that it should work that way as part of the assignment.
Breaking default browser behavior because you find it unintuitive is generally a bad idea.
Now if you’re dealing with something that isn’t really a web form—as in you’re overloading input fields for some interactive, non-form-like behavior—then I can see it.
In my case, if the person had some well thought out reason for doing it, I might let it slide. But the vast majority of times I’ve seen it, it’s because the person doesn’t even know how to use forms. Not understanding the underlying technology at at least a very basic level is a strike against you in my book.
Again, it’s not that I disagree with most of your premise, but in fact, as the comments (not just mine) show, people have issues with submitting too early, so it can be a good decision to break that, depending on context. Fewer errors. Which is why I brought up how the problem is presented to the candidate. You might be filtering out people based on something less universally accepted and understood than you think.
If you accidentally break it because you don’t understand it, that’s a strike against you in my book. If you consciously break it for a good reason and can coherently defend that reason, that’s a different story.
At least you'd have to get a sample big enough to make a statistically significant conclusion.
And the burden of proof is on you, because you are changing the default, not the other way around.
Think of any other products, they all have established default behaviours, yet it's also very easy to find someone who's never used a product before and finds it "unintuitive".
And intuitiveness is only one part of usability anyway.
Newbies drop directly into webpack/React and are overwhelmed, or have a lot of trouble getting some details right.
Unfortunately, a lot of details are lost to time; they're only accessible if you build stuff basically from the ground up, so that you first understand why we needed these frameworks and what problems they were solving.
There is also a bunch of people who go and rediscover basic stuff and claim "you don't need a framework - vanilla JS is enough" - but they also miss context and never ran into the problems that were pain points before we had frameworks.
I teach an online web development course for a university as a side gig. Our students are forbidden from using any third-party code, libraries, frameworks, etc. They have to do everything with native HTML, CSS, and JS.
On a somewhat related but tangential note: I also now have clients demanding certain programs and ecosystems, which is absolutely ridiculous to me. The software I use to give you a final cut should in no way be determined by you. Yet somehow we have ceded that ground!
The litmus test for abstractions getting out of control is whether a KPMG consultant brings it up while sipping a gin and tonic in business class.
Another problem is the explosion of DSLs. Everything is YAML, and you spend ages learning things like Terraform and Docker Compose syntax.
As far as Terraform/HCL and CloudFormation/Yaml, the alternative is writing code to do the same thing in your language of choice using the CDK with either CloudFormation or the recently ported Terraform/CDK.
But if you do "a little bit of everything" and infrastructure on the side, you're bound to become a master of none.
One of the worst experiences in programming is writing CI/CD pipelines. One wonders why...
Like I wrote in my comment, how much specific knowledge you need will depend on how many specific tasks you perform. If you specialise in nothing, everything will have unknown depths. It will also dilute attention/focus, which in turn means you'll never be able to fully understand a specific domain or application. This was reflected in https://news.ycombinator.com/item?id=33056052 where the path it took for many developers and engineers in general to find a fitting solution is unknown to newcomers, and also simply not taught in favour of delivering "reviewables" in hopes of a positive review (https://news.ycombinator.com/item?id=33056705).
Sidenote: mapping ports used to be rather verbose; you'd have to include the address family, the address, and the port, on both sides of the mapping. That's 6 elements (excluding separators). So most applications, including Docker, made various parts optional. You can map two ports on any interface, or opt to specify one interface but not the other, etc. A novice user of a new application might be best served by not using any shorthand forms and only using the fully qualified names everywhere. By spelling out every option explicitly (including optional values) there is no more guessing about what may or may not have been configured.
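As a sketch of what "fully qualified" looks like in practice, Docker Compose's long port syntax spells out the elements that the short `"8080:80"` form leaves implicit (the service and image names here are just placeholders):

```yaml
services:
  web:
    image: nginx
    ports:
      - target: 80          # container port
        published: 8080     # host port
        host_ip: 127.0.0.1  # host interface address
        protocol: tcp       # protocol
```

Nothing is left to defaults, so there's no guessing which interface or protocol the mapping applies to.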
Agreed, LAMP was just so damn fun in how you could go from zero to a fully functioning site in a day or two. Having to manage all the statefulness of fetching & displaying data asynchronously from the client adds an incredible amount of complication both in theory and in practice.
Also agree that the old tech wasn't necessarily better either - but it sure would be cool if someone could replicate the developer experience from back then and produce a result that's up to modern engineering and UX standards.
Something I am appreciating about Svelte/Kit and Phoenix is that they are admitting what was right about PHP and server-rendered webpages. I'm a frontend engineer and think that SPAs have their place, but frontend JS represents a kind of tyranny seen nowhere else in tech.
The more complexity we can now handle, the more complexity we will create.
This made me remember how acceptable a developer-designed utility used to be. Now, you can barely launch an MVP without finding or paying for design and high-quality UX/UI. If you do, it’s likely going nowhere in terms of traction. I’m sure this only seems to be the new rule and there are a few exceptions. But not many.
Even Stripe all those years ago really took off after investing in design. They’ve remained rather polished. But they’re also an exceedingly well funded operation.
Gave me flashbacks to when I was younger. I would work on a problem until hitting a wall I couldn't get around then go into Books A Million with pencil and paper and copy concepts/algos out of CS books. I was too poor to spend $50+ on a book at the time. Now, anything a new programmer wants to know is just a Google search away.
But, there is so much more that just diving in can be hard. Even simple things are complicated I think mainly because expectations are so much higher.
I agree the compensation is higher. I don’t agree the respect is any higher. Software engineer is highly associated with terms like neckbeard, redditor, incel, autistic, etc.
I’m extremely hesitant to tell anyone I’m a software engineer. If anything - I’ll lie and say I do product management just to avoid the association. People treat me way better when I say I’m a PM instead of an eng.
As much as people decry the internet and its role in modern society, I’m glad that I generally have easy access to massive archives of knowledge.
We have 1,000 more solutions to 1,000 more problems. We have extensive documentation on all the new things. Documentation is mostly focused on nouns, sometimes on verbs. The "what" and the "how" are easy to find.
What we don't have is clarity on how they fit together. The overwhelming majority of work done in software is fitting the pieces together in a way that works. The "why" and the "when" are really difficult to pin down.
The biggest overhead is trying to conceptualize the systems that make up the foundations of software: operating system libraries, compiler toolchains, shell environments, dependencies, etc.
Because there is so much mysticism involved, people who have spent years or decades treating operating systems, package managers, development environments, etc. as playgrounds to explore have an advantage that is difficult to articulate, let alone teach.
Anyone learning software today is presented with a lot of exciting opportunities to explore programming itself. There are handy web-based editors where you can write programs that do input and output all in the browser itself. No need to learn about shells or packages or git...
But those things we factored out of the learning experience are probably the most meaningful subjects to learn about, if you want to actually create something. It's really tricky to find direction to go from vague concepts to working projects.
This is precisely why we've succeeded, so far, at hosting conferences [0] dedicated to this exploration. Enough people realize "something's amiss", that something is lacking; once they're aware of what it is, they search for it hungrily.
[0] https://handmade-seattle.com
Today, tools are incredibly better; compilers, debuggers, profilers. I'll take something from JetBrains or Visual Studio any day over what I had available in the 1990's. There were some gems back then, but today, tools are uniformly good.
What has gotten difficult is the complexity of the systems which we build. Say I'm building a simple web app in JS with a Go backend and I want users to have some kind of authentication. I have to deal with something like Oauth2, therefore CORS, and auth flows, and to iterate on it, I have some code open in Goland, some code open in VS Code and my browser, and as a third component, I have something like Auth0 or Cognito in a third window. It's all nasty.
If I'm writing a desktop application, I have to deal with code signing; I can't just give it to a friend to try. It's doubly annoying if it's for a cell phone. If I need to touch 3D hardware, I now have to deal with different APIs on different platforms.
It's all, tedious, and it's an awful lot of work to get to a decent starting point. All these example apps in the wild are always missing a lot of the hard, tedious stuff.
Edit:
All that being said, today I can spin up a scalable server in some VMs in the cloud and have something available to users in a week. In the 1990s, if there was a server component, I'd be looking for colo facilities, installing racks, having to set up software, provision network connections, and it would take me ten times as long to get to the first prototype. I'd have to write more from scratch myself. Much as some things today are more tedious, on net, I'm more productive, but part of that is more than 25 years of experience.
Been programming since the late 70s. Graphics are in many ways WAY easier than they were in the 70s, 80s, 90s. Sure, there's Metal, DirectX, Vulkan, and OpenGL, but you can still use OpenGL on pretty much all platforms, or use something like ANGLE.
Back in the 80s and 90s, you didn't even have APIs; you just manipulated the hardware directly, and every piece of hardware was entirely different. The Apple II was different from the Atari 800, which was different from the C64, which was different from the Amiga, which was different from CGA, and different from EGA, and different from Tandy, and different from VGA, and different from MCGA. The NES was different from the Sega Master System, which was different from the SNES, which was different from the Genesis, which was different from the 3DO, which was different from the Saturn, which was different from the PS1, etc... It was only around 2005-2010 that it all kind of settled down into various APIs that execute shaders, and everything became mostly the same. The data and shaders, at an algorithmic level, that you're using for your PC game are the same or close to it on Xbox 360, PS4, PS5, Xbox One, Mac, Linux. Whereas all those previous systems were so different that you had to redo a ton more stuff.
On top of which there are now various libraries that will hide most of the differences from you. ANGLE, Dawn, wgpu, WebGL, Canvas, Skia, Unreal, Unity, etc...
I can still program mode 13h from memory, and I can only imagine the cool, arcane witchcraft that you have acquired having started much before me.
To me, at least, OpenGL was my favorite; I mourn its death, but Metal, Vulkan and DX are close enough. What drives me up a wall is 3D in the browser, since the difficulties of dispatch cost from JS land are profound.
Can I ask maybe a personal question? Do you worry about becoming obsolete? I’m at the 16 year mark, and I was wondering when’s the appropriate time to panic. You seem to have made it about a decade longer than I have, so I figured I’d ask some tips.
The hardest thing about programming now vs then seems to be staying relevant.
25+ years in, though, while I'm still an engineer in the org chart, I'm in charge of mentoring a lot of younger engineers, and my job has turned more into keeping them from making big mistakes and helping them grow as engineers, versus producing code myself. Through them, I can get much more done in aggregate, than if I sat down and did it myself, even though I'm probably faster than any engineer that I mentor.
Given what I have done, startups are a great place to learn via trial by fire, while big companies are good places to earn some big bucks while in between cool startup jobs.
As an example, I've recently shifted from manager back to hands-on tech, and then from platform work (building tools for devs) to security - and knowing the engineering space makes me more valuable in the security space. I'll do this for a few years, then look for the next interesting jump. Nothing I've learnt is ever wasted - and I started when everything had to be patched to work on linux and sendmail.cf files were state of the art ;)
Also, flipping between the big three types of workplace - startup, enterprise, and consulting - adds to your understanding of the world and overall value.
When I started out, old engineers weren't a thing - it used to be accepted wisdom that this was a young person's field. That's not true any more, and I doubt it'll ever be - just keep learning, pace yourself (marathon, not sprint), enjoy yourself, and keep expanding your awareness/knowledge.
When I first started programming all of my tools had a simple workflow:
* Write a single text file
* run a single command to build (cc thing.c)
* Run the resulting file as a standalone command
People learning to program are often new in general. They're figuring out their text editors. Figuring out how to run programs. Figuring out so many basic things seasoned developers take for granted.
I became quite fluent in C, writing many, many useful programs with just a single text file. By the time I had a need to learn about linking multiple files in large projects I was already fluent and comfortable with the language basics.
Contrast this with modern environments: I need to learn whole sets of tools for managing development environments (venv, bundle, cargo, etc etc etc). These development harnesses all change rapidly and I am constantly googling various sets of commands and starter configs to get things running. These are all things that a seasoned developer will be constantly dealing with on a complex project, but it seems like little effort has been put into creating basic defaults to simplify things for beginners.
Yes, most commercial software packages were written in C. They certainly weren't in one file. They were large systems that took hundreds, sometimes thousands of files and 100k's to millions of lines of code. If anything, we had to write more code to do things because pre-packaged libraries weren't as comprehensive back then. I still remember waiting hours and hours for our application to build. And the old timers told us that that was blazingly fast, lol.
I would agree with a previous poster that there are many more choices today. And I guess if you suffer from a fear of making the wrong choice, that is a problem. But the other side of that is that literally thousands and thousands of examples and even robust code libraries are now available for free that you can drop in and use. That is a HUGE plus.
The interface for beginners scaled all the way down to a very basic single text file, and most beginners would program for months or even years without using those things. It wasn't necessary to teach these tools in school - you could complete an entire degree writing single-file C programs without ever using an IDE.
Many utilities were distributed as a .c file and a Makefile and that's it (before the rise of autoconf)
all:
	cc -o myprogram myprogram.c

clean:
	rm -f myprogram.o myprogram
They're extremely simple in simple scenarios. It's just a simple format for writing down the commands you run while working.
But I think you're right in many ways though. For seasoned devs, cli tools give a lot of flexibility and allow dev tasks to be automated in pipelines more easily, but it requires one to read documentation or use the help subcommand to discover what else you can do. Which is not a big deal when you are experienced, but I definitely remember struggling early on with things like that when I was self-teaching how to program.
Back in my day, being on the path was sufficient. And it is for all the other Java projects. I spent too long sorting that one out.
Programming is programming, and the language doesn't make a ton of difference to me, though I do have strong preferences.
But that is the -easy- part. The hard part is everything else. Jenkins? GitLab runners? GitHub Actions? There are at least 5 that people regularly use, and all have their own way of doing things and their own syntax.
To Docker or not to Docker. Kubernetes? ECS? RPM? How about just lambda functions?
What about your config management.. Ansible? Chef? Puppet? Salt?
It's the worst part of switching jobs to me. Especially when feeling like you have to invest time learning a system that's falling (or even has already fallen) out of favor.
I used to love the idea of owning the whole pipeline. Now I just want to sit in a hole writing solid code, and let someone else handle all the CI and systems parts.
Then things changed around me quickly, and one day I woke up and realized that I had hardly any idea what was going on any more. Cloud ops, docker, kubernetes, etc. It was exhausting trying to keep up and it made me lose interest in getting better.
I always knew I wasn't going to be a programmer forever, but modern devops stuff accelerated the transition out.
I like the idea, but people keep talking about "full-stack developers" and "DevOps" (or "DevSecOps").
Also, many people seem to interpret Agile as "people are completely interchangeable", which means that you need to know all technologies used in the project, because any ticket can be assigned to you.
There is simply too much shit, too much stupid and not enough time to understand things without sacrificing family, relationships, or health.
The world is messed up, and it's not worth understanding it anymore; I don't think the payoff is worth the cost.
Interesting career move! Are you using your software skills in any way (ie does it involve software patents?) or was it just a hard gear shift because of other interests/factors?
Also, even if you understand and agree with Werner Vogels' mantra "everything fails all the time", it's incredibly challenging to make a truly robust distributed system. There's just so much happening so rapidly, low-probability problems become consistent failures as you scale, and the wrong recovery approach can have non-obvious second-order effects leading to bigger problems.
Even UI is so damn hard these days because it's all JavaScript with a truly pitiful standard library and needs to handle 50 different screen sizes and browsers and run on both gaming desktops and 30 year old phones running leaked android pre-releases.
I'll say it. I miss my days of programming in Java Swing. May GridBagLayout forever bless your pack()s.
I usually call it TREASURE_MAP so that it won't get skipped over like README is sometimes.
I’m finding at work, with the mass reliance on AWS stuff, that this skill is super necessary for a lot of code these days, because if you have to wait 10 minutes for your code to deploy before you can see if it’s right, you need to be able to see bugs before you run it.
Even recently, an engineer on my team spotted a retry technique where the system tried to check if the DB's replication mechanism was slowing down, and then would just re-enqueue the message for later. This ended up doing a bonkers amount of wasted work: when DB replication started getting overwhelmed, messages would constantly be re-enqueued and never processed. So the queue system ended up with an enormous number of messages that were just popping off the queue, doing nothing, and hopping right back on.
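One way to avoid that busy-loop failure mode is to cap the number of attempts and delay each re-enqueue instead of pushing the message straight back. A minimal sketch in Python; all the names here (the queue shape, the `attempts` field, the delay values) are purely illustrative, not from any particular queue system:

```python
import heapq
import itertools

_seq = itertools.count()  # tie-breaker so heapq never compares message dicts

def requeue_with_backoff(queue, msg, now, base_delay=1.0, max_attempts=5):
    """Re-enqueue msg with a delay that doubles on each attempt.

    Returns False once max_attempts is exhausted; at that point the caller
    should route the message to a dead-letter queue instead of retrying,
    so an overwhelmed DB isn't hammered by the same messages forever.
    """
    attempts = msg.get("attempts", 0)
    if attempts >= max_attempts:
        return False
    delayed = dict(msg, attempts=attempts + 1)
    not_before = now + base_delay * (2 ** attempts)  # 1s, 2s, 4s, 8s, 16s
    heapq.heappush(queue, (not_before, next(_seq), delayed))
    return True
```

The consumer then only pops messages whose `not_before` time has passed, so a lagging replica sees exponentially less retry traffic rather than a constant churn.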
1. Concurrency. Multiple cores are a completely normal thing now, so having to think about how different threads may interact went from a theoretical concern to a very practical one.
2. Dependencies. Back then you could just turn on the computer and start coding. Today many projects have large numbers of dependencies that need installing, compiling, or setting up, and weird problems can happen. I have an issue where VS Code just refuses to autocomplete in the test section of my project. Why? I have no clue, and VS Code is a giant of a thing. It's quite easy to spend days or even weeks setting things up and working out issues with things that aren't even the thing you were trying to write.
3. Teamwork. Modern computers allow for large programs, which require teams to develop. A lot of the work in building modern successful software is in organization, record keeping, documentation and working with other people.
4. Security. Pretty much everything interacts with outside untrusted inputs, and so it's far more important than before to treat every input correctly. Anything from image loaders to parsers to APIs may be exploited.
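Point 1 is easy to demonstrate. A minimal Python sketch of the classic lost-update race (names illustrative): with the lock, the count is always exact; without it, two threads can read the same value and silently drop an increment.

```python
import threading

counter = 0
lock = threading.Lock()

def add(n, use_lock):
    """Increment the shared counter n times."""
    global counter
    for _ in range(n):
        if use_lock:
            with lock:            # read+write become one atomic critical section
                counter += 1
        else:
            tmp = counter         # read...
            counter = tmp + 1     # ...write: another thread may have run in
                                  # between, losing its increment

def run(num_threads, n, use_lock):
    global counter
    counter = 0
    threads = [threading.Thread(target=add, args=(n, use_lock))
               for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

safe = run(4, 50_000, use_lock=True)    # deterministic: always 200000
racy = run(4, 50_000, use_lock=False)   # may come up short
```

On a single-core machine with cooperative scheduling this bug could hide for years; with multiple cores it becomes a practical, reproducible concern, which is exactly the shift point 1 describes.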
That experience is why I relentlessly bang on the drum of "do not swallow errors", because it makes troubleshooting indescribably hard.
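A minimal Python sketch of the difference (function names are illustrative): the first version turns a parse error into a mystery `None` that blows up far from the cause; the second lets the failure propagate with context attached.

```python
import json

def load_config_swallowed(text):
    """Anti-pattern: the error vanishes and the caller gets None, which
    surfaces later as a confusing failure far from the real cause."""
    try:
        return json.loads(text)
    except Exception:
        return None   # troubleshooting dead end

def load_config(text):
    """Let the error propagate, enriched with context the caller can act on."""
    try:
        return json.loads(text)
    except json.JSONDecodeError as e:
        raise ValueError(
            f"config is not valid JSON (line {e.lineno}): {e.msg}"
        ) from e
```

The `raise ... from e` form keeps the original traceback chained to the new error, so the person troubleshooting sees both where it was detected and where it actually happened.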
I would guess the more links in the call/dependency chain, the more opportunities for misplaced assumptions or laziness to sneak in, leading to your cited outcome.
Distributing native apps has gotten harder in some ways: code signing is now required to share binaries without scary pop-ups or the OS blocking them outright.
In 4 days' worth of work now, I couldn't even do an evaluation of which UI tech is still going to be around in 2 years...
These tools continue to get better every day. The target is only "good enough" and once reached it presents an outsized advantage over custom builds.
I fully expect no-code/low-code to grow in nearly permanent ways within many organizations.
https://apex.oracle.com/en/
The demo video takes you through creating a full blown database backed app with maps and other geo features, from nothing more than an initial CSV file. The code backing the app is represented as database tables, so you can use queries to explore the app.
The core issue is really the same one the no-code/low-code platforms have always had, or even that VB6 had - the ramp isn't smooth. Eventually you hit the limits of the tool or there's a business requirement the tool can't meet and you get stuck. Often that requirement may be something indirect and non-obvious, like growing the team to the point where you start needing 'real' abstractions, or keeping up with some new feature the underlying platforms added that competitors are exploiting but which aren't exposed. Hence why so many companies have mobile apps that are just ordinary Android/iOS apps instead of written using low-code tools.
It's really hard to build something generic enough. MS Access was as near to it as was feasible I suspect.
BUT the performance is _very bad_ if you don't throw money at Oracle. The final nail in the coffin was when customers wanted us to build something the black box doesn't offer. You're in for a wild, wild ride, because whatever happens in the backend is undocumented and you cannot debug it properly.
The bigger pain, and one reason why desktop apps became less popular, is that with the rise of macOS and (to some extent) Linux, you need to distribute your app to three platforms, all of which use different code-signing technologies and approaches, none of which are portable, standards-based, or convenient. Also, software updates were long ignored by platform vendors.
Nowadays things are a bit different. Windows got MSIX, which is a real package manager and which can silently upgrade apps in the background on a schedule even if they're currently running. macOS has the widely used Sparkle framework for updates and of course Linux has had updating package managers for a long time.
Up until recently it was still a pain to actually use all those technologies, even though maybe developing your {JVM,Electron,Flutter,native,etc} app was itself quite pleasant and easy. My company has made a tool to fix that [1] and so you can now build self-updating Windows/Mac/Linux packages from your app files and all the signing is handled for you locally. It's an abstraction over the platform-native distribution technologies designed with an obsessive focus on being as simple as web app distribution is.
Making this stuff easy in turn opens up possibilities for (re)simplifying the dev stack. For example, in some cases you could now make an app that just logs in directly to your RDBMS. No need for a backend/frontend, REST, JSON, web server frameworks, giant JS transpiler pipeline etc. Just use a real GUI toolkit and connect it directly to the output of queries. Any privacy or business logic can be implemented this way using a mix of row-level security [2], security-definer stored procedures [3] and RDBMS server plugins (e.g. [4] or [5]). There are lots of nice things about this, for example, it eliminates a lot of nasty security bugs that are otherwise hard to get rid of (XSS, XSRF, SQLi etc).
[1] https://www.hydraulic.software
[2] https://www.postgresql.org/docs/current/ddl-rowsecurity.html
[3] https://www.postgresql.org/docs/current/sql-createprocedure....
[4] https://tada.github.io/pljava/
[5] https://pgxn.org/dist/plv8/doc/plv8.html
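For instance, the row-level-security approach in [2] can be sketched in a few lines of Postgres DDL. This is only an illustration: the table, policy, and role names are made up, not from any real schema.

```sql
-- Assumes Postgres; all names are illustrative.
CREATE TABLE documents (
    id     serial PRIMARY KEY,
    owner  text NOT NULL DEFAULT current_user,
    body   text
);

ALTER TABLE documents ENABLE ROW LEVEL SECURITY;

-- Each connected role sees and edits only its own rows; a GUI client can
-- then query the table directly, with no API layer in between.
CREATE POLICY owner_only ON documents
    USING (owner = current_user)
    WITH CHECK (owner = current_user);

GRANT SELECT, INSERT, UPDATE, DELETE ON documents TO app_users;
```

Because the filter runs inside the database, every query path (the GUI, ad-hoc psql, reporting tools) gets the same access rules for free, which is the security win the comment describes.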
In the past, when we had no Internet, or when the resources for studying were scarce, we would intently stare at the screen, trying to understand what the program does. You'd have to disassemble and debug the program and even try to explain it to your cat, dog, or a rubber duck. These days programmers are often not even familiar with the term "rubber duck debugging". We would study and read actual RFCs; programming wasn't a "fish for that stuff on Stack Overflow or Google" kind of thing.
We used to let our brains wander. Thinking about past and present. That allowed us to generate awesome, great ideas.
Today we have tons of tools and techniques that seem to be vastly better than the tools we had in the sixties, eighties, and nineties. But if you compare that with how much computational power, memory and storage capacity have expanded since then, and compare with how we utilize that power, somehow it feels like our applications are getting worse, not better.
It's nearly impossible today to perform any programming without the Internet. And that's why programming today is much more difficult than in the past. There's too much information. Too many ways to succeed, and even more to fail.
It's like trying to change the way the world works due to climate change. There's a massive amount of inertia and resistance to make the changes required for a better future.
I think the problem is continually compounded by the shorter and shorter attention spans that also allow the bigger and bigger longer term problems to be unaddressed.
“We should rewrite it all,” said Pham.
“It’s been done,” said Sura, not looking up. She was preparing to go off-Watch, and had spent the last four days trying to root a problem out of the coldsleep automation.
“It’s been tried,” corrected Bret, just back from the freezers. “But even the top levels of fleet system code are enormous. You and a thousand of your friends would have to work for a century or so to reproduce it.” Trinli grinned evilly. “And guess what—even if you did, by the time you finished, you’d have your own set of inconsistencies. And you still wouldn’t be consistent with all the applications that might be needed now and then.”
https://www.goodreads.com/quotes/9427225-pham-nuwen-spent-ye...
When I was a kid, my first computer had 4k of RAM; my next one had 48k. I learned a lot about how everything worked, because there was very little to it. Over the years things have grown, but I've had a chance to grow along with it.
But somebody starting out today starts in the middle of vast layers of complexity: from processors and hardware that are hugely more complicated, up through virtualization, containerization, rich OSes, languages and standard libraries with decades of history, tons of add-on libraries, and then out to user platforms with their own tangled ecosystems and decades of history. It's a lot!
I think it's so much harder now for developers to develop "mechanical sympathy", that intuitive understanding of the rhythms of machinery. Apparently slight shifts in code can mean three orders of magnitude of difference in performance. So much of what they deal with is historically determined. (E.g., what percentage of working developers has ever actually seen a carriage returning or a line feeding?) And it's all running on much shorter cycles with an ever-updating mass of tools, libraries, frameworks, and operating systems.
On the one hand, people can do some amazing, valuable stuff with very little training. That's great! But I think it's a lot harder to truly master the craft, and I'm concerned that a lot of programmers spend their time in professional contexts where the feedback loops are long or broken, such that they are encouraged to be less analytical than superstitious, cargo-culting their way from one overly aggressive sprint deadline to the next.
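The "slight shifts, orders of magnitude" point is easy to reproduce. A hedged Python sketch (data sizes chosen only to make the gap obvious): two membership tests that look nearly identical, one scanning a list, one probing a set.

```python
import timeit

haystack_list = list(range(100_000))
haystack_set = set(haystack_list)
needles = [-1] * 200              # worst case: every lookup misses

def count_hits_list():
    # `in` on a list scans element by element: ~100,000 comparisons per miss
    return sum(1 for n in needles if n in haystack_list)

def count_hits_set():
    # `in` on a set is a hash probe: roughly constant work per lookup
    return sum(1 for n in needles if n in haystack_set)

t_list = timeit.timeit(count_hits_list, number=1)
t_set = timeit.timeit(count_hits_set, number=1)
# Same answer either way; the list version does ~20 million comparisons
# to get it, the set version a few hundred probes.
```

Without some mechanical sympathy for what `in` actually does underneath, both lines read as "check membership", and nothing in the syntax hints that one is thousands of times slower.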
This has always been something that’s fascinated me, how some people don’t have this feeling of “mechanical sympathy”.
Feeling “bad” (not the exact right term but I don’t know how best to describe this very intangible feeling) for a high-revving engine, or a CPU wasting cycles, or a structural component being under too much stress. Even if all of those things are within spec.
It feels like the drive to avoid that feeling ends up creating better solutions: more efficient code, a better distribution of stress across a structure, etc.
But I wonder if it’s something learned, or something that people just “have” to a certain degree. Not trying to pathologise everything, but it feels like the sort of thing that would be correlated with ASD.
However, we have to set ourselves up so that it's easy to learn. With my early gear, I could hear it. Hear the disk seeks. Hear the bits streaming down the wire. I could see it via the blinkenlights.
In recent years, I've had to activate it, even build it. E.g., at the top of my screen I've got mini graphs of CPU, RAM, net, and disk activity. I'm constantly re-confirming and re-challenging my intuition of what's going on. And with distributed systems, I find various ways to keep feeding myself the sort of data that, over time, turns into intuition.
And that's so hard these days! Not sure how fresh-out-of-college developers are developing those intuitions, but I hope they're finding ways.
Whatever challenging "science" aspect I was expecting from my "computer science" skills just doesn't seem to exist. The concerns of logic and program efficiency/structure are still important, just not what I have to do most of the time when gluing stuff together. I'm trying to accept my fate these days and just be thankful I work in a well-paid field with good people, but it's tough as I feel it does affect my overall performance and personal enjoyment as flow is so hard to achieve anymore.
I am guilty as the next guy of throwing stuff that "just works" out there.
It is also important what kind of software you write - if you write frameworks, you better get unit tests and full blown process in place to make sure your framework will work in 10 years time.
If you work on a business application - in 6 months time your application might be gone or requirements change in a way that everything you wrote does not matter anymore and future proofing was just a waste of time.
I would like that more devs/software engineers understood which one they are writing.
I'd say if you write code with the assumption this might be true, you're almost guaranteeing it will be. Business requirements do often change, but typically that means modifying/refactoring existing code so that core components continue to work the same way, because those core requirements haven't changed. While you can make occasional assumptions about which requirements are more likely to change than others, I'd much rather write all my code on the assumption the requirements won't change too significantly, and that the tools, automated testing, and other processes are in place to support that. There's far too much "throwaway/POC" code that ends up getting used in production, often for years, then mysteriously stops working down the line because nobody ever assumed it would need to survive that long. Whereas I don't believe I've seen a project fail because too much time was spent ensuring code was well written and well tested.
This does indeed happen. Kent Beck tells a story of being called in to help a European insurance project where, sick of all the legacy code, they spun off a new company to start fresh. Teams of bright people spent years building The Right Thing and doing it The Right Way. But they never delivered anything actually useful, and by the time the sponsors were entirely fed up, they still couldn't promise anything soon. So they fired everybody, hired back one team's worth, and then started fresh on well-written code that actually did something immediately useful.
I think there's a middle path between "write garbage that succeeds" and "write perfect code that never gets used". I think it's narrow but doable, and it requires dogged attention to both "ship early and often" and "build in a way that's sustainable over the long term".
"write perfect code that gets used a lot, frameworks, libraries"
"write throw away glue code with use of libraries and frameworks"
Because for me "write garbage that succeeds" vs "perfect code that never gets used" is false dichotomy.
Where most devs want to write libraries and frameworks because these are places where "eternal fame" is - even better if you can write your own programming language that is the highest echelon of software engineering.
Ugly truth is most devs are driving a beat-up Honda, where Linus Torvalds, Anders Hejlsberg, and Bjarne Stroustrup are F1 drivers. You can rev up your Honda at stoplights all you want, but that is not the same league.
So most devs write business apps and many of them overshoot quality.
I think this can be true for sufficiently bad definitions of quality, which I agree are very common. To me, though, overengineering something doesn't really increase quality, because actual need puts an upper bound on quality.
But if your notion is that there's no place for throwaway code, I disagree. The trick with throwaway code is to actually throw it away. Temporary and permanent code, used appropriately, are both vital to projects that are resilient in the face of real-world circumstances like lack of certainty and changing needs. It's the third kind you have to watch out for: https://web.archive.org/web/20190709091156/http://agilefocus...
Oh, I definitely have. In fact, I'd wager that it's the leading cause of failure for programmer-led startups. It's far too easy to spend time worrying about code and ignoring the business when your skillset lies in code and not in business.
For the most part, those programs written with those tools still work.
Today everything is updated on the fly and subject to random breakage. Git is better; everything else has gone downhill.
Today's new developers have to deal with a landscape that is, in my opinion, discouraging. You have to spend a huge amount of time setting up environments, creating sane default configurations, and learning about hundreds of dependencies, before you can even start creating something that would be considered semi-useful.
It makes little sense to violate everything we know about language design (esp. regarding simplicity and orthogonality) and the cognitive limitations of humans (esp. developers) and keeping the Frankenstein language alive.
It got hashtables in its standard library only when everyone and their dog had already been forced to implement their own for 15 years!
And the STL is so complicated that Stroustrup joked that he couldn't have done it if Alex Stepanov hadn't been able to pull it off. That may be a compliment to Stepanov's intellect, but it isn't a compliment to C++'s design.
The evolution of any language inevitably adds reams of extensions, variations, and libraries. This makes the tool not only a lot more heavyweight, but much slower and harder to master, and personally, a lot less fun to use. Give me a tiny simple language (e.g. C) any time over a giant language that requires me to navigate multiple programming paradigms and layers of abstraction (e.g. C++).
Modern languages are like having to speak in Latin. You spend all your time trying to please a nazi grammarian rather than speaking simply and naturally, as the language was originally conceived.
Fast forward to today and the most painful parts of my day are dealing with shitty tools. git’s incomprehensible interface. Jenkins CI/build system that’s barely more than a log of every damn line the compiler outputs, but split up in a way that somehow makes it even harder to figure out what went wrong when something does go wrong. JFrog’s Artifactory that looks like it’s having seizures when you search for the thing that Jenkins built. And then when you find it, it lists the path, but you can’t click on the path to download it. There’s a separate button in a different place for that. These tools feel like they did a user study and whenever something was easy for the user, they threw that out and figured out a way to make it harder. Interacting with this shit is infuriating, especially when you’re on a deadline to get something out the door. I feel like I’m taking crazy pills when I bring up these problems and other people just shrug. As if that’s the way it’s always been and it can’t be changed.
Indeed.
I'm old school green screen hack, but I think that user interfaces are a black hole of time and bike shedding.
Back in the day it was a green screen. If you were lucky, you might have line drawing characters. Mind, we're talking smart terminals and curses level work here, vs block mode IBM displays, but still. When options are limited, there's less time spent on discussing and implementing options.
Did people settle? Yes, they did. They made do, they made it work, and work got done. I recall a tire store running a linux desktop with their work order system some green screen app in a terminal. Yet, tires still got mounted and sold.
I think they've changed recently, but Lowe's used a terminal-based UI for all of their order taking and work orders, whether it was a stove or a carpet installation or buying a couple of ChapSticks and a 30-pack of batteries. The employees were adept at navigating it and got the job done.
Did they require training? Of course they did. But training was required no matter what the UI was, because UIs simply encase process. Process unique to the company, and those processes always have to be trained. Just raw truth.
I'm no artist, I have no color sense, my layouts are a struggle, and putting 3 fields on a 5K screen is challenging no matter what. But I think the real values of the capabilities of modern UIs are marginal at best for most routine use cases, specifically in the back office (where the vast majority of software is designed and written).
I dunno why this place has an obsession with 'le good old ways', but either way it's not the elephant in the room.
2. And the pace of development, not because one cannot learn fast enough, but what one has learned will quickly be made obsolete because of "developer fashion" (favourite front end JS frameworks, anyone?). This happens because communities are important, e.g. for answering dev questions and fixing bugs, so one cannot adopt a "dead" framework. Each new language suffers from lacking essential libraries, components (and for managers: talent for hire), so it is risky to adopt one that has not yet accrued critical mass, yet it is also risky to miss a trend.
3. Computer security for many systems is critically important yet very difficult, and attackers are everywhere.
On the upside, in the past, machines were more heterogeneous; today there are just a few survivors: Windows, macOS, Linux, and in the mobile space Android and iOS. Because many apps are Web apps, cross-platform has become easy, although the user experience of a Web app is in no way comparable to a native app. WASM is likely to address some of that, and, I hope, will remove much of the ugliness of abusing the HTML+HTTP paradigm, intended for technical documentation, for distributed applications.
I would offer that a lot of the attack surface came from the intersection of the Internet and shipping faster than testers could keep up. The Internet means a mistake's blast radius is no longer just limited to insider threats or downloading shady software onto your own machine, now every computer in the world is a potential threat. The shipping "faster than thinking" means years worth of best practices about SQLi or CSRF or IDOR or or or get swept under the rug
In some ways, this is the same as 2 in your list, and I guess some of 1 also given that some frameworks and tools help that problem and some are "welp, good luck, don't screw up"
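The SQLi item is a good example of a best practice that keeps getting relearned. A minimal sqlite3 sketch (table and data are made up): string interpolation lets input rewrite the query itself, while placeholder binding keeps input as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('mallory', 0)")

def find_user_unsafe(name):
    # String interpolation: the input becomes part of the SQL text.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user(name):
    # Placeholder binding: the input is passed as a value, never as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
leak = find_user_unsafe(payload)   # the classic injection: returns every row
safe = find_user(payload)          # no such user: returns nothing
```

The unsafe query expands to `WHERE name = '' OR '1'='1'` and dumps the whole table; the parameterized version treats the same payload as a literal (and nonexistent) name.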
Years ago, a curious individual could solve some basic problem: maybe it's a better file system, or a better kernel, or a new scripting language. Your program might take off, and eventually you could find an entire industry built around the problem you tackled.
Today, that's unlikely to happen. Gone are the days of some people hacking in their room creating a solution to some problem. Most of the "low hanging" fruit is now in very niche areas, you will have to do some extensive study just to understand the problem well enough to approach it.
Instead it means "here are three or five industry-standard solutions, each of them working differently, each of them bringing different problems, you need to study how to use them, but some of the important things you will only learn from experience; by the way, five years later most of this will be obsolete".
The application you are making is built on ten or twenty "solutions" of this kind.
Not just theoretically, but literally. I started an internal dashboard with just Flask and Bootstrap. When I needed some nice interactive graphs, I found dc.js and added it.
Then my forms got more and more complex and I just added Vue.js to the same pages. I now got extremely interactive and intuitive static html pages that interacted with the server side only when they needed to. All in one file.
Now my team is starting to take over the project and our form has 100s of fields, so we are migrating to a proper build system with svelte.
I have done amateur web dev from 1997 till now. It has never been easier to choose the system with the correct amount of complexity and boilerplate you want to be maximally productive.
In practice, a lot of these choices are already made for us. When you join a new project, the space of choice is limited. But when a choice is to be made, what I find difficult is that you need to reach a consensus with your colleagues. When you're introverted, it's taxing. There's always some person that needs extra convincing.
Code review wasn't as pervasive and can be tiring too. Sometimes you need to explain again and again why you made that decision (it was in a design document, it was discussed in a meeting, and then the question is brought again in the code review).
But the worst part for me is the accumulation of abstraction layers and dependencies. I'm working on a project with tons of internal dependencies that are loosely specified and documented, all of which introduce some unreliability. The whole edifice is fragile, and yet is expected to work 24/7. This causes a lot of stress.
Yes, open source has a lot of positives, but the bad part is how it has created a culture that incentivizes such a huge number of dependencies for every project. It seems that for any new requirement the only possible solution is to add a dependency on yet another library. This results in fragile builds and daily changes caused by dependency updates. In some cases it causes as much trouble as the perceived benefits are worth.
30 years ago, nearly all projects I worked on were waterfall and my development team was far more relaxed. Spending days tinkering and thinking of a good solution was valued over sprints and rapid commits. Of course, we got far less done back then, but it was also less stressful (in my experience, at least).
Typing this one magic word brought up an IDE, including an editor with highlighting, an interactive help system, samples, an in-editor REPL, and single-key shortcut to run the program. I can't remember if it also came with a debugger and a way to create stand-alone executable, or if that came later.
It had built in commands for drawing, input and sound, all well documented. And the UI was straightforward and intuitive.
This doesn't really exist anymore.
It didn’t, it came earlier: QBasic was (and is, it stopped being part of the default install with Win2k but is still available for current Windows OSs) a stripped down interpreter-only version of QuickBASIC, a compiled BASIC.
Typing this command brings up an IDE, including an editor with highlighting, an interactive help system, samples, an in-editor REPL, and single-key shortcut to run the program, tabs, step-through debugger, breakpoints, intellisense, snippets, extension system.
It has access to the .NET framework that C# uses, such as System.Windows.Forms, System.Drawing, System.Console.Beep, and System.Media.SoundPlayer, and the UI is straightforward, with an editor and a console pane. The library is well enough documented if you can use the MSDN website, but certainly not as simple as BASIC and SCREEN 12, LINE (10,10)-(20,20).
This does really exist, but it's deprecated despite being powerful, simple, convenient and useful. Instead the recommended path is to download and install a new PowerShell, VS Code and some VS Code extensions, to get a less integrated, more complex, not-bundled setup.
A plug-in or add on for the tutorial on how to make HTML/JS pages right there inside the browser.
That's not quite like QBASIC was though. Making webpages is just a dim shadow of what QBASIC and controlling your computer was.
20 years ago, if I needed to convert a JSON file to CSV (that's a very bad example because 20 years ago JSON was just born and nobody used it, but no other example as simple as that one comes to mind, sorry), that would have taken me days of coding just for that single utility function. And probably lots of headaches with regular expressions or syntactic/semantic parsing.
Today, I type "npm install json2csv" and I'm done.
The only thing that might be harder for people starting today is that the level of abstraction is so high today that we almost don't even need to understand the underlying computer science to be a developer, and so the few rare times when you do need to understand what's going on behind the scenes, that might be complicated. Other than that it's only advantages.
Progress and history always go this way, things always get easier and cheaper to build. I don't see why programming would be different.
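The JSON-to-CSV point holds even without npm: in Python the same utility is a few lines of stdlib. A minimal sketch, assuming a uniform array of flat objects (the function name and sample data are made up):

```python
import csv
import io
import json

def json_to_csv(json_text):
    """Convert a JSON array of flat, uniform objects to CSV text.

    Stdlib only: json handles the parsing that once meant hand-rolled
    regex headaches, and csv handles quoting/escaping on the way out.
    """
    records = json.loads(json_text)
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(records[0]))
    writer.writeheader()
    writer.writerows(records)
    return out.getvalue()

sample = '[{"name": "ada", "age": 36}, {"name": "alan", "age": 41}]'
result = json_to_csv(sample)
```

Nested objects, ragged records, and encoding questions would need real decisions, which is exactly where the "decide which of the n converters to use" overhead from the sibling comment comes back in.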
But whether it was free or not? Maybe another matter entirely.
And worst comes to worst there was always some awful code to cut-and-paste on ExpertSexChange
Except they're not few or rare. Got a bug? Good luck figuring that out. Performance problem? Who knows. Most of the stuff you're using, you didn't write, you don't understand, it's all just black boxes, or as someone else said, mystery meat, glued together. And when it starts to smell, and oh it will, you're just looking at a mountain of mysterious complexity.
It's not like building stuff anymore. It's sticking stuff together. "Hey, it's just like playing with Legos!" Except quite often you end up with it feeling like stepping on Legos.
You have to do a little bit more: you have to search for it and decide which of the n converters you will use (license, other dependencies, ...).
But I get your point.
My parent was an electrical engineer. While their skillset was varied, and heavily weighted towards physics and math, their day job was mostly about knowing how to pick parts out of a catalogue, which catalogue to use when, and maintaining good relations with the suppliers behind those catalogues. I always saw this as a weird disjunction.
Until now.
I've just realized that while my core skillset is in math, logic, computer science, and system design, I do spend a lot more of my time researching repositories, packages, licenses, etc, than actually coding. But with the new perspective your comment gave me, I finally saw that as totally in line with other, older engineering disciplines.
So, to address the original point of this thread, it seems as if programming has finally evolved into a semi-mature engineering discipline, wherein our ability to gauge the quality of existing supplies is more important than our ability to produce new supplies.
When I read that section, I wasn't really convinced. Not that programming isn't quite different than it was way back when, but the difficulty we face is that there is so much more software around. Some bad, some good. Many more useful tools.
For example, since I started programming the following innovations have significantly added to tools that we have to apply to programming:
The illusion that we are hampered by lots of choice is a bit illusory. Usually our programming environment is set not by our own individual choice, but often by the environment we plug into: by what resources we are going to access, and other team decisions. (Artifact of a rewrite? Still, undeniably true!)
Maybe about 20 years ago, if you had a website that allowed users to post a comment and upload a small jpeg, it was considered crazy bonkers cool.
Today, you probably need some advanced 3D UI that communicates with your phone and millions of users in real time, with geotracking and 100 other features, to get a pizza to your door in under 5 minutes, and it barely raises an eyebrow about the technology.
Devices have become more complex and interconnected. Expectations for software are also higher, while there’s more resources poured into finding and exploiting vulnerabilities.
Keeping on top of all of that, isolating your memory access and permissions, etc… is a lot harder today than it was even a few years ago.
In every other way I think programming is easier now. Languages are more ergonomic, even the ones that are decades old. Libraries are more easily available and there are tons more resources today than ever before for learning.
In summary, apparent simplicity that causes actual complexity is a big problem now.
And those things, frameworks and platforms, are the biggest technical burdens for programmers today. The ability of the web to create what we used to call "interactive apps" (but are today just apps) has led to the desire and expectation that all web content will take on this level of polish and appearance. While that's possible, it's also arduous. In today's world, you must also learn the frameworks, tooling, and CI/CD processes that lead to your work making it onto someone else's screen. That's a whole lot harder than what we used to do -- like when publishing meant copying a floppy and putting it in an envelope, for example.
The ability of platforms to change their specs and rules all the time (and their insistence on doing so) is another new programmer burden. In the old days, one could write to a piece of hardware or OS and expect that code to run for a long time. Not so anymore. Now the hardware is virtualized and many layers of middleware and SaaS are required to make your code do anything. All of those are moving targets and will change out from under you, without your desire or permission, while you try to continue to deliver new code and service your old code.
Finally, and this is the final nail in the coffin for some more experienced programmers, the misunderstanding of the concept of "agile" or "XP" and how it became scrum -- a series of micromanagement theatrics and paperwork pushing -- has made programming a lot more difficult and a lot less fun. One of the best parts of software development was the unpredictability and experimentation that led to innovative results and small, incremental improvements. Close contact with customers was also a hallmark of early software development. Today's top-down, "How long will it take you to write and debug a feature that doesn't exist?" management mentality cannot and does not lead to quality software, as everyone can tell. It does lead to programmer burnout, quiet quitting, and a lot of wasted time in dev shops.
This sounds precisely correct from my experiences lately. At some version of scale, one simply cannot run the application on just a workstation. The complexity of distributed systems has gone way up, primarily in my opinion, because it solves an organization’s problem, not an individual programmer’s.
Today, every single business application is somehow supposed to have its own design language and can't use any standard UI library.
Also, deeply ironically, I think certain web applications are now heavier than Swing applications (obviously not IDEs, but ordinary applications).
And I assume most people are considering web applications rather than native. If so, it’s like a full circle from xterms…
Only half a day? I've seen these take, cumulatively, more than a day.
Then add in the requirement that changes/proposals need a technical document review (and a follow-up if necessary), that any planned feature needs input from everyone involved, and that some proposals which may touch other teams' work need their buy-in (so you have to schedule a proposal presentation with them), and you can easily see a sustained 25+ hours per week used only for meetings.
I think the difference is maybe one of perception. We can do more so the baseline expectations of users/customers are higher. I also think that there has been a higher growth of people specializing in one area rather than every programmer sort of being a full stack generalist by default. So for a front end specialist, databases and server side may seem like a black box and more complicated than ever. Or someone specializing in kernel programming might think front end is more complicated than ever.
Generally you can still do things the way they were done in the past if you want to. New tools might make things easier and learning them might seem complex, but you don't have to.
It’s great to have quick solutions to small problems, but too often newbs stop with the SO answer and move on, which means while they’ve solved their problem in the moment they never spend the time to actually learn the underlying reasons why the solution worked or explore the issue deeper to actually achieve understanding.
The good software engineers I know are all yak shavers. They’re willing to spend extra time learning how something really works and ensuring they fully understand an issue and its solution, even if the issue is just a small part of what they need to accomplish.
Every choice we make as developers today not only has to be vetted by the team, it has to fight against all of the other options and the opinions created by people who are paid to convince others to use their solution.
I’ve had to push back against both my peers and managers who suggest technology that’s been marketed to them. And by push back, I mean spending days researching these new technologies to see if they’re just an old technology with new wrappings, a VC backed promise that’s still a few years from being usable, or doesn’t solve the problem at all.
I won’t go into too much detail, as it will just turn into a vent. And the current industry direction in general doesn’t make it easy (containers, industry leading black-box “solutions”, system admin and security).
In my immediate experience, cowboys that just "do", no matter the risk, informed or not, are really thriving right now. Not sure if this is a post-pandemic, lockdown-era, results-driven thing.
But if you follow good/great practice and are a responsible, policy- and process-following individual, a constructive and critical thinker, you're in for a hard time. Smaller firms seem to handle this better, in my past experience.
There's also a group of individuals in between these two that seem to be struggling.
Of course, that's situationally-dependent. If you've got the constructive and critical thinking bit, hopefully you're questioning those practices. If cowboys are thriving, there's a reason why. They may be wrong sometimes, but sometimes they might be right.
My choice of words may not be ideal. I meant fundamental good practices, rather than the buzzword-like ones criticised above.
Not sure how you feel about the rest of the post?
Either way; thanks.
Concurrency and multi-threading. To get good performance you used to care about a single thread on a single CPU core that you had exclusive access to. Now you have to utilize multiple cores, care about context switches, and in more extreme cases handle NUMA memory architectures. It's hard, and the fact that it's slightly less hard with Go is one of the reasons for the language's popularity.
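To make the shift concrete: spreading a CPU-bound task across cores is now table stakes. A minimal Python sketch (the workload and numbers are invented for illustration):

```python
import concurrent.futures

def count_primes(limit: int) -> int:
    """CPU-bound toy workload: count primes below `limit` by trial division."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limits = [10_000, 20_000, 30_000, 40_000]
    # A process pool spreads the work across cores; with threads alone,
    # Python's GIL would serialize this CPU-bound loop anyway.
    with concurrent.futures.ProcessPoolExecutor() as pool:
        results = list(pool.map(count_primes, limits))
    print(results)
```

Even this easy case already raises the questions the comment alludes to: how many workers, how to shard the work, and what happens when the pieces no longer fit in one cache.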
My tongue is in my cheek, but I often wonder.
Look at an Android project: there are maybe 12 different kinds of files! For a 'Hello World'. You have manifests, Gradle (which is yet another programming language), snippets in Kotlin and Java, an entirely different XML 'language' for view definitions, massive APIs, and massive complexity and 'weight' in the simulators.
It's hard for people to focus on the problem space when we are overwhelmed with layers of tooling and abstractions.
The idea of developing and running this application with a single person would have been laughable.
Today, because tools are soooo accessible, I can, and do, routinely spin up my own VM, apply Puppet, drop my MySQL and Django containers onto that VM, and pull https certificates in addition to doing the front and back end software development.
Life would be waaay simpler if I could just write server side web application code and wait around for database developers, sys admins, and front-end folks to do their thing. Imagine not having to learn a testing harness because there are actually people testing the software!
On the other hand, now a regular run-of-the-mill developer is also supposed to be knowledgeable about sooo much stuff that is not so directly related to their code. As you say, sometimes it’s nice to be able to hand that off to specialists - a person can’t be good at everything.
I have heard on internet explanations like "it means that developers and operations are communicating more and working together as one big team", but so far every company I worked at has interpreted it like "it means that in addition to developing the application, you are also supposed to do the operations".
I started in the mainframe/COBOL days. There, the programs could be sophisticated but the systems documentation was excellent. Just a little complex and you really had to understand algorithms and such.
Then came client/server. That brought transaction monitors (like Tuxedo) and took away the benevolent dictator model (IBM) that gave you transactions practically for free. New stuff to learn, pitfalls to avoid.
Then came J2EE. A ton of complexity and false starts on fledgling technology. XML processing, etc. Finally REST came along and made things somewhat understandable -- but now you had to manage transactions yourself.
And now we're in the Kubernetes age. Once again, tons of infrastructure to learn, but a strong framework. (So somewhat like the mainframe days.)
It's all a big circle. You have to constantly learn. I really think it is harder to get on board now than it was in the past.
Also, if you start today, you've got to learn some part of the pyramid your abstractions stand on, and that's growing work as time passes.
Now, write a distributed microservice. Many, many programs. Many databases and queues. How do I debug that? How do I monitor that? We get scalability, minus L3 cache access. Now add deployment orchestration, to win zero-downtime deployment.
Now, getting a decent toolchain takes some (usually small, but nonzero) effort, and there is a flood of conflicting information on every decision, including that first step.
Information overload and analysis paralysis bite hard.
(If you can get past that initial hurdle, things are immensely better than any time else in history in most ways, though the trap of getting overwhelmed by info and options is persistent.)
What passed for project management before XP and Agile wasn't great. But at least it didn't impose ridiculous administrative burden onto devs.
Gods, we used to complain about doing weekly status reports. Now it sounds like heaven.
That solution always had some scripting capabilities, which were a subset of JS with some software-specific extensions. Nothing really fancy, a few useful things were missing but overall there always was a way to reach your goal.
As I'm not doing developments daily this "low-tech" approach was nice: One file that would be copied to your production system and linked there and that's it. For debugging you had the integrated logging interface, a web-based thing, not too fancy.
One or two major release ago, they switched to a TypeScript/NodeJS based scripting system.
While I can now "glue parts together" by using npm, I also have to transpile my code after every change and have to run an extra step to "package" everything for deploying it. Creating a new script requires a specific console command which will then prepare the file & folder structure.
Debugging can, in theory, still be done through the web-based interface, but the recommendation is to set up a launch.json file, so VS Code can directly connect to the software on a pre-defined port and you can set breakpoints and step over your code while it's being run. I'm sure that's cool for hardcore programmers, but maaaaaaaan, ain't nobody got time for that if you've got business stuff to run.
Somewhat anecdotal, but I spent 4 hours on Friday trying to extract numbers from a txt file and write them to another file. Reading the file and running a regex, that's no big deal.
But creating a temporary file is a HUGE pain with the new system. In the old system it was something like: var fh = job.createNewFile('test.txt', 'UTF-8'); and you could then work with your file handle.
Now I'm fiddling around with the third npm package for creating temporary files, and it seems like the problem is not me using the packages wrong, but how those packages try to create a file in a Windows environment, which fails.
To be honest: I'm still guessing that's the problem and need to ask some smarter people, but things like that "just worked" before.
Oh: and now everything is async, which really gives me headaches, or you have to create a function which can then be called with an await so everything else waits for it...
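The commenter's environment is TypeScript/Node, but for comparison, here is what the same two chores (regex number extraction plus a temporary output file) still look like with nothing but a standard library. A Python sketch, with made-up sample input:

```python
import re
import tempfile

def extract_numbers(text: str) -> list[str]:
    """Pull every integer or decimal out of a blob of text."""
    return re.findall(r"-?\d+(?:\.\d+)?", text)

sample = "order 42 shipped, weight 3.5 kg, balance -17"
numbers = extract_numbers(sample)

# Creating a temporary output file is still a one-liner in the stdlib,
# synchronous and Windows-safe, much like the old job.createNewFile().
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False,
                                 encoding="utf-8") as out:
    out.write("\n".join(numbers))
```

Nothing here needs a package manager, a transpile step, or an await; that is roughly the gap the comment is describing.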
Swap? What's that? Network stuff? Huh? I have wifi.
They have a laptop, a good one, and that's just it.
And while security should always have been a thing, the chance that you write an application accessible on the internet is much higher today than 10 or 20 years ago.
Besides that, there are much better tools available, easier to use than ever before. Building a whole platform is possible with a small team.
Building a highly scalable, self-healing, zero-downtime system is basically gifted to you when following modern practices like good Java frameworks (magic) and k8s.
It used to be transfer your source files via FTP, now it’s setting up Docker and Kubernetes and I don’t know what.
Ofc the latter is better for teams, but now you must learn a whole stack just to deploy code.
At those companies fixing a bug that would be relatively straightforward with a monolithic architecture is a humongous pain in the ass.
Technically, programming is not more difficult than it used to be. But in practice it is, due to how expensive and complicated it is to know what is going on, and how little the average manager cares about it.
You used to know a few basic APIs (your OS, your stdlib…). You'd probably spend more time implementing basic utilities. Now, however, you have to manage a complex software supply chain of dependencies of varying quality and security risk.
My first language was GW-BASIC on an IBM compatible. Turn on the machine, type GWbasic and hit enter. You get the IDE, the editor and everything else in a single shot. Zero barrier. You could write decent programs in it and gave you that ongoing sense of achievement. Nowadays, you have a long series of things to do (clone a template, setup your editor/environment, download this and that etc.) just to get started.
Learning programming today is difficult because of context. Some kids don't even know what a file system is. The idea of an interpreter evaluating a text file doesn't click because what even is a text "file"?
Being an intermediate application developer I would argue is easier than before and possibly the best place to be in your career. Web Development with React and Angular turns front end development into a more complex excel-like experience. It's still programming, obviously, but the tools take a lot of the complexity out of it and a lack of experience has you blissfully ignorant of anti-patterns and a lack of testing.
Back end web developers have the best life (technologically speaking). Complete control of the runtime, any language they want to use and normally pretty straightforward implementations. Security and testing are often overlooked but that's okay.
Senior application developers hold so much context that they are often frustrated by the design decisions of the tools they use. "Why can't we all write web applications with Rust, except using JavaScript modules because Rust modules are terrible, but only once Web Assembly has access to the DOM - actually the web sucks and we should write native applications. I can't wait to retire."
Devops is both harder and easier. It's easier to build a scalable reliable system, but is harder to get started because, while a simple cloud VM is still available, you feel dirty if you haven't provisioned everything using some form of orchestration.
Desktop application developers are in the worst place possible. Microsoft doesn't even know what GUI API it wants to use. No one uses Apple's native Desktop API, Linux is.... anyway (don't flame, I love GTK4). The best option is Electron until everyone is using a single platform.
Mobile development is hard af; there are almost no engineers in the field, and you need a PhD just to install Android Studio and pick the right Android SDK.
All in all, I love it
We have way better resources for learning but also much higher expectations. You need to be able to write good code without much thought so that you can maintain a higher level of context while programming. Sometimes you get to write just a nice little isolated bit of code, but usually there are a lot of moving pieces. (no matter how functional and immutable we try to make it.)
A lot of people with large followings espouse opinions and practices when they hardly spend time shipping product. This makes it hard, even for someone senior and experienced, to know what they should pay attention to. Sometimes it feels like you should pay attention to things you fundamentally know don't make sense, and a lot of energy goes into unwinding mainstream rhetoric.
I cite this as difficult because while most of the other hard parts of software are enjoyable, this one is just an energy suck
Ensuring that your software will still function in 20 years. Previously you had shrink-wrapped software packages on physical media with explicit releases and versions. Now everything depends on cloud services and APIs which could change or disappear at any moment.
Imagine being a firefighter and your boss tells you that you can't use a water hose to put out the fire; instead you must use either a portable fan or a flamethrower. That's what programming feels like these days.
It is very difficult to justify taking 10x more time. People have cute arguments as to why, but practically every tool common to development lacks significant evidence. I'd argue that's why the endless discussions are so easy to have, too.
QBasic was installed on Windows machines and didn't require the internet.
People on mobile don't have a keyboard that makes programming fun.
You needed to program to get your computer to fully work before.
It was easier to learn before because you had to and programming was part of operating a computer.
It's not a bad thing. It just means that knowing one language and some algorithms - formerly the exact skillset of a typical CS grad - isn't really "enough" to do anything. Instead it's whole stacks of software that an organization has staked their own project on. At an old company, they built most of the stack, but nobody who built it is around anymore. At a new company, it's open source, but they don't have anyone to spare to maintain it. Eventually there is some threshold that gets crossed where the stack is the problem, and that drives an attempt at solving things better.
But it's no longer the case that most orgs have to build most of their own software; it's nearly always glue and one piece of special sauce.
Users had much lower expectations.
Programming was slow and deliberate. Programming today is much more stressful, I think.
The likelihood that the next thing you do sees you scurrying around Stack Overflow, even after 20 years in the job, is quite high now.
Advances in automation and tooling make it easier than ever to develop complex projects with lean teams but it can't be done if your org opts to inherit a brittle dependency chain every time "I don't want to reinvent the wheel" surfaces.
It's so much worse today when it comes to things like paralysis of choice and configuration.
I've been working on a new project for Boot.dev students and the goal is to get them a simple but professional dev environment on their machine. It's a very hard problem.
As always, the core issue is complexity. We expect our programs to do much more than before, and that requires additional complexity.
But we already know how to deal with complexity: we add a layer of abstraction. The problem, in my view, is that current abstraction layers are either too low-level (e.g., React) or tackling only part of the problem (e.g., AWS Lambda).
With GridWhale, I'm working on creating a layer of abstraction that appears as a single, unified machine that you have full control over, but is actually running on a distributed, scalable micro-services architecture.
I've got a long way to go, but I think this is the right direction.
But that's a completely separate topic: "structure and interpretation of other people's mud balls".
Don't screw up "structure and interpretation of computer programs" just because most dev work is making changes to existing code. Teach that in a different course, if you have to.
The coding that the book is based on was never common in relation to daily grunt work: business or scientific data processing and whatnot.
I sure wouldn’t want to write the backend of a crud app in Fortran, nor the front end. Or do anything with Fortran besides scientific computing (Fortran = Formula Translator…it was built with a limited use case in mind!)
Companies that adopt newer frameworks rather than writing everything in C++ are more efficient, but only if they control how much variation of tooling there is within a discipline.
So today, you do have to specialize in a discipline a bit more (front end, backend, data) but each discipline has a sensible set of tools IMO. A developer can and should get some exposure to a secondary discipline to be well rounded and “T-shaped”, but should also appreciate the value of specialization.
More like 4 or 5 distinct sets of tools and that collection changes every few years.
Evolution is generally good.
So, yes, within a discipline there is more than one tool, but often only 1 or 2 current tools that are mature and worth using if you’re starting a new project
The reliance on Design Patterns (Gang of Four) went from nice to use, to mostly required.
So many developers now go right into coding without learning other skills. The result is software to automate or help a system that the coders don't really know how to do manually, and it shows. These coders know one language and one framework, and may be very good at those, but they don't actually know how a computer works or why what they are doing works.
Fast-growing businesses require more workforce than the market can provide. This results in more inexperienced developers writing code, and frustration for the experienced developers (if they have to work with the former).
Too many blog posts of dubious quality (we now have too much information that we need to verify, as opposed to having too little information that was hard to find, years ago).
Once upon a time, "UI" was "print the output to a file".
Then it was some interconnected CICS screens.
Then it was some interconnected web pages.
Now it's some interconnected web pages that also display properly on mobile devices, that display in the user's chosen language (and numeric format), that displays properly no matter the user's screen size, that hopefully allows blind users to use a screen reader, that respects Europe's privacy laws, that has good security...
It's easier now because we have better tools. It's harder now because the baseline expectations are so much higher.
In the 90s you could count on one hand the number of large scale systems that could support thousands of users concurrently.
Any half-decent mobile app these days could easily get to a few hundred thousand concurrent users.
Always start at the bottom and work up. You’ll avoid much of the frustration of confusion, at the expense of a slow start. Most importantly, when you’re done you’ll be useful at every level.
The business processes covered by systems have grown much more complex. Organizations have grown larger because software is doing more (more safely and easily).
The modern programmer thus faces larger organizations, larger and more many-sided discussions, and a far more complex social, political, and business-related landscape than 20 years ago.
While the technical problems have gotten much easier, in general.
Now with dynamic languages, interpreters, docker, kubernetes, AWS and layers of dev tools and frameworks it can be harder to know what your code is actually doing. But those abstractions can also give you superpowers.
The tools more often than not make me want to pull my hair out (coded mostly in Tcl...), the technical forums and support are often non-existent, and the technologies quite often haven't been updated with QoL changes in 20+ years.
Yes, like you said, the massive amount of code and options, often each with their own trade-offs and advantages.
The hardest thing is sometimes not always chasing the newest and latest thing to gain a bit of performance, or improving an f score, etc.
Everything else has gotten better: the machines are almost-inconceivably faster and larger (in capacity, logical size; in physical size they are of course ever smaller!), the compilers are smarter than ever, the languages more ergonomic and safer, etc.
The only downside, the great undertow, is burgeoning complexity. A "thundering herd" attack on the human mind.
Sure, microservices are faster to elevate and more reusable, but the systems have become a bird's nest of dependencies. A lot of the time our troubleshooting relies on the other teams that own those services. Coordinating and communicating takes significant overhead and inevitably leads to occasional issues.
There's also the Rick Cook quote:
> Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning.
Also society's pace. Things moved more seasonally.. nowadays there's <lang> fatigue in many places.
But getting to that point where you feel like you're accomplishing something, where what you've produced feels like it measures up to the standards you've been taught to expect, that barrier's gone up so much faster.
For example, there are still a lot of Python 2 code snippets that won't work now that Python 3 is the default interpreter.
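Two of the most common breakages when pasting an old Python 2 snippet into Python 3 (the snippet is illustrative):

```python
# Under Python 2 these lines were valid and behaved differently:
#     print "hello"     # now a SyntaxError: print is a function in Python 3
#     result = 3 / 2    # was 1 in Python 2: '/' floored integer operands

# Python 3 equivalents:
print("hello")
result = 3 / 2    # 1.5: '/' is now true division
floored = 3 // 2  # 1: '//' is the explicit floor-division operator
```

Silent behavior changes like the division one are worse than the syntax errors, because the copied snippet runs and just gives different answers.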
And all that has to work, kind of. It's a bit crazy.
That comes with a lot of problems. You end up dealing with user accounts, authentication, request retries, what to do if the server is down, etc. Plus all the security and abuse issues.
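"What to do if the server is down" usually ends up as some flavor of retry with backoff. A minimal sketch, assuming a caller-supplied `fn` that raises `ConnectionError` on failure (the names are hypothetical, not from any particular library):

```python
import random
import time

def with_retries(fn, attempts=3, base_delay=0.5):
    """Call fn(); on ConnectionError, retry with exponential backoff and jitter."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts: let the caller see the failure
            # Double the wait each round, plus jitter so clients don't sync up.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

In practice you'd likely reach for an existing implementation (e.g. the tenacity library, or urllib3's Retry), but every one of those knobs (attempts, backoff, jitter) is a decision the old local-only program never had to make.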
Testing, architectures, patterns, principles, software life cycle, building abstractions, system modeling etc.
- really worry about disk space or memory size
- check if something would work on IE
- worry about version control
- manually scale something
You can not worry about these things, sure, but then your software will be slow and expensive to run.
But I understand that in the context of large teams.
Coding itself is the exact same thing as 40 years ago: input, processing, output.
Today code is written for humans.
Mostly by people who aren't great communicators.
Weirdly, I think the biggest difficulty is that we are much better served. A huge amount of what we do is well paved. React, bundlers (which are fairly isomorphic to each other: webpack/rollup/esbuild/parcel/snowpack/vite), systemd, Kubernetes, gRPC/protobuf, npm/node, even uring or eBPF... the list of well-entrenched technologies is long. There are better set ways to do things, well served, than there used to be.
The difficulties show up in a number of fashions. First, it precludes innovation and is highly stasist, when there is a humming-along underbelly that maintains life. In The Matrix, the elders of Zion admitted it was just highly automated machines that kept life alive, and in some ways our ascent upwards has decoupled us similarly; we're rarely liable to go back and reconsider the value of trying other ways. We're kind of "stuck" with a working set of system layers that we don't innovate on or make much progress on. Our system layer is pretty old and mature. Everything is ingrown and interlocked, depends on the other things.
When we do try to break out, rarely is it precisely targeted reconsiderations: often the offshoot efforts are from iconoclasts, smashing the scene and building something wildly different or aggressively retro. Iconoclasts seek pre-modern times, rather than new post-modern alterations or rejiggerings.
Another difficulty with having vastly more assumed is that there are fewer adventurers in the world, less find-out-the-truth/roll-up-your-sleeves/dig-in/read-through-the-source mentality and experience (and more looking only on the surface for easy "solutions"). Being generally well served means we rarely go off the beaten path. So our wilderness survival skills and resourcefulness are atrophied, and newcomers are less likely to have developed these deep hunting skills that used to be both simpler (because our systems back then were simpler, had less code) and more essential. A lot of these modern works aren't even that hard to dig into. But there are shockingly few guides for how to run gdb or a debugger on systemd, few guides to debugging kube's api-server or its operators, few people who can talk to implementing gRPC.
I don't think we're at crisis levels at all, but I think the industrial stratification and expectations of being well served will start to haunt us more and more across decades, that we'll lose appreciation and comprehension (like the Matrix problem), and we'll fail to make real progress.
GraphQL is an interesting case-study to me. It rejected almost all common web practices & went back to dumb SOAP like all-purpose endpoints. The advantage of just getting the data you ask for was good, of not having to think about assembling entities, of having a schema system. But so many of these things are things the actual web can and should be good at. We spent a long time having schema systems battle each other, but that higher-level usable web just kept failing to get built out, so total disruption made sense. We still haven't a lot of good replacements for GraphQL, still haven't made strong gains, but still, it feels like GraphQL is somewhat fading, that we're less intimidated by making calls & pulling data than we used to be.
A lot of the abstraction then was simply mind boggling, and while the patterns all still exist today a lot of them have been simplified by language features and put on a diet.
Sure we have some frameworks and such to make this easier but it really puts a damper on experimentation and raises the barrier of entry significantly.
I miss the days where you could hate your users and the code didn't have to like have a SLA.
/s
The effect is guessing. Everybody guesses on whether a candidate can potentially do the job. Some of those new hiring start-ups might be slightly better at guessing, but it's really still just job boards or head hunters with a large margin of error.
The way other industries solve this problem is to establish baselines of accepted practice. If you exceed the baseline you may or may not be employable, but you do at least exceed the minimal technical qualifications to practice. This is true of professions like: teacher, truck driver, lawyer, doctor, nurse, accountant, real estate, fork lift operator, and really just about everything else. Unfortunately, most software employers spend all their candidate selection effort attempting to determine minimally acceptable technical competence instead of more important things, such as soft skills, and even still it's often just guessing.
Other industries apply this solution in one of two ways: education plus a required internship that may result in a license or education plus a license followed by an agent/broker relationship. Education may refer to a university education or a specific technical school depending upon the industry and/or profession.
To mitigate hiring risks, since everybody is just guessing anyways, employers turn to things like tools and frameworks to ease requirements around education and/or training. This is problematic as it frequently results in vendor lock-in, leaky abstraction problems, and catastrophic maintenance dead-ends when a dependency or tool reaches end of life. Even still potentially having to rewrite the entire product from scratch every few years is generally assumed to be less costly than waiting for talent in candidate selection since there is no agreed upon definition of talent and hiring is largely just guessing anyways.
All of this makes programming both easier and more difficult, depending upon which side of a bell curve your capabilities reside. Reliance upon tools to solve a very human competence problem is designed to broaden bell curves which allows more people to participate but also eliminates outliers. This means if you, as a developer, lack the experience and confidence to write original software then you might perceive that programming today is much easier. If, on the other hand, you have no problem writing original software without a bunch of tools and dependencies you may find software employment dreadfully slow and far more challenging than it should be for even the most simple and elementary of tasks.
Every single ticket I get, I just immediately see in my mind's eye: a couple of tables, and a couple of queries. And sometimes, if it's something really weird, I see 5-10 lines of shell script.
But I'm "not allowed" to use those tools, and instead I have to use ORM, Object Oriented Data Model, different JSON translation operations, grotesque frameworks and in the end spend three weeks of very frustrating wasteful work, trying to coerce these humongous tools to achieve the simple feature, that would have just been three days of easy pleasant work if I was "allowed" to use a database.
I worked in a bank in 2006 and we used VB6 to create CRUD based apps in no time, which were super stable, fast and responsive. Just a simple MVC stack, query the database, get recordsets, render a view. This was a really simple task back then.
Now it takes ages and so so much work and effort to create an equivalent web based app, which is always slow and buggy, and the work is mainly incredibly frustrating trying to reverse-engineer and figure out some inane "smart" logic in a framework, that tries (and fails) to automate a task that was already very simple and quick, and didn't need automating at all.
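For what it's worth, the "couple of tables, couple of queries" version of a typical ticket really can fit on one screen. A sketch using SQLite with invented tables and data:

```python
import sqlite3

# An in-memory database stands in for the production one.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        total REAL
    );
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (1, 1, 99.5), (2, 1, 20.0), (3, 2, 5.0);
""")

# The whole "feature": one join, one aggregate.
rows = conn.execute("""
    SELECT c.name, SUM(o.total)
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id ORDER BY c.name
""").fetchall()
print(rows)  # [('Acme', 119.5), ('Globex', 5.0)]
```

The frustration the comment describes is having to express these two statements through several layers of ORM mapping and framework configuration instead of writing them directly.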