Ask HN: In what ways is programming more difficult today than it was years ago?

Reading Peter Seibel's Coders at Work, and this is Joe Armstrong on the issue:

>Also, I think today we’re kind of overburdened by choice. I mean, I just had Fortran. I don’t think we even had shell scripts. We just had batch files so you could run things, a compiler, and Fortran. And assembler possibly, if you really needed it. So there wasn’t this agony of choice. Being a young programmer today must be awful—you can choose 20 different programming languages, dozens of frameworks and operating systems and you’re paralyzed by choice. There was no paralysis of choice then. You just start doing it because the decision as to which language and things is just made—there’s no thinking about what you should do, you just go and do it.

For context this book is copyrighted 2009 so this interview is more than a decade old, and I'm sure many things have changed since then.

237 points | by luuuzeta 571 days ago

132 comments

  • OliverM 571 days ago
    Knowledge of the entire machine's activity. When I was a young teenager, the home machines of choice were the Commodore 64 and its competitors. When you turned it on you usually had an interactive programming language REPL start up, probably some variant of Basic, and that was it. Off you went:

        No OS to worry about (the machine probably had a very basic BIOS to handle peripherals but that was it)
    
        No permissions model 
    
        A tiny API to interact with whatever graphics/sounds facilities were available
    
        No need to worry about what resources simultaneously-running programs could be using (there weren't any)
    
        Actually no need to worry about concurrency at all (you just couldn't do it)
    
        No need to worry about what language to use (either the one blinking at you when you turned on the machine, or assembly language for the processor)
    
        No need to worry about how to restructure computations to use a computation shader or SIMD
    
    _You_ were the owner of every resource in the machine and it all danced to your tune. And, best of all, you could come to a complete understanding of how every part of those machines worked, in just a few weeks of practice. Who today knows the intricacies of their laptops to the same extent?
    • Falkon1313 570 days ago
      Yeah, those were the days. Learn to read input from the keyboard, display characters to the screen, and to read and save files and then you could do almost anything that any professional software did.

      To add to that, you could learn to read/write a serial port and poll a mouse to find out where the pointer was and whether a button had been clicked, and that was cutting edge. At that point you were doing things that much commercial software didn't even do yet.

      Just a few simple I/O things. All the rest was whatever logic you coded up.

      You ran it and it either worked or didn't, and if it didn't you knew that was a bug in your code. Code that you knew because you'd written it.

      No stack of components, no frameworks and libraries and dependency manager configurations and VMs and container configurations and network connections and other programs that might interact with it. It was just you and your code.

      And sure, the IDEs today are technically better. But so much more complex that you could spend a lifetime studying them and still not understand all their functions. Turbo Pascal's IDE, by contrast, was much simpler. You could easily grok it entirely within a week's normal usage.

      So without all the cognitive overhead of a modern code ecosystem, you could just focus fully on solving the problem, doing the logic, figuring out the best way to do it.

      Nowadays you spend most of your time figuring out all the tools and dependencies and configurations and systems and how to glue them all together and get them actually working together properly. There's relatively little time left for the actual problem you're trying to solve and the actual code that you're writing. Actually doing the thing that achieves the objective is kind of just an afterthought done in your spare time when you're not busy babysitting the ecosystem.

      • vram22 570 days ago
        Well said. "The Emperor's New Clothes" parable kind of applies here. Almost no one wants to openly say how ridiculous the situation has become, for fear of being discredited by others with an axe to grind [1], or by those who just do not get the point, or by those who are into unnecessary/accidental complexity or RDD (Resume-Driven Development). A sad state of affairs, more so since it is among a group who think of themselves as, and claim to be, smarter than "normals". Even that last word is ugh, and revealing of the mentality.

        [1] That Upton Sinclair quote about salary.

    • nine_k 571 days ago
      Pick an Arduino, enjoy basically the same full control.

      The choice of languages is wider, but normally it's the same choice of two, C or assembly.

      You can attach more interesting peripherals.

      • singingfish 571 days ago
        After doing a little bit of arduino programming, I really liked how the resources were really limited. It reminded me of installing and running linux back in the early 00s, and messing with sinclair spectrums and apple computers even further back.

        I reckon cos of the constraints it's a really good learning environment.

    • digitalsankhara 570 days ago
      I was going to say the same thing. I feel so lucky to have started programming in the era of 8-bit home computers. Looking back, this was a Zen like experience. Switch the machine on and, literally, within a few seconds you were faced with a blank screen and a blinking cursor. It is as if the machine was saying "Go on then...do something wonderful".

      And one's own wonderful was within reach because of the things you point out. Of course context matters here with having nothing to compare to, but every learning point seemed like magic to me.

      Out of necessity, learning how the hardware worked and relating that to software was such a big part of the culture. Books and magazines wrote about CPU architecture, address and data busses, video programming etc.

      Being into electronics at the time I constructed external address decoders and data line drivers (7400 series) to make lights and relays turn on as well as being able to sample and store an external voltage in memory via homemade R2R ADCs.

      Years later, I was writing control and logging software in TopSpeed Modula-2 running on DOS as part of my postgrad work. I'd arrive, somewhat stressed, in the lab after a fairly lengthy two-train commute, then relax to the sound of the hard disk as the PC booted up. Then it was me, the machine, a single language and a couple of RS232 serial ports for I/O. Bliss!

    • midoridensha 570 days ago
      >_You_ were the owner of every resource in the machine and it all danced to your tune. And, best of all, you could come to a complete understanding of how every part of those machines worked, in just a few weeks of practice. Who today knows the intricacies of their laptops to the same extent?

      This is because you lived in the microcomputer world back then. Mainframe and minicomputer programmers lived in a very different world, and had many of the same problems: OS, permissions, concurrency, other users on the same machine, scalar vs vector processors, etc.

    • kazinator 571 days ago
      > No permissions model

      That "No" (sort of) went out the window as soon as you were running a BBS on that thing. :)

  • PragmaticPulp 571 days ago
    Programming today is easier in many ways: Information is readily available for free (I recall saving up what was a lot of money for a kid to buy specific programming books at the book store after exhausting my library’s offerings). Compilers and tooling are free. Salaries are much higher, and being a developer is a respected career that isn’t just “IT”. Online programming communities are more abundant and welcoming than the impenetrable IRC cliques of years past. We have a lot that makes programming today more comfortable and accessible than it was in the past.

    However, everything feels vastly more complicated. My friends and I would put together little toy websites with PHP or Rails in a span of weeks and everyone thought they were awesome. Now I see young people spending months to get the basics up and running in their React front ends just to be able to think independently of hand-holding tutorials for the most basic operations.

    Even business software felt simpler. The scope was smaller and you didn’t have to set up complicated cloud services architectures to accomplish everything.

    I won’t say the old ways were better, because the modern tools do have their place. However, it’s easy to look back with rose-tinted glasses on the vastly simpler business requirements and lower expectations that allowed us to get away with really simple things.

    I enjoy working with teams on complex projects using modern tools and frameworks, but I admit I do have a lot of nostalgia for the days past when a single programmer could understand and handle entire systems by themselves because the scope and requirements were just so much simpler.

    • darepublic 571 days ago
      > Spending months to get the basics up and running in their React frontends just to be able to think independently of hand-holding tutorials for the most basic operations.

      Frontend devs who were present before the advent of the major web frameworks, and worked with the simplicity of js script + DOM (or perhaps jquery as a somewhat transparent wrapper), benefited from seeing the evolution of these frameworks, understanding the motivations behind the problems they solve, and knowing what DOM operations must be going on behind the curtain of these libraries. Approaching it today not from the 'ground up' but from the high level down is imo responsible for a lot of jr web devs having a surprising lack of knowledge of basic website features. Some, probably a minority, of student web devs may get conditioned to reach for libraries for every problem they encounter, until the kludge of libraries starts to cause bugs in and of itself or they reach a problem that no library is solving for them. I feel like this is a particularly bad outcome for web devs because the web is uniquely accessible for aspiring developers. You can achieve a ton just piggybacking off the browser, the DOM and its API, the developer tools in the browser, etc. But not if you are convinced or otherwise forced to only approach it from the other side -- running before you crawl, or trying to set up a webpack config before you even understand script loading, etc.
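
      As a rough sketch of what that 'ground up' view looks like (the element ids here are made up), a framework-free bit of "js script + DOM" is just:

          // Plain DOM scripting: grab elements, attach a handler, update text in place.
          // Roughly the kind of operation a framework's render loop performs behind the curtain.
          const button = document.querySelector('#increment');
          const output = document.querySelector('#count');
          let count = 0;
          button.addEventListener('click', () => {
            count += 1;
            output.textContent = String(count);
          });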

      • oldge 571 days ago
        Looking back over my relatively short 30 year career across an assortment of tech companies (hp, google, microsoft, apple, etc), I would add that this mostly changed due to what was rewarded. Around 15 years ago, when the first vestiges of OKR review processes and the idea of "impact" started to form, we shifted our designs from things that made our long term outlook better (simple) to things that were easy to explain to management as having HUGE impact (complex).

        Be right back, writing a new design doc for a messaging service and protocol spec to go with it that I can use to pad my next review cycle.

      • sarchertech 571 days ago
        At my current company we have a take-home assignment for some roles. When I have to grade one, the first thing I do is check to see whether you can submit the input by pressing enter.

        About half the time you can’t because it’s not actually a form, and they forgot to add a handler for enter.

        • kasey_junk 571 days ago
          I’d suggest adding instructions to handle enter if that is your criteria.

          When doing a take home exercise the candidate is desperate to figure out what you are judging on. The more you make that explicit the better your outcomes will be.

          Right now your process biases for enter handler adders. Is that your intent?

          • klodolph 571 days ago
            Part of the difference between a junior and a senior role is that people in a senior role are expected to fill in missing details independently, without detailed directions. If you only test people on their ability to follow detailed directions, then you will only be able to test for junior roles.

            When I give candidates prompts / questions / scenarios, I try to include some specific instructions, but also leave some details out. The idea is to come up with a prompt that junior developers can still work with by going head-on, but which has some open-ended nature to it, to see how people respond to things which are open-ended or even ambiguous. It's not a trick or trap; candidates are explicitly told what's up, and that I'm evaluating not only their ability to solve a problem, but to define the problem in the first place.

            • tptacek 570 days ago
              This is kind of silly. Actual senior developers get near-real-time feedback from the rest of the organization, and if their interpretation of a request is off, they get multiple attempts to satisfy the request. None of this is true of a coding exercise.

              If you want to evaluate a candidate's ability to cut through ambiguity or manage sprawling scope, evaluate that and only that in a specific exercise. Don't just build a shoddy coding test and then rationalize its weaknesses by saying that good candidates will succeed despite the flaws of the test. That's both disrespectful and un-rigorous.

              • klodolph 570 days ago
                > Actual senior developers get near-real-time feedback from the rest of the organization, and if their interpretation of a request is off, they get multiple attempts to satisfy the request.

                Junior developers get near-real-time feedback from senior developers who are supervising them. Senior developers have to be able to give that feedback, have to be able to anticipate user needs, and have to be able to run long-term projects where you may have to work for days, weeks, or months before receiving critical pieces of feedback.

                > If you want to evaluate a candidate's ability to cut through ambiguity or manage sprawling scope, evaluate that and only that in a specific exercise.

                This is a common mistake that I see inexperienced interviewers make. Trying to throw more detailed and specific exercises at candidates is a fool's errand at best, and at worst it means that you're putting candidates through additional tests (and your acceptance rate will suffer).

                The main problem with ambiguity is that it appears unexpectedly. If you give someone an ambiguous problem and say, "Tell me what is ambiguous about this problem," then you're not testing what you want to know. What you actually want to know is whether candidates can recognize ambiguous problems without being prompted to recognize them--and the reason for this is that ambiguous problems are extremely common in real-world scenarios.

                An ambiguous problem is not a trick or a trap. It is explicitly part of the interview process, and interviewees are given guidance that the problems they are given may not be precisely defined.

                > Don't just build a shoddy coding test and then rationalize its weaknesses.

                Why do you say that the coding test is shoddy?

                My observation is that a large percentage of candidates will succeed at coding tests if you give them an ambiguous prompt. In practice, they will either ask questions to resolve the ambiguity, or just pick a way to resolve the ambiguity for the purposes of the test. This matches real-world scenarios--you are going to often encounter ambiguous or incomplete problems in the real world.

                If you want a precisely-specified coding problem, then go to Hacker Rank or Project Euler or something like that, or join a competitive programming team.

                • kasey_junk 570 days ago
                  > This is a common mistake that I see inexperienced interviewers make. Trying to throw more detailed and specific exercises at candidates is a fool's errand at best, and at worst it means that you're putting candidates through additional tests

                  Have you tested this assertion with data? Because I’ve built interview pipelines several times now and the data I collected showed the exact opposite. The more specific a test was for the trait you wanted to select for the better results you’d get across all metrics, interviewers and candidates. It’s almost my defining characteristic of a good selection criteria after 2 decades of interviewing.

                  • klodolph 569 days ago
                    How are you giving these additional, more specific tests? I would think that your acceptance rates would start dropping once you get past five rounds or so.

                    My personal experience is that people new to interviewing are the ones who think that making individual interviews more precise will improve the process; in practice, improvements to the overall interview process don’t come from making individual interviews better.

                    • kasey_junk 568 days ago
                      You have between 10-16 hours of time with a candidate depending on the desirability of your job. I like to break it into: 1 hour of pitch/expectation setting where the only screening is for ability to complete the process (language, appropriate background, etc) and to catch obviously fraudulent candidates, 4 hours of take-home technical assessment (programming project), 4 hours of soft skill assessment (3 hours of prep and 1 hour of presentation is my favorite format) and 1 hour of meeting with the hiring manager.

                      But, the format is not really the point. The point is to have a specific thing you are trying to discern from your filter and to focus your efforts on making that the only thing you are judging on.

                    • tptacek 569 days ago
                      You have a budget for the number of hours you can make a candidate spend on work samples; it's the amount of time they'd spend in the interviews your tests are offsetting. This isn't complicated.
                  • tptacek 570 days ago
                    Unsurprisingly, I can report the same thing about the interview pipelines we're running at Fly.io. Not a week goes by where someone in our leadership team doesn't remark about how valuable the exercise we run specifically for this junior/senior scope-management/question stuff is.
          • Falkon1313 570 days ago
            You do know that handling enter by submitting the form is the default behavior if you do absolutely nothing? You actually have to go out of your way to override that and suppress it.

            Sometimes it makes sense to do something else instead, but if so, you should handle it in a sane way and actually do that something else. Not just suppress it.
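
            As a minimal sketch of that default (the form id and field name are made up), one submit listener covers both the Enter key and a click on the submit button:

                // With a real <form> and a submit button, pressing Enter in a text
                // input fires the "submit" event by default; nothing extra is needed.
                const form = document.querySelector('#search');
                form.addEventListener('submit', (event) => {
                  event.preventDefault();               // skip the full-page navigation only
                  const query = new FormData(form).get('q');
                  console.log('searching for', query);  // placeholder for the real work
                });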

        • wruza 571 days ago
          I always found this behavior strange. Tab often doesn’t work correctly in a browser, cycling through elements that shouldn’t be focused, sometimes in a strange order. Enter submits a form. How does one cycle through “fields” then? I have also seen premature form sends when you hit enter to autocomplete etc. It also creates multiline textarea vs input inconsistency.

          Desktop frameworks “committed” input on enter and most often focused the next field, so you could skip them by enter enter enter. This worked correctly since FoxPro/TurboVision/Norton times. Only Ctrl-Enter would press a “default” button out of order.

          But web has its own ways as usual.
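
          A rough sketch of that desktop-style behaviour on the web (the form id is made up) would be a keydown handler that advances focus and only lets the last field submit:

              const form = document.querySelector('#order');
              form.addEventListener('keydown', (event) => {
                if (event.key !== 'Enter') return;
                const fields = Array.from(form.querySelectorAll('input'));
                const index = fields.indexOf(event.target);
                if (index >= 0 && index < fields.length - 1) {
                  event.preventDefault();      // don't submit yet
                  fields[index + 1].focus();   // commit and move to the next field
                }
                // on the last field, Enter falls through and submits as usual
              });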

        • 7speter 571 days ago
          As someone who’s been learning the last few years, a lot of learning materials suggest you don’t make a proper form element and just make your own button instead of using a form element and all the built-in benefits it brings, for reasons I don’t really know. I’ve probably wondered about this sort of thing for a good few hours total, but I guess this comment steers me in the right direction.
          • Falkon1313 570 days ago
            Yeah, don't do that unless you have good reason to do so.

            Browsers have a standard, default, expected behavior. Sometimes, in rare cases, it makes sense to break that in order to do something else instead. But you shouldn't just silently break it for no reason other than to confuse the user.

          • Mezzie 570 days ago
            In addition to what others have mentioned, using elements for their stated/official purpose helps with accessibility. Yes, you can use the role attribute, but some accessibility technology works more completely if you use proper elements. (And it can just be easier to use; I guess the best way to compare it is that using VO on a site where everything is just divs is like the blind version of this [0], or Comic Sans. Just a visceral 'ugh'.)

            [0] https://www.theworldsworstwebsiteever.com/
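
            To make the contrast concrete (a rough sketch; "save" here is just a made-up placeholder), a native button gives you focus, keyboard activation, and the right semantics for free, while a div-as-button has to bolt all of that back on:

                // "save" stands in for whatever the control actually does.
                const save = () => console.log('saved');

                // A native <button> is focusable, activates on Enter/Space, and
                // announces itself as a button to assistive tech automatically.
                const native = document.createElement('button');
                native.textContent = 'Save';
                native.addEventListener('click', save);

                // A div "button" needs role, tabindex, and keyboard handling added by hand.
                const diy = document.createElement('div');
                diy.setAttribute('role', 'button');
                diy.tabIndex = 0;
                diy.textContent = 'Save';
                diy.addEventListener('click', save);
                diy.addEventListener('keydown', (event) => {
                  if (event.key === 'Enter' || event.key === ' ') save();
                });

                document.body.append(native, diy);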

        • swid 571 days ago
          At my current company, we changed enter to stop submitting because it’s not intuitive to users. I think there are exceptions for forms with only one field.

          So perhaps that isn’t the best litmus test? I wonder if the assignment actually spells out that it should work that way.

          • sarchertech 571 days ago
            It’s intuitive to any user who has used a web form anytime for the last 2 decades.

            Breaking default browser behavior because you find it unintuitive is generally a bad idea.

            Now if you’re dealing with something that isn’t really a web form—as in you’re overloading input fields for some interactive non form like behavior—then I can see it.

            In my case, if the person had some well thought out reason for doing it, I might let it slide. But the vast majority of times I’ve seen it, it’s because the person doesn’t even know how to use forms. Not understanding the underlying technology at at least a very basic level is a strike against you in my book.

            • swid 571 days ago
              fwiw, in my industry, I should not assume anything about technical literacy. We’ve had honest debates over whether we can even rely on people to have an email address.

              Again, it’s not that I disagree with most of your premise, but in fact, as the comments (not just mine) show - people have issues with submitting too early, so it can be a good decision to break that depending on context. Fewer errors. Which is why I brought up how the problem is presented to the candidate. You might be filtering out people based on something less universally accepted and understood than you think.

              • sarchertech 571 days ago
                Web forms have a default behavior. It doesn’t matter whether you accept it, you should understand it.

                If you accidentally break it because you don’t understand it, that’s a strike against you in my book. If you consciously break it for a good reason and can coherently defend that reason, that’s a different story.

              • throwaway285255 570 days ago
                You can always find an outlier who is new to something, but you'd have to show that it's more than a majority of the users to motivate changing the default. Otherwise it's obviously more unintuitive.

                At least you'd have to get a sample big enough to make a statistically significant conclusion.

                And the burden of proof is on you, because you are changing the default, not the other way around.

                Think of any other products, they all have established default behaviours, yet it's also very easy to find someone who's never used a product before and finds it "unintuitive".

                And intuitiveness is only one part of usability anyway.

      • ozim 571 days ago
        This is a good explanation.

        Newbies drop directly into webpack/react and are overwhelmed or have a lot of trouble getting some details right.

        Unfortunately, a lot of details that are lost to time are only accessible if you go the "build stuff basically from the ground up" route, so you first understand why we needed these frameworks and what problems they were there to solve.

        There is also a bunch of people who go and rediscover the basic stuff and claim "you don't need a framework - vanilla js is enough" - but they also miss context and never ran into the problems that were pain points before we had frameworks.

      • irrational 570 days ago
        > Some, probably a minority, of student web devs may get conditioned to reach for libraries for every problem they encounter

        I teach an online web development course for a university as a side gig. Our students are forbidden from using any third party code, libraries, frameworks, etc. They have to do everything with native html, css, and JS.

      • Karrot_Kream 571 days ago
        This is true for other devs as well. I started playing around with networking code as a teenager (back when port 80 (before 443) was prominent but not the clear majority of traffic on the net.) Watching the evolution of TCP, HTTP, the C10K problem, retry storms, and event loops has made a lot of sense: clear responses to clear issues. But I've noticed a lot of junior devs grow up with the solutions (nginx, haproxy, exponential backoff, etc) but don't know the problems that necessitated the solutions. A lot of junior training is teaching juniors which solutions to apply to which problems, because unlike us they didn't watch the field evolve to where it is now.
      • phpisthebest 571 days ago
        I remember when jQuery was first released.... I wish things were that simple again
    • BolexNOLA 571 days ago
      This is such an interesting perspective. I feel the exact same way as a video producer/editor. The tools we are getting are incredible, and tasks that used to take literally days can be done now in minutes. It’s kind of baffling what we have in our toolbox. But it is also radically changing the expectations of our clients and I find I don’t get to spend as much time on the nuts and bolts I really enjoy - cut, color, and baking the best possible export for the situation - and instead have to find a plug-in to solve every little issue a client thinks up (or created for me haha).

      On a somewhat related but tangential note: I also now have clients demanding certain programs and ecosystems, which is absolutely ridiculous to me. The software I use to give you a final cut should in no way be determined by you. Yet somehow we have ceded that ground!

    • systemvoltage 571 days ago
      The list of things we've added in last 10 years is just staggering: https://landscape.cncf.io/

      The litmus test for abstractions getting out of control is if a KPMG consultant brings it up while sipping Gin and tonic in business class.

      Another problem is explosion of DSLs. Everything is yaml and you spend ages learning things like Terraform and Docker compose yaml syntax.

      • scarface74 571 days ago
        The opposite of using yaml to provision infrastructure in a few minutes was a months long acquisition process for new hardware.

        As far as Terraform/HCL and CloudFormation/Yaml, the alternative is writing code to do the same thing in your language of choice using the CDK with either CloudFormation or the recently ported Terraform/CDK.

      • oneplane 571 days ago
        I'd say DSLs significantly improve specialised tasks. Someone who is working on declarative infrastructure configuration all day really doesn't benefit from classic imperative languages.

        But if you do "a little bit of everything" and infrastructure on the side, you're bound to become a master of none.

        • systemvoltage 571 days ago
          On the other hand, having to learn 17 different DSLs (from crontab to AWS policy) isn't that far from reality, and it puts a lot of burden on people. Every product has a specialized DSL with unique restrictions that is just never general enough. Stack Overflow is overflowing with such questions. People are thoroughly confused by "127.0.0.1:9090" vs. "127.0.0.1:9090:9090" or "9090:9090" or "0.0.0.0:9090"... docker network configs. Good lord, it is terrible. Sorry, not to shed a bad light on Docker, but to exemplify that this is a common thing.

          One of the worst experiences in programming is writing CI/CD pipelines. One wonders why...

          • oneplane 571 days ago
            In most cases (including your examples) it's not really the DSL that is the problem, but the domain-specific problem. Port mapping in docker isn't uniform because there are many options available that are all equally useful and up to the user to select. Crontabs have to describe periodic time values, and pretty much any other application that tries to solve a specific problem will not be all that generic (since they are not meant to solve all problems generically at once).

            Like I wrote in my comment, how much specific things you need to know will depend on how many specific tasks you perform. If you specialise in nothing, everything will have depths unknown. It will also dilute attention/focus which in turn means you'll never be able to fully understand a specific domain or application. This was reflected in https://news.ycombinator.com/item?id=33056052 where the path it took for many developers and engineers in general to find a fitting solution is unknown to newcomers and also simply not taught in favour of delivering "reviewables" in hopes of a positive review (https://news.ycombinator.com/item?id=33056705) .

            Sidenote: mapping ports used to be rather verbose, you'd have to include the address family, the address and the port, on both sides of the mapping. That's 6 elements (excluding separators). So most applications including docker made various parts optional. You can map two ports to any interface, or opt to specify one interface but not the other one etc. A novice user of a new application might be best served by not using any shorthand forms and only using the fully qualified names everywhere. By spelling out every option explicitly (including optional values) there is no more guessing what may or may not have been configured.

    • not_kurt_godel 571 days ago
      > My friends and I would put together little toy websites with PHP or Rails in a span of weeks and everyone thought they were awesome.

      Agreed, LAMP was just so damn fun in how you could go from zero to a fully functioning site in a day or two. Having to manage all the statefulness of fetching & displaying data asynchronously from the client adds an incredible amount of complication both in theory and in practice.
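
      As a minimal sketch of that statefulness (the endpoint and element id are made up), even "fetch and show a list" now means juggling pending, success, and failure states by hand:

          const list = document.querySelector('#items');
          list.textContent = 'Loading...';                 // state 1: pending
          fetch('/api/items')
            .then((res) => {
              if (!res.ok) throw new Error('HTTP ' + res.status);
              return res.json();
            })
            .then((items) => {                             // state 2: data arrived
              list.textContent = '';
              for (const item of items) {
                const li = document.createElement('li');
                li.textContent = item;
                list.appendChild(li);
              }
            })
            .catch(() => {                                 // state 3: request failed
              list.textContent = 'Could not load items.';
            });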

      Also agree that the old tech wasn't necessarily better either - but it sure would be cool if someone could replicate the developer experience from back then and produce a result that's up to modern engineering and UX standards.

    • ravenstine 571 days ago
      Get rid of the assumption of "frontend first" and most of the complication of web development disappears instantaneously.

      Something I am appreciating about Svelte/Kit and Phoenix is that they are admitting what was right about PHP and server-rendered webpages. I'm a frontend engineer and think that SPAs have their place, but frontend JS represents a kind of tyranny seen nowhere else in tech.

      • 7speter 571 days ago
        It's really hard to find proper learning materials to get started with the backend; it's all a bunch of shortcuts glued together because the tutorial really wants to show you how to make a react frontend first and foremost.
      • throwaway81523 571 days ago
        Phoenix, do you mean the Elixir web framework? I had thought it was Elixir's counterpart to Rails.
        • throwawaymaths 571 days ago
          Phoenix liveview is server rendered interactive websites.
    • antod 571 days ago
      I think things are more complicated precisely because things got easier.

      The more complexity we can now handle, the more complexity we will create.

      • Falkon1313 570 days ago
        If you think we're actually handling it. Seems to me there's a whole lot of "let's just glue all these black boxes together, cross our fingers and hope they do what we want, and still do the same thing after the next update" vs. in simpler times actually writing and understanding the code.
    • conductr 571 days ago
      > Even business software felt simpler.

      This made me remember how acceptable a developer-designed utility used to be. Now, you can barely launch an MVP without finding or paying for design and high-quality UX/UI. If you do launch without them, it’s likely going nowhere in terms of traction. I’m sure this only seems to be the new rule and there are a few exceptions. But not many.

      Even Stripe all those years ago really took off after investing in design. They’ve remained rather polished. But they’re also an exceedingly well funded operation.

    • matwood 571 days ago
      > Information is readily available for free (I recall saving up a lot of money for a kid to buy specific programming books at the book store after exhausting my library’s offerings).

      Gave me flashbacks to when I was younger. I would work on a problem until hitting a wall I couldn't get around then go into Books A Million with pencil and paper and copy concepts/algos out of CS books. I was too poor to spend $50+ on a book at the time. Now, anything a new programmer wants to know is just a Google search away.

      But, there is so much more that just diving in can be hard. Even simple things are complicated I think mainly because expectations are so much higher.

    • bradlys 570 days ago
      > Salaries are much higher and developers are a respected career that isn’t just “IT”.

      I agree the compensation is higher. I don’t agree the respect is any higher. Software engineer is highly associated with terms like neckbeard, redditor, incel, autistic, etc.

      I’m extremely hesitant to tell anyone I’m a software engineer. If anything - I’ll lie and say I do product management just to avoid the association. People treat me way better when I say I’m a PM instead of an eng.

    • the_only_law 571 days ago
      > Information is readily available for free

      As much as people decry the internet and its role in modern society, I’m glad that I generally have easy access to massive archives of knowledge.

  • thomastjeffery 571 days ago
    Context.

    We have 1,000 more solutions to 1,000 more problems. We have extensive documentation on all the new things. Documentation is mostly focused on nouns, sometimes on verbs. The "what" and the "how" are easy to find.

    What we don't have is clarity on how they fit together. The overwhelming majority of work done in software is fitting the pieces together in a way that works. The "why" and the "when" are really difficult to pin down.

    The biggest overhead is trying to conceptualize the systems that make up the foundations of software: operating system libraries, compiler toolchains, shell environments, dependencies, etc.

    Because there is so much mysticism involved, people who have spent years or decades treating operating systems, package managers, development environments, etc. as playgrounds to explore have an advantage that is difficult to articulate, let alone teach.

    Anyone learning software today is presented with a lot of exciting opportunities to explore programming itself. There are handy web-based editors where you can write programs that do input and output all in the browser itself. No need to learn about shells or packages or git...

    But those things we factored out of the learning experience are probably the most meaningful subjects to learn about, if you want to actually create something. It's really tricky to find direction to go from vague concepts to working projects.

    • abnercoimbre 571 days ago
      > people who have spent years or decades treating operating systems, package managers, development environments, etc. as playgrounds to explore have an advantage that is difficult to articulate

      This is precisely why we've succeeded, so far, at hosting conferences [0] dedicated to this exploration. Enough people realize "something's amiss" and there's something lacking. Once they're aware of what it is they search for it hungrily.

      [0] https://handmade-seattle.com

  • oppositelock 571 days ago
    I've been programming professionally since about 1994, using C++, Java, Scheme, Python, Go, JavaScript and friends.

    Today, tools are incredibly better; compilers, debuggers, profilers. I'll take something from JetBrains or Visual Studio any day over what I had available in the 1990's. There were some gems back then, but today, tools are uniformly good.

    What has gotten difficult is the complexity of the systems which we build. Say I'm building a simple web app in JS with a Go backend and I want users to have some kind of authentication. I have to deal with something like Oauth2, therefore CORS, and auth flows, and to iterate on it, I have some code open in Goland, some code open in VS Code and my browser, and as a third component, I have something like Auth0 or Cognito in a third window. It's all nasty.

    If I'm writing a desktop application, I have to deal with code signing; I can't just give it to a friend to try. It's doubly annoying if it's for a cell phone. If I need to touch 3D hardware, I now have to deal with different API's on different platforms.

    It's all tedious, and it's an awful lot of work to get to a decent starting point. All these example apps in the wild are always missing a lot of the hard, tedious stuff.

    Edit:

    All that being said, today, I can spin up a scalable server in some VM's in the cloud and have something available to users in a week. In the 1990s, if there was a server component, I'd be looking for colo facilities, installing racks, having to set up software, provision network connections, and it would take me ten times as long to the first prototype. I'd have to write more from scratch myself. Much as some things today are more tedious, on net, I'm more productive, but part of that is more than 25 years of experience.

    • greggman3 571 days ago
      > If I need to touch 3D hardware, I now have to deal with different API's on different platforms.

      Been programming since the late 70s. Graphics are in many ways WAY easier than they were in the 70s, 80s, 90s. Sure there's Metal, DirectX, Vulkan, and OpenGL, but you can still use OpenGL on pretty much all platforms or use something like ANGLE.

      Back in the 80s, 90s, you didn't even have APIs, you just manipulated the hardware directly and every piece of hardware was entirely different. Apple II was different than Atari 800, which was different than C64, which was different than Amiga, which was different than CGA, and different than EGA, and different than Tandy, and different than VGA, and different than MCGA. NES was different than Sega Master System which was different than the SNES which was different than Genesis which was different than 3DO which was different than Saturn which was different than PS1 etc... It's only about 2005-2010 that it all kind of settled down into various APIs that execute shaders and everything basically became mostly the same. The data and shaders at an algorithmic level that you're using for your PC game are the same or close to it on Xbox 360, PS4, PS5, Xbox One, Mac, Linux. Whereas all those previous systems were so different you had to redo a ton more stuff.

      On top of which there are now various libraries that will hide most of the differences from you. ANGLE, Dawn, wgpu, WebGL, Canvas, Skia, Unreal, Unity, etc...

      • oppositelock 571 days ago
        It is easier in some ways, agreed. I kinda miss the olden days of doing page flipping in DOS through EMS :) You're right, it was the wild wild west up until D3D and OGL standardized everything, and a big shift happened in the shift from fixed function to programmable pipelines. I love this kind of stuff!

        I can still program mode 13h from memory, and I can only imagine the cool, arcane witchcraft that you have acquired having started much before me.

        To me, at least, OpenGL was my favorite; I mourn its death, but Metal, Vulkan and DX are close enough. What drives me up a wall is 3D in the browser, since the difficulties of dispatch cost from JS land are profound.

        • greggman3 571 days ago
          I have zero problems with 3D in the browser and I love it way more than native. I can edit/refresh way faster than native, it works across all platforms (Linux/Windows/MacOS/iOS/Android). And I can share things with a link, I don't have to build/sign/notarize/distribute for 5 different platforms.
      • dahart 571 days ago
        I feel like SGI’s Iris GL (OpenGL’s predecessor) was a really sweet spot for ease of touching 3d hardware. It didn’t have the features we have today, but it was really easy. Very similar to the easy graphics APIs in Processing or NodeBox (and ancestors and descendants).
      • cat_plus_plus 570 days ago
        OpenGL literally makes you write a program in a weird language to draw a circle and communicate with it using byte arrays. Turbo Pascal had nice libraries that took care of that for you on various graphics boards.
        • greggman3 570 days ago
          The OPs comment was about 3D. Today, browsers have the Canvas API that makes it trivial to draw on pretty much any platform. Way easier than it was in the past.
    • sillysaurusx 571 days ago
      Thank you for the eloquent comment. I showed up wanting to post something similar, but you did it better than I ever could.

      Can I ask maybe a personal question? Do you worry about becoming obsolete? I’m at the 16 year mark, and I was wondering when’s the appropriate time to panic. You seem to have made it about a decade longer than I have, so I figured I’d ask some tips.

      The hardest thing about programming now vs then seems to be staying relevant.

      • oppositelock 571 days ago
        If you ever stop learning and feel your job is tedious, move on. You are inevitably an expert at something after 16 years, and if that's a relevant technology, you're set. If it's not relevant, find something else. Your engineering experience, even if not too relevant, is valuable because it's also a process, not just an outcome, and experienced people can apply the process of engineering to anything.

        25+ years in, though, while I'm still an engineer in the org chart, I'm in charge of mentoring a lot of younger engineers, and my job has turned more into keeping them from making big mistakes and helping them grow as engineers, versus producing code myself. Through them, I can get much more done in aggregate, than if I sat down and did it myself, even though I'm probably faster than any engineer that I mentor.

        Given what I have done, startups are a great place to learn via trial by fire, while big companies are good places to earn some big bucks while in between cool startup jobs.

        • silarsis 571 days ago
          Seconding this - in the last ummm... almost 30 years I've been in the industry, the whole industry has only expanded, and you gain value as you gain skills - if you feel like you're not learning or not enjoying or no longer able to find valuable roles as eg. a programmer, branch into infra or security or data tech or any of a bunch of different places that have grown - your expertise in any field inside IT will make you more valuable in others.

          As an example, I've recently shifted from manager back to hands-on tech, and then from platform work (building tools for devs) to security - and knowing the engineering space makes me more valuable in the security space. I'll do this for a few years, then look for the next interesting jump. Nothing I've learnt is ever wasted - and I started when everything had to be patched to work on linux and sendmail.cf files were state of the art ;)

          Also, flipping between the big three types of workplace - startup, enterprise, and consulting - adds to your understanding of the world and overall value.

          When I started out, old engineers weren't a thing - it used to be accepted wisdom that this was a young person's field. That's not true any more, and I doubt it'll ever be - just keep learning, pace yourself (marathon, not sprint), enjoy yourself, and keep expanding your awareness/knowledge.

    • throwaway5959 571 days ago
      Great to see a dev with so many years of experience. What do you attribute your success to? (In addition to keeping your tech stack fresh)
  • throwaway09223 571 days ago
    Programming is enormously more complicated today because modern development environments aren't designed for simplicity.

    When I first started programming all of my tools had a simple workflow:

    * Write a single text file

    * Run a single command to build (cc thing.c)

    * Run the resulting file as a standalone command

    People learning to program are often new in general. They're figuring out their text editors. Figuring out how to run programs. Figuring out so many basic things seasoned developers take for granted.

    I became quite fluent in C, writing many, many useful programs with just a single text file. By the time I had a need to learn about linking multiple files in large projects I was already fluent and comfortable with the language basics.

    Contrast this with modern environments: I need to learn whole sets of tools for managing development environments (venv, bundle, cargo, etc etc etc). These development harnesses all change rapidly and I am constantly googling various sets of commands and starter configs to get things running. These are all things that a seasoned developer will be constantly dealing with on a complex project, but it seems like little effort has been put into creating basic defaults to simplify things for beginners.

    • rmah 571 days ago
      I started programming professionally in the late 1980's (almost a half century ago :-)... and we had IDE's, UI builders, databases, "resource managers", etc. On the UI side, we had to deal with windows, layouts, menus, event loops, controllers, graphs, etc, etc, etc. Pretty much everything you have to deal with today. It was, IMO, just as complicated (if in a somewhat different way).

      Yes, most commercial software packages were written in C. They certainly weren't in one file. They were large systems that took hundreds, sometimes thousands of files and 100k's to millions of lines of code. If anything, we had to write more code to do things because pre-packaged libraries weren't as comprehensive back then. I still remember waiting hours and hours for our application to build. And the old timers told us that that was blazingly fast, lol.

      I would agree with a previous poster that there are many more choices today. And I guess if you suffer from a fear of making the wrong choice, that is a problem. But the other side of that is that literally thousands and thousands of examples and even robust code libraries are now available for free that you can drop in and use. That is a HUGE plus.

      • throwaway09223 571 days ago
        Yes we had those things but my point is that they were optional and not commonly used except in large system projects. We didn't throw all that complexity at people learning the basics.

        The interface for beginners scaled all the way down to a very basic single text file, and most beginners would program for months or even years without using those things. It wasn't necessary to teach these tools in school - you could complete an entire degree writing single-file C programs without ever using an IDE.

        Many utilities were distributed as a .c file and a Makefile and that's it (before the rise of autoconf)

        • thewebcount 571 days ago
          I agree with most of what you’re saying, but for me, the IDE was waaaaay easier than dealing with a Makefile (yet another programming language that has nothing to do with my goal), or worse, entering random hard-to-remember commands and options on the command line. Even if my program was a single file, it was usually just Cmd-R to build and run. No need to memorize that I needed to add "-lm" if I was doing anything with math functions, or whatever.
          • throwaway09223 570 days ago
            Makefiles for basic projects are typically just two to five lines long. It's really different than a large project system, or the absolutely crazy things that autoconf generates.

                all:
                        cc -o myprogram myprogram.c

                clean:
                        rm -f myprogram.o myprogram

            They're extremely simple in simple scenarios. It's just a simple format for writing down the commands you run while working; the only real gotcha is that the command lines under each target have to be indented with a tab.

    • RunSet 571 days ago
      “There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.”
    • kaeshiwaza 571 days ago
      I still work like that 30 years later, thanks to Go... I would not like to be beginning today!
    • _wldu 571 days ago
      I agree 100%. Massive frameworks with massive complexity. I don't enjoy them at all. That's one reason I like Go. It's very modern, but I can still use vim and a simple Makefile to control it.
    • ducharmdev 571 days ago
      If you have a base familiarity with the command line, I think many build tools do have great basic defaults, e.g. `dotnet run`, `cargo run`, `npm run dev`, etc. Vite is another example of good defaults in frontend dev; it allows you to sidestep a lot of the difficulty one may run into with webpack.

      But I think you're right in many ways. For seasoned devs, CLI tools give a lot of flexibility and allow dev tasks to be automated in pipelines more easily, but they require one to read documentation or use the help subcommand to discover what else you can do. That's not a big deal when you are experienced, but I definitely remember struggling early on with things like that when I was teaching myself how to program.

    • NoraCodes 571 days ago
      I mean, invoking rustc on a single rust file is no harder than invoking gcc on a single file. cargo adds other features, as do make, cmake, etc for c.
    • falcolas 571 days ago
      I agree with all of this, and then some. My recent lament was how one tool in my tool chain, for a project managed by someone else in my company, despite being in the same language, required a specific variable to be set in my environment.

      Back in my day, being on the path was sufficient. And it is for all the other Java projects. I spent too long sorting that one out.

  • silisili 571 days ago
    For me...the rise of a billion devops tools with their own opinionated designs and syntax.

    Programming is programming, and the language doesn't make a ton of difference to me, though I do have strong preferences.

    But that is the -easy- part. The hard part is everything else. Jenkins? Gitlab runners? Github Actions? There are at least 5 that people regularly use, and all have their own way of doing things and their own syntax.

    To Docker or not to Docker. Kubernetes? ECS? RPM? How about just lambda functions?

    What about your config management.. Ansible? Chef? Puppet? Salt?

    It's the worst part of switching jobs to me. Especially when feeling like you have to invest time learning a system that's falling(or even has already fallen) out of favor.

    I used to love the idea of owning the whole pipeline. Now I just want to sit in a hole writing solid code, and let someone else handle all the CI and systems parts.

    • jordanpg 571 days ago
      This is in large part what drove me out of programming to become a patent attorney. I started out as a moderately competent enterprise java backend developer who really knew his way around the major app servers.

      Then things changed around me quickly, and one day I woke up and realized that I had hardly any idea what was going on any more. Cloud ops, docker, kubernetes, etc. It was exhausting trying to keep up and it made me lose interest in getting better.

      I always knew I wasn't going to be a programmer forever, but modern devops stuff accelerated the transition out.

      • iepathos 571 days ago
        It's just a matter of a specialization. You were a backend app engineer who wasn't specialized in containers and cloud deployments. You shouldn't be expected to know everything in the software engineering landscape. The same way you're a patent attorney now and aren't expected to practice criminal law.
        • Viliam1234 570 days ago
          > You shouldn't be expected to know everything in the software engineering landscape.

          I like the idea, but people keep talking about "full-stack developers" and "DevOps" (or "DevSecOps").

          Also, many people seem to interpret Agile as "people are completely interchangeable", which means that you need to know all technologies used in the project, because any ticket can be assigned to you.

      • jll29 571 days ago
        Good patent attorneys share with good software architects the ability to distil a design to its essence, and with good scientists the ability to detect the crucial novelty.
      • slyall 571 days ago
        Bit ironic that somebody who has problems keeping up with the fast changing world became a patent attorney..
        • ungamedplayer 571 days ago
          I think that almost any sane, logical human should be having problems keeping up with the fast changing world.

          There is simply too much shit, too much stupid and not enough time to understand things without sacrificing family, relationships, or health.

          The world is messed up, and it's not worth trying to understand it anymore; I don't think the payout is worth the cost.

        • tomcam 570 days ago
          Makes a lot of sense. People who know both programming and law are vanishingly few, and law moves much slower than tech.
      • davnicwil 571 days ago
        > Patent attorney

        Interesting career move! Are you using your software skills in any way (ie does it involve software patents?) or was it just a hard gear shift because of other interests/factors?

    • easterncalculus 571 days ago

        For me...the rise of a billion devops tools with their own opinionated designs and syntax.
      
      I'm convinced that some of the complexity is by design, or at least left around for the purposes of selling certificates and trainings. For example, there's basically no reason why Terraform has to be its own language; the declarative engine could be wrapped around any other one. It's much more profitable if your tool is a "skill" to build an industry around than if it is just something that people use, easily understand, and can be done with.
      • uptownfunk 571 days ago
        I doubt it. Do people make their products deliberately hard to use? Wouldn’t they be harder to sell?
        • t-3 570 days ago
          Who are the customers? Are they already experts who know what they're doing and can leap over any barriers to entry in a single bound? Such a person might invite a hard-to-use technology becoming popular as it gives them more leverage when negotiating rates.
    • iepathos 571 days ago
      I deal with devops tooling and infrastructure a lot for my startup, so the part you consider hard is the part I consider easy. It's just a matter of specialization. Doing devops like 20-30 years ago would've been awful compared to today where there are a ton of tools with clear documentation to accomplish full CI/CD chains.
      • t-writescode 571 days ago
        A real problem is that a lot of startups think it’s a good idea for everyone to have to do devops rather than people who know what they’re doing, and we’re all harmed for it until some devops specialists come in and undo the spaghetti
    • willtemperley 570 days ago
      I came to the conclusion that the huge number of DevOps openings are probably because the work sucks. The team is struggling, scrum didn’t work, so the next silver bullet is a complex k8s cluster with ci/cd. The real winners however are the big cloud providers as the overweight pipelines are spun up yet again. NoOps FTW.
  • mr_tristan 571 days ago
    There is so much abstraction going on now, debugging can be very, very challenging. Many systems are distributed and virtualized, so even trying to create a detailed visualization of "what's going on" can require mind-boggling amounts of information.

    Also, even if you understand and agree with Werner Vogels' mantra "everything fails all the time", it's incredibly challenging to make a truly robust distributed system. There's just so much happening so rapidly, low-probability problems become consistent failures as you scale, and the wrong recovery approach can have non-obvious second-order effects leading to bigger problems.

    • dhosek 571 days ago
      My traditional way of understanding a program was to find main() (or its equivalent) and find my way through the code from there. In modern codebases, this doesn’t work at all. I do find it a bit amusing that younger devs find it magical that I can trace through code without running it and find bugs.
      • atomicnumber3 571 days ago
        Yeah, I agree, and I didn't even start all that long ago. For many programs, if you read main, all it does is kick off thread pools and then wait. And so to find the "entry point" you have to know how all the handlers (HTTP or whatever) work for this system. And it also feels like this kind of stuff is the first thing people end up DIYing/NIHing, so even when languages or stacks have normal ways of doing things, somebody has always come along and muddied the water at some point and now you're stuck with it.

        Even UI is so damn hard these days because it's all JavaScript with a truly pitiful standard library and needs to handle 50 different screen sizes and browsers and run on both gaming desktops and 30 year old phones running leaked android pre-releases.

        I'll say it. I miss my days of programming in Java Swing. May GridBagLayout forever bless your pack()s.

      • acuozzo 571 days ago
        This is why the documentation I always write (especially in rushed cases in which I write no other documentation) is a "where do I start?" guide for newcomers to the codebase I'm leaving behind.

        I usually call it TREASURE_MAP so that it won't get skipped over like README is sometimes.

      • easton 571 days ago
        > I do find it a bit amusing that younger devs find it magical that I can trace through code without running it and find bugs.

        I’m finding at work with the mass reliance on AWS stuff that this skill is super necessary for a lot of code these days, because if you have to wait 10 minutes for your code to deploy before you can see if it’s right, you need to be able to see bugs before you run it.

    • jiggawatts 571 days ago
      The thing that blew my mind is learning that retry loops paradoxically reduce reliability by introducing bi-stable scenarios where a working system at just over 50% load can suddenly enter a death spiral where retries push the load past 100%. This causes more errors, which cause more retries, and so on…
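
      A toy model of that bi-stability (capacity, retry count and traffic figures below are all made up for illustration): a brief spike hits the service, every failed request gets retried, and whether the offered load settles back down or spirals depends on how close the steady traffic already was to capacity.

          CAPACITY = 100.0   # requests/sec the service can actually complete
          RETRIES = 3        # each failed request is retried this many times

          def offered_load_after(incoming, spike, steps=30):
              load = incoming + spike  # a transient blip briefly pushes us over capacity
              for _ in range(steps):
                  fail_rate = max(0.0, (load - CAPACITY) / load)
                  # Next tick: fresh traffic plus retries of everything that just failed.
                  load = incoming + load * fail_rate * RETRIES
              return load

          for incoming in (45, 50, 55):
              final = offered_load_after(incoming, spike=80)
              verdict = "recovers" if final <= CAPACITY else "death spiral"
              print(f"steady {incoming} req/s -> load after spike: {final:.0f} ({verdict})")
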
      • mr_tristan 571 days ago
        That's an excellent example. Mostly because I've probably had to replace simple retry loops more than once after they triggered various kinds of death spirals.

        Even recently, an engineer on my team spotted a retry technique where the system would check if the DB's replication mechanism was slowing down, and if so just re-enqueue the message for later. This ended up doing a bonkers amount of wasted effort, because when DB replication started getting overwhelmed, messages would constantly be re-enqueued and never processed. The queue ended up with a huge backlog of messages that were just popping off the queue, doing nothing, and hopping right back on.

      • derangedHorse 571 days ago
        Working at scale is truly a different ball game. A 1% yearly failure rate from a service serving around 1m users (a sizable but not huge number these days) means failures for about 10,000 different users per year on average. If the service is responsible for something like payments, then 10,000 different customers will probably end up calling support, support will have to identify and handle this anticipated failure mode, etc. Sure, error-handling strategies like exponential backoff and microservice paradigms like "sagas" try to teach us more error-avoidant approaches, but sometimes it can be a lot to handle.
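
        For what it's worth, a minimal sketch of the backoff idea (attempt counts and delays are arbitrary, not from any particular library):

            import random
            import time

            def call_with_backoff(operation, max_attempts=5, base_delay=0.1, max_delay=5.0):
                """Retry `operation` with capped exponential backoff and full jitter."""
                for attempt in range(max_attempts):
                    try:
                        return operation()
                    except Exception:
                        if attempt == max_attempts - 1:
                            raise  # out of attempts: surface the error (or trigger a compensating saga step)
                        # Sleep a random amount up to the capped exponential step, so a crowd
                        # of clients doesn't retry in lockstep and re-create the original spike.
                        delay = min(max_delay, base_delay * 2 ** attempt)
                        time.sleep(random.uniform(0, delay))
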
  • teeray 571 days ago
    The greatest difficulty I find programming today is hard dependency on cloud services without an adequate simulation of those services locally. I strongly believe that you need to be able to run your code, and many teams just can’t without sprawling (expensive) test environments. If you can’t run things on your machine, you sacrifice the ability to experiment without spending time deploying, checking with colleagues about shared test infrastructure, burning cash to run dev clusters, etc., etc. It just slows down the feedback loop.
    • 62951413 571 days ago
      And all the YAML-oriented programming it entails. Frequently the only way to test is to constantly re-deploy two-line changes via CI/CD just to see what breaks a few minutes later.
      • spamalot159 571 days ago
        Depending on your CI/CD pipelines it could be a few dozen minutes - up to an hour, which totally breaks all productivity flow.
  • dale_glass 571 days ago
    Years ago? Not very different. The 90s and before? A few ways.

    1. Concurrency. Multiple cores are a completely normal thing now, so having to think about how different threads may interact went from a theoretical concern to a very practical one (a small Python sketch of this follows after point 4).

    2. Dependencies. Back then you could just turn on the computer and start coding. Today many things have large amounts of dependencies that need installing, compiling or setting up. Weird problems can happen. I have an issue where VS Code just refuses to autocomplete in the test section of my project. Why? I have no clue, and VS Code is a giant of a thing. It's quite easy to spend days or even weeks trying to set things up and work out issues with things that aren't even the thing you were trying to write.

    3. Teamwork. Modern computers allow for large programs, which require teams to develop. A lot of the work in building modern successful software is in organization, record keeping, documentation and working with other people.

    4. Security. Pretty much everything interacts with outside untrusted inputs, and so it's far more important than before to treat every input correctly. Anything from image loaders to parsers to APIs may be exploited.
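
    To make point 1 concrete, a minimal sketch (thread and iteration counts are arbitrary; how many updates get lost varies by run and interpreter):

        import threading
        import time

        counter = 0

        def bump(n):
            global counter
            for _ in range(n):
                current = counter
                time.sleep(0)  # yield, so another thread can read the same stale value
                counter = current + 1  # unsynchronized read-modify-write loses updates

        threads = [threading.Thread(target=bump, args=(50_000,)) for _ in range(2)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print("expected 100000, got", counter)  # usually less; a threading.Lock fixes it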

    • mdaniel 571 days ago
      > Why? I have no clue, and VS Code is a giant of a thing

      That experience is why I relentlessly bang on the drum of "do not swallow errors" because it makes troubleshooting indescribably hard

      I would guess the more links in the call/dependency chain, the more opportunities for misplaced assumptions or laziness to sneak in, leading to your cited outcome
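
      A tiny before/after sketch of what I mean by not swallowing errors (the config-loading function here is made up): the first version silently hides the root cause, the second keeps it visible.

          import json
          import logging

          def load_config_swallowed(path):
              try:
                  with open(path) as f:
                      return json.load(f)
              except Exception:
                  return {}  # silent fallback: the real failure vanishes here

          def load_config(path):
              try:
                  with open(path) as f:
                      return json.load(f)
              except (OSError, json.JSONDecodeError):
                  logging.exception("failed to load config from %r", path)
                  raise  # keep the stack trace for whoever has to troubleshoot this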

  • memset 571 days ago
    UIs are way harder. Back in the day, you could drag and drop form controls in VB and bind the data to a database without writing a single line of code. Today, every step of that requires boilerplate. Backend, frontend, REST API, and so on.

    Distributing native apps has gotten harder in some ways with code signing required in order to share binaries without scary pop ups or the OS blocking outright.

    • gw99 571 days ago
      Oh yes this. I wrote a timesheet system in VB4 32-bit back in the day in 4 days and rolled it out to 10,000 users. And it worked. And it stayed working for 15 years until it was replaced.

      In 4 days worth of work now, I couldn't even do an evaluation of which UI tech is still going to be around in 2 years...

    • nhance 571 days ago
      If you haven't been watching the low-code/no-code space you may be in for a rude awakening.

      These tools continue to get better every day. The target is only "good enough" and once reached it presents an outsized advantage over custom builds.

      I fully expect no-code/low-code to become a nearly permanent fixture within many organizations.

      • bornfreddy 571 days ago
        Any examples that are interesting?
        • mike_hearn 571 days ago
          For example Oracle APEX comes with their DB and has a lot of capabilities.

          https://apex.oracle.com/en/

          The demo video takes you through creating a full blown database backed app with maps and other geo features, from nothing more than an initial CSV file. The code backing the app is represented as database tables, so you can use queries to explore the app.

          The core issue is really the same one the no-code/low-code platforms have always had, or even that VB6 had - the ramp isn't smooth. Eventually you hit the limits of the tool or there's a business requirement the tool can't meet and you get stuck. Often that requirement may be something indirect and non-obvious, like growing the team to the point where you start needing 'real' abstractions, or keeping up with some new feature the underlying platforms added that competitors are exploiting but which aren't exposed. Hence why so many companies have mobile apps that are just ordinary Android/iOS apps instead of written using low-code tools.

          • gw99 571 days ago
            I've been through that "no code" cycle at least three times, ironically including Oracle in two of those cycles, and they always died on their ass. I suspect the same will happen again.

            It's really hard to build something generic enough. MS Access was as near to it as was feasible I suspect.

            • thorin 570 days ago
              Oracle Apex has been around for over 20 years and is still in use by many people who have Oracle Db or product installations so if it is dying, it isn't dying quickly. It's actually really good for a lot of use cases.
          • mawadev 570 days ago
            I worked with apex and I must say, the premise and structure are nice and you learn the basic architecture of web apps as a beginner.

            BUT the performance is _very bad_ if you don't throw money at Oracle. The final nail in the coffin was when customers wanted us to build something that the black box doesn't offer. You are in for a wild, wild ride, because whatever happens in the backend is undocumented and you cannot debug it properly.

    • mike_hearn 571 days ago
      Code signing requires you to buy certificates, but that's the sort of thing you can delegate to a non-technical assistant as it mostly involves filling out forms, getting access to the corporate credit card and/or receiving phone calls. In very large organizations you probably have code signing certs already and will need to find who has access to them, although there's no theoretical reason why you can't have different departments independently buy certificates for the same organization.

      The bigger pain, and one reason why desktop apps became less popular, is that with the rise of macOS and (to some extent) Linux, you need to distribute your app to three platforms, all of which use different code signing technologies and approaches, none of which are portable/standards-based or convenient. Also, software updates were ignored by platform vendors.

      Nowadays things are a bit different. Windows got MSIX, which is a real package manager and which can silently upgrade apps in the background on a schedule even if they're currently running. macOS has the widely used Sparkle framework for updates and of course Linux has had updating package managers for a long time.

      Up until recently it was still a pain to actually use all those technologies, even though maybe developing your {JVM,Electron,Flutter,native,etc} app was itself quite pleasant and easy. My company has made a tool to fix that [1] and so you can now build self-updating Windows/Mac/Linux packages from your app files and all the signing is handled for you locally. It's an abstraction over the platform-native distribution technologies designed with an obsessive focus on being as simple as web app distribution is.

      Making this stuff easy in turn opens up possibilities for (re)simplifying the dev stack. For example, in some cases you could now make an app that just logs in directly to your RDBMS. No need for a backend/frontend, REST, JSON, web server frameworks, giant JS transpiler pipeline etc. Just use a real GUI toolkit and connect it directly to the output of queries. Any privacy or business logic can be implemented this way using a mix of row-level security [2], security-definer stored procedures [3] and RDBMS server plugins (e.g. [4] or [5]). There are lots of nice things about this, for example, it eliminates a lot of nasty security bugs that are otherwise hard to get rid of (XSS, XSRF, SQLi etc).

      [1] https://www.hydraulic.software

      [2] https://www.postgresql.org/docs/current/ddl-rowsecurity.html

      [3] https://www.postgresql.org/docs/current/sql-createprocedure....

      [4] https://tada.github.io/pljava/

      [5] https://pgxn.org/dist/plv8/doc/plv8.html
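
      As a rough sketch of the direct-to-RDBMS idea (assuming PostgreSQL with the psycopg2 driver; the table, role and policy names are invented), the client connects under the end user's own database role and row-level security does the filtering server-side:

          import psycopg2

          # One-time setup, run once by an admin role (plain SQL, shown here as a string):
          SETUP_SQL = """
          ALTER TABLE invoices ENABLE ROW LEVEL SECURITY;
          CREATE POLICY own_rows ON invoices USING (owner = current_user);
          """

          # The desktop client: no REST layer, just a connection under the user's own role.
          conn = psycopg2.connect("dbname=erp user=alice password=example host=db.internal")
          with conn, conn.cursor() as cur:
              cur.execute("SELECT id, total FROM invoices ORDER BY id")
              for invoice_id, total in cur.fetchall():
                  print(invoice_id, total)  # only rows the policy allows ever reach the client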

  • iLemming 570 days ago
    There's too much information today. Makes it difficult to focus. We have social media, email, Slack, Discord, Telegram, Hackernews, Reddit, etc. We say that our brains are too distracted. In fact, that all is over-stimulation of neurons.

    In the past, when we had no Internet, or when the resources for studying were scarce, we would intently stare at the screen, trying to understand what the program does. You'd have to disassemble and debug the program and even try to explain it to your cat, dog or a rubber duck. These days programmers are often not even familiar with the term "rubber duck debugging". We would study and read actual RFCs; programming wasn't "fish for that stuff on Stackoverflow or Google" kind of thing.

    We used to let our brains wander, thinking about the past and the present. That allowed us to generate awesome ideas.

    Today we have tons of tools and techniques that seem to be vastly better than the tools we had in the sixties, eighties, and nineties. But if you compare that with how much computational power, memory and storage capacity have expanded since then, and compare with how we utilize that power, somehow it feels like our applications are getting worse, not better.

    It's nearly impossible today to perform any programming without the Internet. And that's why programming today is much more difficult than in the past. There's too much information. Too many ways to succeed, and even more to fail.

    • RobCodeSlayer 570 days ago
      Beautiful answer. Beyond programming, our society is incredibly overstimulated. TikTok - and now YouTube reels, Instagram, etc - are moving towards short-form, algorithmically delivered content that gives you a quick dopamine hit for just swiping, and I think it’s killing our collective attention spans. I wonder if there will ever be a movement away from such platforms, but for now everyone seems always wired in.
      • unforeseen9991 570 days ago
        There are lots of movements to get away from such platforms, but they are underground for the most part. Places like /r/digitalminimalism on Reddit. The majority of people are not interested enough and do not have enough self-awareness or personal motivation to get away from it.

        It's like trying to change the way the world works due to climate change. There's a massive amount of inertia and resistance to make the changes required for a better future.

        I think the problem is continually compounded by ever-shorter attention spans, which also allow bigger and bigger long-term problems to go unaddressed.

  • theonemind 571 days ago
    Programming has turned into gluing together mystery meat. You can't make a business case for essentially writing every library you need. It would take too long to have anything like modern stuff. So you always deal with a huge pile of unknowns all of the time. The job consists more of the meta-skill of navigating uncharted waters intelligently, but you get dinged for doing the real job and people doing a terrible job get the rewards. The bad people glue the stuff together without trying to understand it, don't document stuff, pull in any dependency that gets them closer to their goal, and put up a victory flag when the thing does what they want it to do. Then the war goes on day by day dealing with under-documented stuff and a giant pile of dependencies with no internal logic. The hero goes on to do it again, and a team of saps tries to make sense of a Gordian knot for the rest of the product's lifetime.
    • Joeri 571 days ago
      I can’t shake the feeling we are inching closer to the programmer-archeologist from A Deepness in the Sky, where gluing together mystery meat is the only sensible thing that still needs doing.

      “We should rewrite it all,” said Pham.

      “It’s been done,” said Sura, not looking up. She was preparing to go off-Watch, and had spent the last four days trying to root a problem out of the coldsleep automation.

      “It’s been tried,” corrected Bret, just back from the freezers. “But even the top levels of fleet system code are enormous. You and a thousand of your friends would have to work for a century or so to reproduce it.” Trinli grinned evilly. “And guess what—even if you did, by the time you finished, you’d have your own set of inconsistencies. And you still wouldn’t be consistent with all the applications that might be needed now and then.”

      https://www.goodreads.com/quotes/9427225-pham-nuwen-spent-ye...

    • wpietri 571 days ago
      I agree with a lot of what you say!

      When I was a kid, my first computer had 4k of RAM; my next one had 48k. I learned a lot about how everything worked, because there was very little to it. Over the years things have grown, but I've had a chance to grow along with it.

      But somebody starting out today starts in the middle of vast layers of complexity. From processors and hardware that are hugely more complicated up through virtualization, containerization, rich OSes, languages and standard libraries with decades of history, tons of add-on libraries, and then out to user platforms with their own tangled ecosystems and decades of history. It's a lot!

      I think it's so much harder now for developers to develop "mechanical sympathy", that intuitive understanding of the rhythms of machinery. Apparently slight shifts in code can make three orders of magnitude of difference in performance. So much of what they deal with is historically determined. (E.g., what percentage of working developers has ever actually seen a carriage returning or a line feeding?) And it's all running on much shorter cycles with an ever-updating mass of tools, libraries, frameworks, and operating systems.
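
      One tiny, concrete example of that kind of shift (sizes are made up and exact timings vary by machine): two membership tests that read almost identically but behave wildly differently at scale.

          import timeit

          items_list = list(range(1_000_000))
          items_set = set(items_list)

          # Same-looking expression, very different work underneath (linear scan vs. hash lookup).
          t_list = timeit.timeit(lambda: 999_999 in items_list, number=100)
          t_set = timeit.timeit(lambda: 999_999 in items_set, number=100)
          print(f"list: {t_list:.4f}s  set: {t_set:.6f}s")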

      On the one hand, people can do some amazing, valuable stuff with very little training. That's great! But I think it's a lot harder to truly master the craft, and I'm concerned that a lot of programmers spend their time in professional contexts where the feedback loops are long or broken, such that they are encouraged to be superstitious rather than analytical, cargo-culting their way from one overly-aggressive sprint deadline to the next.

      • Tildey 571 days ago
        > I think it's so much harder now for developers to develop "mechanical sympathy"

        This has always been something that’s fascinated me, how some people don’t have this feeling of “mechanical sympathy”.

        Feeling “bad” (not the exact right term but I don’t know how best to describe this very intangible feeling) for a high-revving engine, or a CPU wasting cycles, or a structural component being under too much stress. Even if all of those things are within spec.

        It feels like the drive to avoid that feeling ends up creating better solutions: more efficient code, a better distribution of stress across a structure, etc.

        But I wonder if it’s something learned, or something that people just “have” to a certain degree. Not trying to pathologise everything, but it feels like the sort of thing that would be correlated with ASD

        • wpietri 571 days ago
          I expect it's both! Easier to learn for some, but nobody's born knowing it.

          However, we have to set ourselves up so that it's easy to learn. With my early gear, I could hear it. Hear the disk seeks. Hear the bits streaming down the wire. I could see it via the blinkenlights.

          In recent years, I've had to activate it, even build it. E.g., at the top of my screen I've got mini graphs of CPU, RAM, net, and disk activity. I'm constantly re-confirming and re-challenging my intuition of what's going on. And with distributed systems, I find various ways to keep feeding myself the sort of data that, over time, turns into intuition.

          And that's so hard these days! Not sure how fresh-out-of-college developers are developing those intuitions, but I hope they're finding ways.

    • xhrpost 571 days ago
      This sums up mostly what I've been feeling for the last 10+ years. When I started "web 2.0" development in 2005, there was a much smaller library ecosystem. I had to use what tools were available, most of which were developed by for-profit companies and thus heavily documented. If the feature I needed didn't exist, I had to build it. I spent much more time reading documentation and specifications than I do today. Now, I do try to read docs first for whatever FOSS I'm using, but often they are lacking the specificity I need and I end up just running through a bunch of Google searches and SO questions hoping to find my specific "glue case".

      Whatever challenging "science" aspect I was expecting from my "computer science" skills just doesn't seem to exist. The concerns of logic and program efficiency/structure are still important, just not what I have to do most of the time when gluing stuff together. I'm trying to accept my fate these days and just be thankful I work in a well-paid field with good people, but it's tough as I feel it does affect my overall performance and personal enjoyment as flow is so hard to achieve anymore.

    • ozim 571 days ago
      We have tools to fight it - code reviews, dependency scanning, QA.

      I am as guilty as the next guy of throwing stuff that "just works" out there.

      It is also important what kind of software you write - if you write frameworks, you better get unit tests and full blown process in place to make sure your framework will work in 10 years time.

      If you work on a business application - in 6 months time your application might be gone or requirements change in a way that everything you wrote does not matter anymore and future proofing was just a waste of time.

      I wish more devs/software engineers understood which one they are writing.

      • wizofaus 571 days ago
        > in 6 months time your application might be gone or requirements change in a way that everything you wrote does not matter anymore

        I'd say if you write code with the assumption this might be true, you're almost guaranteeing it will be. Business requirements do often change, but typically that means having to modify/refactor existing code so that core components continue to work the same way, because those core requirements haven't changed. While you can make occasional assumptions about which requirements are more likely to change than others, I'd much rather write all my code on the assumption the requirements won't change too significantly, and that the tools/automated testing and other processes are in place to support that. There's far too much "throwaway/POC" code that ends up getting used in production, often for years, then mysteriously stops working down the line because nobody ever assumed it would need to survive that long. Whereas I don't believe I've seen a project fail because too much time was spent ensuring code was well written and well tested.

        • wpietri 571 days ago
          > Whereas I don't believe I've seen a project fail because too much time was spent ensuring code was well written and well tested.

          This does indeed happen. Kent Beck tells a story of being called in to help a European insurance project where, sick of all the legacy code, they spun off a new company to start fresh. Teams of bright people spent years building The Right Thing and doing it The Right Way. But they never delivered anything actually useful, and by the time the sponsors were entirely fed up, they still couldn't promise anything soon. So they fired everybody, hired back one team's worth, and then started fresh on well-written code that actually did something immediately useful.

          I think there's a middle path between "write garbage that succeeds" and "write perfect code that never gets used". I think it's narrow but doable, and it requires dogged attention to both "ship early and often" and "build in a way that's sustainable over the long term".

          • ozim 571 days ago
            I argue that the real dichotomy is:

            "write perfect code that gets used a lot, frameworks, libraries"

            "write throw away glue code with use of libraries and frameworks"

            Because for me "write garbage that succeeds" vs "perfect code that never gets used" is a false dichotomy.

            Most devs want to write libraries and frameworks because that's where the "eternal fame" is - even better if you can write your own programming language, which is the highest echelon of software engineering.

            The ugly truth is most devs are driving a beat-up Honda, while Linus Torvalds, Anders Hejlsberg, and Bjarne Stroustrup are F1 drivers - you can rev up your Honda at the stoplights all you want, but that is not the same league.

            So most devs write business apps and many of them overshoot quality.

            • wpietri 571 days ago
              > So most devs write business apps and many of them overshoot quality.

              I think this can be true for sufficiently bad definitions of quality, which I agree are very common. To me, though, overengineering something doesn't really increase quality, because actual need puts an upper bound on quality.

          • wizofaus 571 days ago
            I don't doubt there are projects that struggle or even fail due to over-engineering, but I wouldn't consider code written on the assumption it won't be needed in 6 or even 12 months' time (therefore not needing, say, code review, automated testing or even the ability to build on anything other than an individual developer's machine, which I've seen happen more times than I care to remember) to be any sort of sensible "middle ground".
            • wpietri 571 days ago
              I have tried reading this sentence/paragraph multiple times without great confidence I know what you're saying.

              But if your notion is that there's no place for throwaway code, I disagree. The trick with throwaway code is to actually throw it away. Temporary and permanent code, used appropriately, are both vital to projects that are resilient in the face of real-world circumstances like lack of certainty and changing needs. It's the third kind you have to watch out for: https://web.archive.org/web/20190709091156/http://agilefocus...

              • wizofaus 571 days ago
                Absolutely there's a place for throwaway code and I even make a point of always writing it in such a way that there's no possible way it can end up being part of a production solution that ends up in front of customers.
                • wpietri 569 days ago
                  That's one solution. But a more healthy one is building a relationship such that temporary code can ship and then get removed. I've been a part of teams that did this all the time with experiments. We'd hack it together any old way for the experiment and ship it. Then when we had enough data, we'd remove the experimental code and do it right based on what we learned from the trial.
        • hardwaregeek 571 days ago
          > Whereas I don't believe I've seen a project fail because too much time was spent ensuring code was well written and well tested.

          Oh I definitely have. In fact I'd wager that it's the leading cause of failure for programmer-led startups. It's far too easy to spend time worrying about code and ignoring the business when your skillset lies in code and not in business.

          • wizofaus 571 days ago
            Sure, but that's failure due to not getting the business requirements right. If the code had been hastily thrown together with no regard for process it's hard to believe it would've produced a better outcome.
            • hardwaregeek 571 days ago
              Well it's priorities, no? If you spend the time to get the code right, you're not spending time validating your business or talking to customers. And indeed "getting your code right" is a very common excuse to not ship.
              • wizofaus 570 days ago
                I suppose I'm enough of a traditionalist to believe the two skillsets - one being validating a business/talking to customers, the other being the actual software development side of things - are sufficiently different that it's extremely rare that one individual is going to handle them both well.
    • bentlegen 571 days ago
      In the past you just wrote the mystery meat yourself, and I assure you it didn’t taste better. Like, I literally wrote and deployed my own garbage JavaScript dependency manager in the late 2000s and we all thought it was great. It wasn’t.
    • 314 570 days ago
      This should be carved into stone somewhere. It is the most accurate description of modern software development that I've come across.
    • dehrmann 571 days ago
      Programming has always been about gluing together mystery meat. Now we just glue together bigger parts.
  • mikewarot 571 days ago
    Programming in the past was far easier; you had better tools, like Visual Basic and Delphi. They had excellent built-in documentation, with working code examples. Delphi even included an installation builder for bundling your program. It all just worked. Your program would run in any Windows environment you or your customers were ever likely to use.

    For the most part, those programs written with those tools still work.

    Today everything is updated on the fly, and subject to random breakage. Git is better; everything else has gone downhill.

    • coliveira 571 days ago
      What I found incredible is that the new generation doesn't even know the feeling of working in an environment that gives you the power you had with something like Delphi, Borland Pascal, or Visual Basic. With one of these tools you could create pretty much any kind of application you wanted (with VB maybe you needed some Visual C++ for the difficult parts).

      Today's new developers have to deal with a landscape that is, in my opinion, discouraging. You have to spend a huge amount of time setting up environments, creating sane default configurations, and learning about hundreds of dependencies, before you can even start creating something that would be considered semi-useful.

    • nope96 571 days ago
      What I loved about Turbo Pascal (and probably Delphi), was you could put the cursor under any function, press F1, and get documentation that showed that function being used in an example problem. Great way to learn.
      • TheGrassyKnoll 571 days ago
        Turbo Pascal was a GREAT tool; it was like a breath of fresh air. It was 'fast' even on an 8086.
    • ipaddr 571 days ago
      It worked great until you had to install it on a machine that didn't have the VB runtime. The browser removed that aspect, which was handy.
      • makapuf 571 days ago
        Until it wasn't, and we redeployed a much heavier runtime in the form of Electron.
  • tgflynn 571 days ago
    I think the overabundance of choices is still a major issue, but I think an even bigger problem is the growth in complexity of languages and development environments. C++ was already a complex language in 1998. Today it's probably at least three times bigger, to the point where being a true expert in the language is almost beyond the capacity of a single individual. Other languages and ecosystems are no better. There's little comparison between what you had to know to consider yourself an expert web developer in 2000 and what you need to know today.
    • jll29 571 days ago
      Indeed, it's a monster language - every 5 years it looks completely different, but it'll take 15 years to get even the mainstream compilers to implement the changes from 5 years before.

      It makes little sense to violate everything we know about language design (esp. regarding simplicity and orthogonality) and the cognitive limitations of humans (esp. developers) just to keep the Frankenstein language alive.

      It got hashtables in its standard library only when everyone and their dog had already been forced to implement their own for 15 years!

      And the STL is so complicated that Stroustrup joked that he couldn't have done it if Alex Stepanov hadn't been able to pull it off. That may be a compliment to Stepanov's intellect, but it isn't a compliment to C++'s design.

      • randcraw 571 days ago
        The growth of language size in the past 30 years is even more stunning when you compare today's C++ to C, or Java to Pascal (or Object Pascal) from 1990. No modern language can be taught in less than a thousand-page textbook. And if you include the standard templates and libraries, the book size can double again.

        The evolution of any language inevitably adds reams of extensions, variations, and libraries. This makes the tool not only a lot more heavyweight, but much slower and harder to master, and personally, a lot less fun to use. Give me a tiny simple language (e.g. C) any time over a giant language that requires me to navigate multiple programming paradigms and layers of abstraction (e.g. C++).

        Modern languages are like having to speak in Latin. You spend all your time trying to please a nazi grammarian, rather than speaking simply and naturally, as the language was originally conceived.

    • Falkon1313 570 days ago
      You also have a whole lot more polyglot codebases now. So it's not just one language, it's all of them. As a web developer, you'll likely be working with PHP, Javascript, and Python, sometimes Ruby and Go for some things, along with a number of specialized config languages for things like docker, nginx, etc. And of course they each have their own package managers, dependencies, etc. If you thought _one_ language was getting complicated, just wait until you see a system that uses a half dozen of them.
  • thewebcount 571 days ago
    For me it’s the prevalence and growth of tools with completely shitty user interfaces. I started out on the Apple II as a kid. By the time I was ready for my first job, I was doing classic Mac and Windows 3.1/95 stuff. The tools were by no means perfect but they seemed to get better and better over time. Comparing, say, Metrowerks Code Warrior to AppleSoft BASIC was a huge improvement. (I could have done without Visual Studio’s tabs within tabs within tabs configuration windows, though.) But overall there was forward progress. You gained significantly more power, but you also gained significant ease-of-use (with some bumps along the way).

    Fast forward to today and the most painful parts of my day are dealing with shitty tools. git’s incomprehensible interface. Jenkins CI/build system that’s barely more than a log of every damn line the compiler outputs, but split up in a way that somehow makes it even harder to figure out what went wrong when something does go wrong. JFrog’s Artifactory that looks like it’s having seizures when you search for the thing that Jenkins built. And then when you find it, it lists the path, but you can’t click on the path to download it. There’s a separate button in a different place for that. These tools feel like they did a user study and whenever something was easy for the user, they threw that out and figured out a way to make it harder. Interacting with this shit is infuriating, especially when you’re on a deadline to get something out the door. I feel like I’m taking crazy pills when I bring up these problems and other people just shrug. As if that’s the way it’s always been and it can’t be changed.

    • whartung 571 days ago
      > For me it’s the prevalence and growth of tools with completely shitty user interfaces.

      Indeed.

      I'm an old-school green-screen hack, but I think that user interfaces are a black hole of time and bike shedding.

      Back in the day it was a green screen. If you were lucky, you might have line drawing characters. Mind, we're talking smart terminals and curses level work here, vs block mode IBM displays, but still. When options are limited, there's less time spent on discussing and implementing options.

      Did people settle? Yes, they did. They made do, they made it work, and work got done. I recall a tire store running a Linux desktop, with their work order system being some green-screen app in a terminal. Yet, tires still got mounted and sold.

      I think they've changed recently, but Lowe's used a terminal-based UI for all of their order taking and work orders. Whether it was a stove or carpet installation, or buying a couple of ChapSticks and a 30-pack of batteries. The employees were adept at navigating it and got the job done.

      Did they require training? Of course they did. But training was required no matter what the UI was, as the UIs simply encase process. Process unique to the company, and those processes always have to be trained. Just raw truth.

      I'm no artist, I have no color sense, my layouts are a struggle, and putting 3 fields on a 5K screen is challenging no matter what. But I think the real value of the capabilities of modern UIs is marginal at best for most routine use cases, specifically in the back office (where the vast majority of software is designed and written).

      • BlargMcLarg 571 days ago
        Most people aren't spending days on their UI. The bike shedding is happening where it affects you no matter what you run.

        I dunno why this place has an obsession with 'le good old ways', but either way it's not the elephant in the room.

  • jll29 571 days ago
    1. Yes, overburdened by choice is indeed one of the main challenges today.

    2. And the pace of development, not because one cannot learn fast enough, but what one has learned will quickly be made obsolete because of "developer fashion" (favourite front end JS frameworks, anyone?). This happens because communities are important, e.g. for answering dev questions and fixing bugs, so one cannot adopt a "dead" framework. Each new language suffers from lacking essential libraries, components (and for managers: talent for hire), so it is risky to adopt one that has not yet accrued critical mass, yet it is also risky to miss a trend.

    3. Computer security for many systems is critically important yet very difficult, and attackers are everywhere.

    On the upside, in the past machines were more heterogeneous; today there are just a few survivors: Windows, Mac OS X, Linux, and in the mobile space Android and iOS. Because many apps are Web apps, cross-platform has become easy, although the user experience of a Web app is in no way comparable to a native app. WASM is likely to address some of that, and - I hope - will remove much of the ugliness of abusing the HTML+HTTP paradigm, intended for technical documentation, for distributed applications.

    • mdaniel 571 days ago
      > 3. Computer security for many systems is critically important yet very difficult, and attackers are everywhere.

      I would offer that a lot of the attack surface came from the intersection of the Internet and shipping faster than testers could keep up. The Internet means a mistake's blast radius is no longer just limited to insider threats or downloading shady software onto your own machine; now every computer in the world is a potential threat. The shipping "faster than thinking" means years' worth of best practices about SQLi or CSRF or IDOR or or or get swept under the rug

      In some ways, this is the same as 2 in your list, and I guess some of 1 also given that some frameworks and tools help that problem and some are "welp, good luck, don't screw up"

  • asciimov 571 days ago
    The low hanging fruit has been taken.

    Years ago, a curious individual could solve some basic problem: maybe it's a better file system, or a better kernel, or a new scripting language. Your program might take off, and you could eventually find an entire industry built around the problem you tackled.

    Today, that's unlikely to happen. Gone are the days of some people hacking in their room creating a solution to some problem. Most of the "low hanging" fruit is now in very niche areas; you will have to do some extensive study just to understand the problem well enough to approach it.

    • Viliam1234 570 days ago
      Most problems are already solved, but unfortunately, in this context "solved" does not mean "you do not have to worry about it anymore".

      Instead it means "here are three or five industry-standard solutions, each of them working differently, each of them bringing different problems, you need to study how to use them, but some of the important things you will only learn from experience; by the way, five years later most of this will be obsolete".

      The application you are making is built on ten or twenty "solutions" of this kind.

  • siraben 571 days ago
    Setting up developer environments has become vastly more complicated than it was in the past. With language-specific package managers, ad-hoc installation processes (along with surreptitious editing of shell config files under the user's nose), variety of operating systems, library versions and instruction sets, trying to get reproducible builds and environments seems close to impossible without some unified tools such as Nix or well-written Dockerfiles. This is even before issues such as resolving dependency conflicts and trying to work with out-of-tree changes to dependencies and patched dependencies.
  • irrational 570 days ago
    I’ve been doing web development since the mid 90s. In the beginning it was just plain HTML files that you would manually FTP (not even SFTP) to a web server. Later it was HTML, CSS, and JS - but it was still a manual process. Eventually jQuery was added to help, since nobody was standards compliant, and instead of writing different CSS/JS for every browser, you just used jQuery and it took care of making it work on every browser. Files were still manually uploaded to the web server. Then came ES5. That led to an enormous floodgate of complexity. Suddenly you were expected to have a build system to transpile code to match the lowest browser specs you would support. Then came more complex front-end code via React/Vue. Then came TypeScript. Then came pipelines like Jenkins. And Docker. And serverless architecture. And it is absolutely mind-boggling how complicated web development is today when it is still just plain HTML, CSS, and JS being delivered to the browser.
    • ramraj07 570 days ago
      You can still type HTML, CSS and JS and push it via PHP or CGI.

      Not just theoretically, but literally. I started an internal dashboard with just Flask and Bootstrap. When I needed some nice interactive graphs, I found dc.js and added it.

      Then my forms got more and more complex and I just added Vue.js to the same pages. I ended up with extremely interactive and intuitive static HTML pages that interacted with the server side only when they needed to. All in one file.

      Now my team is starting to take over the project and our form has 100s of fields, so we are migrating to a proper build system with svelte.

      I have done amateur web dev from 1997 till now. It has never been easier to choose the system with the correct amount of complexity and boilerplate you want to be maximally productive.

  • yodsanklai 571 days ago
    > you can choose 20 different programming languages, dozens of framework and operating systems and you’re paralyzed by choice.

    In practice, a lot of these choices are already made for us. When you join a new project, the space of choices is limited. But when a choice is to be made, what I find difficult is that you need to reach a consensus with your colleagues. When you're introverted, it's taxing. There's always some person that needs extra convincing.

    Code review wasn't as pervasive back then, and it can be tiring too. Sometimes you need to explain again and again why you made a decision (it was in a design document, it was discussed in a meeting, and then the question is brought up again in the code review).

    But the worst part for me is the accumulation of abstraction layers and dependencies. I'm working on a project with tons of internal dependencies that are loosely specified and documented, all of which introduce some unreliability. The whole edifice is fragile, and yet it is expected to work 24/7. This causes a lot of stress.

    • ravenstine 571 days ago
      I really appreciate the code review process, but one of the sorts of tyranny that can come from code review is that everything needs some kind of explanation. It's not necessarily a good thing for crap to get into a codebase, but as long as the code works well and is readable, I am less concerned if something is redundant. Either someone will notice that redundancy later or it will remain in place because it's just not that important. But code reviews today can feel like an anal probe from FBI agents.
      • 0x445442 571 days ago
        Yeah, I call it collectivized micromanagement.
    • coliveira 571 days ago
      > But the worst part for me is the accumulation of abstraction layers and dependencies.

      Yes, open source has a lot of positives, but the bad part is how it has created a culture that incentivizes such a huge amount of dependencies for every project. It seems that for any new requirement the only possible solution is to add a new dependency on yet another library. This results in fragile builds and daily changes caused by dependency updates. In some cases it causes as much trouble as the perceived benefits.

  • jasoneckert 571 days ago
    While I agree with many of the other comments here, I'll offer one more way in which programming is more difficult today: speed.

    30 years ago, nearly all projects I worked on were waterfall and my development team was far more relaxed. Spending days tinkering and thinking of a good solution was valued over sprints and rapid commits. Of course, we got far less done back then, but it was also less stressful (in my experience, at least).

  • continuational 571 days ago
    When I started out, there was DOS. It came with a command you could type in: QBASIC.

    Typing this one magic word brought up an IDE, including an editor with highlighting, an interactive help system, samples, an in-editor REPL, and single-key shortcut to run the program. I can't remember if it also came with a debugger and a way to create stand-alone executable, or if that came later.

    It had built in commands for drawing, input and sound, all well documented. And the UI was straightforward and intuitive.

    This doesn't really exist anymore.

    • dragonwriter 571 days ago
      > I can't remember if it also came with a debugger and a way to create stand-alone executable, or if that came later.

      It didn’t, it came earlier: QBasic was (and is, it stopped being part of the default install with Win2k but is still available for current Windows OSs) a stripped down interpreter-only version of QuickBASIC, a compiled BASIC.

    • jodrellblank 571 days ago
      Windows comes with a command you can type in: powershell_ise

      Typing this command brings up an IDE, including an editor with highlighting, an interactive help system, samples, an in-editor REPL, and single-key shortcut to run the program, tabs, step-through debugger, breakpoints, intellisense, snippets, extension system.

      It has access to the .NET framework that C# uses, such as System.Windows.Forms, System.Drawing, System.Console.Beep, System.Media.SoundPlayer, and the UI is straightforward with an editor and a console pane. The library is well enough documented if you can use the MSDN website, but certainly not as simple as BASIC and SCREEN 12, LINE (10,10)-(20,20).

      This does really exist, but it's deprecated despite being powerful, simple, convenient and useful. Instead the recommended path is to download and install a new PowerShell, VS Code and some VS Code extensions, to get a less integrated, more complex, not-bundled setup.

    • vore 571 days ago
      You can do the same thing if you type https://editor.p5js.org/ into your address bar ;-)
    • bluedino 571 days ago
      It seems like this could be a special website or something packaged with a browser like Firefox.

      A plug-in or add-on for a tutorial on how to make HTML/JS pages right there inside the browser.

      • Falkon1313 570 days ago
        Go back 25 years to 1997 and you'll find it. Netscape Composer was built right into the browser.

        That's not quite like QBASIC was though. Making webpages is just a dim shadow of what QBASIC and controlling your computer was.

  • mike31fr 571 days ago
    I think it's a lot easier to build what you have in mind than it was 20 years ago when I started because of increasing levels of abstraction and also because of open source.

    20 years ago, if I needed to convert a JSON file to CSV (that's a very bad example because 20 years ago JSON was just born and nobody used it, but no other example as simple as that one comes to mind, sorry), that would have taken me days of coding just for that single utility function. And probably lots of headaches with regular expressions or syntactic/semantic parsing.

    Today, I type "npm install json2csv" and I'm done.

    The only thing that might be harder for people starting today is that the level of abstraction is so high today that we almost don't even need to understand the underlying computer science to be a developer, and so the few rare times when you do need to understand what's going on behind the scenes, that might be complicated. Other than that it's only advantages.

    Progress and history always go this way, things always get easier and cheaper to build. I don't see why programming would be different.
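
    And even without reaching for a package, the same kind of conversion is a handful of lines with a modern standard library. A rough Python sketch (the file and field names are hypothetical):

        import csv
        import json

        # Hypothetical input: a JSON array of flat objects, e.g. [{"name": "a", "price": 1}, ...]
        with open("products.json") as f:
            rows = json.load(f)

        with open("products.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=rows[0].keys())
            writer.writeheader()
            writer.writerows(rows)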

    • pdntspa 571 days ago
      I'm not so sure about that, CPAN was a thing as early as 1993, and I remember loving the articles in Visual Basic Programmers Journal reviewing various libraries and OCX controls. If whatever it was had some popularity, you would probably find something that could do the conversion.

      But whether it was free or not? Maybe another matter entirely.

      And worst comes to worst there was always some awful code to cut-and-paste on ExpertSexChange

    • Falkon1313 570 days ago
      >the few rare times when you do need to understand what's going on behind the scenes, that might be complicated

      Except they're not few or rare. Got a bug? Good luck figuring that out. Performance problem? Who knows. Most of the stuff you're using, you didn't write, you don't understand, it's all just black boxes, or as someone else said, mystery meat, glued together. And when it starts to smell, and oh it will, you're just looking at a mountain of mysterious complexity.

      It's not like building stuff anymore. It's sticking stuff together. "Hey, it's just like playing with Legos!" Except quite often you end up with it feeling like stepping on Legos.

    • ycuser2 571 days ago
      > Today, I type "npm install json2csv" and I'm done.

      You have to do a little bit more: you have to search for it and decide which of the n converters you will use (licence, other dependencies, ..)

      But I get your point.

      • routerl 571 days ago
        You just made something click for me.

        My parent was an electrical engineer. While their skillset was varied, and heavily weighted towards physics and math, their day job was mostly about knowing how to pick parts out of a catalogue, which catalogue to use when, and maintaining good relations with the suppliers behind those catalogues. I always saw this as a weird disjunction.

        Until now.

        I've just realized that while my core skillset is in math, logic, computer science, and system design, I do spend a lot more of my time researching repositories, packages, licenses, etc, than actually coding. But with the new perspective your comment gave me, I finally saw that as totally in line with other, older engineering disciplines.

        So, to address the original point of this thread, it seems as if programming has finally evolved into a semi-mature engineering discipline, wherein our ability to gauge the quality of existing supplies is more important than our ability to produce new supplies.

        • rileyphone 571 days ago
          Brad Cox had a concept of Software ICs, in the chip sense, that align with this. He envisioned these as sharable objects rather than just libraries, but either way that’s the present we’ve been given accidentally.
          • spc476 571 days ago
            Motorola developed the MC6839 which allowed one to do floating point math with the MC6809. The MC6839 was not a floating point CPU, but an actual 8K ROM of 6809 machine code that implemented floating point math (with position independent code so it could be mapped anywhere in the memory map). This is about the closest I have ever seen of a "software IC" in my life.
          • anon25783 569 days ago
            I mean, we do call them "shared object files".
  • wglb 571 days ago
    This is a great book--it starts with JWZ (duct-tape programmer) and ends with Donald Knuth. (One theme that runs through the book is that nobody likes C++ and nobody uses debuggers.)

    When I read that section, I wasn't really convinced. Not that programming isn't quite different from what it was way back when, but the difficulty we face is that there is so much more software around. Some bad, some good. Many more useful tools.

    For example, since I started programming the following innovations have significantly added to tools that we have to apply to programming:

      * The C language. Much better than writing high-performance code in assembler.
      * SQL. The power of SQL is extraordinary. 
      * Lots of disk space. Now we don't have to store our programs on punched cards.
      * Extraordinary increase in memory.
      * Networking, beyond dial-up connectivity.
      * Other highly useful languages, such as Python
      * Google, probably the top-used resource by programmers
      * IDEs
    
    The illusion that we are hampered by lots of choice is a bit illusory. Usually our programming environment is set not by our own individual choice, but often by the environment we plug into--by what resources we are going to access, and other team decisions.
    • ted_bunny 570 days ago
      "The illusion that we are hampered by lots of choice is a bit illusory."

      Artifact of a rewrite? Still, undeniably true!

  • thewebcount 571 days ago
    Oh, the other way that things have gotten more difficult is that documentation now seems to lack the depth it once had. For example, I tend to use Xcode for development. There are all sorts of functions within the app that I know about but can’t comprehend (like the memory graph), and the docs are barely more than a screen shot labeling everything and giving a 2 sentence description of it at the most basic level. Like on the memory graph, what do the different symbols tell me? Why are some blocks of memory represented by a square, and some by … what is that? A pyramid? What’s the significance? How does that help me in tracking down my memory bug?
    • pevey 571 days ago
      I second this about the declining quality of docs. Part of the proliferation of different solutions to the same problem IMO is that existing solutions might have such poor documentation that it may seem easier to seriously reinvent the wheel. I can vividly remember when php3 came out because I happened to be working on a project right then. This was before you could carry around iPhones and iPads, so I printed the whole manual and put it in a 4-inch binder so I could read it on the train, etc. I can still see those pages. I was new to programming in general at the time, but I could learn anything I wanted to do with php using those docs. Every single function was thoroughly explained. Haven’t seen anything like it in a very long time. The Prisma docs come closest.
  • logicalmonster 571 days ago
    The standards for software are definitely elevated.

    Maybe about 20 years ago, if you had a website that allowed users to post a comment and upload a small jpeg, it was considered crazy bonkers cool.

    Today, you probably need some advanced 3d UI that communicates with your phone and millions of users in real-time with geotracking and 100 other features to get a pizza to your door in under 5 minutes to barely raise an eyebrow about the technology.

  • dagmx 571 days ago
    Security is harder.

    Devices have become more complex and interconnected. Expectations for software are also higher, while more resources are poured into finding and exploiting vulnerabilities.

    Keeping on top of all of that, isolating your memory access and permissions, etc… is a lot harder today than it was even a few years ago.

    In every other way I think programming is easier now. Languages are more ergonomic, even the ones that are decades old. Libraries are more easily available and there are tons more resources today than ever before for learning.

  • gw99 571 days ago
    Well my pet hate is that I write a lot of YAML. It's poorly structured, unreliable, nuanced, schemaless and difficult to edit reliably. I prefer XML and XSLT to this shit show. Oh and unnecessarily distributed things.

    In summary, apparent simplicity that causes actual complexity is a big problem now.

    • ted_bunny 570 days ago
      I'm still surprised at the apparently unanimous disinterest in a tabular format for data serialization languages. XiMpLe took me way too long to find. I can't recall if it works on YAML, though. But it removes a lot of the headaches for me.
    • everfrustrated 570 days ago
      Are you not using an IDE to edit YAML? I edit YAML all week long, and it all gets local schema and syntax validation.
      • gw99 570 days ago
        Yes VScode but fuck me it's still painful.
  • dtagames 571 days ago
    There's some truth to this, but unless you're in a senior architect position or starting a clean sheet project, it's unlikely that you'll have the freedom to choose your platform or framework as a programmer. Like pilots, most people will start out doing "short haul" work on existing equipment and routes. This means you'll have to learn the frameworks that your employer (or desired employer) uses.

    And those things, frameworks and platforms, are the biggest technical burdens for programmers today. The ability of the web to create what we used to call "interactive apps" (but are today just apps) has led to the desire and expectation that all web content will take on this level of polish and appearance. While that's possible, it's also arduous. In today's world, you must also learn the frameworks, tooling, and CI/CD processes that lead to your work making it onto someone else's screen. That's a whole lot harder than what we used to do -- like when publishing meant copying a floppy and putting it in an envelope, for example.

    The ability of platforms to change their specs and rules all the time (and their insistence on doing so) is another new programmer burden. In the old days, one could write to a piece of hardware or OS and expect that code to run for a long time. Not so anymore. Now the hardware is virtualized and many layers of middleware and SaaS are required to make your code do anything. All of those are moving targets and will change out from under you, without your desire or permission, while you try to continue to deliver new code and service your old code.

    Finally, and this is the final nail in the coffin for some more experienced programmers, the misunderstanding of the concept of "agile" or "XP" and how it became scrum -- a series of micromanagement theatrics and paperwork pushing -- has made programming a lot more difficult and a lot less fun. One of the best parts of software development was the unpredictability and experimentation that led to innovative results and small, incremental improvements. Close contact with customers was also a hallmark of early software development. Today's top-down, "How long will it take you to write and debug a feature that doesn't exist?" management mentality cannot and does not lead to quality software, as everyone can tell. It does lead to programmer burnout, quiet quitting, and a lot of wasted time in dev shops.

    • thakoppno 571 days ago
      > frameworks and platforms, are the biggest technical burdens for programmers today.

      This sounds precisely correct from my experiences lately. At some version of scale, one simply cannot run the application on just a workstation. The complexity of distributed systems has gone way up, primarily in my opinion, because it solves an organization’s problem, not an individual programmer’s.

  • eitland 571 days ago
    When I started, VB and Java Swing interfaces were allowed.

    Today, every single business application is somehow supposed to have its own design language and can't use any standard UI library.

    Also, deeply ironically, I think certain web applications are now heavier than Swing applications (obviously not IDEs, but ordinary applications).

  • jleyank 571 days ago
    Sorta sad. Decades have passed and it’s harder to program machines that have staggeringly more resources. One would have thought that some amount of overhead could have been added to create an abstract environment to work in. Ah well, more job security with complexity I guess.

    And I assume most people are considering web applications rather than native. If so, it’s like a full circle from xterms…

  • coffeedan 571 days ago
    I think programming is made worse today by how programmers are managed. Agile, Kanban, Scrum, JIRA - they’re all there so upper management can get a sky high view of what’s going on, but it’s all based on “estimates” that are often shots in the dark. They might give the CEO a warm, fuzzy feeling, but they’re often based on fiction. And I waste half my day in “planning poker”, “retrospectives”, “standups” - yuck.
    • lelanthran 570 days ago
      > And I waste half my day in “planning poker”, “retrospectives”, “standups” - yuck.

      Only half a day? I've seen these take, cumulatively, more than a day.

      Then add in the requirement that changes/proposals need a technical document review (and a followup if necessary), that any planned feature needs input from everyone involved, and that some proposals which may touch another team's work need their buy-in (so you have to schedule a proposal presentation with them), and you can easily see a sustained 25+ hours per week used only for meetings.

    • spaetzleesser 571 days ago
      I agree with this. Tons of process that doesn't improve product quality in my view.
  • Alacart 571 days ago
    I don't think it is more difficult than it used to be. There are more layers in some cases, but most of the time those are abstraction layers that make things easier (or at least attempt to).

    I think the difference is maybe one of perception. We can do more so the baseline expectations of users/customers are higher. I also think that there has been a higher growth of people specializing in one area rather than every programmer sort of being a full stack generalist by default. So for a front end specialist, databases and server side may seem like a black box and more complicated than ever. Or someone specializing in kernel programming might think front end is more complicated than ever.

    Generally you can still do things the way they were done in the past if you want to. New tools might make things easier and learning them might seem complex, but you don't have to.

  • MonkeyMalarky 571 days ago
    I feel like the existence of sites like stackoverflow has led to lower quality in available documentation. Like the responsibility has been offloaded.
    • voidhorse 571 days ago
      I’m inclined to agree. I also feel SO gives newbies the wrong impression about programming.

      It’s great to have quick solutions to small problems, but too often newbs stop with the SO answer and move on, which means while they’ve solved their problem in the moment they never spend the time to actually learn the underlying reasons why the solution worked or explore the issue deeper to actually achieve understanding.

      The good software engineers I know are all yak shavers. They’re willing to spend extra time learning how something really works and ensuring they fully understand an issue and its solution, even if the issue is just a small part of what they need to accomplish.

    • chrismartin 571 days ago
      Perhaps, but they also deliver thousands of helpful recipes for "using thing X with thing Y" that neither X nor Y's documenters would have anticipated the need for. We are fortunate that Stack Exchange has tried to keep this community relatively open (and not e.g. paywalled).
  • falcolas 571 days ago
    Marketing.

    Every choice we make as developers today not only has to be vetted by the team, it has to fight against all of the other options and the opinions created by people who are paid to convince others to use their solution.

    I’ve had to push back against both my peers and managers who suggest technology that’s been marketed to them. And by push back, I mean spending days researching these new technologies to see if they’re just an old technology with new wrappings, a VC backed promise that’s still a few years from being usable, or doesn’t solve the problem at all.

  • gU9x3u8XmQNG 571 days ago
    Honestly; navigating the corporate and other “professional” systems is the nightmare for me - especially in big business.

    I won’t go into too much detail, as it will just turn into a vent. And the current industry direction in general doesn’t make it easy (containers, industry leading black-box “solutions”, system admin and security).

    In my immediate experience; cowboys that just “do” no matter the risk, informed or not; are really thriving right now. Not sure if this is a post-pandemic large lockdown results driven thing..

    But if you follow good/great practice, are a responsible, policy and process following individual - constructive and critical thinker; you’re in for a hard time. Smaller firms seem to handle this better, in my past experience.

    There's also a group of individuals in between these two that seem to be struggling.

    • Falkon1313 570 days ago
      "Best practices" is cargo culting. You wanna talk about people that just 'do' no matter the risk, informed or not - well, blindly adhering to what someone else calls 'best practices' is just exactly that. And they're often bad practices.

      Of course, that's situationally-dependent. If you've got the constructive and critical thinking bit, hopefully you're questioning those practices. If cowboys are thriving, there's a reason why. They may be wrong sometimes, but sometimes they might be right.

      • gU9x3u8XmQNG 570 days ago
        Thanks for the response. Definitely picking up what you’re putting down.

        My choice of words may not have been ideal. I guess I meant fundamental good practices, more so than the buzzword-like "best practices" criticised above.

        Not sure how you feel about the rest of the post?

        Either way; thanks.

  • Veuxdo 571 days ago
    Maybe not harder, but microservices have made programming more frustrating. You have reduced freedom to write business logic; now most of your code is infrastructure, connections, and security (you are using the internet as a data bus, after all). Programming with microservices feels like practicing taekwondo in a closet.
    • mdaniel 571 days ago
      I don't think "reduced freedom" was the intention with "microservices", in that their contract was the network boundary. That said, I am 100% on board with microservices making things more frustrating, because deploying a service is one thing; managing and upgrading it is a whole other can of worms.
  • c2h5oh 571 days ago
    Much larger projects. More and more often you are working on just one aspect of something bigger, without having a good grasp of the whole thing. Situations where you and maybe a small team own the entirety of a project are less and less common.

    Concurrency and multi-threading. To get good performance you used to care about a single thread on a single CPU core that you had exclusive access to. Now you have to utilize multiple cores, care about context switches, and in more extreme cases handle NUMA memory architectures. It's hard, and the fact that it's slightly less hard with Go is one of the reasons for the language's popularity.
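
    For a rough sense of what that looks like now, here is a hedged sketch in TypeScript/Node (not the commenter's own example; the workload, file name, and numbers are invented purely for illustration, and Go's goroutines make the equivalent shorter, which is part of the point):

        // sum.ts -- fan a dummy CPU-bound workload out to one worker per core.
        // Assumes compilation to CommonJS and Node's built-in worker_threads module.
        import { Worker, isMainThread, parentPort, workerData } from 'worker_threads';
        import { cpus } from 'os';

        if (isMainThread) {
          cpus().forEach((_, core) => {
            const w = new Worker(__filename, { workerData: core });
            w.on('message', (sum: number) => console.log(`core ${core}: ${sum}`));
          });
        } else {
          // Each worker burns CPU independently: no shared memory, no locks.
          let sum = 0;
          for (let i = 0; i < 50_000_000; i++) sum += i % (workerData + 2);
          parentPort?.postMessage(sum);
        }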

  • dynamite-ready 571 days ago
    The weight of user expectations, and the tools created to meet them. On top of this, the big tech companies have leveraged the situation to put large segments of software developers on a treadmill, using open source projects and excellent marketing to impose a 'soft' form of standardisation on common practices (best demonstrated by React / Typescript).

    My tongue is in my cheek, but I often wonder.

  • jollybean 571 days ago
    A lot of arbitrary complexity in our stacks and it can be totally overwhelming for young people just moving through the drudgery.

    Look at an Android project: there are maybe 12 different kinds of files! For a 'Hello World'. You have manifests, Gradle (which is yet another programming language), snippets in Kotlin and Java, an entirely different XML 'language' for view definitions, massive APIs, and massive complexity and 'weight' in the emulators.

    It's hard for people to focus on the problem space when we are overwhelmed with layers of tooling and abstractions.

  • gajus 571 days ago
    Expectations. If you look back at software 20 years ago, even the most popular websites had relatively low complexity, both in terms of UI and functionality. A single person could have built most of them and value came primarily from content. Even early versions of Google were not that complex to build. Now the entry barrier for a minimum viable product has dramatically increased, to the point where it takes a big team and years to develop a program/website that meets a user's expectations for their primary website/program.
  • lr4444lr 571 days ago
    There's no time to do things correctly: only "pretty well". The bar for productivity in terms of time to iterative deliverable business value is getting higher, and it's frustrating having to make quality/throughput trade offs.
  • akitzmiller 571 days ago
    About 20 years ago, I was at a pharmaceutical company and, to put together a web application:

    - we had to buy a prod and a dev database server

    - we had to buy a prod and a dev application server

    - we needed Oracle as a database

    - we needed a DBA, a sysadmin, and a hardware team

    - we needed Tomcat / Java and Oracle development expertise

    The idea of developing and running this application with a single person would have been laughable.

    Today, because tools are soooo accessible, I can, and do, routinely spin up my own VM, apply Puppet, drop my MySQL and Django containers onto that VM, and pull https certificates in addition to doing the front and back end software development.

    Life would be waaay simpler if I could just write server side web application code and wait around for database developers, sys admins, and front-end folks to do their thing. Imagine not having to learn a testing harness because there are actually people testing the software!

    • sseagull 571 days ago
      This is one thing I feel conflicted about. On one hand, it is a lot easier to do those things (VMs, docker, DBs, etc).

      On the other hand, now a regular run-of-the-mill developer is also supposed to be knowledgeable about sooo much stuff that is not so directly related to their code. As you say, sometimes it’s nice to be able to hand that off to specialists - a person can’t be good at everything.

      • unforeseen9991 570 days ago
        I suppose this is the void that DevOps is trying to fill
        • Viliam1234 570 days ago
          That word seems to mean different things to different people.

          I have heard explanations on the internet like "it means that developers and operations are communicating more and working together as one big team", but so far every company I have worked at has interpreted it as "in addition to developing the application, you are also supposed to do the operations".

  • RickJWagner 571 days ago
    Just so much crap to learn.

    I started in the mainframe/COBOL days. There, the programs could be sophisticated but the systems documentation was excellent. Just a little complex and you really had to understand algorithms and such.

    Then came client/server. That brought transaction monitors (like Tuxedo) and took away the benevolent dictator model (IBM) that gave you transactions practically for free. New stuff to learn, pitfalls to avoid.

    Then came J2EE. A ton of complexity and false starts on fledgling technology. XML processing, etc. Finally REST came along and made things somewhat understandable -- but you still had to manage transactions yourself.

    And now we're in the Kubernetes age. Once again, tons of infrastructure to learn, but a strong framework. (So somewhat like the mainframe days.)

    It's all a big circle. You have to constantly learn. I really think it is harder to get on board now than it was in the past.

  • BiteCode_dev 571 days ago
    Fads, noise vs signal ratio, paradox of choice, economic pressures, culture wars, value signaling taking over, incentives to bs/fake, a sea of unskilled juniors, higher customer expectations, everything talks to the network, OS vendors' policies, legacy everywhere, a faster pace of technical change, legal and corporate contexts...

    Also, if you start today, you gotta learn some part of the pyramid your abstractions stand on, and that's growing work as time passes.

  • acd 571 days ago
    In the past: write monolith, debug one application and one database. Attach debugger to monolith. Cache in RAM/L3 cache. Deployment: copy monolith binary in place, run it.

    Now: write a distributed micro service. Many, many programs. Many databases and queues. How do I debug that? How do I monitor that? We get scalability, minus L3 cache access. Now add deployment orchestration. Win: zero-downtime deployment.

    • Ekaros 571 days ago
      Also, there is a lot of code and services you didn't write these days. In the past you could be relatively certain that the SQL server was sane, but now we have glued a chain of different things together, with so many configuration options that we are probably unsure whether they are all correct.
  • dragonwriter 571 days ago
    I think the quoted excerpt from the book hits the big problem. Up into the 1990s, you had a narrow range of well-supported options built in on most platforms, often with bundled hardcopy or online docs, and it was a pain to find or get more options and information on those other options, so where to start was an easy choice.

    Now, getting a decent toolchain takes some (usually small, but nonzero) effort, and there is a flood of conflicting information on every decision, including that first step.

    Information overload and analysis paralysis bite hard.

    (If you can get past that initial hurdle, things are immensely better than any time else in history in most ways, though the trap of getting overwhelmed by info and options is persistent.)

  • specialist 571 days ago
    We used to spend more time programming.

    What passed for project management before XP and Agile wasn't great. But at least it didn't impose ridiculous administrative burden onto devs.

    Gods, we used to complain about doing weekly status reports. Now it sounds like heaven.

  • martin_a 571 days ago
    I'd like to add a datapoint from my professional life. I work with a very niche-specific automation solution for the print industry.

    That solution always had some scripting capabilities, which were a subset of JS with some software-specific extensions. Nothing really fancy, a few useful things were missing but overall there always was a way to reach your goal.

    As I'm not doing development daily, this "low-tech" approach was nice: one file that would be copied to your production system and linked there, and that's it. For debugging you had the integrated logging interface, a web-based thing, not too fancy.

    One or two major releases ago, they switched to a TypeScript/NodeJS based scripting system.

    While I can now "glue parts together" by using npm, I also have to transpile my code after every change and have to run an extra step to "package" everything for deploying it. Creating a new script requires a specific console command which will then prepare the file & folder structure.

    Debugging can, in theory, still be done through the web-based interface, but the recommendation is to set up a launch.json file, so VS Code can directly connect to the software on a pre-defined port and you can set breakpoints and step over your code while it's being run. I'm sure that's cool for hardcore programmers, but maaaaaaaan, ain't nobody got time for that if you've got business stuff to run.

    Somewhat anecdotal, but I spent 4 hours on Friday with trying to extract numbers from a txt file and write them to another file. Reading the file, running a regex, that's no big deal.

    But creating a temporary file is a HUGE pain with the new system. In the old system it was something like: var fh = job.createNewFile('test.txt', 'UTF-8'); and you could then work with your file handle.

    Now I'm fiddling around with the third npm package for creating temporary files, and it seems like the problem is not me using the packages wrong but how those packages try to create a file in a Windows environment, which fails.

    To be honest: I'm still guessing that's the problem and need to ask some smarter people, but things like that "just worked" before.
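
    (For what it's worth, if the new scripting runtime exposes Node's standard library -- which is only an assumption on my part -- a temp file shouldn't need an npm package at all. A minimal sketch:)

        // Uses only Node built-ins; mkdtemp creates a unique directory under the
        // OS temp dir (Windows included), so there's no race on the file name.
        import * as fs from 'fs';
        import * as os from 'os';
        import * as path from 'path';

        const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'job-'));
        const tmpFile = path.join(tmpDir, 'test.txt');
        fs.writeFileSync(tmpFile, 'extracted numbers go here', 'utf-8');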

    Oh: And now everything is async, which really gives me headaches, or you have to create a function which can then be called with an await so everything else waits for it...
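
    (The usual workaround for that headache is to put the async steps in one function and await the whole thing -- a sketch only; the file names and the regex are made up, not part of the product's API:)

        import * as fs from 'fs';

        // Run the async steps in order inside one function...
        async function extractNumbers(input: string, output: string): Promise<void> {
          const text = await fs.promises.readFile(input, 'utf-8');
          const numbers = text.match(/\d+(\.\d+)?/g) ?? [];
          await fs.promises.writeFile(output, numbers.join('\n'), 'utf-8');
        }

        // ...and await (or .catch) the call, otherwise the script carries on
        // before the output file is written.
        extractNumbers('in.txt', 'out.txt').catch(console.error);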

  • Wesmio 571 days ago
    My younger colleagues have never experienced the need to play with and upgrade their systems.

    Swap? What's that? Network stuff? Huh? I have wifi.

    They have a laptop, a good one, and that's just it.

    And while security should always have been a thing, the chance that you write an application accessible on the internet is much higher today than 10 or 20 years ago.

    Besides that, there are much better tools available, easier to use than ever before. Building a whole platform is possible with a small team.

    Building a highly scalable, self-healing, zero-downtime system is basically gifted to you when following modern practices like good Java frameworks (magic) and k8s.

  • ronyfadel 571 days ago
    Deploying has become a nightmare.

    It used to be transferring your source files via FTP; now it's setting up Docker and Kubernetes and I don't know what else.

    Ofc the latter is better for teams, but now you must learn a whole stack just to deploy code.

    • tw20212021 571 days ago
      Who stops you from deploying with FTP? :)
  • the_jesus_villa 571 days ago
    It was definitely easy to jump from learning HTML to learning that I can run code on the backend and render the output by just adding a tag like this:

        <?php
          $data_from_backend = date('Y-m-d'); // whatever I need to do on the backend
          echo $data_from_backend;
        ?>
    
    Right there inside the same document, rather than learning about Node, APIs, a templating engine, etc. It kind of just worked and was very simple. Of course, for professional apps this caused a ton of problems and now using PHP that way is seen as something from the dark ages.
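
    For contrast, here is a rough sketch of about the smallest modern equivalent, using only Node's built-in http module (the port and markup are arbitrary), and that's before you add a framework, a templating engine, or a build step:

        import { createServer } from 'http';

        createServer((_req, res) => {
          const dataFromBackend = 'whatever I need to do on the backend';
          res.writeHead(200, { 'Content-Type': 'text/html' });
          res.end(`<p>${dataFromBackend}</p>`);
        }).listen(8080);
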
    • firekvz 571 days ago
      PHP is only seen as something from the dark ages when you're using VC money and the more new tech you mention to them, the more money you get. Not all of the web is based in the US and able to pay $5k monthly AWS bills just to set up a silly environment to test stuff, learn, or start launching a side/new project.
  • mkl95 571 days ago
    Many companies are OK with distributed systems but they don't wanna invest the resources to set up proper observability and monitoring.

    At those companies fixing a bug that would be relatively straightforward with a monolithic architecture is a humongous pain in the ass.

    Technically, programming is not more difficult than it used to be. But in practice it is, due to how expensive and complicated it is to know what is going on, and how little the average manager cares about it.

  • softwaredoug 571 days ago
    Dependencies are more complicated.

    You used to know a few basic APIs (your OS, your stdlib…). You'd probably spend more time implementing basic utilities. However, now you have to manage a complex software supply chain of dependencies of varying quality and security risk.

  • noufalibrahim 570 days ago
    The setup.

    My first language was GW-BASIC on an IBM compatible. Turn on the machine, type GWBASIC and hit enter. You get the IDE, the editor and everything else in a single shot. Zero barrier. You could write decent programs in it, and it gave you that ongoing sense of achievement. Nowadays, you have a long series of things to do (clone a template, set up your editor/environment, download this and that, etc.) just to get started.

  • unity1001 571 days ago
    Years ago your users would be educated professionals in different large organizations or the government. Today, anyone can be your user. Creating easily usable software for the general public and daily use is more difficult than creating software that will be used by educated professionals.
  • apatheticonion 571 days ago
    Depends on what kind of programming and at what point you're at.

    Learning programming today is difficult because of context. Some kids don't even know what a file system is. The idea of an interpreter evaluating a text file doesn't click because what even is a text "file"?

    Being an intermediate application developer I would argue is easier than before and possibly the best place to be in your career. Web Development with React and Angular turns front end development into a more complex excel-like experience. It's still programming, obviously, but the tools take a lot of the complexity out of it and a lack of experience has you blissfully ignorant of anti-patterns and a lack of testing.

    Back end web developers have the best life (technologically speaking). Complete control of the runtime, any language they want to use and normally pretty straightforward implementations. Security and testing are often overlooked but that's okay.

    Senior application developers hold so much context that they are often frustrated by the design decisions of the tools they use. "Why can't we all write web applications with Rust, except using JavaScript modules because Rust modules are terrible, but only once Web Assembly has access to the DOM - actually the web sucks and we should write native applications. I can't wait to retire."

    Devops is both harder and easier. It's easier to build a scalable reliable system, but is harder to get started because, while a simple cloud VM is still available, you feel dirty if you haven't provisioned everything using some form of orchestration.

    Desktop application developers are in the worst place possible. Microsoft doesn't even know what GUI API it wants to use. No one uses Apple's native Desktop API, Linux is.... anyway (don't flame, I love GTK4). The best option is Electron until everyone is using a single platform.

    Mobile development is hard af, there are almost no engineers in the field and you need a PHD just to install Android studio and pick the right Android SDK.

    All in all, I love it

  • rr808 571 days ago
    Security is a point that hasn't been mentioned. At my first job, the company was barely attached to the internet. We could deploy applications internally with completely lax security, and never had to worry about TLS or account management or often even passwords.
  • ericmcer 571 days ago
    It's interesting that his concern is which programming language you choose, as I kinda assume everyone I work with will be a solid programmer just as a baseline requirement. How well you can slot your code into complex systems, and how to design those systems, is usually what is difficult.

    We have way better resources for learning but also much higher expectations. You need to be able to write good code without much thought so that you can maintain a higher level of context while programming. Sometimes you get to write just a nice little isolated bit of code, but usually there are a lot of moving pieces. (no matter how functional and immutable we try to make it.)

  • kragen 571 days ago
    We waste all our time getting flamed by assholes on this website instead of programming.
    • openfuture 571 days ago
      OMG this made my day. I wasn't expecting it but it's so true haha.
      • kragen 571 days ago
        Condolences! I hope it gets better.
  • thdxr 571 days ago
    think the rise of the metacreators in software contributes to a lot of noise which indirectly makes things difficult

    lot of people with large followings espousing opinions and practices when they hardly spend time shipping product. this makes it hard, even for someone senior and experienced, to know what they should pay attention to. sometimes it feels like you should pay attention to things you fundamentally know don't make sense. and a lot of energy goes into unwinding mainstream rhetoric

    I cite this as difficult because while most of the other hard parts of software are enjoyable, this one is just an energy suck

  • minhmeoke 570 days ago
    Vetting all your software, hardware, and firmware dependencies to ensure that they are trustworthy. Today's software is often very complex and comes from such a wide variety of sources that we rely on third-party security experts to do this, rather than trusting a single vendor as was done in the past.

    Ensuring that your software will still function in 20 years. Previously you had shrink-wrapped software packages on physical media with explicit releases and versions. Now everything depends on cloud services and APIs which could change or disappear at any moment.

  • nurettin 571 days ago
    Most programmers today have never seen a single-user, single-process machine. They have never experienced constrained hardware or used a compiled language. They just want to layer their over-bloated lego pieces until the job gets done. Have a roach infestation? Bring in the entire military and all the intelligence agencies to do the job. Most of them won't even know theory. Let's just layer more ifs and more logic until it is done. State machines? Constraint solvers? Logarithmic complexity? That's the job of the next developer. I rely on pure IQ.
  • jongjong 570 days ago
    The challenge nowadays is navigating the layers upon layers of unnecessary complexity. It's not even possible to choose the best tool for the job anymore because managers have been primed to only accept certain tools and it's impossible to convince them otherwise. The herd mentality is too strong.

    Imagine being a firefighter and your boss tells you that you can't use a water hose to put out the fire; instead you must use either a portable fan or a flamethrower. That's what programming feels like these days.

    • BlargMcLarg 570 days ago
      It's not just the managers either. This attitude has seeped deep into developers themselves, to the point endless bike shedding over the smallest details is becoming the status quo. It can easily take 10x the time it would take you otherwise.

      It is very difficult to justify taking 10x more time. People have cute arguments as to why, but practically every tool common to development lacks significant evidence. I'd argue that's why the endless discussions are so easy to have, too.

  • parasti 571 days ago
    A little off topic, because I can't think of any ways in which programming is harder now. IMO getting into programming is easier than ever. The most popular language in the world (Javascript) is installed on nearly every computer and you can start programming with a single keystroke that opens devtools. Create a canvas and you're ready to dive into graphics programming with WebGL. That stuff used to require a compiler, SDL, drivers, and what not. Really curious to see actual ways in which programming itself is harder now.
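
    For example, a sketch you can paste straight into the devtools console (the browser exposes WebGL rather than desktop OpenGL, and the clear color here is arbitrary):

        // Create a canvas, grab a WebGL context, and clear it to a solid color.
        const canvas = document.createElement('canvas');
        document.body.appendChild(canvas);
        const gl = canvas.getContext('webgl');
        if (gl) {
          gl.clearColor(0.1, 0.4, 0.8, 1.0);
          gl.clear(gl.COLOR_BUFFER_BIT);
        }
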
    • ipaddr 571 days ago
      The c64 opened to a prompt where you could easily program, vs a browser which could lead you to a programming site.

      qBasic was installed on windows machines and didn't require the internet.

      People on mobile don't have a keyboard that makes programming fun.

      You needed to program to get your computer to fully work before.

      It was easier to learn before because you had to and programming was part of operating a computer.

  • syntheweave 571 days ago
    The number one thing that is difficult is that there are lots of problems that have been already solved, but not well. Whereas at the beginning, being first to solve and ship the resulting features as a product was significant, and performance optimization was often a key enabler of feature viability, now quality of solution has leapt to the forefront. No crashes, security errors, data loss or corruption. With high quality comes a larger degree of process.

    It's not a bad thing. It just means that knowing one language and some algorithms - formerly the exact skillset of a typical CS grad - isn't really "enough" to do anything. Instead it's whole stacks of software that an organization has staked their own project on. At an old company, they built most of the stack, but nobody who built it is around anymore. At a new company, it's open source, but they don't have anyone to spare to maintain it. Eventually there is some threshold that gets crossed where the stack is the problem, and that drives an attempt at solving things better.

    But it's no longer the case that most orgs have to build most of their own software; it's nearly always glue and one piece of special sauce.

  • debacle 571 days ago
    Libraries and APIs 25 years ago were expensive, well-documented, and stable.

    Users had much lower expectations.

    Programming was slow and intentional. Programming today is much more stressful, I think.

  • usgroup 571 days ago
    I think there were fewer programmers and programming was applied to fewer things. I think over time aesthetic variability, new entrants, and the proliferation of corporate-backed super frameworks (which carry needless complexity from day zero and assume all sorts of needless deployment scenarios) became commonplace.

    The likelihood that the next thing you do sees you scurrying around Stack Overflow, even after 20 years on the job, is quite high now.

  • georgeburdell 571 days ago
    It’s not just the choices; as others have pointed out, there “needs” to be a CI/CD pipeline in place for even simple projects just because. Unit testing, integration testing, automated builds and deployments. And yet somehow code is as buggy as ever because the Internet has enabled Day 1 patches. When I started programming, you got software on a diskette, so the consequences for non-working software were severe.
  • intelVISA 571 days ago
    It's only harder if you make it so. The buzzword/abstraction fog has gotten quite thick, and I feel lots of shops lose sight of the basics.

    Advances in automation and tooling make it easier than ever to develop complex projects with lean teams but it can't be done if your org opts to inherit a brittle dependency chain every time "I don't want to reinvent the wheel" surfaces.

  • anovikov 570 days ago
    I think the main difficulty is that programming that actually gets things done these days requires a lot of collaboration and thus soft skills, because you can't get much done by yourself anymore. And soft skills are something nerds naturally suck at, which makes the personal qualities demanded of a good programmer sort of self-contradictory.
  • kidsil 570 days ago
    Noise. So much noise. I miss being able to write code for hours without any distractions via Slack / Email / Zoom.
  • wagslane 571 days ago
    It's so much easier today when it comes to good tooling, scalability of services, specialized tools, etc.

    It's so much worse today when it comes to things like paralysis of choice and configuration.

    I've been working on a new project for Boot.dev students and the goal is to get them a simple but professional dev environment on their machine. It's a very hard problem.

  • GMoromisato 571 days ago
    I talked about how programming has evolved (and become more difficult) here: https://gridwhale.medium.com/gridwhale-and-a-brief-history-o...

    As always, the core issue is complexity. We expect our programs to do much more than before, and that requires additional complexity.

    But we already know how to deal with complexity: we add a layer of abstraction. The problem, in my view, is that current abstraction layers are either too low-level (e.g., React) or tackling only part of the problem (e.g., AWS Lambda).

    With GridWhale, I'm working on creating a layer of abstraction that appears as a single, unified machine that you have full control over, but is actually running on a distributed, scalable micro-services architecture.

    I've got a long way to go, but I think this is the right direction.

    • Falkon1313 570 days ago
      Meanwhile I'd say adding too many layers of abstraction is not the solution but the cause of the complexity. It's exactly what we've done though, so now we have to live with it.
  • ChicagoBoy11 571 days ago
    There's some story online of the guys behind SICP talking about why they abandoned the traditional lisp-based course for its modern python variant, but the gist of it is that, according to them, the kind of "blank sheet of paper" coding that the book was sorta based on is rarer and rarer these days, and that much more of the work revolves around understanding other people's codebases/ideas and building things on top of other people's software. I think this can also serve as a potential shift that could be considered being "harder" in terms of modern computing... in a world where the ecosystem is pretty barren, walking from the very middle to the edge is pretty easy... the more it evolves, the more challenging that trek is.
    • kazinator 571 days ago
      > revolves around understanding other people's codebases/ideas and building things on top of other people's software

      But that's a completely separate topic: "structure and interpretation of other people's mud balls".

      Don't screw up "structure and interpretation of computer programs" just because most dev work is making changes to existing code. Teach that in a different course, if you have to.

      The coding that the book is based on was never common in relation to daily grunt work: business or scientific data processing and whatnot.

  • cat_plus_plus 570 days ago
    Same reason why building a house is so much more difficult than it was years ago - mind numbing, overwhelming bureaucracy. It's easier than ever to solve an important personal, business or non profit need in 10 lines of code, but coding environments are not optimized for such needs. Everything has to be portable, and scalable, and localizable and accessible and graphical and event driven, and tools won't let you just have a user fill in a single-language form from beginning to end and press a button to process it using high-school-level math or a 70s-era text parser. What is forgotten is how many ideas need to start quick and dirty and gradually mature if a rising need justifies the resources.
  • dadoge 571 days ago
    Established companies with opinionated infrastructure can help with this.

    I sure wouldn’t want to write the backend of a crud app in Fortran, nor the front end. Or do anything with Fortran besides scientific computing (Fortran = Formula Translator…it was built with a limited use case in mind!)

    Companies that adapt to newer frameworks and don't write everything in C++ are more efficient, but only if they control how much variation of tooling there is within a discipline.

    So today, you do have to specialize in a discipline a bit more (front end, backend, data) but each discipline has a sensible set of tools IMO. A developer can and should get some exposure to a secondary discipline to be well rounded and “T-shaped”, but should also appreciate the value of specialization.

    • tgflynn 571 days ago
      > each discipline has a sensible set of tools IMO

      More like 4 or 5 distinct sets of tools and that collection changes every few years.

      • dadoge 571 days ago
        That’s fine. I’d rather use Airflow than Luigi, BigQuery/Snowfake over Hive.

        Evolution is generally good.

        So, yes, within a discipline there is more than one tool, but often only 1 or 2 current tools that are mature and worth using if you’re starting a new project

  • amelius 571 days ago
    Programming is still easy for the most part, except now you have to pay 30% to a feudal lord.
    • kagakuninja 571 days ago
      The cost to get old-school retail software on the shelves would set you back far more than 30%, not to mention all the hassles involved in physical distribution.
  • Goosey 571 days ago
    The Micro-Services trend.
  • Jemm 570 days ago
    Put simply, systems now are designed for teams of people and automated testing. This adds complexity and verbosity not necessarily needed for small shop projects.

    The reliance on Design Patterns (Gang of Four) went from nice to use, to mostly required.

    So many developers now go right into coding without learning other skills. The result is software to automate or help with a system that the coders don't really know how to do manually, and it shows. These coders know one language and one framework, and may be very good at that, but they don't actually know how a computer works or why what they are doing works.

  • patrulek 570 days ago
    Everything is changing a lot faster. You must put in a lot of effort to catch up with trends in different domains, or be lazy and specialize in only one, without any certainty that you (or rather your knowledge) will not be useless in a few years.

    Fast-growing businesses require more workforce than the market can provide. This results in more inexperienced developers writing code, and frustration for the experienced developers (if they have to work with the former).

    Too many blog posts of dubious quality (we now have too much information that we need to verify, as opposed to having too little information that was hard to find, years ago).

  • AnimalMuppet 571 days ago
    It's far more complicated, because the expectations are different.

    Once upon a time, "UI" was "print the output to a file".

    Then it was some interconnected CICS screens.

    Then it was some interconnected web pages.

    Now it's some interconnected web pages that also display properly on mobile devices, that display in the user's chosen language (and numeric format), that display properly no matter the user's screen size, that hopefully allow blind users to use a screen reader, that respect Europe's privacy laws, that have good security...

    It's easier now because we have better tools. It's harder now because the baseline expectations are so much higher.

  • CyanLite5 571 days ago
    Anybody remember the C10K challenge?

    In the 90s you could count on one hand the number of large scale systems that could support thousands of users concurrently.

    Any half-decent mobile app these days could easily get to a few hundred thousand concurrent users.

  • jmartin2683 570 days ago
    As many have noted, it’s harder because you’re now starting at a much higher level of abstraction. This makes it very, very difficult to develop an understanding of how any of this stuff actually works. As we push further and further away from the actual machines doing the work, understanding everything that makes up that huge stack gets way more complicated.

    Always start at the bottom and work up. You’ll avoid much of the frustration of confusion, at the expense of a slow start. Most importantly, when you’re done you’ll be useful at every level.

  • FpUser 571 days ago
    There are a number of laws preventing things from doubling indefinitely every X years in the physical world. Unfortunately, when it comes to software, the amount of bullshit doubles every year without any end in sight.
  • dijonman2 570 days ago
    The people make it harder. 20 years ago it was exclusive to those who had passion. Now it’s full of people who just want to get paid and that changes the interpersonal dynamic significantly.
  • ravenstine 571 days ago
    The amount of software that needs to be maintained is far greater, therefore there aren't as many "quick wins" to be found anymore. Not to say that new software isn't being written all the time, but more and more of what makes up the typical software developer's job is to constantly fix bugs and sacrifice standards to get things done with the Eldritch horrors that are our codebases. We cope by installing tools like Eslint and telling ourselves it means we have "standards."
  • mriet 570 days ago
    Many of the technical problems have been solved by frameworks: persistence, transactions, concurrency, etc.

    The business processes covered by systems have grown much more complex. The organizations have grown larger because software is doing more (more safely and easily).

    The modern programmer thus faces larger organizations, larger and more many-sided discussions, and a far more complex social, political and business-related landscape than 20 years ago.

    While the technical problems have gotten much easier, in general.

  • camjohnson26 571 days ago
    Only thing I can think of is we’re so much farther away from the hardware now because of layers of virtualization. 20 years ago the code more or less mapped directly onto a physical reality and you could dig in to it intuitively if something went wrong.

    Now with dynamic languages, interpreters, docker, kubernetes, AWS and layers of dev tools and frameworks it can be harder to know what your code is actually doing. But those abstractions can also give you superpowers.

    • Aromasin 571 days ago
      I've been a hardware engineer for the past few years and I do think a lot of it seems greener on the other side. I use FPGAs and microcontrollers for a whole host of things, along with traditional electronic elements.

      The tools more often than not make me want to pull my hair out (coded mostly in Tcl...), the technical forums and support are often non-existent, and the technologies quite often haven't been updated with QoL changes in 20+ years.

  • winddude 571 days ago
    The speed at which the cutting edge advances. When you're focused on something, especially at a job, I find you're less exposed to advances and less focused on experimentation.

    Yes, like you said, there's the massive amount of code and options, often each with their own trade-offs and advantages.

    The hardest thing is sometimes resisting the urge to always chase the newest and latest thing to gain a bit of performance, improve an F score, etc.

  • comfypotato 571 days ago
    Organizing information exchange in such a way that respects agent autonomy and heterogeneity. Developers have choices; programming environment infrastructure and interfaces between the developed systems need to allow people to do what they want and still communicate. The way I just press "share" on my phone and can instantly port whatever content to whatever other application is a marvel.
  • sys_64738 571 days ago
    The layers to get to the hardware are absurd. Just layers of pure garbage to abstract everything away, making it feel slower than 25 years ago.
  • mbrodersen 570 days ago
    It isn’t. Programming today is so much easier, in every single way, than it used to be 40 years ago when I learned how to program. I am pretty sure that people making a living today, programming by copy-pasting code from Google searches, would have had real trouble programming without the internet back then.
  • aappleby 571 days ago
    The overabundance of both frameworks and CPU performance causes programmers to massively overdesign systems that end up underperforming.
  • carapace 571 days ago
    In a word: complexity.

    Everything else has gotten better: the machines are almost-inconceivably faster and larger (in capacity, logical size; in physical size they are of course ever smaller!), the compilers are smarter than ever, the languages more ergonomic and safer, etc.

    The only downside, the great undertow, is burgeoning complexity. A "thundering herd" attack on the human mind.

  • giantg2 570 days ago
    Monoliths were easier and more reliable.

    Sure, microservices are faster to elevate and more reusable, but the systems have become a bird's nest of dependencies. A lot of the time our troubleshooting relies on the other teams that own those services. Coordinating and communicating takes significant overhead and inevitably leads to occasional issues.

  • dehrmann 571 days ago
    User expectations are higher. The era of the solo developer making meaningful applications is largely over.

    There's also the Rick Cook quote:

    > Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning.

  • agumonkey 571 days ago
    A lot of stuff today has to be networked, cross-platform (although the web and other stacks lift you quite far above that), and secure. Your clean code can be abused by botnets in so many ways. This didn't exist at large before the internet.

    Also society's pace. Things moved more seasonally.. nowadays there's <lang> fatigue in many places.

  • nvarsj 566 days ago
    I agree with Armstrong here. Just look at the Kubernetes landscape. It’s easy to get crippled by the vast array of choices for every little thing. It’s a consultant’s dream though.
  • naniwaduni 571 days ago
    Almost every concrete task is easier, or at least not meaningfully harder.

    But getting to that point where you feel like you're accomplishing something, where what you've produced feels like it measures up to the standards you've been taught to expect, that barrier's gone up so much faster.

  • zffr 571 days ago
    There are lots of versions of things and online content doesn’t always specify what version it applies to. This can lead to confusing situations where things that are supposed to work don’t work for you.

    For example, there’s still a lot of python 2 code snippets that won’t work now that python 3 is the default interpreter.

  • jstx1 571 days ago
    They had fewer options to choose from but they didn’t have discussion forums, stackoverflow, youtube and all the other wonderful resources to get help and learn from. And once you decide what to work on, most of those other options out there don’t affect you at all. It seems pretty great if you ask me.
  • otikik 571 days ago
    The field has expanded. Things are deprecated but still in use. AI synthetic 3D coexists with supermarket machines using a version of Windows 98 for embedded systems, and a bank mobile app written in Electron talking to a Java API which eventually calls COBOL.

    And all that has to work, kind of. It's a bit crazy.

  • nitwit005 571 days ago
    Most things you get hired to work on make use of the internet. It used to be rare.

    That comes with a lot of problems. You end up dealing with user accounts, authentication, request retries, what to do if the server is down, etc. Plus all the security and abuse issues.
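
    A hedged sketch of just one of those chores, retrying a request when the server is down or flaky (assumes the now-global fetch; the attempt count and backoff values are arbitrary):

        // Retry a request with exponential backoff before giving up.
        async function fetchWithRetry(url: string, attempts = 3): Promise<Response> {
          for (let i = 0; i < attempts; i++) {
            try {
              const res = await fetch(url);
              if (res.ok) return res;
            } catch {
              // network error -- fall through and retry
            }
            if (i < attempts - 1) {
              await new Promise((r) => setTimeout(r, 2 ** i * 1000)); // 1s, 2s, ...
            }
          }
          throw new Error(`gave up on ${url} after ${attempts} attempts`);
        }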

  • ynbl_ 571 days ago
    its too big of a field to say. lots of things are easier and lots are harder. PHP being phased out makes things much easier. languages are cleaner now (as in cleaner than php). hardware is worse (laggy monitors, keyboards, GUIs, internet connections flying all over the place and spamming popups in your face), but CPUs are faster. newer build systems are somewhat cleaner, but theres tons of system complexity as usual, probably worse than before. while a lot of RCE problems are being fixed, security is harder now with worse solutions on offer (for example, identity being solved by ad hoc junk like recaptcha, SMS, photo id, video, web based authentication providers).
  • tester756 571 days ago
    Hmm, I do wonder... are people better at software engineering now than they were 2 decades ago?

    Testing, architectures, patterns, principles, software life cycle, building abstractions, system modeling etc.

  • lvl102 571 days ago
    Think programming was easy up until the 90s, then it got really hard in the 00s, and then it started becoming a lot easier along with advances in cloud around 2015-16.
  • RockingGoodNite 571 days ago
    Social politics. Can't say a thing today that would have been seen as funny in a diverse team 20 years ago. We're like robots in Teams chat.
  • issa 571 days ago
    There is certainly a lot of choice, but when is the last time you had to:

    - really worry about disk space or memory size

    - check if something would work on IE

    - worry about version control

    - manually scale something

    • marginalia_nu 571 days ago
      1, 3, and 4: Every day.

      You can not worry about these things, sure, but then your software will be slow and expensive to run.

      • issa 571 days ago
        I knew I should have added some obvious cases where this didn't apply. But curious, what are you doing that you have to manually scale?
  • zasdffaa 571 days ago
    Library/ecosystem size. They're now abso-bloody-lutely massive, and more than a single person can hold in their head.
  • f0e4c2f7 571 days ago
    Google doesn't work as well now. You kind of have to know more tricks to still be able to find similar quality information.
    • jghn 571 days ago
      Depending on one's definition of "years ago", the fact that google exists at all is a giant boon.
    • marginalia_nu 571 days ago
      Dunno, I think if anything reliance on Google has increased. Back in the day I usually looked at the documentation. Back in the day there was documentation. Also books, like not written to be a bait-and-switch cash-grab but they were genuinely well put together by experts in the field.
  • throw_m239339 571 days ago
    One word: Bureaucracy.

    But I understand that in the context of large teams.

    Coding itself is the exact same thing as 40 years ago: input, processing, output.

    • napmo 571 days ago
      That's a good point. But the bureaucracy is there for a reason: to improve the quality.
  • hulitu 571 days ago
    Programmers don't write widgets anymore. They write OpenGL surfaces. And they must wait for vsync to blink the cursor.
  • neilobremski 571 days ago
    Programming is now more opaque configuration than it is traceable logic paths. This makes paper debugging impossible.
  • tyingq 571 days ago
    Chasing dependency chains. Both due to the sprawl, and things like vulnerability scanning.
  • hbrn 571 days ago
    We used to write code for computers.

    Today code is written for humans.

    Mostly by people who aren't great communicators.

  • mylons 571 days ago
    the entire javascript ecosystem
  • rektide 571 days ago
    Lots of good answers about here. I think the individual coder's struggle is pretty well covered via many good points. But I think we haven't looked at this question from much of a big picture. So here's some of the challenges that make programming difficult today, on a bigger scale:

    Weirdly, I think the biggest difficulty is that we are much better served. A huge amount of what we do is well paved. React, bundlers (which are fairly isomorphic to each other: webpack/rollup/esbuild/parcel/snowpack/vite), Systemd, Kubernetes, gRPC/protobuf, npm/node, even io_uring or eBPF... the list of well-entrenched technologies is long. There are better-established ways of doing things, well served, than there used to be.

    The difficulties show up in a number of fashions. First, it precludes innovation, is highly stasist, when there is a humming along underbelly that maintains life. In the Matrix, the Elders of Xion admitted it was just highly automated machines that kept life alive, and in some ways our ascent upwards has decoupled us similarly; we're rarely liable to go back & reconsider the value of trying other ways. We're kind of "stuck" with a working set of system layers, that we don't innovate on or make much progress on. Our system layer is pretty old and mature. Everything is ingrown & interlocked, depends on the other things.

    When we do try to break out, rarely is it a precisely targeted reconsideration: often the offshoot efforts come from iconoclasts, smashing the scene & building something wildly different or aggressively retro. Iconoclasts seek pre-modern times rather than new post-modern alterations or rejiggerings.

    Another difficulty with having vastly more assumed is that there are fewer adventurers in the world, less of the find-out-the-truth/roll-up-your-sleeves/dig-in/read-through-the-source mentality & experience (and more looking only on the surface for easy "solutions"). Being generally well served means we rarely go off the beaten path. So our wilderness-survival skills and resourcefulness have atrophied, and newcomers are less likely to have developed the deep hunting skills that used to be both simpler (because our systems back then were simpler and had less code) and more essential. A lot of these modern works aren't even that hard to dig into. But there are shockingly few guides for how to run gdb or a debugger on systemd, few guides to debugging kube's api-server or its operators, few people who can speak to implementing gRPC.

    I don't think we're at crisis levels at all, but I think the industrial stratification & the expectation of being well served will start to haunt us more and more across the decades: we'll lose appreciation & comprehension (akin to the Matrix problem) and fail to make real progress.

    GraphQL is an interesting case study to me. It rejected almost all common web practices & went back to dumb SOAP-like all-purpose endpoints. The advantages were real: just getting the data you ask for, not having to think about assembling entities, having a schema system. But so many of these are things the actual web can and should be good at. We spent a long time having schema systems battle each other, but that higher-level, usable web just kept failing to get built out, so total disruption made sense. We still don't have a lot of good replacements for GraphQL and still haven't made strong gains, but it feels like GraphQL is somewhat fading, that we're less intimidated by making calls & pulling data than we used to be.
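
    To make the "just the data you ask for" point concrete, here is a minimal, hypothetical sketch; the endpoint URL and schema below are invented for illustration, not taken from any real API:

        # Ask a single hypothetical /graphql endpoint for exactly the fields
        # we need, nothing more. URL, query, and schema are made up.
        import json
        import urllib.request

        query = """
        {
          user(id: "42") {
            name
            orders { total }
          }
        }
        """

        req = urllib.request.Request(
            "https://api.example.com/graphql",
            data=json.dumps({"query": query}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp))  # only the requested fields come back

    The contrast with stitching the same entity together from several REST endpoints is what made that trade-off attractive.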

  • lamontcg 571 days ago
    I think it's easier to learn to develop software now than it was at peak Java/Object-Orientation.

    A lot of the abstraction then was simply mind-boggling, and while the patterns all still exist today, a lot of them have been simplified by language features and put on a diet.

  • zeroonetwothree 571 days ago
    There are so many more regulations and rules. You can’t just spin up a website without considering GDPR and a million other regulations; you have to create a privacy policy, consider deletion, recovery, etc.

    Sure, we have some frameworks and such to make this easier, but it really puts a damper on experimentation and raises the barrier to entry significantly.

  • iepathos 571 days ago
    I think this Fortran quote is hilarious because that language is ugly and awful to work with, which is why no one uses it anymore. "No paralysis of choice": yeah, no choice of a more readable and maintainable language to work with.
    • TheRealKing 570 days ago
      How do you find it ugly? Can you provide specifics? Have you ever used it? Have you checked out the latest standards, 2018 and 2023? By the way, it is still pretty much alive and well: fortran-lang.discourse.group/
  • jrochkind1 571 days ago
    Web development is just so much more complicated; there are so many different parts and choices. I honestly don't understand how someone figures it all out today!
  • ffwacom 570 days ago
    Non-technical management are putting in their two cents.
  • dqpb 571 days ago
    Cargo cult worship of OOP runs deep in the workplace.
  • aaronbrethorst 571 days ago
    JavaScript frontend dependency hell, for starters.
  • ww520 571 days ago
    Complexity of the platforms and packages.
  • ikiris 571 days ago
    people have like... expectations.

    I miss the days where you could hate your users and the code didn't have to like have an SLA.

    /s

  • throwaway0asd 571 days ago
    Hands down, the greatest problem for programming as a profession/industry is talent identification. Every couple of years some new job board or headhunter claims to solve this problem, but they never do. The problem remains unsolved, and the reason is that there is no agreed-upon definition of minimally acceptable competency.

    The effect is guessing. Everybody guesses at whether a candidate can potentially do the job. Some of those new hiring startups might be slightly better at guessing, but it's really still just job boards or headhunters with a large margin of error.

    The way other industries solve this problem is to establish baselines of accepted practice. If you exceed the baseline you may or may not be employable, but you do at least exceed the minimal technical qualifications to practice. This is true of professions like teacher, truck driver, lawyer, doctor, nurse, accountant, real estate agent, forklift operator, and really just about everything else. Unfortunately, most software employers spend all their candidate-selection effort attempting to determine minimally acceptable technical competence instead of more important things, such as soft skills, and even then it's often just guessing.

    Other industries apply this solution in one of two ways: education plus a required internship that may result in a license or education plus a license followed by an agent/broker relationship. Education may refer to a university education or a specific technical school depending upon the industry and/or profession.

    To mitigate hiring risks, since everybody is just guessing anyway, employers turn to things like tools and frameworks to ease requirements around education and/or training. This is problematic, as it frequently results in vendor lock-in, leaky-abstraction problems, and catastrophic maintenance dead ends when a dependency or tool reaches end of life. Even so, potentially having to rewrite the entire product from scratch every few years is generally assumed to be less costly than waiting for talent in candidate selection, since there is no agreed-upon definition of talent and hiring is largely just guessing anyway.

    All of this makes programming both easier and more difficult, depending upon which side of a bell curve your capabilities fall on. Reliance upon tools to solve a very human competence problem is designed to broaden bell curves, which allows more people to participate but also eliminates outliers. This means that if you, as a developer, lack the experience and confidence to write original software, you might perceive that programming today is much easier. If, on the other hand, you have no problem writing original software without a bunch of tools and dependencies, you may find software employment dreadfully slow and far more challenging than it should be for even the simplest and most elementary of tasks.

  • slim 571 days ago
    keeping up with dependencies, I guess, is the main hassle.
  • throwaway285255 570 days ago
    I am forced to use tools and languages/frameworks that are inferior, very slow, and cumbersome. It feels like I'm always forced to hop on one leg when I could simply run and get where I need to go orders of magnitude faster.

    Every single ticket I get, I immediately see it in my mind's eye: a couple of tables and a couple of queries. And sometimes, if it's something really weird, I see 5-10 lines of shell script.

    But I'm "not allowed" to use those tools, and instead I have to use ORM, Object Oriented Data Model, different JSON translation operations, grotesque frameworks and in the end spend three weeks of very frustrating wasteful work, trying to coerce these humongous tools to achieve the simple feature, that would have just been three days of easy pleasant work if I was "allowed" to use a database.

    I worked in a bank in 2006 and we used VB6 to create CRUD-based apps in no time; they were super stable, fast, and responsive. Just a simple MVC stack: query the database, get recordsets, render a view. This was a really simple task back then.

    Now it takes ages and so much work and effort to create an equivalent web-based app, which is always slow and buggy, and the work is mostly the incredibly frustrating job of reverse-engineering some inane "smart" logic in a framework that tries (and fails) to automate a task that was already very simple and quick, and didn't need automating at all.
