The six dumbest ideas in computer security (2005)

(ranum.com)

265 points | by sweenycod 549 days ago

43 comments

  • dang 548 days ago
    Related:

    The Six Dumbest Ideas in Computer Security (2005) - https://news.ycombinator.com/item?id=28068725 - Aug 2021 (21 comments)

    The Six Dumbest Ideas in Computer Security (2005) - https://news.ycombinator.com/item?id=14369342 - May 2017 (6 comments)

    The Six Dumbest Ideas in Computer Security - https://news.ycombinator.com/item?id=12483067 - Sept 2016 (11 comments)

    The Six Dumbest Ideas in Computer Security - https://news.ycombinator.com/item?id=522900 - March 2009 (20 comments)

    The Six Dumbest Ideas in Computer Security - https://news.ycombinator.com/item?id=167850 - April 2008 (1 comment)

    The Six Dumbest Ideas in Computer Security (2005) - https://news.ycombinator.com/item?id=35811 - July 2007 (2 comments)

    • efitz 548 days ago
      Yep, it’s worth repeating.

      Although I’d spin the issue about host vs network security differently. I’ve found that engineering teams prioritize security a lot more if they don’t feel like they’re safe in a cocoon of local network bliss behind network firewalls. I love “beyond corp” or “zero trust” precisely because you’re making it explicit that they’re on the internet and they’re a target.

      • arp242 548 days ago
        > Yep, it’s worth repeating.

        I don't know; I haven't really seen most of these things in the wild for a long time.

        For "#4) Hacking is Cool" the zeitgeist has moved in the exact opposite direction with "white hat", bug bounties, etc. I think that section in particular is a pretty outdated view of things.

        "#6) Action is Better Than Inaction" is probably the only one that still broadly applies today, and is actually a special case of "X exists, therefore, therefore we must use it ASAP, and any possible negativities are not our problem and inevitable anyway" attitude that seems the be prevalent among a certain types of people.

        • Arch-TK 548 days ago
          #1. This is still prolific absolutely everywhere. There's a good chance it is happening on your computer right now. It happens in mobile app stores (application releases go through a very rudimentary set of checks and only end up thoroughly analysed by security researchers when the application gets flagged). It's very common within internal networks and even more so when it comes to outgoing traffic.

          #2. This is still sold by security consultancy firms as a service and is, again, incredibly prolific in a lot of places.

          #3. Likewise, still a very popular service sold by security consultancy firms.

          #5. Still common to this day; services such as vishing/phishing assessments test for user education.

        • zefix 548 days ago
          Honestly, #4 applies as much as ever, at least in most regards.

          The thing is: The 'security researchers' which I've had contact with focus mostly on hacking and memory corruption attacks.

          The thing is: This is a solved problem by now!

          And yet, instead of teaching students to avoid the horrible tools, which cause those problems, they keep on teaching how to penetrate and fix.

          It's maddening.

          • rfoo 547 days ago
            Please tell me you have already thrown Firefox, Chrome, old Microsoft Edge and whatever browser out of the window and are posting to HN with your rewritten-in-Rust lynx.

            Not being able to rewrite the world or convince people to stop using memory-unsafe languages is entirely unrelated to what security researchers do.

            I'd love to stop having to build complicated lifetime models in my mind to figure out whether there are hidden code paths for a UAF, but at the same time this is the best thing I can do to secure what we have today; now it's on you to rewrite the world.

            • zefix 547 days ago
              No.

              We need to stop compromising.

              Yes, there is a lot of old code.

              No, I can't do it all on my own.

              But we can do it as a profession. Refuse to take jobs, nag managers, refuse to buy hardware that only supports C, etc.

              If construction were as ridiculous as our field, we'd still be using asbestos.

              • rfoo 546 days ago
                > Refuse to take jobs

                Well, I'm unfortunately not in a place where doing so makes sense. Unless you mean only auditing Rust code.

                > nag managers

                I already do so. This doesn't change much. There are still too many must-be-evolved C++ projects (no easy incremental rewrite path forward), and it is impractical to have engineers put significant effort into rewriting them in Rust. It's really difficult to convince someone to fix something that ain't broken.

                People coding in C++ are just as desperate as you; that's why someone brought Carbon [1], a half-baked experimental project, to the world last year instead of just using Rust. Sure, they would like to use a memory-safe language if possible. No, they still have to get their job done.

                > refuse to buy hardware that only supports C

                If it supports C, we can make it support Rust; it's a very fun weekend project to bring up some no_std Rust code on it.

                [1] https://github.com/carbon-language/carbon-lang
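
                The whole skeleton is roughly this (a sketch only; the real entry symbol, target triple and linker script depend on the board):

                    #![no_std]
                    #![no_main]

                    use core::panic::PanicInfo;

                    // No OS to unwind to: on panic, just park the CPU.
                    #[panic_handler]
                    fn panic(_info: &PanicInfo) -> ! {
                        loop {}
                    }

                    // Whatever symbol the reset vector / bootloader jumps to.
                    #[no_mangle]
                    pub extern "C" fn _start() -> ! {
                        // Poke a memory-mapped UART or LED register here to prove it's alive.
                        loop {}
                    }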

    • moffkalast 548 days ago
      There's something ironic about there being exactly six past posts about it.
      • dexterdog 548 days ago
        Except there are at least 12 previous posts of this article. It was only posted once the year it came out, but it had a real renaissance about 6 years ago for some reason.
  • tptacek 548 days ago
    A reminder that a big part of the subtext of this piece is a reactionary movement against vulnerability research that Ranum was at the vanguard of. Along with Schneier, Ranum spent a lot of energy railing against people who found and exploited vulnerabilities (as you can see from items #2, #3, and #4). It hasn't aged well.

    I'm not sure there's anything true on this list that is, in 2023, interesting; maybe you could argue they were in 2005.

    The irony is, Ranum went on to work at Tenable, which is itself a firm that violates most of these tenets.

    • diamondo25 548 days ago
      I've read about 80% of this page, and eventually stopped at the part where he says that the next generation will be more cautious. This, in my opinion, is false. Most software has been simplified for user experience, and that has not helped kids in the slightest bit. It's more addictive than ever, and all caution gets thrown out of the window when we let kids browse youtube unsupervised. Heck, a wrong search query or random text can give you NSFW content. And with the rise of shorts/stories/tiktoks you'll be molded by the algorithms. You have little or no control over the content you see. If it notices you watch, what, 5 seconds of a clip, it'll start recommending that.

      The issues we have nowadays are different from those in 2005. People that haven't seen the bad parts of the internet will not teach their kids about it either...

    • eduction 548 days ago
      > It hasn't aged well.

      You’re wrong, it’s aged quite well.

      Just take as one important set of examples the new mobile operating systems released since this piece was published. Even the most thoughtfully designed and locked down (even with hardware, various uses of encryption, etc.) continue to have vulnerabilities at the base layer year after year. Bug hunting looks every year more and more like just an expensive sport for condescending security experts who think little about the broader context in which they operate. As much as we all appreciate the whack-a-mole.

      Where there has been genuine security improvement is where we’ve taken the structural, locked down approach advocated here (see also djb’s paper about qmail security). iOS and Android apps (particularly the former) seem genuinely more secure than most desktop apps because they are structured to have very limited permissions from day one. The app environments on those systems look like they were designed with many of the principles from this post expressly in mind.

      The lessons for the OS layer seem obvious. Qubes and in particular Joanna’s post about “Qubes Air” point in one very promising direction.

      • UncleMeat 548 days ago
        Offensive research is what motivates lessons for the OS layer. Look at the struggles we are having with things like kernel-level memory safety even when we can point at mountains of CVEs found by white hats. The community would be dragging its feet even more if the shared consensus was that actually it is really hard to beat ASLR and DEP so we are all done and have solved it.
      • tptacek 548 days ago
        One of the major points of the qmail paper is that the structural locked down approach wasn't successful.

        (I disagree with the paper in this regard, but it's a weird thing to hang your argument against vulnerability research on).

        Georgi Guninski would have a thing or two to say about the applicability of vulnerability research to djb software.

        • eduction 548 days ago
          Hi, that’s an interesting assertion but not actually accurate. It is vaguely related to the truth; djb acknowledges that qmail failed to partition in the way he advocates in the paper but says it survived without serious security issues for other reasons:

          “I failed to place any of the qmail code into untrusted prisons. Bugs anywhere in the code could have been security holes. The way that qmail survived this failure was by having very few bugs, as discussed in Sections 3 and 4.”

          That’s very different from saying the approach wasn’t successful. It was just not tried (by him). My point is it has been tried in other ways since and seems to be working. To me at least!

          (Also you took something I put in parens midway through my post with the opening words “see also” and said I “hang” my argument on it - ok, again interesting, not taking it personally as I’m sure you didn’t mean anything by it!)

          • tptacek 548 days ago
            It didn't "survive" in that manner: it wasn't LP64 clean, and had memory corruption vulnerabilities.
            • eduction 547 days ago
              You described something the qmail paper said and I corrected you. If the paper is inaccurate that’s orthogonal.
              • tptacek 547 days ago
                You're also incorrect about the paper.

                Really, the whole argument you're making --- the reason we're talking about Bernstein in the first place --- is broken. Bernstein himself would probably not agree with the take you're trying to derive from the relationship between his work and "enumerating badness".

      • wildmanx 545 days ago
        > > It hasn't aged well.

        > You’re wrong, it’s aged quite well.

        Part of the problem is that there are many people in the field of security with overly strong opinions. This is not healthy. The field is full of know-it-all people, with if-only-people-were-not-as-dumb kind of attitudes. This is not helping anybody. Any not-as-strongly-opinionated bystander looks at this and has no clue whom to listen to, since so many people are strongly expressing 100% opposing views. Calling everybody else "dumb". This does not help bring the field as a whole forward.

      • rfoo 548 days ago
        > (see also djb’s paper about qmail security)

        You mean the guy who refused to fix an integer overflow bug, claiming it wasn't practical to exploit, then 64-bit really happened, and then years later the fine guys at Qualys suddenly decided to have fun? [1] Sure, he is a crypto expert and we're all grateful for his work on curve25519, salsa/chacha, nacl, djbsort, etc (and I'm sure I missed a lot). This does not mean he is an expert on weird machines.

        [1] https://www.qualys.com/2020/05/19/cve-2005-1513/remote-code-...

        • eduction 548 days ago
          No, I don’t mean the guy. I mean the paper.
    • autokad 548 days ago
      I agree. They go on a useless rant about how pen testing is useless, red team research only enables hackers, etc. That's not true at all. That work is what pushes the improvements in both detection and better programming practices.

      Educating users is not dumb; it's one of the most important parts of security a company should address. I really don't know where they are coming from here; this section was nonsense to me.

      I also have a point that will get me downvoted and piss off a lot of people: security is very important, but not THAT important. If the business doesn't operate, then there's no need for security. So what's the solution? The author comes off as one of those people that treat security like a wheelbarrow full of bricks that everyone has to push around. This won't get buy-in and people will find ways around it. Instead, security should be like tennis shoes: restrictive, but they also allow you to run faster.

      • mavhc 543 days ago
        It's dumb to create a system that will fail because of dumb users. Why did you invent a system that requires everyone using it not to be an idiot? Have you never met humans?
    • pvg 548 days ago
      What was the argument against vuln research? The 'Penetrate & Patch' bit makes it sound like it's something along the lines of 'this is pointless because the proper way to fix this stuff is better design and other things are a waste of time and effort'.
      • castillar76 548 days ago
        This predated a lot of the responsible-disclosure culture that exists now, so there was a lot of “find vuln, post right away for the credits” going on. Couple that with a lot of tool research that was important but also felt very grey-hat, and it was easy to feel like much of the “vulnerability research” community were like a group of scientists working on making cancer airborne “for research purposes”. I admit to having felt that way then, too.

        Fortunately a lot of that has subsided. The focus on responsible disclosure while still holding companies accountable, the great security research being done by projects like Talos or Project Zero, and the consistent flow of new open-source blue-team tooling has really helped balance the scales (if they were ever unbalanced).

      • tptacek 548 days ago
        This is a whole can of worms, and my response will be biased and untrustworthy, but here's my take:

        In the early-to-mid 1990s, serious security research was intensely cliquish. There wasn't a norm of published vulnerability research; in fact, there was the opposite norm: CERT, the well-known public resource, diligently stripped details about vulnerabilities (beyond where to find the patches) out of announcements, and discussions about how vulnerabilities actually worked were relegated to "secret" lists like Core, which were of course ultimately leaked and became BBS tfiles.

        Ranum came to prominence in that era. In the mid-to-late 1990s, after Bugtraq took over, there remained a sort of informal best friends club of, like, Ranum and Dan Farmer and Wietse Venema and like one or two young vulnerability researchers --- Elias "Aleph One" Levy, for instance. There was a sort of acceptance of the idea that Elias and Mudge were doing vulnerability research that was well-intentioned and OK... but that everyone else was just trading exploits on #hack.

        There was a sort of focused beam of hatred on eEye, a security vendor that came to prominence in the early 2000s, and most especially during the "Summer of Worms", some of which worms were based on vulnerabilities that eEye's team --- at the time truly one of the most influential teams in all of vulnerability research --- had published. I worked at the industry's first commercial vulnerability research team and had a soft spot for eEye, which was doing the work we did but like several levels better than us, and it has always pissed me off how Ranum and Schneier tried to make hay by dunking on eEye and casting them as "hackers".

        (Of course, if you tried to make that argument now you'd sound like a clown, so you won't see people like Ranum and Schneier saying that kind of stuff. But the fact is, the arguments were clownish and inappropriate back then, too.)

        So, if you ask me, the argument Ranum is advancing is literally that public vulnerability research is bad, and that details of vulnerabilities should be kept between vendors and a few anointed 3rd party researchers. Because otherwise, you were just helping people break into computers.

        The Ranum of 2005 is I think especially characteristic of what I'd call "moralizing" information security; that security is actually a fight between good and evil, that what's important about machines getting owned up is that somebody's livelihood depends on that machine running, and the hacking is a crime, and the details of how the hack worked are about as relevant as the details of how a burglar breaks into the window of a house they're burgling without setting off the alarms or whatever. I get it, but I'm from the opposing school of information security, which is that security is just a super interesting computer science problem.

        • pvg 548 days ago
          When asking, I thought about maybe snarkthrowing in a 'this wasn't about disclosure, was it', but then I figured I would sound like a clown asking such a thing about a piece from 2005. Entirely externally/cluelessly, my impression (at the time and since) was that this was settled in the 90s by things like Bugtraq - that disclosure aligns with the interests of users in critical ways that leaving it up to vendors doesn't, and this easily trumps objections about 'responsibility'. I didn't know this went on for so much longer, thanks for the history!
    • mytailorisrich 548 days ago
      > Ranum spent a lot of energy railing against people who found and exploited vulnerabilities

      That's not at all what #2 says. "Enumerating badness" is explained as trying to track everything that's 'bad' instead of what's not. It is claimed to be 'dumb' because the set of what's 'bad' is orders of magnitude larger and more complex than the set of what's not.

      • tptacek 548 days ago
        It's not what #2 says, it's just why he was saying it.
    • nine_k 548 days ago
      I suppose the idea of denying by default (#1, #2) and the idea of defense in depth (mentioned at the end) aged well enough.

      I'm not sure about educating users. It's obviously not going to be a bulletproof solution. But not educating users at all does not seem right either: it's hard for a person to care about stuff they have no idea about.

      • SAI_Peregrinus 547 days ago
        The better way is to make the secure path the easy path. Rather than educating users to do something that makes their life harder, you can educate them about an easier way to do what they want. That's far more likely to stick.

        Usability is a security issue; at the ultimate extreme a DoS attack is just creating a very poor user experience.

    • rfoo 548 days ago
      Thanks for the context! I was too young to know these backstories.
  • bluedino 548 days ago
    This describes the security industry as a whole.

    We had a user click an email and get phished.

    We tried training the users with tools like KnowBe4, banners above the emails that say things like THIS IS AN OUTSIDE EMAIL BE VERY CAREFUL WHEN CLICKING LINKS. Didn't help.

    The email was a very generic looking "Kindly view the attached invoice"

    The attached invoice was a PDF file

    The link went to some suspicious looking domain

    The page the link brought up was a shoddy impersonation of a OneDrive login

    In just minutes, the user's machine was infected and it emailed itself to all of their Outlook contacts...

    So this means nothing in this list detected a goddamn thing:

        Next-generation firewall
        AI-powered security
        'MACHINE LEARNING'
        'Prevent lateral spread'
        enterprise defense suite with threat protection and threat detection capabilities designed to identify and stop attacks
        AV software that was advertised to 'Flag malicious phishing emails and scam websites'
        'Defend against ransomware and other online dangers'
        'Block dangerous websites that can steal personal data'
        the cloud-based filtering service that protects your organization against spam, malware, and other email threats
    
    And the company that we pay a huge sum of money to 'delivers threat detection, incident response, and compliance management in one unified platform' didn't make a peep.

    But, we are up to the standards of quite a few acronyms.

    It's all a useless shitshow. And plenty of productivity-hurting false positives happen all the time.

    • causi 548 days ago
      Have you tried threats and public humiliation?

      "ATTN ALL employees: Dave Smith ignored security training and was phished into installing malware. He is now fired because he was an idiot."

      • dexterdog 548 days ago
        I think there are a number of departments that will help you join Dave in his new-found freedom from employment if you send that.
        • Nevermark 548 days ago
          Hmmm. Not if the firing notice was triggered by Dave from a suspicious executable in his email.

          Although the idea of tightening up security practices by having some sociopathic employee trick colleagues into publicly firing themselves via malware does make me feel a little ill.

      • bigbillheck 548 days ago
        > Have you tried threats and public humiliation?

        Looks like we've found a seventh.

        • Arch-TK 548 days ago
          I think this still falls under the user education section. Just as a rather frowned upon form of education.
    • Spooky23 548 days ago
      Several years ago, I worked on an incident response for an incident that was detected and stopped.

      Tl;dr, a targeted phishing email was the catalyst for the whole thing. The various systems that detect these things effectively blocked it ~97/100 times. One click was all it took. The user who clicked had a bad feeling and used a blame-free and convenient reporting mechanism to report it.

      That doesn’t mean that tools and training are useless. As a defender in any context, defense has to be multilayered and flexible as circumstances change. In IT, sports or warfare, it’s the same process or funnel.

      The scenario you described likely would have been detected by an EDR tool, or by log analysis if there was a process to do that. Declaring “shitshow” is accepting a bad outcome. Unfortunately as the value of compromising a company has gone up, the opponents have leveled up, and defenders need to as well.

      • namaria 548 days ago
        "The spot where we intend to fight must not be made known; for then the enemy will have to prepare against a possible attack at several different points; (...) If he sends reinforcements everywhere, he will everywhere be weak."

        Sun Tzu, Art of War. I know, cheesy to compare network security with warfare. But I've learned that a big shiny stack of tools is a red flag. If there is no threat model and focused hardening, you're not doing security, you're doing compliance.

  • ufmace 548 days ago
    I wonder how well we all think this article has aged?

    "Penetrate and Patch" is supposedly dumb. But what do we practically do with that? We've seen in the last decade or so a lot of long-lived software everyone thought was secure get caught with massive security bugs. Well, once some software you depend on has infact been found to have a bug, what's there to do but patch it? If some software has never had a bug found in it, does that actually mean that it's secure, or just that no skilled hackers have ever really looked hard at it?

    Also web browsers face a constant stream of security issues. But so what? What are we supposed to do instead? Any simpler version doesn't have the features we demand, so you're stuck in a boring corner of the world.

    "Default Permit" - nice idea in most cases. I've never heard of a computer that's actually capable of only letting your most commonly used apps run though. It's not very clear how you'd do that, and ensure none of them were ever tampered with, or be able to do development involving frequently producing new binaries, or figure out how to make sure no malicious code ever took advantage of whatever mechanism you want to use to make app development not terrible. And everyone already gripes about how locked-down iOS devices are, wouldn't this mean making everything at least that locked down or more?

    • tptacek 548 days ago
      1. Default deny is one of the oldest best practices in security engineering; it barely needed saying in 1995 (but Cheswick & Bellovin said exactly that in Firewalls & Internet Security).

      2. "Enumerating badness" is simultaneously an attempt to connect vulnerability research to antivirus (security practitioners have had contempt, mostly justified, for AV since the late 1980s) and an endorsement of the heuristic detection scheme companies like NFR sold. Apart from the shade it throws at vulnerability research, it's fine.

      3. "Penetrate and patch" has aged so poorly that Ranum's own career refutes it; he ended up Chief of Security at Tenable, one of the industry's great popularizers of the idea.

      4. "Hacking is cool": literally, this is "get off my lawn".

      5. Objecting to user education is an idea that is coming back into vogue, especially with authentication and phishing. It's the idea that has held up best here.

      6. "Action is better than inaction" --- this is just a restatement of "something must be done, this is something", or the Underpants Gnome thesis. Is it true? Sure, I guess.

      As a whole, this piece has not aged well at all.

      • jonstewart 548 days ago
        Agree with tptacek. His rant in Enumerating Badness is deeply intertwined with his rant against Default Permit (which as tptacek points out, was a bit of a strawman even then). I could agree that checking against all possible bad things is a flawed approach for security products and IT staff.

        However, enumerating badness is hugely valuable in the security industry for two reasons:

        1) It’s the backbone of security research, just as physiology and anatomy are to zoology and medicine. With enumeration (observation), we can classify, abstract, find trends, identify risky software and approaches, direct engineering resources, and create broad defenses (yay ASLR).

        2) Attackers are lazy, too. I work at a security consulting firm, and routinely see attackers reuse the same TTP across different target companies. Enumerating badness not only offers detection opportunities (perhaps not the best, but higher level detection techniques are often built off understanding the enumeration of badness) but also denies attackers opportunity for reuse. “Impose cost,” as thoughtlords like to say.

      • saagarjha 548 days ago
        > Objecting to user education is an idea that is coming back into vogue, especially with authentication and phishing. It's the idea that has held up best here.

        This can be good or bad depending on how you do it. If you default to the good thing and there really isn’t any need to do the bad thing then it can be quite good. If there is a genuine need for some people to sometimes do the bad thing (so it’s not actually universally “bad”) then pretending like nobody can ever make an informed decision here is not a good policy. Sure, it’s very difficult to get people to make informed choices, but you can’t really brush this off as people being uneducable.

        • pvg 548 days ago
          > you can’t really brush this off as people being uneducable

          The objection isn't that people are uneducable, it's that even expert users can easily make seemingly-trivial mistakes which then have catastrophic consequences (e.g. experts get phished) and that's a conclusion reached through experience/data.

      • acdha 548 days ago
        > 1. Default deny is one of the oldest best practices in security engineering; it barely needed saying in 1995 (but Cheswick & Bellovin said exactly that in Firewalls & Internet Security).

        I agree with this as the theoretical grounding and certainly wouldn't say that this was especially insightful but I can understand why he said it if he was encountering a long tail of obsolete advice similar to what several places I worked were hearing at the time. I think one of the big problems was simply inertia: I remember the Cisco guys at one job saying they didn't want to do a default-deny policy because it was too much work to switch and they weren't sure if the hardware could handle that many rules.

      • u801e 548 days ago
        > Objecting to user education is an idea that is coming back into vogue, especially with authentication and phishing. It's the idea that has held up best here.

        The notion that users can't really be educated has led to a lot of questionable security practices that prioritize ease of use over real security. For example, 2FA using codes sent by email or SMS as the second factor rather than relying on key based authentication like client side TLS certificates issued by the service that the client is using.

        This, to some extent, has actually decreased security by allowing people to bypass authentication by compromising the second factor through use of social engineering.

        • pastage 548 days ago
          My bank used client TLS certificates early on; while it is nifty, it was a really bad idea without hardware security. (Paper OTP in their case.)

          IMHO, client side certificates are a big failure even on server to server. The UX of doing it is error prone and insecure because of foot guns. It fails because there are so many different incompatible ways to use them. Mostly this idea of mine is based on never having had a good experience with browser based client certificates (even highly automated and hardware secured ones). Things do not get much better on the server side.

          Sure, automation helps, but the certificate is such a small part of the system, and when you try to integrate two automated systems that use client side TLS certificates it is easy to trust too much or too little. (Both are troublesome.)

          • u801e 548 days ago
            > The UX of doing it is error prone and insecure because of foot guns.

            Many people over the years have mentioned UX issues as a reason why client side TLS certificates aren't more widely used. The question is why there hasn't been an effort to improve the UX rather than re-inventing the wheel (either poorly with SMS/email 2FA or OTP, or in a way that's limited to a specific application level protocol like HTTP for webauthn).

            What I would like to see is a standard workflow where as part of an account creation process, an associated CSR is sent and a client side TLS cert is returned and stored on the device along with a standard way to add additional devices using an existing device and the new device that doesn't depend on a specific application level protocol (so, for example, I can use my email client via SMTP and IMAP to securely authenticate without having to rely on a HTTP intermediary).

            Or, for more secure settings, actually having to verify your identity out of band (e.g., going to the bank and showing multiple forms of ID along with your CSR to get the certificate).
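
            As a rough sketch of the client half of that enrollment step (using the rust-openssl crate; the curve and subject name are just placeholders), generating the key and CSR is the easy part:

                use openssl::ec::{EcGroup, EcKey};
                use openssl::hash::MessageDigest;
                use openssl::nid::Nid;
                use openssl::pkey::PKey;
                use openssl::x509::{X509NameBuilder, X509ReqBuilder};

                fn make_enrollment_csr() -> Result<Vec<u8>, openssl::error::ErrorStack> {
                    // The key pair stays on the device; only the CSR is sent at account creation.
                    let group = EcGroup::from_curve_name(Nid::X9_62_PRIME256V1)?;
                    let key = PKey::from_ec_key(EcKey::generate(&group)?)?;

                    let mut name = X509NameBuilder::new()?;
                    name.append_entry_by_text("CN", "user@example.com")?; // placeholder identity
                    let name = name.build();

                    let mut req = X509ReqBuilder::new()?;
                    req.set_subject_name(&name)?;
                    req.set_pubkey(&key)?;
                    req.sign(&key, MessageDigest::sha256())?;

                    // PEM-encoded CSR; the service signs it and returns a client certificate
                    // that an IMAP/SMTP/HTTP client can then present during the TLS handshake.
                    req.build().to_pem()
                }

            The crypto itself is the easy part; the missing piece is a standard, protocol-agnostic exchange and add-a-device flow around it.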

            • acdha 548 days ago
              The problem is basically that it wasn’t a priority for application developers but you need to upgrade things everywhere before you can switch to a new protocol. Those clunky MFA options give CIOs the appealing promise that a bit of duct tape means they get some protection without needing to e.g. replace that old RADIUS server most people depend on to do their jobs.

              It might be interesting to look at WebAuthn passkeys, as they do most of what you want. That took several important developments: the web ate desktop apps, Microsoft lost control of the web, and Google has some strong security people in their management. That does the public key exchange, has robust cross-device support which doesn’t require an internet connection, etc. and it has some features to improve the identity situation (e.g. it includes a device key & authentication info so my bank can say it only accepts transfer requests which came from a known device doing a biometric check, which is a nice edge over x509/SSH-style trust based solely on access to the private key).

              This unfortunately does not work using other protocols but a not-uncommon flow would be using a browser session to issue your IMAP client a token. That’s not great (a compromise gives the attacker your email) but it can be less disastrous if the most important actions can’t be initiated purely from email.

          • pabs3 548 days ago
            Apple/etc Passkeys (WebAuthn in software instead of hardware tokens) seems similar to TLS client certs, so I'm sure the UX stuff with certs is solvable if anyone cared.
            • u801e 548 days ago
              They are, but they require one to use the HTTP application level protocol. I would like to be able to do the same with SMTP and IMAP in my email client without having to make HTTP requests.
      • prof-dr-ir 548 days ago
        > 4. "Hacking is cool": literally, this is "get off my lawn".

        I do not quite agree with this verdict. Back in the early two thousands there were lots of people who thought it rather cool to just hack around (or more precisely crack around), for example by exploiting SQL injection bugs in whatever web form they encountered.

        Of course I might not have a representative impression of the community, but I think this kind of stuff is now much more widely seen as unethical and uncool. So I think the article's prediction that this will "be a dead idea in the next 10 years" was actually quite accurate.

        • AnonCoward42 548 days ago
          It's actually hard to argue here given there is so much ambiguity.

          Technically "hacking" is often portrait as totally cool and often not negatively connotated ("getting/being hacked" however clearly is). The problem with this is that the meaning has shifted a lot in the meantime. He means mostly cracking I think, while the meaning of "hacking" has shifted to be broadly just altering state/behavior often to something that was not originally intended.

          At least he also includes penetration testing as something bad, and this has clearly not aged well. For one, you can employ people to do just that, and it's pretty essential for applications with big security implications; the other part is the invitation to "hack" the application/system at hand, as seen with bug bounty programs. The actual practice seems to contradict his point here.

          You can also interpret it as saying that pen testing alone is not going to improve your security, which is true. However, I feel like nobody argued that. Pen testing/bug bounties are there to draw attention to exploits you might not learn about otherwise, and are therefore a prerequisite for fixing them. Or said differently: if you think you hardened your application/system so much that it's impenetrable, how could it hurt if people try to break it and tell you whether they are able to? People will try anyway, but they might not tell you.

      • randomcarbloke 547 days ago
        4/6 and it hasn't aged well?

        1 might have been ubiquitous even in 2005, but it's 2022 and this was the page that was shared; obviously it was worth Ranum stating...

        • tptacek 547 days ago
          The things that are true aren't interesting, and the things that are interesting aren't true.
    • acdha 548 days ago
      > Also web browsers face a constant stream of security issues. But so what? What are we supposed to do instead? Any simpler version doesn't have the features we demand, so you're stuck in a boring corner of the world.

      The charitable interpretation of the “penetrate and patch” section is about the architectural parts, and browsers are a great example. At the time he wrote that, a browser was a single process running everything in traditional C/C++ and calling other unsafe code (e.g. Flash) in the same process. People did patch a lot but they also did things like split components into separate processes with different privilege levels, change practices throughout the codebase to harden things like pointers or how memory is allocated, rewrite portions in memory-safe languages, etc. It took a decade but browsers became a lot harder to successfully exploit.

    • josephg 548 days ago
      I think this article has aged very well.

      > Also web browsers face a constant stream of security issues. What are we supposed to do instead?

      There's not much that users can do, but web browsers have spent the last decade moving away from "Penetrate and Patch" to much more proactive approaches. Eg, Chrome pioneered moving each tab to a separate process with full sandbox isolation. Firefox is talking about using webassembly as an intermediate compilation step for 3rd party C++ code to effectively sandbox it at a compilation level. Rust was invented by Mozilla in large part because they wanted to solve memory corruption bugs in the browser in a systematic way.

      > "Default Permit" - nice idea in most cases. I've never heard of a computer that's actually capable of only letting your most commonly used apps run though.

      MacOS requires user consent for apps to access shared parts of the filesystem. The first time you see dialogues asking "Do you allow this app to open files in your Documents folder" it's sort of annoying, but it's a fantastic idea.

      As you say, my iPhone is more secure than linux for the same reason - because iOS has a "default deny" attitude toward app permissions. A single malicious app (or a single malicious npm package) on linux can cryptolocker all my data without me knowing. The security model of iOS / Android doesn't allow that and that's a good thing.

      I wish iOS was more open, but on the flipside I think linux could do a lot more to protect users from malicious code. I think there's plenty of middle ground here that we aren't even exploring. Linux's permission model can be changed. We have all the code - we just need to do the work.

      Also since this article was written, we've seen a massive number of data breaches because MongoDB databases were accidentally exposed on the open internet. In retrospect, having a "default permit" policy for mongodb was a terrible idea.

      • jjav 548 days ago
        > my iPhone is more secure than linux for the same reason

        The phrase "more secure" doesn't mean anything, I wish it wasn't used.

        One needs to talk about the threat model(s) you care about and how a particular solution addresses them (or not). Any given solution can be both more secure and less secure than an alternative, depending on what threat models you care about. Which may well be different than the threat models someone else cares about.

        If you unconditionally trust Apple and all government agencies which have power over Apple (e.g. NSLs) then one could say iOS has a more secure file access model than Linux. But that's a big if. Personally I could never trust a closed source proprietary solution.

        • josephg 548 days ago
          > The phrase "more secure" doesn't mean anything

          Fair point. I'll elaborate:

          The linux (UNIX) security model is designed to protect users from other (potentially malicious) users on the same computer. The system as a whole is designed such that a malicious (or incompetent) user can't make the system as a whole stop working. The system is more important than any particular user's data.

          Software is assumed to be correct. Any program a user runs inherits the full permissions of that user.

          There's some problems:

          1. Computers aren't often shared between mutually-untrusted people.

          2. My data is much more precious than my computer itself.

          3. Malicious software is everywhere. Every time I install a package some stranger wrote on npm or Cargo, I implicitly give it full access to all my data and my entire network.

          So, linux protects me from things I don't need protections from (other users) and doesn't protect me from things I do need protection from (malicious code).

          > One needs to talk about the threat model(s) you care about and how a particular solution addresses them (or not).

          The threat model for malicious code is, I install an apt package / cargo crate / npm package / intellij or vscode extension and the package contains code which either exfiltrates my data over the internet, or cryptolockers it.

          iOS (and Android?) don't let code like this run, since software can only (by default) access the data that it itself has created. Ransomware attacks are trivial on linux and impossible on iOS.

          It's much more likely that me or my family suffers from a keylogger or ransomware attack than we suffer as a result of government intrusion into our digital lives. I'm one bad npm install away from having all my data stolen, and it terrifies me.

          • lmm 548 days ago
            > It's much more likely that me or my family suffers from a keylogger or ransomware attack than we suffer as a result of government intrusion into our digital lives.

            Are you sure? How would you know? We can't know how many people the government blackmails with data taken from their iphones, because it's illegal to publish information about them doing so, whereas ransomware attacks are widely publicised.

            • sbuk 548 days ago
              A key component of threat modelling is risk management and modelling.

              I would counter the “government blackmailing people” by questioning the risk this poses to me as an individual. As much as we’d like to imagine it, and as much as it can oftentimes feel like it, we don’t live in a Kafkaesque society, by and large, as the significant majority of us are of zero interest and have little of anything worth blackmailing.

              • denton-scratch 548 days ago
                It seems to be routine for rape complainants to have to hand over their phone and have their messages scrutinized, as a pre-requisite to proceeding with an investigation. That is a form of government blackmail.
                • sbuk 548 days ago
                  I disagree entirely with that notion. If you make a serious accusation, you must be prepared to hand over the necessary evidence to assist the investigation and get a conviction. I'll also say that at that point, the risk has changed dramatically. Risk isn't a static thing. It needs to be assessed regularly and evaluated when your threat model changes. Your exposure to risk is still a factor.
                  • denton-scratch 548 days ago
                    > hand over the necessary evidence

                    Of course. But you shouldn't have to spill your guts about your entire life (I understand that young folk nowadays live their lives in Instagram selfies).

                    And more to the point, the first step for the policemen investigating a rape complaint should be to investigate the complaint, not the complainant. If the investigation of the complaint raises questions about the complainant, then there might be grounds for seizing the complainant's device. But they can't make device seizure a pre-requisite for doing their job.

                  • chiggsy 548 days ago
                    This is rubbish. The police investigate the claim and based on the results of the investigation, the prosecutor decides whether charges can be laid. The prosecutor!

                    Not the police!

                • chiggsy 548 days ago
                  What? Where is this routine? Where do you live? What if you don't have a phone? Why would you give your phone to the police in this situation instead of the prosecutor?

                  Which government? What level? My god, who told you this thing? For what other crimes is this policy enforced?

          • gmuslera 548 days ago
            The landscape of security switched from the computer to the user.

            As a user you need to become root in one way or another to install a system program or something that affects the system as a whole, and you are aware that you are doing something that affects the system as a whole. The security model was meant for multiuser environments, where computers were shared in one way or another.

            But what matters now about that computer (especially when you are the single user of it) is your user: your data, your credentials, your network access, etc, all that you as a user (and the apps you run) can access or modify. Viruses and malware in general used to be system threats, but it is enough for them to be user threats now.

            Things are moving toward applications that are containerized in one way or another (docker, snaps, whatever iOS and Android do with that, etc), with limited access to your data, credentials and so on.

          • arp242 548 days ago
            > 3. Malicious software is everywhere. Every time I install a package some stranger wrote on npm or Cargo, I implicitly give it full access to all my data and my entire network.

            This particular case wouldn't really be prevented by an Android/iOS-type security model, I think? That package will be part of the program you're writing, and chances are that program requires more than the bare-minimum access.

            That said, it's not extraordinarily difficult to lock this down, if you really want to. Docker is common, but more traditional tools work as well (e.g. running your program as its own user, maybe in a chroot), and/or using cgroups directly.

            This applies even more with things like VSCode extensions, which typically run inside the VSCode process, and without filesystem access VSCode is pretty useless.
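
            To make the "run it as its own user" option concrete, a minimal sketch (assuming a Unix system, a pre-created low-privilege account, and a wrapper that itself has permission to switch uid; the uid/gid and paths are placeholders):

                use std::os::unix::process::CommandExt;
                use std::process::Command;

                fn main() -> std::io::Result<()> {
                    // Run the untrusted build step as a dedicated low-privilege user so a
                    // malicious dependency can only touch that user's files, not your $HOME.
                    // (uid/gid 1500 is assumed to be the dedicated "builder" account.)
                    let status = Command::new("cargo")
                        .arg("build")
                        .current_dir("/srv/build/project")
                        .uid(1500)
                        .gid(1500)
                        .env_clear()
                        .env("PATH", "/usr/bin:/bin")
                        .status()?;

                    println!("build exited with {}", status);
                    Ok(())
                }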

          • cpuguy83 548 days ago
            Linux has lots of protective measures outside of just user isolation. There are capabilities, namespaces, cgroups, seccomp, landlock, selinux, apparmor.

            The difference is Linux lets the owner of the machine (for better or worse) decide what to do here. There are distros that try to force you into a more secure posture (Qubes), though.

            • lll-o-lll 548 days ago
              This is interesting to me. Do you know if Windows OS provides similar features? The ideal for corporate environments (outside of dev) would be desktops that operate in the same way as iOS.
              • pixl97 548 days ago
                In theory, yes.

                https://learn.microsoft.com/en-us/windows/security/threat-pr...

                The practical application of this is a different story.

                But there is a problem with trying to compare this to things like iOS and phones. Phones are strongly application-based; it is completely common to have your data locked up in an app, and getting it to another app has varying levels of difficulty.

                In Windows it is much more common for data to be file-based, and applications can be launched and run by clicking the file. File security is based on the user, so typically any application can access files owned by the same user. A huge portion of workflows would break if this were not the case.

          • Y_Y 548 days ago
            [flagged]
            • timlatim 548 days ago
              The point, I believe, is not about Linux kernel but rather "Linux userland". I'm sure you could replace "iOS" with "Android" and the meaning will stay the same: smartphone OSes go to great lengths to isolate apps and prevent them from messing with the user's data, while desktop Linux does not.

              I hope the situation will change once Flatpak becomes more widespread and polished. On paper, it offers a comparable experience to smartphones — you get sandboxing with granular permissions, easy installation without messing with the command line, and so on. In practice, I had enough issues with Flatpak apps breaking in non-obvious ways to make me not recommend it to others. As a recent example, I tried using a JetBrains IDE from a Flatpak and spent quite a bit of time diagnosing issues with paths before resorting to Google and finding out that it's not supposed to work at all (https://intellij-support.jetbrains.com/hc/en-us/community/po...).

              • Y_Y 548 days ago
                If you like Flatpak's issues you'll love Snap! The point about smartphone userlands is a good one. If the desktop modus operandi were similar to how APKs are used then I imagine Linux would have much better security. For now I think that only something like Qubes provides the security and isolation you want without subtly breaking things.
      • scotty79 548 days ago
        > Chrome pioneered moving each tab to a separate process with full sandbox isolation.

        I don't think it was done for security. Before Chrome, where all tabs ran in a single process, it was common for a bad site to stall your whole browser. Separating it into multiple processes was basically an admission that, yes, the web browser sucks, so the best we can do is give you the ability to kill a part of it when it misbehaves.

        • josephg 548 days ago
          It was done for both reasons. Here's the Google Chrome comic book talking about it:

          https://www.google.com/googlebooks/chrome/small_04.html

          Another benefit they cite is reduced memory fragmentation. Because each tab lives in its own memory space, when the tab closes the OS can reclaim all of its memory. Presumably you'd still get fragmentation, but the OS is probably better able to handle that long term than jemalloc. Clever!

      • ufmace 548 days ago
        The consumer OS lockdown side does have a lot of interesting points. One I also thought of - I'd bet that, even if we reject web browsers, basically every user's "30 most used apps" has at least one that has a plugin system that loads unverified code, or runs macros in some kind of interpreter that is or may later be proved to be exploitable, or parses structured data from files that can't be trusted using non-memory-safe code, or some other thing I haven't thought of.
      • saagarjha 548 days ago
        > The first time you see dialogues asking "Do you allow this app to open files in your Documents folder" it's sort of annoying, but it's a fantastic idea.

        It’s not a great idea because it’s annoying. It is not really useful in its current incarnation to most people.

        • sbuk 548 days ago
          Further, banner/modal fatigue is a real thing. People will just ignore and click through.
          • chiggsy 548 days ago
            This is because the dialogs have gotten rid of the disclosure arrows that gave path information and metadata about the binary. Also, executables used to have names consistent with the naming conventions on the platform. Now you just get dialogs with some cryptic name, and a one line manpage.
    • munchbunny 548 days ago
      The points didn’t age well, but there’s a kernel of truth in there: none of those things will ever work 100%, so if you’re trying to really lock things down you need defense in depth, which was also not a new security concept in 2005, but it was one we were, as an industry, less sophisticated about.
      • denton-scratch 548 days ago
        "Defence in depth" is a term with obvious military origins.

        You have a relatively thin front-line of defence, with orders to fall back if they are in danger of being overrun. Then you have a very strong second line, manned with assault troops. As the first line falls back, the second line counterattacks.

        This strategy was developed by the Germans in WWII, and adopted by the Russians.

        I disapprove of its use in computer security. There, it means something different; it means basically having multiple lines of defence, without any notion of counterattack.

    • Gigachad 548 days ago
      The future is probably a two tiered system like what Apple is showing with Lockdown Mode. Normal users get the full speed fully functional system. And those who are at risk of being targeted use a locked down system with less convenience features but more security.

      Along with better languages and tooling ruling out entire classes of exploits.

  • bartread 548 days ago
    > #4) Hacking is Cool

    As an old I strongly object to the corruption of the terms "hacking" and "hacker" in the diatribe following this heading. I'm a fan of hacker culture, in the old sense, and encourage our developers to adopt a hacker mindset when approaching the problems they're trying to solve. Hacking is cool.

  • donatj 548 days ago
    > Wouldn't it be more sensible to learn how to design security systems that are hack-proof than to learn how to identify security systems that are dumb?

    That’s like saying “Why don’t they just design locks that are unpickable?”

    They’ve been working on that, for a while. But you need to know what you’re protecting against. Anyone who watches The Lock Picking Lawyer knows about the swaths of new locks vulnerable to comb attacks - a simple attack that had been solved for almost a hundred years but somehow major lock manufacturers forgot about.

    You can’t build something safe without considering potential vulnerabilities, that’s just a frustratingly naive thing to say.

    • albntomat0 548 days ago
      To take the strongest form of the author’s argument, his point is that it’s not possible to take a pile of terrible code with no security, and fix all the problems in it. It’s better to architect it in a way that provides security (e.g least privilege everywhere, sandbox, memory safe languages, etc.).

      I think the author could have phrased it better, in that the best approach is having a good security design, and then taking out all the bugs it couldn’t cover.

  • superkuh 548 days ago
    Back in 2005 the idea that you shouldn't run every bit of executable code sent to you was drilled into people. Nowadays you can't use commercial/institutional websites without doing the modern equivalent of opening random email attachments.
    • Gigachad 548 days ago
      You also use an OS and browser which is space age technology compared to what they had in 2005. Back then a kid could write an email to install a rootkit on your computer. Now you'd get paid $100k+ if you could work out how to do that.

      It also used to be common knowledge that if someone has physical access to your device, it's game over. Which is something that is rapidly becoming untrue. If I hand my macbook to my friend for a day, I can be quite confident they haven't been able to defeat the boot chain security to replace my kernel with a malware version, like you trivially could in pre-Secure Boot environments.

      Another piece of common advice was to not use public wifi because anyone could steal your password or credit card details. Security advice from 2005 really hasn't held up much at all.

    • mb7733 548 days ago
      But the client side code in a web-app is run within the browser sandbox, which is not equivalent to running a random exe... Unless you meant something else?
      • superkuh 548 days ago
        Speculative execution, sandbox exploits, etc, etc. I thought everyone (myself included) stopped believing in the power of VMs/containers/sandboxes to protect you when all that happened (and kept happening). And it's just getting worse as the JS engine(s) get access to more and more bare metal features and become a true OS in more than just spirit.

          Thus all the crazy insistence on CA TLS in modern web protocols like HTTP/3, which can't even establish a connection without CA-based TLS hand-holding.

        • mb7733 547 days ago
          The fact that exploits exist doesn't imply that using sandboxes is equivalent to running untrusted code directly.
  • Animats 548 days ago
    (2005)

    "Default Deny" was, for a while, called "App Store". However, the app store vendors have done much better at keeping out things for competitive reasons than at keeping out things for security reasons.

  • edrxty 548 days ago
    Looking back on that era, the hate towards hackers feels really misplaced. Yeah, at the time it was more local and more dominated by people doing it for the lolz, but we kinda owe them a debt of gratitude. If they hadn't gotten everyone to stop being lazy about security we'd be in a very different place now, surrounded as we are by rogue states and agencies launching hyper-sophisticated attacks on infrastructure and data. That was also the era that trained the current generation of cybersecurity experts.
    • pookha 548 days ago
      It wasn't misplaced...There were some horrific pieces of "hacker" software floating around in 05. It wasn't uncommon for a disgruntled employee to load malware onto a company's network and bring down operations for weeks. Case in point: a douchebag loaded malware into this small financial company's network that I wound up working for. The virus infected the boot sector and forced the company to do low-level formats on all of the company's hard drives. They lost immense amounts of money and respect in their industry and barely recovered. That virus was developed by some garbage hacker boy for laughs.
      • edrxty 547 days ago
        In fairness, there aren't a whole lot of ways left to run around corrupting the boot sectors on an entire network. Given current politics I'd rather have everyone learn how to enforce user access control in '05 rather than in '23.
  • sweetjuly 548 days ago
    >Wouldn't it be more sensible to learn how to design security systems that are hack-proof than to learn how to identify security systems that are dumb?

    Sure, but how does one get the knowledge on how to secure systems? Half the job of a security engineer is thinking like an attacker and trying to poke holes in the system. Key mitigations like ASLR and stack canaries are so effective because they specifically block off key resources and techniques that attackers use. It would be downright impossible to invent these mitigations (or even meaningfully understand them) if you did not already have a firm grasp on memory corruption and ROP. I'm not sure it's an argument I actually care to defend, but I do honestly believe that you can't be a strong security engineer if you don't have a grasp on the techniques your adversaries use.

    • foooobaba 548 days ago
      With respect to this example, I think he is saying it would be better if we were using memory safe languages, rather than trying to come up with these sorts of mitigations (which is enumerating the bad). Of course it’s probably not possible in every scenario because we’ve been doing it badly for so long, but I think the principle still holds.
      • rfoo 548 days ago
        Of course we could have avoided all the lasting architecture mistakes we made if we had known better and had been doing it correctly from the beginning.

        And it's practical, right? Right?

        See, when the words "system security" aren't attached to it, people suddenly start to make sense of it.

        I'm glad that we are reviewing this 2005 post in 2023 though, at least we can fight hindsight with hindsight.

        </rant>

      • bombolo 548 days ago
        Remember the log4j thing? And yet java is memory safe.
        • foooobaba 548 days ago
          This has nothing to do with ASLR and stack canaries... Log4Shell wasn't a buffer overflow exploit, it was the result of yet another dumb idea: adding remote JNDI loading capability to the logging framework.

          You can assume any input to your program will be manipulated by an attacker. This implies that if you use a non-memory-safe language you'll need to make sure there is no way the user can input enough data to overflow your buffer and corrupt your memory, and you have to get that right 100% of the time. If you're building a logging framework it's extremely likely people will be logging some sort of information from the outside, so it's not a great idea to execute it as code. Similarly for SQL injections: if you simply use prepared statements you remove a whole class of problems. Knowing an attacker will probably inject some garbage into the input of your program, and assuming user input is malicious, is a basic principle you can use to design better systems. I believe this is what the author meant by his statement.
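
          As a small illustration of "assume user input is malicious" (a sketch, not anything from the article - the table and input here are made up), a prepared statement keeps attacker-controlled text as data instead of splicing it into the query:

            import sqlite3

            conn = sqlite3.connect(":memory:")
            conn.execute("CREATE TABLE users (name TEXT, role TEXT)")

            user_input = "alice'; DROP TABLE users; --"  # attacker-controlled string

            # Dumb idea: building the query by string concatenation
            #   query = "SELECT role FROM users WHERE name = '" + user_input + "'"

            # Better: a parameterized/prepared statement; the driver treats the
            # input strictly as data, never as SQL to execute
            rows = conn.execute(
                "SELECT role FROM users WHERE name = ?", (user_input,)
            ).fetchall()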

          • hnfong 548 days ago
            What do we do in practice?

            - Personally audit all the 392 library dependencies in our project to make sure they don't do anything dumb?

            - Ask the intern to write a non-dumb logger (and the other 392 deps) from scratch?

            - Don't use dependencies and write bare metal assembler? (JDK, libc, OS kernels are dependencies and do introduce a steady stream of CVEs)

            - Give up on doing anything complicated and congratulate myself at being able to write a simple echo service by implementing the whole TCP/IP stack on bare metal?

            • foooobaba 548 days ago
              Well, it isn't easy and there is no silver bullet. In practice you must use your engineering judgment and THINK about these tradeoffs for every problem you encounter.

              That being said, there are some principles you can think about to help you get the tradeoffs right when you encounter a problem.

              The main principle the author discussed is the idea of enumerating the good, rather than enumerating the bad. Deny everything except the good by default, and do it at every level of your system. This is a good idea to consider, but may not apply to everything.
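
              As a toy sketch of the difference (the extensions here are just for illustration), an upload filter that enumerates the good stays short, while one that enumerates the bad is never finished:

                import os

                ALLOWED_EXTENSIONS = {".pdf", ".png", ".txt"}  # enumerate the good

                def is_acceptable(filename: str) -> bool:
                    # Default deny: anything not explicitly on the allowlist is
                    # rejected, instead of maintaining a blocklist of every bad
                    # extension (.exe, .scr, .js, ...) forever.
                    return os.path.splitext(filename.lower())[1] in ALLOWED_EXTENSIONS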

              If there is something you don't control, you are taking a risk, so understand it, and limit its potential impact. In some cases it might be better to use a tried and true library or roll your own vs using some fancy new dependency - or maybe not, that's for you to think about, but it is worth considering carefully.

              Try to keep things as simple as possible and use tech that is easier to understand, well documented, well maintained, and hard to shoot yourself in the foot with, over things that are fancy and cool.

              For example say you’re building a distributed system.

              At the network level, only allow the types of traffic you need, so don’t allow incoming traffic you don’t need, and don’t allow outbound traffic you don’t need. This means it’s going to be much harder for an attacker to get in, or exfiltrate data out. Use secure channels, for example mTLS where both sides authenticate each other.
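
              A rough sketch of the mTLS half of that with Python's ssl module (the file names are placeholders): the server presents its own certificate and refuses clients that can't present one signed by the internal CA:

                import ssl

                # Server side: present our cert and *require* a client cert
                # signed by our internal CA.
                ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
                ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
                ctx.load_verify_locations(cafile="internal-ca.pem")
                ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients without a valid cert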

              At the application level, think about what data the user has control of and treat it carefully. Is there a way you can authenticate that the user is valid, and authorize them to only perform certain actions that are allowed? Can you use something like signatures to ensure that every subsystem can verify the data isn't tampered with?

              At the technology level, yes, think about your dependencies, and keep things as simple as possible. This makes it easier to secure but also easier to maintain, reduces the risk of vendor lock-in, etc. Depending on what you are dealing with, yes, you might actually want to have someone audit all your dependencies, or if that's too expensive maybe you can isolate the parts of the system that deal with sensitive information so those subsystems don't use many dependencies. Your value proposition as an engineer is not to just string together code, but to build useful, reliable, secure, flexible software and juggle the tradeoffs. The dependencies you choose, and the way you choose to use them, DO matter. Just like someone designing a bridge must use materials manufactured by a 3rd party, and assembled by another 3rd party, they must be careful about who they select and perform their own testing to ensure things will work. But those tradeoffs are going to be completely different than if you make cheap toy drones, for example.

              So basically in practice think carefully about the risks, costs, benefits and what tradeoffs are worth making. Keep relevant principles in mind like: favor only allowing the good, rather than trying to enumerate the bad, assume threats at every level, avoid foot-guns, favor simplicity, favor tested/trustworthy dependencies.

              • hnfong 547 days ago
                Yes, perfectly sound and meaningless advice.

                In practice, for example, I import openssl libraries to get mTLS, even knowing the history of CVEs they had over the years, because I know I'm definitely going to do a worse job at implementing it, and not implementing it is also worse.

                So now, I knowingly included a bad-but-less-bad thing to avoid the bad-bad things. Now I have to keep myself aware of the bad things from the less-bad library that comes up from time to time in the form of CVEs. Those CVEs are "enumerating the bad". In theory I should be able to write a bulletproof mTLS library myself (or convince somebody else to), but apparently this thing doesn't exist, and the only real alternative is to wait for other people to enumerate CVEs from time to time and keep patches up to date.

              • pixl97 548 days ago
                >So basically in practice think carefully about the risks, costs, benefits and what tradeoffs are worth making.

                And then be told by management to scratch that, we need the app out yesterday or the competitor is going to eat us.

          • bombolo 547 days ago
            So? The article doesn't mention "security against specific memory attacks". It's meant to be generic security, and I was pointing out that your comment was restricting the field too much.
        • albntomat0 548 days ago
          I find this line of argument frustrating, personally.

          Yes, memory safe languages have bugs. But, we’ve been spared a world where big enterprise Java apps also have a bunch of trivial stack buffer overflows, in addition to log4j issues.

          It’s like saying seatbelts are pointless because folks still get hurt in car accidents, while ignoring all the times they saved lives.

    • worthless-trash 548 days ago
      As a security engineer, I can't agree more.

      If you don't understand your enemy, you cannot hope to defend against them.

    • riffraff 548 days ago
      This confused me too, my interpretation is that the author is saying you should not invest time in learning how to use the exploit or scanner du jour.

      Being aware of the new hacking techniques is ok, but I think this is arguing against vulnerability scanning tools.

  • lmm 548 days ago
    Not convinced these are the dumbest (none of them is quite as dumb as requiring special characters in passwords, for example, and I'm not sure the fourth is dumb at all), or that there are six ideas here. The first two are the same, and the third one is a special case of the same thing.
    • geoduck14 548 days ago
      Yeah, and they didn't mention "storing your passwords in plain text"
      • scotty79 548 days ago
        And 'security through obscurity'.
    • rippercushions 548 days ago
      I've been looking for a new bank in the last week. Actual password practices I have encountered in 2023:

      * ME Bank: Password must be between 6 and 20 chars long and consist entirely of numbers

      * Westpac: Password must be exactly six (6) characters long, letters and numbers only

      • durnygbur 548 days ago
        ING bank in Germany: we will implicitly trim your password to 10 characters.

        Various SAP-based systems: special character in password is required... but not THIS special character, different one.

      • tibanne 548 days ago
        Stronk.
      • mrtweetyhack 548 days ago
        [dead]
    • mixmastamyk 548 days ago
      What’s the deal with special chars? A site made me use one today.
      • bombolo 547 days ago
        Special but not special, don't you dare use a non ASCII character or the whole backend explodes.
        • durnygbur 546 days ago
          Non ASCII special character?! Most systems which demand special character don't even allow all ASCII special characters...
  • saghm 548 days ago
    It's super interesting to read this list as someone young enough that the first time I was ever prompted to consider computer security was in a college course almost a decade after this was written. Although different terminology was used, some of the ideas, like "Default Permit" and "Enumerating Badness" were so heavily discouraged when I first started studying that it's almost hard to imagine them being considered good practice so recently before (although even today they're common enough that it's still worth calling out, so maybe this wasn't uncommon knowledge at the time either). On the other hand, the next two ideas, "penetrate and patch" along with "hacking is cool" certainly don't seem to be as reviled as the author would like, and I don't think that the latter was a dead idea within a decade like they suggested. Trying to interpret them charitably, I could believe that the intention here was to decry the lack of proper threat modeling that was done in advance at the time (which still is a real issue today). On the other hand, reading it at face value sounds like the idea that if you think enough in advance and just "don't write bugs" that your product will be 100% secure and never need any patching, which I don't think is a good take. I'd counter that it's essentially the same as the fallacy they mention later, "We don't need host security, we have a good firewall"; proper design up front is a good "firewall" to stop bugs from coming in, but it's not a substitute for having proper mitigations for when they do inevitably occur.
    • acdha 548 days ago
      I’m feeling old remembering reading this at the time and being glad that it was getting pointedly directed to certain large vendors.

      I think the key part of “penetrate and patch” is rejecting the idea that you can hire a tester, patch a couple of holes, and otherwise not change anything. It’s the difference between being _shocked_ that your C++ has another memory safety issue after someone exploits it or using tools like Rust, sandboxes, static analysis, etc. to avoid having an exploitable vulnerability in the first place.

      The major confound here is that a lot of companies realized there aren’t actually many penalties for releasing unsafe software, and decided that throwing bodies at patching was cheaper. I’m reminded of how many antivirus programs had basically 90s-level C code running with system privileges because the owners decided it’d cost too much to rewrite it until Tavis Ormandy started fuzzing them. I doubt many customers switched despite clear evidence that those vendors had serious deficiencies in their development processes.

  • npteljes 548 days ago
    What I think happened is that with computing, humanity began to build a new world, a Different World that's not like the other, old world outside. But since humans were building it, it became just like that. It has the same buildup, the same issues, the same dumbness as the original, real world.

    #1: Default permit: people don't like to spend energy, especially not upfront. Integrating "permit by default" systems is much faster than setting them up with proper authentication, authorization and access rights. Default permit just works, starts quickly, and works fast.

    #2: Enumerating badness: you mean, like how we name every single strain of viruses? So now we enumerate computer badness too.

    #3: Penetrate and patch: very similar to how our laws work, I think. There are people who create injustices, and later the legal code is upgraded to handle that. Again, reactive, like in #1.

    #4: Hacking is cool - well, other criminals are cool too, like pirates and mafiosos, and so on. People are drawn to power.

    #5: Educating users: someone has to, don't they, if they haven't learnt the thing by themselves? You can't just turn the dumb ones away if you need them.

    #6: Action is Better Than Inaction: This one, I think, imitates business. There's a lot of ways to make money in business, and being there early is one of them.

    That said, I really enjoyed the article. Permit by default is especially dumb; it was really funny when mongo installed itself with no password, listening on a public IP on the default port. And how long it took them to patch that. And how that hasn't burned the public goodwill! So maybe these things are not really dumb after all?

    • pixl97 548 days ago
      > It has the same buildup, the same issues, the same dumbness as the original, real world.

      Why would it not? Both computers and humans exist in the same world. There is no 'towing it outside the environment', we are the environment and all of our warts and problems are going to follow.

  • marcus0x62 548 days ago
    > A few years ago I worked on analyzing a website's security posture as part of an E-banking security project.

    Cool, so a pen test?

    > One of the best ways to discourage hacking on the Internet is to ... pay them tens of thousands of dollars to do "penetration tests" against your systems, right? Wrong! "Hacking is Cool" is a really dumb idea.

    ...

    Most of these are well thought out and still relevant 17 years later. #4 -- particularly the "don't learn offensive security skills as a defender" idea -- was dumb in 2005, and it's dumb now. It's also, unsurprisingly, not advice the author has himself followed.

  • bryanrasmussen 548 days ago
    I feel let down as a Dane that neither NemID or MitID deserve a mention.

    https://www.nemid.nu/dk-da/om-nemid/historien_om_nemid

    https://www.borger.dk/internet-og-sikkerhed/mitid

    full disclosure - I worked on the JavaScript implementation of NemID. My problems with it are not the implementation, but the whole concept.

    • optionalsquid 548 days ago
      The article is from 2005, while NemID and MitID were rolled out around 2010 and 2021, respectively. That nit-picking aside, would you be willing to elaborate on your problems with the concept of NemID/MitID as a whole?

      And thank you for your work. The JS based NemID login was a huge improvement over the earlier, Java based implementation.

      • bryanrasmussen 548 days ago
        big unload coming - (tldr - maybe my nemid issues are just silly and paranoid and not really something that would actually happen, or maybe Danish criminals are not ambitious enough, and MitID issues are just the process for handling when you forget your password is broken)

        my problems with nemid - it just always struck me as a security issue that a large number of people were using their person numbers as their ids for nemid services - sure, you could change it, but I'm not sure how many did. The passwords they used were case-insensitive and it was played up that you didn't need to worry about that, it could be real simple, so the only real line of defense was the nøgle card, and a lot of people used the paper version of that.

        Personally, if I'd been a crime lord during NemID's heyday I would have tried to get pictures of rich people's nøgle cards: have burglars hit the whiskey belt - you find a card, take a picture - then the only real issue is finding the id and password. The id is probably the personnummer, the password is probably simple and might be easy to find (or put some spyware on their computers). But this didn't happen as far as I know, so maybe there are reasons why it isn't that good a plan anyway and I'm just a paranoid guy.

        MitID bugs me because of the process when a user forgets their login or something otherwise goes wrong, which is that you get random questions from the personal register in borger.dk. My wife (who is Italian) had a problem with her MitID and had to reset it; she got asked what her address was and what her children's names were - which I submit would be really easy for an attacker to find out.

        I had a problem I got asked my mother's maiden name, what age she got married at, what month she was born, where I live, what year and month we moved in our house, and what sogn I was baptised in.

        Now I submit those questions are reaaaallll cool and easy to answer for any good and proper Danish family that have never had any problems for the last few generations but as it happens I was estranged from my parents. I don't offhand know where I was baptized (I was born in Rigs but baptized somewhere in Jylland because of a trip to visit grandparents IIRC), I'm not sure when my mother married my father - if she was 18, 19, or even 20. I couldn't remember what month she was born but my wife could because it was the month before her mother was born.

        We rented our house for nearly 9 months before buying, so trying to remember again what exact month we bought it in would be difficult. Of course we had transferred our address to the house before purchase, because we were living there and intending to buy, but the person asking the questions wouldn't even answer whether what they wanted was when we said we were living there or when we bought the house - though they did urge that I should "take a guess".

        The process, as I said, is beneficial for people with perfect families, but in a family like mine, where people got divorced and didn't talk to each other and were drunks, I get screwed over by that process. The process is, it seems, also beneficial to people from outside Danmark, as they will of course have a less extensive record in the borger register for random questions to be drawn from - hence the easiness of the questions my wife received.

        I have requested clarification from Digitaliseringsstyrelsen as to what the background and technical discussion was related to the decision to use these randomized questions, as I would like to write a longer article about how stupid it is, also because I can think of several ways in which malicious actors might be able to get access to that data relatively easily and answer the questions more easily than an average citizen.

        But they don't seem to understand what I mean when I say I want the background and technical discussion - by which I mean I want the kind of meeting notes that go on when implementing a standard (such as when I worked on Efaktura, when one element was considered informative but unfortunately that did not make it into the bekendtgørelsen, but we obviously had those meeting notes to refer to as to how it was informative and not to be used in any calculation of the faktura).

        on edit: I have done a mix of English and Danish here, mainly English so everyone can follow; some Danish terms because I figured not that important.

        • optionalsquid 548 days ago
          Thank you for the detailed answer.

          With regards to the passwords, I somehow didn't catch that they were case-insensitive back when I created my account, so I used a mixed-case password for NemID for the longest time. Boy did I feel silly when I discovered this fact by accident.

          I also didn't know that was how the recovery process went, and I can easily see it causing problems for a lot of people. I'd probably also have problems answering those kinds of questions.

        • bryanrasmussen 548 days ago
          so I want some notes where one senior guy says I think we should pull randomized questions from the citizen data, and either everyone says that is a great idea, or there is a bunch of discussion about it and they actually bring up the points that I find painful but they have smart reasons why that is the way it has to be anyway - or somewhere in between these two poles.
  • deafpolygon 548 days ago
    > but the second version used what I termed "Artificial Ignorance" - a process whereby you throw away the log entries you know aren't interesting. If there's anything left after you've thrown away the stuff you know isn't interesting, then the leftovers must be interesting. This approach worked amazingly well, and detected a number of very interesting operational conditions and errors that it simply never would have occurred to me to look for.

    As a sysadmin, I took this approach as well. On the local machine, the server(s) would log normally. But when I set up centralized logging, I set up a list of log entries that wouldn't normally interest me day-to-day. The server would only send to a central logging server things that weren't on this list. What was left were usually problems that I would need to pay attention to, and they got fixed faster.

    The rest of the uninteresting log entries would just be audited from time to time.

    On the matter of security, every user that logs in on a daily basis gets logged with their IP address. Anytime that a user logged in with a different IP - it would get logged to the central log server and I would be notified. Most of the time, it was harmless.. but there were enough times I would find a compromised account in a sea of normal day-to-day login activity.

    When your logs are full of normal things in it, it's easy to miss important details.
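
    Concretely, the filtering half of that can be a tiny script - a sketch, assuming a plain syslog-style text stream on stdin; the "boring" patterns here are invented for illustration:

      import re
      import sys

      # Patterns for log lines we already know are uninteresting; everything
      # that survives the filter is forwarded for a human to look at.
      BORING = [
          re.compile(r"session (opened|closed) for user \w+"),
          re.compile(r"CRON\[\d+\]:"),
          re.compile(r"DHCPACK"),
      ]

      for line in sys.stdin:
          if not any(p.search(line) for p in BORING):
              sys.stdout.write(line)  # the leftovers are the interesting part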

    • somat 548 days ago
      I have the idea of doing spam-detection-style Bayesian analysis on logs. The theory being you feed it your log stream, those are your normal logs, and if the log stream starts deviating from normal the statistical analysis starts to pop warnings. If it deviates for too long, that becomes the new normal.

      At this point I am elbow deep in Bayesian email code trying to work out the nuts and bolts of the operation. One important trick is that you need a location-aware hash to feed into your statistics engine. A better hash would utilize the structure of log lines, but categorizing logs is big messy yak-shaving sort of work. Perhaps a worse, more generic hash would be good enough.
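
      (Not actual code from this project - just a rough sketch of the idea, assuming whitespace-tokenized log lines: treat each (position, token) pair as a feature, keep counts, and score lines by how surprising their features are under what has been seen so far.)

        import math
        from collections import Counter

        counts = Counter()
        total = 0

        def features(line: str):
            # "Location-aware": the same token in a different column is a
            # different feature.
            return [(i, tok) for i, tok in enumerate(line.split())]

        def learn(line: str):
            global total
            for f in features(line):
                counts[f] += 1
                total += 1

        def surprise(line: str) -> float:
            # Higher score = tokens rarely (or never) seen in that position.
            feats = features(line)
            if not feats:
                return 0.0
            return sum(-math.log((counts[f] + 1) / (total + 1)) for f in feats) / len(feats)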

  • mcqueenjordan 548 days ago
    I agreed with much of the article and points made. Maybe I'm missing something (if so, would love to learn!) but I felt that the "Penetrate and Patch" section was a little naive.

    e.g.

    > Let me put it to you in different terms: if "Penetrate and Patch" was effective, we would have run out of security bugs in Internet Explorer by now. What has it been? 2 or 3 a month for 10 years?

    I agree with the point that "Penetrate and Patch" shouldn't be the primary strategy, but the author seems to write it off entirely with a viewpoint like "you should just write software and build systems that don't have security bugs". Well yes, of course that would be nice, but that's not feasible. And some software is much more difficult to get right than other kinds.

    "Penetrate and Patch" is a useful piece of security in that (a) it can catch what slips through the cracks, (b) it provides a sort of incentive mechanism to get it right in the first place, and (c) it simply isn't possible to build bug-free systems.

    The author cites "Penetrate and Patch" finding bugs every month as evidence that it's bad, but isn't it the opposite? You cannot be bug-free, so any incremental progress/fixes are in fact good.

    All that said, I do agree that all of this starts with secure by design. "Penetrate and Patch" isn't a good primary strategy and cannot replace Doing It Right. But I think it complements it well.

    • tptacek 548 days ago
      It's not naive so much as it is motivated by enmity for vulnerability research and vulnerability researchers, which was a thing from '98-'05 or so.
      • mcqueenjordan 548 days ago
        Ah, got it. Yeah that makes sense, thanks -- I missed how old this was.
    • kgeist 548 days ago
      >if "Penetrate and Patch" was effective, we would have run out of security bugs in Internet Explorer by now. What has it been? 2 or 3 a month for 10 years?

      It also assumes software is static and never changes so it's possible to run out of vulnerabilities to find.

  • ghostpepper 549 days ago
    According to Slashdot this article has been online since at least September 2005.

    I would be interested to hear the author's thoughts on what has changed in the 18+ years since it was written.

    • deathanatos 548 days ago
      Oof, well, I was going to say,

      > My prediction is that the "Hacking is Cool" dumb idea will be a dead idea in the next 10 years.

      … that won't age well, and apparently, that didn't age well. It won't happen in the next 10, either.

      Nor will good engineering: as an industry, we a.) dislike the very idea that knowledge is required for software engineering and b.) every "Rust fixes this entire class of bugs, permanently" gets met with "oh god, not the Rust evangelists" … yeah, the bugs will continue.

  • scotty79 548 days ago
    > My prediction is that the "Hacking is Cool" dumb idea will be a dead idea in the next 10 years.

    That didn't age well. In the era of growing corruption in government and business alike, hacking becomes an important way through which people can actually learn anything about their overlords' shady deals.

  • xkcd-sucks 548 days ago
    How about "our users can't tell the difference between a DOS attack and us having screwed something up" plus "the people that want to sue us for sucking are at war with the people that want us to look successful to get a promotion for hiring good vendors" etc.

    /enterprise

  • mikewarot 548 days ago
    >The real question to ask is not "can we educate our users to be better at security?" it is "why do we need to educate our users at all?"

    Great point, but the emphasis on system administration instead of the broken nature of operating systems causes the point to be missed.

  • nokcha 548 days ago
    > #4 ... "Hacking is Cool" is a really dumb idea.

    This has aged poorly; nowadays, the most notable attacks are conducted by state actors (e.g., Russia and China) or for-profit criminal groups (e.g., ransomware) rather than lone hackers doing it for fun.

  • jojobas 548 days ago
    I guess this man's internet heaven is filled by lobotomized users who can only exchange emails with a list of approved correspondents and browse only whitelisted websites. He, of course, gets to approve the lists.
  • adql 548 days ago
    > One of the best ways to get rid of cockroaches in your kitchen is to scatter bread-crumbs under the stove, right? Wrong! That's a dumb idea. One of the best ways to discourage hacking on the Internet is to give the hackers stock options, buy the books they write about their exploits, take classes on "extreme hacking kung fu" and pay them tens of thousands of dollars to do "penetration tests" against your systems, right? Wrong! "Hacking is Cool" is a really dumb idea.

    That's like, entirely unrelated. Black hats are motivated by monetary gains, not scout badges. The proliferation of the internet made "for fun" hackers a minority and an irrelevant factor (or a benefit, as they might actually report a bug instead of sowing mayhem) when it comes to security.

    • BulgarianIdiot 548 days ago
      Yeah the cockroach analogy is kinda bad. A more apt analogy would be that you can either let rodents help themselves to your food supplies on their own terms, or you can set up traps with a little bit of cheese on them.

      The traps with a little bit of cheese on them here being offering hackers a viable low-stress way to earn income and the respect of society for doing ethical work, which they'll prefer over the higher-gain but high-risk illegal activity they'd contemplate and perpetrate otherwise.

      Similar mechanics in many ecosystems. Carrot and stick work best together.

      • namaria 548 days ago
        I think current practices would be better described in ecosystem terms as: "If a mammal is eating your food, you can adopt a bigger one to prey on them so you share a little bit of food on your own terms instead of compromising the whole community's supply".
  • amelius 548 days ago
    My favorite dumbest idea: autorun.

    But of course, the dumbest idea in computer security is that it always comes last on the budget list.

  • fatih-erikli 548 days ago
    Penetration testing is probably the dumbest. You will not be sure if it is a honey pot or a real security vulnerability.
  • tsukikage 548 days ago
    > A better idea might be to simply quarantine all attachments as they come into the enterprise, delete all the executables outright, and store the few file types you decide are acceptable on a staging server where users can log in with an SSL- enabled browser

    An odd suggestion in an otherwise relatively uncontroversial article. It implicitly trains your users in a bunch of unpleasant things:

    * clicking on some URL in an email, typing your password into whatever webpage pops up, downloading the blob it serves you and opening it (after clicking through the browser's "this was downloaded from the internet, are you sure?" warning) is a perfectly normal and legitimate part of the working day

    * one needs to find ways to obfuscate documents of types that aren't on the IT whitelist so one can send them to one's colleagues so they can do their jobs (and no, the corporate whitelists never capture everything people urgently need to share in order to do their jobs)

    * since everyone now does that habitually, receiving an automangled email with a link to an attachment which has its actual payload contained in several layers of archive obfuscation wrapper is perfectly normal because that's just what you have to do to share stuff with your colleagues now

    These could, of course, be mitigated by suitably educating users, but since the practice is advocated in a section about user education never working, that is unlikely to happen.

    • acdha 548 days ago
      I think this is a little less bad in context: in 2005 Gmail was a year old. Most people used a dedicated email client app such as Outlook or Mail.app so in your flow it would be far more defensible and his view was focused on corporate users. That makes the first point a little more reasonable:

      1. Your desktop application shows a list of attachments in the navigation chrome where a message can't display content.

      2. When you click on something in that list, Internet Explorer or Firefox seamlessly logs you into the server using Active Directory.

      Storing things on a server was also more relevant in the era where space was limited and services like Exchange were famously difficult to scale or customize. If you didn't have good tools to retroactively yank a message out of everyone's inbox when your AV signatures were updated an hour after it arrived, storing it on a server you controlled had a certain practicality.

      Your second and third points are spot-on, however, and really hit at a key principle too few security teams appreciate: normalization of deviance. This approach fails badly in the real world where IT security says “don't open attachments from people you don't know” and everyone's manager says “oh, it's totally normal to get passworded ZIP files from the HR services subcontractor. Open it, we have a deadline!”. The real lesson here should be defense in depth so your organization's security isn't jeopardized when one person opens the wrong email.

  • scotty79 548 days ago
    > Wouldn't it be more sensible to learn how to design security systems that are hack-proof than to learn how to identify security systems that are dumb?

    How can you engineer a good lock without investigating all the ways it can be bypassed by the Lock Picking Lawyer?

  • nibbleshifter 548 days ago
    That hasn't aged well, at all, lol.

    The first two points are alright, then it just veers off the rails

    • faeranne 548 days ago
      And I'm not even sure about those two. There's a limit to deny-by-default that most end users will gladly override to keep things moving smoothly.
  • plaguepilled 548 days ago
    I like some of this, but "enumerating badness is a bad idea" is just wrong. Quantifying errors is an important part of tracking progress in software work.

    It's the same as any other project in life: you track mistakes and address them.

  • quickthrower2 548 days ago
    User education is not dumb. Services that send test phishing emails and check that people mark them as such are a good idea. It gets people used to receiving suspect emails and dealing with them.
    • adql 548 days ago
      Especially considering most big breaches appear to be "some human somewhere fucked up"
  • cmdialog 548 days ago
    Sorry guy who wrote this article in 2005, hacking is definitely cool.
  • gnu8 548 days ago
    The clown who wrote this thinks the word hacking means cracking. That automatically negates everything he says. I would not take anything in this article seriously.
  • AtlasBarfed 548 days ago
    I think this misses two or three big points:

    1) Offer solutions not process/procedure:

    Devs want to make secure systems, but they have VERY LIMITED TIME. Security is always something that is #1 in the bullet points of a presentation of priorities, and always a distant priority in the boots on the ground of features and keeping shit running.

    What I've noticed is that the security team doesn't want to be responsible for cleanup or doing lots of work or engineering. They want to make presentations for the upper management, pick some enterprise partners to impose on the orgs, and kick back in offices. Most know little about cryptography or major incidents. If a great security practice like "sync ssh keys" requires a bit of legwork, they don't want to do it.

    They'd rather load down the devs. They'd rather come in and review the architecture rather than provide drop-in solutions. If something needs customization for interface with SSO or getting credentials, they drop the integration in the devs laps. Who's supposed to be the experts here? The security team should own whatever craptastic enterprisey shit they select, and ALSO be responsible for making it useful to the dev org.

    The biggest example of this is the desire for "minimum permission". Take AWS for example with its explosive number of permissions, old and new permissions models, and very complicated webs of "do I need this permission" and "what permission does this error message mean I'm missing". And ye gods, the dumb magic numbers in the JSON, but anyway. If the security team wants AWS roles with "minimum viable permission" THEY need to be experts in the permission model and craft these VERY COMPLICATED permission sets FOR THE DEVS. And if the Devs need more, they need to very quickly provide (say < 1/2 day) new permissions in case some new S3 bucket is needed or some new AWS service is needed. But security teams don't want to do such gruntwork.
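
    For what it's worth, a "minimum viable permission" policy for a single use case looks something like the following (sketched here as a Python dict; the bucket name is made up) - and the pain is that someone has to write, debug and maintain one of these for every role:

      # Hypothetical least-privilege policy: read-only access to one S3 bucket.
      read_only_reports_bucket = {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": ["s3:ListBucket"],
                  "Resource": "arn:aws:s3:::example-reports",
              },
              {
                  "Effect": "Allow",
                  "Action": ["s3:GetObject"],
                  "Resource": "arn:aws:s3:::example-reports/*",
              },
          ],
      }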

    2) recognize that automated infrastructure is the rule, not the exception, aka the devs are not the enemy

    It took sooo long for ssh keys to become prevalent enough in development that people weren't ssh'ing in using passwords. Like, decades. This practice represented a big leap in administration productivity and probably was more secure.

    And you could automate on top of it in shell scripts, not leave passwords in .history, lots of good things.

    And the security industry wants you to undo it. Wants TOTP passwords from your phone hand-typed, wants a web page to pop up to gain temporary credentials, pretends you know how long your process will run so those temporary credentials won't expire - and if they do expire, what, you're supposed to manually re-authenticate?

    Security at my last job wanted an ssh replacement to be used (the enterprise security industry is waging war on ssh/sshd) that if I used it from the command line IT POPPED UP A BROWSER PAGE. And no way to automate this for any task.

    In general security teams seem obsessed with making devs' lives as hard as possible. Are most leaks via dev channels? In my experience the BIG leaks are "County Password Inspector", phishing, disgruntled/angry employees selling access. Well, and credentials checked into github. Most places I've worked at have involved this steady slide into less and less usability for the devs, at GREAT cost to productivity, for questionable payoff in actual platform security.

    Meanwhile, no joke, SSL protocols on internal password reset sites were using such poor algorithms that Chrome was refusing to display them. GitHub repos were open to the public that shouldn't have been. 8-character-limit passwords with proscribed character usage.

    Nuts.

  • EVa5I7bHFq9mnYK 548 days ago
    Isn't the author Enumerating Badness in that article?
  • Pengtuzi 548 days ago
    Regarding

    > 6) Action is Better Than Inaction

    I’m a fan of the

    > don’t just do something, stand there!

  • msla 548 days ago
    > "We can't stop the occasional problem" - yes, you can. Would you travel on commercial airliners if you thought that the aviation industry took this approach with your life? I didn't think so.

    This person has a fundamentally mistaken idea of how airliners and, therefore, security systems as a whole work. Yes, airliners have the occasional problem. That's why they have:

    * checklists and inspections, to catch them beforehand

    * communications, to catch them while they're evolving

    * redundancies, to turn ramified problems nobody caught into annoyances instead of disasters

    No matter how some people whine and moan, "Just Be Perfect" fails to be an actionable plan.

    Also: Hackers will be cool as long as DRM and planned obsolescence/designed-in insecurities exist.

    • lolinder 548 days ago
      I don't think the author intended to say that you can prevent all problems, I think they meant you can't just shrug and say "we can't help but get hacked". You can stop all problems from becoming critical, which is what airlines attempt to do.

      They talk earlier about defense in depth, so it's obvious that they're not oblivious to the need for redundant safety measures:

      > "We don't need a firewall, we have good host security" - no, you don't. If your network fabric is untrustworthy every single application that goes across the network is potentially a target. 3 words: Domain Naming System.

      > "We don't need host security, we have a good firewall" - no, you don't. If your firewall lets traffic through to hosts behind it, then you need to worry about the host security of those systems.

      • msla 548 days ago
        Maybe I'm being too harsh, but my interpretation of that point is that they expect we'll eventually become perfect, which isn't going to happen in the software world as it hasn't happened in the airline world, even though the airline world has more incentives to be perfect in the form of more penalties when it isn't.
        • riffraff 548 days ago
          My understanding is the author's suggestion is to start with a security-first approach, rather than wait-and-fix.

          They don't expect the airline to be infallible, but they expect the airline to be proactively avoiding trouble.

          • scotty79 548 days ago
            The most secure plane is the one that stays on the ground.

            It's always point of contention between people with security mindset and people that need to earn money to even hire people with security mindset.

        • kortilla 548 days ago
          You’re not being too harsh, you’re missing the point. Defense in depth is not something you advocate for if you expect perfection.
      • worthless-trash 548 days ago
        There is the flipside of that problem. If you say "We can never get hacked" then you will find that you breed a culture of denial if there is a problem.
    • josephg 548 days ago
      > "We can't stop the occasional problem" - yes, you can.

      All those tools (checklists, redundancies, etc) exist to increase the reliability rate. And to stop the occasional problem (ground crew forgets to refuel plane) from turning into a disaster[1].

      I might be overly generous, but that's my read of the author's intent. That just like in the airline industry, we have tools to stop occasional problems from turning into disasters. Things like:

      - Deployment scripts instead of manual processes

      - Dependency auditing (ideally automated)

      - Automatic OS-level security updates

      - Memory-safe languages (Go, Rust, Java) instead of C/C++

      - Defence-in-depth (firewalls, host security, etc)

      - Sandboxing (OpenBSD's pledge, Linux's seccomp, Deno's capabilities, etc)

      Just like checklists in the aeroplane industry, these approaches require active effort. We don't get secure software if nobody cares enough to make it a priority.

      [1] https://en.wikipedia.org/wiki/Gimli_Glider

      • closeparen 548 days ago
        As long as software has bugs and accepts user input, there are going to be ways to make it do things it shouldn't. You can avoid running specific known vulnerabilities. You can avoid creating certain kinds of dumb and obvious ones. But barring, like, formal verification, it is always possible for someone smarter or more patient than the original software development team to think real hard and come up with an edge case they didn't. And operational or system-level controls can only do so much about that.

        Preventing every security vulnerability is the same problem as writing bug-free code. And that is manifestly not happening, not even in the most sophisticated software development operations in the world.

        • josephg 548 days ago
          Right; which is why all the things on that list are so important. We can't seem to stop the endless flood of memory bugs in C/C++ code. IIRC 65% of security issues in Chrome are due to memory bugs. But we can move to Rust and friends, where those bugs are a lot harder to write.

          We’ll never get the bug count to 0. That isn’t the goal. The goal is to get the number of in-the-wild exploited vulnerabilities as low as possible. And there’s all sorts of ways to move the needle on that, which don’t require humans to suddenly become infallible.

          • zefix 548 days ago
            Well said: The point is to make a proper effort to make the tools we use better.

            Humans will always make errors. Let's stop denying that and start fixing the mess we are making.

      • krisoft 548 days ago
        > And to stop the occasional problem (ground crew forgets to refuel plane) from turning into a disaster[1].

        > [1] https://en.wikipedia.org/wiki/Gimli_Glider

        While I agree with your general point I have to disagree with the particulars here.

        The ground crew did not refuel the plane, because the pilots did not request refuelling. There was no-one "forgetting" to refuel, least of all the ground crew.

        The pilots did not request fuel because they thought they had enough. And they thought they had enough because they made a unit conversion error in their calculations. (There are even more layers and twists and turns to this cheese lasagne, but no one "forgot" to refuel, that is for sure.)

      • sand500 548 days ago
        I wonder how well the swiss cheese model works when there is an adversary actively targeting you as opposed to accidental disasters.
      • scotty79 548 days ago
        > checklists, redundancies

        These things are created and extended because occasional problems happened.

    • acdha 548 days ago
      It sounds like you’re disagreeing but you’re restating his point: all of the things you listed are how rare events are prevented from becoming worse.
      • msla 548 days ago
        I am disagreeing because this person doesn't understand the concept of defense in depth: Occasional problems will happen, will ye or nil ye, and the best you can do is to, as you say, prevent them from becoming worse. Thinking airliners don't have occasional problems is missing a lot of what the air industry does that we can implement in other realms.
        • acdha 548 days ago
          He clearly does elsewhere, so I would suggest reading this more charitably with the assumption that you’re talking about the same idea from different perspectives. If I’m the passenger, I don’t even know about something which is caught by a checklist or redundant hardware before it progresses. If I’m the pilot or mechanic, the reverse is true. In both cases, what matters is the spirit of the point: saying something is too infrequent to prevent is defeatist.
          • msla 548 days ago
            Maybe, but I interpreted that as him insisting we must eventually become perfect, which isn't going to happen.
    • adql 548 days ago
      The aviation industry mostly protects against random failures and human mistakes, not targeted attacks, so the comparison is silly from the start.

      There are lessons to be learned, but they are about building resilient systems, not secure ones. All of the "security" of airplanes pretty much relies on the pilot noticing something is wrong; without that, a man with an SDR could fuck up a lot of stuff.

      • trashtester 548 days ago
        Came here to say this.

        Protecting the average corporate networks against the most sophisticated state actors is like protecting an airliner against an F35 armed with AIM-260 missiles.

    • zdragnar 548 days ago
      Airliners have to deal with all sorts of problems on the fly, literally. You can't stop lightning strikes, birds, engines catching fire, or any other myriad problems.

      It's such a terrible analogy I'm a little flabbergasted. Planes need to reboot all the time to clear out hardware and software faults. The occasional problem is planned for.

    • yjftsjthsd-h 548 days ago
      >> "We can't stop the occasional problem" - yes, you can. Would you travel on commercial airliners if you thought that the aviation industry took this approach with your life? I didn't think so.

      Also, security is like reliability/uptime: You pay for every nine. You want to keep the server up on a best effort basis and have only the most basic security? Cheap, easy, minimal time investment. You want 1 minute down per month and good security? It'll take time and effort. You want... IIRC airplanes have a failure rate around 1 in 14 million flights, give or take? How many billions of dollars do you have? Because quality still ain't cheap.

    • happymellon 548 days ago
      The fundamental flaw is normally "but doing it correctly would cost too much and take too long, what can we do for $5 and a chocolate bar?".

      Airline projects don't have the same level of issues because the FAA (or equivalent domestic authority) tells them to do it correctly.

      • SQueeeeeL 548 days ago
        Except when they don't, then you get the Boeing 747 MAX literally avoiding mandatory safety evaluations and ignoring engineers
        • happymellon 548 days ago
          But that is notable for being unusual.

          After the FAA agreed that the two crashes were similar it grounded all planes, revoked Boeing's certification authority, and fined Boeing.

          Has Lastpass received anything other than bad publicity?

        • anticensor 548 days ago
          737 MAX, but otherwise correct.
      • Gigachad 548 days ago
        Is there any real equivalent process for tech? It seems like the majority of security certifications is a box checking exercise where actually being secure has very little relation to how many boxes you checked.
      • mrtweetyhack 548 days ago
        [dead]
    • baxtr 548 days ago
      I think the analogy is highly flawed. Flying is mostly a safe and predictable environment.
    • mnw21cam 548 days ago
      Also, "penetrate and patch" is definitely at work in the airline industry. Yes, airliners are well designed as safe systems, but every now and again a problem does occur, to varying degrees of seriousness, and when such a problem is identified, it is patched.

      Defence in depth. Sure, design a system well. But "penetrate and patch" is another layer of protection. I mean, if you find your system is penetrated, what else can you do right now but patch it?

  • FatActor 548 days ago
    No billion-dollar company is anywhere near as glib as the author makes them out to be. Maybe my experience varies, but the European companies I've worked with have strict cybersecurity liability, and they take every aspect of security seriously and do not just pat themselves on the back smugly, as OP portrays. Maybe this was the case in the 90's, but it sure is not the case today.

    EDIT: I deleted most of my post because I found it was repeated up and down the comments which I am so relieved to see. I kept my post because I want newcomers to hear as many voices in objection to OP's outdated essay as possible.

  • m463 548 days ago
    If I could come up with one dumb idea it would be something like:

    You can trust a large organization to secure your device.

    (especially for orgs that give themselves, advertisers or apps more access to the device than you have)

    • yjftsjthsd-h 548 days ago
      I dunno, the question is "against what?"; I trust a Chromebook to resist an evil maid attack, but not to stop an advertiser from stalking the user. Some people are okay with that threat model.
  • scotty79 548 days ago
    > My prediction is that in 10 years users that need education will be out of the high-tech workforce entirely, or will be self-training at home in order to stay competitive in the job market. My guess is that this will extend to knowing not to open weird attachments from strangers.

    And yet, just yesterday I saw a TV ad explaining how not to get phished out of your money through your banking app.

    I think it's a running theme in this document that the author displays a severe lack of understanding of how hard security becomes as soon as you let anyone do anything online.

    • blowski 548 days ago
      Only last week, I saw someone post on Slack:

      > I got an email from [CEO] asking me to read a Word doc. I thought it might be a dodgy email so I checked the attachment..."

  • scotty79 548 days ago
    > #1) Default Permit

    I guess the author of this post is no longer with us because they had a heart attack when npm and similar rose to prominence.

  • Luciatrutth 548 days ago
    [dead]