Although I’d spin the issue about host vs network security differently. I’ve found that engineering teams prioritize security a lot more if they don’t feel like they’re safe in a cocoon of local network bliss behind network firewalls. I love “beyond corp” or “zero trust” precisely because you’re making it explicit that they’re on the internet and they’re a target.
I don't know; I haven't really seen most of these things in the wild for a long time.
For "#4) Hacking is Cool" the zeitgeist has moved in the exact opposite direction with "white hat", bug bounties, etc. I think that section in particular is a pretty outdated view of things.
"#6) Action is Better Than Inaction" is probably the only one that still broadly applies today, and is actually a special case of the "X exists, therefore we must use it ASAP, and any possible negatives are not our problem and inevitable anyway" attitude that seems to be prevalent among a certain type of people.
#1. This is still prolific absolutely everywhere. There's a good chance it is happening on your computer right now. It happens in mobile app stores (application releases go through a very rudimentary set of checks and only end up thoroughly analysed by security researchers once the application gets flagged). It's very common within internal networks, and even more so when it comes to outgoing traffic.
#2. This is still sold by security consultancy firms as a service; it's, again, incredibly prolific in a lot of places.
#3. Likewise, still a very popular service sold by security consultancy firms.
#5. Still common to this day, services such as vishing/phishing assessments test for user education.
Please tell me you have already thrown Firefox, Chrome, old Microsoft Edge and whatever other browser out of the window and are posting to HN with your rewritten-in-Rust lynx.
Not being able to rewrite the world, or to convince people to stop using memory-unsafe languages, is entirely unrelated to what security researchers do.
I'd love to stop having to build a complicated lifetime model in my mind to figure out whether there are hidden code paths for a UAF, but at the same time this is the best thing I can do to secure what we have today. Now it's on you to rewrite the world.
Well, I'm unfortunately not in a place where doing so makes sense. Unless you mean only auditing Rust code.
> nag managers
I already do so. This doesn't change much. There are still too many must-be-evolved C++ projects (no easy incremental rewrite path forward), and it is impractical to have engineers put significant effort into rewriting in Rust. It's really difficult to convince someone to fix something that ain't broken.
People coding in C++ are just as desperate as you; that's why someone brought Carbon, a half-baked experimental project, to the world last year instead of just using Rust. Sure, they would like to use a memory-safe language where possible. But they still have to get their job done.
> refuse to buy hardware that only supports C
If it supports C, we can make it support Rust; it's a very fun weekend project to bring up some no_std Rust code on it.
A reminder that a big part of the subtext of this piece is a reactionary movement against vulnerability research that Ranum was at the vanguard of. Along with Schneier, Ranum spent a lot of energy railing against people who found and exploited vulnerabilities (as you can see from items #2, #3, and #4). It hasn't aged well.
I'm not sure there's anything true on this list that is, in 2023, interesting; maybe you could argue they were in 2005.
The irony is, Ranum went on to work at Tenable, which is itself a firm that violates most of these tenets.
I've read about 80% of this page, and eventually stopped at the part where he says that the next generation will be more cautious. This, in my opinion, is false. Most software has been simplified for user experience, and that has not helped kids in the slightest. It's more addictive than ever, and all caution gets thrown out of the window when we let kids browse YouTube unsupervised. Heck, a wrong search query or random text can give you NSFW content. And with the rise of shorts/stories/TikToks you'll be molded by the algorithms. You have no, or barely any, control over the content you see. If it notices you watch, what, 5 seconds? of a clip, it'll start recommending that.
The issues we have nowadays are different than those in 2005. People that haven't seen the bad parts of the internet will not teach their kids about it either...
Just take as one important set of examples the new mobile operating systems released since this piece was published. Even the most thoughtfully designed and locked down (even with hardware, various uses of encryption, etc.) continue to have vulnerabilities at the base layer year after year. Bug hunting looks every year more and more like just an expensive sport for condescending security experts who think little about the broader context in which they operate. As much as we all appreciate the whack-a-mole.
Where there has been genuine security improvement is where we’ve taken the structural, locked down approach advocated here (see also djb’s paper about qmail security). iOS and Android apps (particularly the former) seem genuinely more secure than most desktop apps because they are structured to have very limited permissions from day one. The app environments on those systems looks like they were designed with many of the principles from this post expressly in mind.
The lessons for the OS layer seem obvious. Qubes and in particular Joanna’s post about “Qubes Air” point in one very promising direction.
Offensive research is what motivates lessons for the OS layer. Look at the struggles we are having with things like kernel-level memory safety even when we can point at mountains of CVEs found by white hats. The community would be dragging its feet even more if the shared consensus was that it is actually really hard to beat ASLR and DEP, so we are all done and have solved it.
Part of the problem is that there are many people in the field of security with overly strong opinions, which is not healthy. The field is full of know-it-all people with if-only-people-weren't-so-dumb attitudes. Any not-as-strongly-opinionated bystander looks at this and has no clue whom to listen to, since so many people are strongly expressing 100% opposing views while calling everybody else "dumb". This does not help bring the field as a whole forward.
Hi, that’s an interesting assertion but not actually accurate. It is vaguely related to the truth; djb acknowledges that qmail failed to partition in the way he advocates in the paper but says it survived without serious security issues for other reasons:
“I failed to place any of the qmail code into untrusted prisons. Bugs anywhere in the code could have been security holes. The way that qmail survived this failure was by having very few bugs, as discussed in Sections 3 and 4.”
That’s very different from saying the approach wasn’t successful. It was just not tried (by him). My point is it has been tried in other ways since and seems to be working. To me at least!
(Also you took something I put in parens midway through my post with the opening words “see also” and said I “hang” my argument on it - ok, again interesting, not taking it personally as I’m sure you didn’t mean anything by it!)
Really, the whole argument you're making --- the reason we're talking about Bernstein in the first place --- is broken. Bernstein himself would probably not agree with the take you're trying to derive from the relationship between his work and "enumerating badness".
You mean the guy who refused to fix an integer overflow bug, claiming it wasn't practical to exploit, and then 64-bit really happened, and years later the fine folks at Qualys suddenly decided to have fun? Sure, he is a crypto expert and we're all grateful for his work on curve25519, salsa/chacha, nacl, djbsort, etc. (and I'm sure I missed a lot). This does not mean he is an expert on weird machines.
I agree. They go on a useless rant about how pen testing is useless, red-team research only enables hackers, etc. That's not true at all. That work is what pushes the improvements in both detection and better programming practices.
Educating users is not dumb; it's one of the most important parts of security a company should address. I really don't know where they are coming from here, this section was nonsense to me.
I also have a point that will get me downvoted and piss off a lot of people: security is very important, but not THAT important. If the business doesn't operate, then there's no need for security. So what's the solution? The author comes off as one of those who treat security like a wheelbarrow full of bricks that everyone has to push around. This won't get buy-in, and people will find ways around it. Instead, security should be like tennis shoes: restrictive, but they also let you run faster.
What was the argument against vuln research? The 'Penetrate & Patch' bit makes it sound like something along the lines of 'this is pointless because the proper way to fix this stuff is better design, and everything else is a waste of time and effort'.
This predated a lot of the responsible-disclosure culture that exists now, so there was a lot of “find vuln, post right away for the credits” going on. Couple that with a lot of tool research that was important but also felt very grey-hat, and it was easy to feel like much of the “vulnerability research” community were like a group of scientists working on making cancer airborne “for research purposes”. I admit to having felt that way then, too.
Fortunately a lot of that has subsided. The focus on responsible disclosure while still holding companies accountable, the great security research being done by projects like Talos or Project Zero, and the consistent flow of new open-source blue-team tooling has really helped balance the scales (if they were ever unbalanced).
This is a whole can of worms, and my response will be biased and untrustworthy, but here's my take:
In the early-to-mid 1990s, serious security research was intensely cliquish. There wasn't a norm of published vulnerability research; in fact, there was the opposite norm: CERT, the well-known public resource, diligently stripped details about vulnerabilities (beyond where to find the patches) out of announcements, and discussions about how vulnerabilities actually worked were relegated to "secret" lists like Core, which were of course ultimately leaked to become BBS t-files.
Ranum came to prominence in that era. In the mid-to-late 1990s, after Bugtraq took over, there remained a sort of informal best friends club of, like, Ranum and Dan Farmer and Wietse Venema and like one or two young vulnerability researchers --- Elias "Aleph One" Levy, for instance. There was a sort of acceptance of the idea that Elias and Mudge were doing vulnerability research that was well-intentioned and OK... but that everyone else was just trading exploits on #hack.
There was a sort of focused beam of hatred on eEye, a security vendor that came to prominence in the early 2000s, and most especially during the "Summer of Worms", some of which worms were based on vulnerabilities that eEye's team --- at the time truly one of the most influential teams in all of vulnerability research --- had published. I worked at the industry's first commercial vulnerability research team and had a soft spot for eEye, which was doing the work we did but like several levels better than us, and it has always pissed me off how Ranum and Schneier tried to make hay by dunking on eEye and casting them as "hackers".
(Of course, if you tried to make that argument now you'd sound like a clown, so you won't see people like Ranum and Schneier saying that kind of stuff. But the fact is, the arguments were clownish and inappropriate back then, too.)
So, if you ask me, the argument Ranum is advancing is literally that public vulnerability research is bad, and that details of vulnerabilities should be kept between vendors and a few anointed 3rd party researchers. Because otherwise, you were just helping people break into computers.
The Ranum of 2005 is I think especially characteristic of what I'd call "moralizing" information security: that security is actually a fight between good and evil, that what's important about machines getting owned is that somebody's livelihood depends on that machine running, that hacking is a crime, and that the details of how the hack worked are about as relevant as the details of how a burglar breaks into the window of a house they're burgling without setting off the alarms or whatever. I get it, but I'm from the opposing school of information security, which is that security is just a super interesting computer science problem.
I thought about maybe snark-throwing in a 'this wasn't about disclosure, was it?' when asking, but then I figured I would sound like a clown asking such a thing about a piece from 2005. Entirely externally/cluelessly, my impression (at the time and since) was that this was settled in the 90s by things like Bugtraq: that disclosure aligns with the interests of users in critical ways that leaving it up to vendors doesn't, and this easily trumps objections about 'responsibility'. I didn't know this went on for so much longer, thanks for the history!
> Ranum spent a lot of energy railing against people who found and exploited vulnerabilities
That's not at all what #2 says. "Enumerating badness" is explained as trying to track everything that's 'bad' instead of what's not. It is claimed to be 'dumb' because what's 'bad' is orders of magnitude larger and more complex than what's not.
I suppose the idea of denying by default (#1, #2) and the idea of defense in depth (mentioned at the end) aged well enough.
I'm not sure about educating users. It's obviously not going to be a bulletproof solution. But not educating users at all does not seem right either: it's hard for a person to care about stuff they have no idea about.
The better way is to make the secure path the easy path. You don't have to educate users to do something that makes their life harder, you can educate them in an easier way to do what they want. That's far more likely to stick.
Usability is a security issue; at the ultimate extreme a DoS attack is just creating a very poor user experience.
We tried training the users with tools like KnowBe4, banners above the emails that say things like THIS IS AN OUTSIDE EMAIL BE VERY CAREFUL WHEN CLICKING LINKS. Didn't help.
The email was a very generic looking "Kindly view the attached invoice"
The attached invoice was a PDF file
The link went to some suspicious looking domain
The page the link brought up was a shoddy impersonation of a OneDrive login
In just minutes, the user's machine was infected and it emailed itself to all of their Outlook contacts...
So this means nothing in this list detected a goddamn thing:
'Prevent lateral spread'
enterprise defense suite with threat protection and threat detection capabilities designed to identify and stop attacks
AV software that was advertised to 'Flag malicious phishing emails and scam websites'
'Defend against ransomware and other online dangers'
'Block dangerous websites that can steal personal data'
the cloud-based filtering service that protects your organization against spam, malware, and other email threats
And the company that we pay a huge sum of money to 'delivers threat detection, incident response, and compliance management in one unified platform' didn't make a peep.
But, we are up to the standards of quite a few acronyms.
It's all a useless shitshow. And plenty of productivity-hurting false positives happen all the time.
Several years ago, I worked on an incident response for an incident that was detected and stopped.
Tl;dr, a targeted phishing email was the catalyst for the whole thing. The various systems that detect these things effectively blocked it ~97/100 times. One click was all it took. The user who clicked had a bad feeling and used a blame-free and convenient reporting mechanism to report it.
That doesn’t mean that tools and training are useless. As a defender in any context, defense has to be multilayered and flexible as circumstances change. In IT, sports or warfare, it’s the same process or funnel.
The scenario you described likely would have been detected by an EDR tool, or by log analysis if there was a process to do that. Declaring “shitshow” is accepting a bad outcome. Unfortunately as the value of compromising a company has gone up, the opponents have leveled up, and defenders need to as well.
"The spot where we intend to fight must not be made known; for then the enemy will have to prepare against a possible attack at several different points; (...) If he sends reinforcements everywhere, he will everywhere be weak."
Sun Tzu, The Art of War. I know, it's cheesy to compare network security with warfare. But I've learned that a big shiny stack of tools is a red flag. If there is no threat model and no focused hardening, you're not doing security, you're doing compliance.
I wonder how well we all think this article has aged?
"Penetrate and Patch" is supposedly dumb. But what do we practically do with that? We've seen in the last decade or so a lot of long-lived software everyone thought was secure get caught with massive security bugs. Well, once some software you depend on has in fact been found to have a bug, what's there to do but patch it? If some software has never had a bug found in it, does that actually mean it's secure, or just that no skilled hackers have ever really looked hard at it?
Also web browsers face a constant stream of security issues. But so what? What are we supposed to do instead? Any simpler version doesn't have the features we demand, so you're stuck in a boring corner of the world.
"Default Permit" - nice idea in most cases. I've never heard of a computer that's actually capable of only letting your most commonly used apps run though. It's not very clear how you'd do that, and ensure none of them were ever tampered with, or be able to do development involving frequently producing new binaries, or figure out how to make sure no malicious code ever took advantage of whatever mechanism you want to use to make app development not terrible. And everyone already gripes about how locked-down iOS devices are, wouldn't this mean making everything at least that locked down or more?
1. Default deny is one of the oldest best practices in security engineering; it barely needed saying in 1995 (but Cheswick & Bellovin said exactly that in Firewalls & Internet Security).
2. "Enumerating badness" is simultaneously an attempt to connect vulnerability research to antivirus (security practitioners have had contempt, mostly justified, for AV since the late 1980s) and an endorsement of the heuristic detection scheme companies like NFR sold. Apart from the shade it throws at vulnerability research, it's fine.
3. "Penetrate and patch" has aged so poorly that Ranum's own career refutes it; he ended up Chief of Security at Tenable, one of the industry's great popularizers of the idea.
4. "Hacking is cool": literally, this is "get off my lawn".
5. Objecting to user education is an idea that is coming back into vogue, especially with authentication and phishing. It's the idea that has held up best here.
6. "Action is better than inaction" --- this is just a restatement of "something must be done, this is something", or the Underpants Gnome thesis. Is it true? Sure, I guess.
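(For what it's worth, the default-deny stance in #1 is also cheap to express in practice. Here's a hedged sketch using nftables; the table name, port list, and assumption that SSH/HTTPS are the only needed inbound services are all placeholders, and it needs root on a Linux box:)

```shell
# Default deny on inbound traffic: drop everything...
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'

# ...then enumerate goodness: the handful of things you actually need.
nft add rule inet filter input ct state established,related accept
nft add rule inet filter input iif lo accept
nft add rule inet filter input tcp dport '{ 22, 443 }' accept
# Everything else falls through to the drop policy; there is no
# ever-growing list of "badness" to maintain.
```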
Agree with tptacek. His rant in Enumerating Badness is deeply intertwined with his rant against Default Permit (which as tptacek points out, was a bit of a strawman even then). I could agree that checking against all possible bad things is a flawed approach for security products and IT staff.
However, enumerating badness is hugely valuable in the security industry for two reasons:
1) It’s the backbone of security research, just as physiology and anatomy are to zoology and medicine. With enumeration (observation), we can classify, abstract, find trends, identify risky software and approaches, direct engineering resources, and create broad defenses (yay ASLR).
2) Attackers are lazy, too. I work at a security consulting firm, and routinely see attackers reuse the same TTP across different target companies. Enumerating badness not only offers detection opportunities (perhaps not the best, but higher level detection techniques are often built off understanding the enumeration of badness) but also denies attackers opportunity for reuse. “Impose cost,” as thoughtlords like to say.
> Objecting to user education is an idea that is coming back into vogue, especially with authentication and phishing. It's the idea that has held up best here.
This can be good or bad depending on how you do it. If you default to the good thing and there really isn’t any need to do the bad thing then it can be quite good. If there is a genuine need for some people to sometimes do the bad thing (so it’s not actually universally “bad”) pretending like nobody can ever make an informed decision here is not a good policy. Sure, it’s very difficult to get people to make informed choices, but you can’t really brush this off as people being uneducateable.
> you can’t really brush this off as people being uneducateable
The objection isn't that people are uneducable, it's that even expert users can easily make seemingly-trivial mistakes which then have catastrophic consequences (e.g. experts get phished) and that's a conclusion reached through experience/data.
> 1. Default deny is one of the oldest best practices in security engineering; it barely needed saying in 1995 (but Cheswick & Bellovin said exactly that in Firewalls & Internet Security).
I agree with this as the theoretical grounding and certainly wouldn't say that this was especially insightful but I can understand why he said it if he was encountering a long tail of obsolete advice similar to what several places I worked were hearing at the time. I think one of the big problems was simply inertia: I remember the Cisco guys at one job saying they didn't want to do a default-deny policy because it was too much work to switch and they weren't sure if the hardware could handle that many rules.
> Objecting to user education is an idea that is coming back into vogue, especially with authentication and phishing. It's the idea that has held up best here.
The notion that users can't really be educated has led to a lot of questionable security practices that prioritize ease of use over real security. For example, 2FA using codes sent by email or SMS as the second factor rather than relying on key based authentication like client side TLS certificates issued by the service that the client is using.
This, to some extent, has actually decreased security by allowing people to bypass authentication by compromising the second factor through use of social engineering.
My bank used client TLS certificates early on; while nifty, it was a really bad idea without hardware security. (Paper OTP in their case.)
IMHO, client side certificates are a big failure even server-to-server. The UX of doing it is error prone and insecure because of foot guns. It fails because there are so many different incompatible ways to use them. Mostly this opinion of mine is based on never having had a good experience with browser-based client certificates (even highly automated and hardware-secured ones). Things don't get much better on the server side.
Sure, automation helps, but the certificate is such a small part of the system, and when you try to integrate two automated systems that use client side TLS certificates it is easy to trust too much or too little. (Both are troublesome.)
> The UX of doing it is error prone and insecure because of foot guns.
Many people over the years have mentioned UX issues as a reason why client side TLS certificates aren't more widely used. The question is why hasn't there been an effort to improve the UX rather than re-inventing the wheel (either poorly with SMS/email 2FA or OTP, or in a way that's limited to a specific application level protocol like HTTP for webauthn).
What I would like to see is a standard workflow where as part of an account creation process, an associated CSR is sent and a client side TLS cert is returned and stored on the device along with a standard way to add additional devices using an existing device and the new device that doesn't depend on a specific application level protocol (so, for example, I can use my email client via SMTP and IMAP to securely authenticate without having to rely on a HTTP intermediary).
Or, for more secure settings, actually having to verify your identity out of band (e.g., going to the bank and showing multiple forms of ID along with your CSR to get the certificate).
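To make that enrollment flow concrete, here's a toy version with the openssl CLI, where a throwaway self-signed CA stands in for the service's real CA (all file names and subject strings are made up for illustration):

```shell
# 1. Throwaway CA -- in real life this key lives with the service.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out ca.crt -subj "/CN=Example Service CA"

# 2. At account creation, the device generates a key pair and a CSR;
#    the private key never leaves the device.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout client.key -out client.csr -subj "/CN=alice@example.com"

# 3. The service signs the CSR and returns the certificate.
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 1 -out client.crt

# 4. Sanity check: the client cert chains to the service's CA.
openssl verify -CAfile ca.crt client.crt
```

Any protocol that can do a TLS handshake (SMTP, IMAP, HTTP, ...) could then authenticate with client.crt/client.key, which is exactly the protocol-agnosticism being asked for.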
The problem is basically that it wasn’t a priority for application developers but you need to upgrade things everywhere before you can switch to a new protocol. Those clunky MFA options give CIOs the appealing promise that a bit of duct tape means they get some protection without needing to e.g. replace that old RADIUS server most people depend on to do their jobs.
It might be interesting to look at WebAuthn passkeys, as they do most of what you want. That took several important developments: the web ate desktop apps, Microsoft lost control of the web, and Google has some strong security people in their management. That does the public key exchange, has robust cross-device support which doesn’t require an internet connection, etc. and it has some features to improve the identity situation (e.g. it includes a device key & authentication info so my bank can say it only accepts transfer requests which came from a known device doing a biometric check, which is a nice edge over x509/SSH-style trust based solely on access to the private key).
This unfortunately does not work using other protocols but a not-uncommon flow would be using a browser session to issue your IMAP client a token. That’s not great (a compromise gives the attacker your email) but it can be less disastrous if the most important actions can’t be initiated purely from email.
> 4. "Hacking is cool": literally, this is "get off my lawn".
I do not quite agree with this verdict. Back in the early two thousands there were lots of people who thought it rather cool to just hack around (or more precisely crack around), for example by exploiting SQL injection bugs in whatever web form they encountered.
Of course I might not have a representative impression of the community, but I think this kind of stuff is now much more widely seen as unethical and uncool. So I think the article's prediction that this will "be a dead idea in the next 10 years" was actually quite accurate.
It's actually hard to argue here given there is so much ambiguity.
Technically, "hacking" is often portrayed as totally cool and often not negatively connoted ("getting/being hacked", however, clearly is). The problem with this is that the meaning has shifted a lot in the meantime. He mostly means cracking, I think, while the meaning of "hacking" has broadened to just altering state/behavior, often toward something that was not originally intended.
At least he also includes penetration testing as something bad, and this has clearly not aged well. For one, you can employ people to do just that, and it's pretty essential for applications with big security implications; for another, there's the open invitation to "hack" the application/system at hand, as seen with bug bounty programs. Actual practice seems to contradict his point here.
You can also interpret it in the way that pen testing alone is not going to improve your security, which is true. However, I feel like nobody argued that. Pen testing/bug bounties are there to actually surface exploits you might not learn about otherwise, and are therefore a prerequisite for fixing them. Or said differently: if you think you hardened your application/system so much that it's impenetrable, how could it hurt if people try to break it and tell you whether they are able to? People will try anyway, but they might not tell you.
> Also web browsers face a constant stream of security issues. But so what? What are we supposed to do instead? Any simpler version doesn't have the features we demand, so you're stuck in a boring corner of the world.
The charitable interpretation of the “penetrate and patch” section is the architectural parts, and browsers are a great example. At the time he wrote that, a browser was a single process running everything in traditional C/C++ calling other unsafe code (i.e. Flash) in the same process. People did patch a lot but they also did things like split components into separate processes with different privilege levels, change practices throughout the codebase to harden things like pointers or how memory is allocated, rewrite portions in memory-safe languages, etc. It took a decade but browsers became a lot harder to successfully exploit.
> Also web browsers face a constant stream of security issues. What are we supposed to do instead?
There's not much that users can do, but web browsers have spent the last decade moving away from "Penetrate and Patch" to much more proactive approaches. Eg, Chrome pioneered moving each tab to a separate process with full sandbox isolation. Firefox is talking about using webassembly as an intermediate compilation step for 3rd party C++ code to effectively sandbox it at a compilation level. Rust was invented by Mozilla in large part because they wanted to solve memory corruption bugs in the browser in a systematic way.
> "Default Permit" - nice idea in most cases. I've never heard of a computer that's actually capable of only letting your most commonly used apps run though.
macOS requires user consent for apps to access shared parts of the filesystem. The first time you see dialogs asking "Do you allow this app to open files in your Documents folder?" it's sort of annoying, but it's a fantastic idea.
As you say, my iPhone is more secure than linux for the same reason: because iOS has a "default deny" attitude toward app permissions. A single malicious app (or a single malicious npm package) on linux can cryptolocker all my data without me knowing. The security model of iOS / Android doesn't allow that, and that's a good thing.
I wish iOS was more open, but on the flipside I think linux could do a lot more to protect users from malicious code. I think there's plenty of middle ground here that we aren't even exploring. Linux's permission model can be changed. We have all the code - we just need to do the work.
Also since this article was written, we've seen a massive number of data breaches because MongoDB databases were accidentally exposed on the open internet. In retrospect, having a "default permit" policy for mongodb was a terrible idea.
> my iPhone is more secure than linux for the same reason
The phrase "more secure" doesn't mean anything, I wish it wasn't used.
One needs to talk about the threat model(s) you care about and how a particular solution addresses them (or not). Any given solution can be both more secure and less secure than an alternative, depending on what threat models you care about. Which may well be different than the threat models someone else cares about.
If you unconditionally trust Apple and all government agencies which have power over Apple (e.g. NSLs) then one could say iOS has a more secure file access model than Linux. But that's a big if. Personally I could never trust a closed source proprietary solution.
The linux (UNIX) security model is designed to protect users from other (potentially malicious) users on the same computer. The system as a whole is designed such that a malicious (or incompetent) user can't make the system as a whole stop working. The system is more important than any particular users' data.
Software is assumed to be correct. Any program a user runs inherits the full permissions of that user.
There's some problems:
1. Computers aren't often shared between mutually-untrusted people.
2. My data is much more precious than my computer itself.
3. Malicious software is everywhere. Every time I install a package some stranger wrote on npm or Cargo, I implicitly give it full access to all my data and my entire network.
So, linux protects me from things I don't need protections from (other users) and doesn't protect me from things I do need protection from (malicious code).
> One needs to talk about the threat model(s) you care about and how a particular solution addresses them (or not).
The threat model for malicious code is, I install an apt package / cargo crate / npm package / intellij or vscode extension and the package contains code which either exfiltrates my data over the internet, or cryptolockers it.
iOS (and Android?) don't let code like this run, since software can only (by default) access the data that it itself has created. Ransomware attacks are trivial on linux and impossible on iOS.
It's much more likely that me or my family suffers from a keylogger or ransomware attack than that we suffer as a result of government intrusion into our digital lives. I'm one bad npm install away from having all my data stolen, and it terrifies me.
> It's much more likely that me or my family suffers from a keylogger or ransomware attack than that we suffer as a result of government intrusion into our digital lives.
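To make the "one bad npm install" worry concrete: under the traditional Unix model, any code that runs as your user, including a package's install hook, inherits your full permissions. A minimal sketch (the paths are just common illustrative examples, not a claim about any particular package manager):

```python
import os

# Any script run as your user -- including a package's install hook --
# inherits your full permissions. Nothing in the OS stops it from
# enumerating credential files like these:
sensitive = [
    os.path.expanduser("~/.ssh/id_rsa"),
    os.path.expanduser("~/.aws/credentials"),
    os.path.expanduser("~/.npmrc"),
]

# No exploit needed: a plain os.access check, exactly what a malicious
# postinstall script could do before exfiltrating the files.
readable = [p for p in sensitive if os.access(p, os.R_OK)]
print(f"{len(readable)} credential file(s) readable by this process")
```

A per-app "default deny" model changes the answer to zero unless the user explicitly grants access.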
Are you sure? How would you know? We can't know how many people the government blackmails with data taken from their iphones, because it's illegal to publish information about them doing so, whereas ransomware attacks are widely publicised.
A key component of threat modelling is risk management and modelling.
I would counter the "government blackmailing people" point by questioning the risk this poses to me as an individual. As much as we'd like to imagine it, and as much as it can oftentimes feel like it, we don't live in a Kafkaesque society, by and large; the significant majority of us are of zero interest and have little worth blackmailing us over.
It seems to be routine for rape complainants to have to hand over their phone and have their messages scrutinized, as a pre-requisite to proceeding with an investigation. That is a form of government blackmail.
I disagree entirely with that notion. If you make a serious accusation, you must be prepared to hand over the necessary evidence to assist the investigation and get a conviction. I'll also say that at that point, the risk has changed dramatically. Risk isn't a static thing. It needs to be assessed regularly and evaluated when your threat model changes. Your exposure to risk is still a factor.
Of course. But you shouldn't have to spill your guts about your entire life (I understand that young folk nowadays live their lives in Instagram selfies).
And more to the point, the first step for the policemen investigating a rape complaint should be to investigate the complaint, not the complainant. If the investigation of the complaint raises questions about the complainant, then there might be grounds for seizing the complainant's device. But they can't make device seizure a pre-requisite for doing their job.
The landscape of security switched from the computer to the user.
As a user you need to become root one way or another to install a system program or something that affects the system as a whole, and you are aware that you are doing something with system-wide effects. That security model was meant for multiuser environments, where computers were shared one way or another.
But what matters now about a computer (especially when you are its single user) is your user: your data, your credentials, your network access, and everything else that you as a user (and any app you run) can access or modify. Viruses and malware used to be system threats, but being a user-level threat is enough now.
Things are moving toward applications that are containerized one way or another (Docker, snaps, whatever iOS and Android do, etc.), with limited access to your data, credentials and so on.
> 3. Malicious software is everywhere. Every time I install a package some stranger wrote on npm or Cargo, I implicitly give it full access to all my data and my entire network.
This particular case wouldn't really be prevented by an Android/iOS-type security model, I think? That package will be part of the program you're writing, and chances are that program requires more than the bare-minimum access.
That said, it's not extraordinarily difficult to lock this down, if you really want to. Docker is common, but more traditional tools work as well (e.g. running your program as its own user, maybe in a chroot), and/or using cgroups directly.
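As a small illustration of "grant the untrusted code less than your full user authority" without reaching for Docker: on POSIX systems you can at least cap resources for a child process via `preexec_fn` and `setrlimit`. This is a sketch, not real sandboxing (a dedicated user, chroot, cgroups, or seccomp would go much further), and the limits here are arbitrary example values:

```python
import resource
import subprocess
import sys

def limit_child():
    # Runs in the child just before exec: cap CPU seconds and the size
    # of any file it may create. Illustrative values only.
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))                    # 2 CPU-seconds
    resource.setrlimit(resource.RLIMIT_FSIZE, (1_000_000, 1_000_000))  # ~1 MB files

# Run an "untrusted" build step under those limits (POSIX only).
proc = subprocess.run(
    [sys.executable, "-c", "print('build step ran')"],
    preexec_fn=limit_child,
    capture_output=True,
    text=True,
)
print(proc.stdout.strip())
```

Running the build as its own dedicated user (via `os.setuid` in the same hook, which requires root) is the more traditional version of the same idea.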
This applies even more with things like VSCode extensions, which typically run inside the VSCode process, and without filesystem access VSCode is pretty useless.
The practical application of this is a different story.
But there is a problem with trying to compare this to things like iOS and phones. Phones are strongly application based, it is completely common to have your data locked up in an app and getting to another app has varying levels of difficulty.
In Windows it is much more common for data to be file based, and applications can be launched and ran by clicking the file. File security is based on the user, so typically any application can access files owned by the same user. A huge portion of workflows would break if this were not the case.
The point, I believe, is not about Linux kernel but rather "Linux userland". I'm sure you could replace "iOS" with "Android" and the meaning will stay the same: smartphone OSes go to great lengths to isolate apps and prevent them from messing with the user's data, while desktop Linux does not.
I hope the situation will change once Flatpak becomes more widespread and polished. On paper, it offers a comparable experience to smartphones — you get sandboxing with granular permissions, easy installation without messing with the command line, and so on. In practice, I had enough issues with Flatpak apps breaking in non-obvious ways to make me not recommend it to others. As a recent example, I tried using a JetBrains IDE from a Flatpak and spent quite a bit of time diagnosing issues with paths before resorting to Google and finding out that it's not supposed to work at all (https://intellij-support.jetbrains.com/hc/en-us/community/po...).
If you like Flatpak's issues you'll love Snap! The point about smartphone userlands is a good one. If the desktop modus operandi were similar to how APKs are used then I imagine Linux would have much better security. For now I think that only something like Qubes provides the security and isolation you want without subtly breaking things.
> Chrome pioneered moving each tab to a separate process with full sandbox isolation.
I don't think it was done for security. Before Chrome, all tabs ran in a single process, and it was common for a bad site to stall your whole browser. Splitting tabs into separate processes was basically an admission that, yes, the web browser sucks, so the best we can do is give you the ability to kill a part of it when it misbehaves.
Another benefit they cite is reduced memory fragmentation. Because each tab lives in its own memory space, when the tab closes the OS can reclaim all of its memory. Presumably you'd still get fragmentation, but the OS is probably better able to handle that long term than jemalloc. Clever!
The consumer OS lockdown side does have a lot of interesting points. One I also thought of - I'd bet that, even if we reject web browsers, basically every user's "30 most used apps" has at least one that has a plugin system that loads unverified code, or runs macros in some kind of interpreter that is or may later be proved to be exploitable, or parses structured data from files that can't be trusted using non-memory-safe code, or some other thing I haven't thought of.
This is because the dialogs have gotten rid of the disclosure arrows that gave path information and metadata about the binary. Also, executables used to have names consistent with the naming conventions on the platform. Now you just get dialogs with some cryptic name, and a one line manpage.
The points didn’t age well, but there’s a kernel of truth in there: none of those things will ever work 100%, so if you’re trying to really lock things down you need defense in depth, which was also not a new security concept in 2005, but it was one we were, as an industry, less sophisticated about.
"Defence in depth" is a term with obvious military origins.
You have a relatively thin front-line of defence, with orders to fall back if they are in danger of being overrun. Then you have a very strong second line, manned with assault troops. As the first line falls back, the second line counterattacks.
This strategy was developed by the Germans in WWI, and later adopted by the Russians.
I disapprove of its use in computer security. There, it means something different; it means basically having multiple lines of defence, without any notion of counterattack.
The future is probably a two tiered system like what Apple is showing with Lockdown Mode. Normal users get the full speed fully functional system. And those who are at risk of being targeted use a locked down system with less convenience features but more security.
Along with better languages and tooling ruling out entire classes of exploits.
As an old I strongly object to the corruption of the terms "hacking" and "hacker" in the diatribe following this heading. I'm a fan of hacker culture, in the old sense, and encourage our developers to adopt a hacker mindset when approaching the problems they're trying to solve. Hacking is cool.
> Wouldn't it be more sensible to learn how to design security systems that are hack-proof than to learn how to identify security systems that are dumb?
That’s like saying “Why don’t they just design locks that are unpickable?”
They’ve been working on that, for a while. But you need to know what you’re protecting against. Anyone who watches The Lock Picking Lawyer knows about the swaths of new locks vulnerable to comb attacks - a simple attack that had been solved for almost a hundred years but somehow major lock manufacturers forgot about.
You can’t build something safe without considering potential vulnerabilities, that’s just a frustratingly naive thing to say.
To take the strongest form of the author’s argument, his point is that it’s not possible to take a pile of terrible code with no security, and fix all the problems in it. It’s better to architect it in a way that provides security (e.g least privilege everywhere, sandbox, memory safe languages, etc.).
I think the author could have phrased it better, in that the best approach is having a good security design, and then taking out all the bugs it couldn’t cover.
Back in 2005, the idea that you shouldn't run every bit of executable code sent to you was drilled into people. Nowadays you can't use a commercial/institutional website without doing the modern equivalent of opening random email attachments.
You also use an OS and browser which is space age technology compared to what they had in 2005. Back then a kid could write an email to install a rootkit on your computer. Now you'd get paid $100k+ if you could work out how to do that.
It also used to be common knowledge that if someone has physical access to your device, it's game over. That is rapidly becoming untrue. If I hand my macbook to a friend for a day, I can be quite confident they haven't been able to defeat the boot chain security and replace my kernel with a malware version, like you trivially could in pre-secure-boot environments.
Another piece of common advice was to not use public wifi because anyone could steal your password or credit card details. Security advice from 2005 really hasn't held up much at all.
Speculative execution, sandbox exploits, etc, etc. I thought everyone (myself included) stopped believing in the power of VMs/containers/sandboxes to protect you when all that happened (and kept happening). And it's just getting worse as the JS engine(s) get access to more and more bare metal features and become a true OS in more than just spirit.
Thus all the crazy insistence on CA TLS in modern web protocols like HTTP/3, which can't even establish a connection without CA-based TLS hand-holding.
Looking back on that era, the hate towards hackers feels really misplaced. Yeah, at the time it was more local and more dominated by people doing it for the lolz, but we kinda owe them a debt of gratitude. If they hadn't gotten everyone to stop being lazy about security, we'd be in a very different place now, surrounded by rogue states and agencies launching hyper-sophisticated attacks on infrastructure and data. That was also the era that trained the current generation of cybersecurity experts.
It wasn't misplaced... There were some horrific pieces of "hacker" software floating around in '05. It wasn't uncommon for a disgruntled employee to load malware onto a company's network and bring down operations for weeks. Case in point: a douchebag loaded malware into a small financial company's network that I wound up working for. The virus infected the boot sector and forced the company to do low-level formats on all of its hard drives. They lost immense amounts of money and respect in their industry and barely recovered. That virus was developed by some garbage hacker boy for laughs.
In fairness, there aren't a whole lot of ways left to run around corrupting the boot sectors on an entire network. Given current politics I'd rather have everyone learn how to enforce user access control in '05 rather than in '23.
>Wouldn't it be more sensible to learn how to design security systems that are hack-proof than to learn how to identify security systems that are dumb?
Sure, but how does one get the knowledge on how to secure systems? Half the job of a security engineer is thinking like an attacker and trying to poke holes in it. Key mitigations like ASLR and stack canaries are so effective because they specifically block off key resources and techniques that attackers use. It would be downright impossible to invent these mitigations (or even meaningfully understand them) if you did not already have a firm grasp on memory corruption and ROP. I'm not sure it's an argument I actually care to defend, but I do honestly believe that you can't be a strong security engineer if you don't have a grasp on the techniques your adversaries use.
With respect to this example, I think he is saying it would be better if we were using memory safe languages, rather than trying to come up with these sorts of mitigations (which is enumerating the bad). Of course it’s probably not possible in every scenario because we’ve been doing it badly for so long, but I think the principle still holds.
This has nothing to do with ASLR and stack canaries. Log4Shell wasn't a buffer overflow exploit; it was the result of yet another dumb idea: adding remote JNDI loading capability into the logging framework.
You can assume any input to your program will be manipulated by an attacker. This implies that if you use a non-memory-safe language, you'll need to make sure there is no way the user can input enough data to overflow your buffer and corrupt your memory, and get that right 100% of the time. If you're building a logging framework, it's extremely likely people will be logging some sort of information from the outside, so it's not a great idea to execute it as code. Similarly for SQL injections: if you simply use prepared statements, you remove a whole class of problems. Knowing an attacker will probably inject some garbage into the input of your program, and assuming user input is malicious, is a basic principle you can use to design better systems. I believe this is what the author meant by his statement.
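The prepared-statements point is easy to demonstrate. A minimal sketch using Python's built-in sqlite3 (table and values are made up for illustration): the interpolated query lets hostile input rewrite the WHERE clause, while the parameterized one treats the same input purely as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

hostile = "alice' OR '1'='1"

# Bad: string interpolation lets the attacker rewrite the query,
# so the WHERE clause becomes a tautology and matches every row.
rows_bad = conn.execute(
    f"SELECT * FROM users WHERE name = '{hostile}'"
).fetchall()

# Good: a prepared statement binds the input as a value; no user named
# "alice' OR '1'='1" exists, so nothing matches.
rows_good = conn.execute(
    "SELECT * FROM users WHERE name = ?", (hostile,)
).fetchall()

print(len(rows_bad), len(rows_good))  # 1 0
```

The whole class of bugs disappears by construction, which is exactly "design for security" rather than "enumerate the bad inputs".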
Well, it isn’t easy and there is no silver bullet. In practice must use your engineering judgment and THINK about these tradeoffs for every problem you encounter.
That being said, there are some principles you can think about to help you get the tradeoffs right when you encounter a problem.
The main principle the author discussed is the idea of enumerating the good, rather than enumerating the bad. Deny everything except the good by default, and do it at every level of your system. This is a good idea to consider, but may not apply to everything.
If there is something you don't control, you are taking a risk, so understand it and limit its potential impact. In some cases it might be better to use a tried and true library or roll your own vs using some fancy new dependency - or maybe not, that's for you to think about, but it is worth considering carefully.
Try to keep things as simple as possible, and favor technologies that are easy to understand, well documented, well maintained, and hard to shoot yourself in the foot with over things that are fancy and cool.
For example say you’re building a distributed system.
At the network level, only allow the types of traffic you need, so don’t allow incoming traffic you don’t need, and don’t allow outbound traffic you don’t need. This means it’s going to be much harder for an attacker to get in, or exfiltrate data out. Use secure channels, for example mTLS where both sides authenticate each other.
At the application level, think about what data the user has control of and treat it carefully. Is there a way you can authenticate the user is valid, and authorize them to only perform certain actions that are allowed - can you use something like signatures to ensure that every subsystem can verify the data isn’t tampered with.
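One concrete way subsystems can "verify the data isn't tampered with" is an integrity tag on every message. This sketch uses an HMAC, i.e. a shared-key MAC from Python's standard library; true signatures would use asymmetric keys so verifiers never hold the signing secret. The key and payload here are placeholders:

```python
import hmac
import hashlib

# Shared secret between subsystems. In practice this comes from a
# secrets manager; hardcoding it here is purely for illustration.
KEY = b"example-shared-secret"

def sign(payload: bytes) -> str:
    return hmac.new(KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, tag: str) -> bool:
    # compare_digest avoids leaking information via comparison timing.
    return hmac.compare_digest(sign(payload), tag)

msg = b'{"user": "alice", "action": "transfer", "amount": 100}'
tag = sign(msg)

assert verify(msg, tag)                      # untouched message passes
tampered = msg.replace(b"100", b"9999")
assert not verify(tampered, tag)             # any modification is caught
print("tampering detected")
```

Each subsystem that receives the message can re-verify the tag instead of trusting whatever hop handed it over.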
At the technology level, yes, think about your dependencies, and keep things as simple as possible. This makes the system easier to secure but also easier to maintain, reduces the risk of vendor lock-in, etc. Depending on what you are dealing with, you might actually want to have someone audit all your dependencies; or, if that's too expensive, maybe you can isolate the parts of the system that deal with sensitive information so those subsystems don't use many dependencies. Your value proposition as an engineer is not to just string together code, but to build useful, reliable, secure, flexible software and juggle the tradeoffs. The dependencies you choose, and the way you choose to use them, DO matter. Just like someone designing a bridge must use materials manufactured by a 3rd party, and assembled by another 3rd party, they must be careful with who they select and perform their own testing to ensure things will work. But those tradeoffs are going to be completely different than if you make cheap toy drones, for example.
So basically in practice think carefully about the risks, costs, benefits and what tradeoffs are worth making. Keep relevant principles in mind like: favor only allowing the good, rather than trying to enumerate the bad, assume threats at every level, avoid foot-guns, favor simplicity, favor tested/trustworthy dependencies.
In practice, for example, I import openssl libraries to get mTLS, even knowing the history of CVEs they had over the years, because I know I'm definitely going to do a worse job at implementing it, and not implementing it is also worse.
So now, I knowingly included a bad-but-less-bad thing to avoid the bad-bad things. Now I have to keep myself aware of the bad things from the less-bad library that comes up from time to time in the form of CVEs. Those CVEs are "enumerating the bad". In theory I should be able to write a bulletproof mTLS library myself (or convince somebody else to), but apparently this thing doesn't exist, and the only real alternative is to wait for other people to enumerate CVEs from time to time and keep patches up to date.
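The "use the vetted library, don't write your own" tradeoff usually looks less like implementing TLS and more like configuring it. As a sketch, here is the server side of mTLS using Python's `ssl` module (which wraps OpenSSL); the certificate paths are placeholders, so loading them is left commented out:

```python
import ssl

# Server-side mTLS: besides presenting our own certificate, require and
# verify a certificate from the client.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED      # reject clients without a valid cert
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Placeholder paths -- real deployments load their identity and the CA
# that is trusted to issue client certificates:
# ctx.load_cert_chain("server.crt", "server.key")
# ctx.load_verify_locations("client_ca.crt")

print(ctx.verify_mode == ssl.CERT_REQUIRED)
```

All the hard cryptography (and its CVE history) stays in OpenSSL; your job is mostly not to weaken the defaults.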
Not convinced these are the dumbest (none of them is quite as dumb as requiring special characters in passwords, for example, and I'm not sure the fourth is dumb at all), or that they're six ideas. The first two are the same, and the third one is a special case of the same thing.
It's super interesting to read this list as someone young enough that the first time I was ever prompted to consider computer security was in a college course almost a decade after this was written. Although different terminology was used, some of the ideas, like "Default Permit" and "Enumerating Badness", were so heavily discouraged when I first started studying that it's almost hard to imagine them being considered good practice so recently before (although even today they're common enough that it's still worth calling out, so maybe this wasn't uncommon knowledge at the time either).

On the other hand, the next two ideas, "penetrate and patch" along with "hacking is cool", certainly don't seem to be as reviled as the author would like, and I don't think that the latter was a dead idea within a decade like they suggested. Trying to interpret them charitably, I could believe that the intention here was to decry the lack of proper threat modeling that was done in advance at the time (which still is a real issue today). On the other hand, reading it at face value sounds like the idea that if you think enough in advance and just "don't write bugs" that your product will be 100% secure and never need any patching, which I don't think is a good take.

I'd counter that it's essentially the same as the fallacy they mention later, "We don't need host security, we have a good firewall"; proper design up front is a good "firewall" to stop bugs from coming in, but it's not a substitute for having proper mitigations for when they do inevitably occur.
I’m feeling old remembering reading this at the time and being glad that it was getting pointedly directed to certain large vendors.
I think the key part of “penetrate and patch” is rejecting the idea that you can hire a tester, patch a couple of holes, and otherwise not change anything. It’s the difference between being _shocked_ that your C++ has another memory safety issue after someone exploits it or using tools like Rust, sandboxes, static analysis, etc. to avoid having an exploitable vulnerability in the first place.
The major confound here is that a lot of companies realized there aren’t actually many penalties for releasing unsafe software, and decided that throwing bodies at patching was cheaper. I’m reminded of how many antivirus programs had basically 90s-level C code running with system privileges because the owners decided it’d cost too much to rewrite it until Tavis Ormandy started fuzzing them. I doubt many customers switched despite clear evidence that those vendors had serious deficiencies in their development processes.
What I think happened is that with computing, humanity began to build a new world, a Different World that's not like the other, old world outside. But since humans were building it, it became just like that. It has the same buildup, the same issues, the same dumbness as the original, real world.
#1: Default permit: people don't like to spend energy, especially not upfront. Integrating "permit by default" systems is much faster than setting them up with proper authentication, authorization and access rights. Default permit just works, starts quickly, and runs fast.
#2: Enumerating badness: you mean, like how we name every single strain of viruses? So now we enumerate computer badness too.
#3: Penetrate and patch: very similar to how our laws work, I think. There are people who create injustices, and later the legal code is upgraded to handle that. Again, reactive, like in #1.
#4: Hacking is cool - well, other criminals are cool too, like pirates and mafiosos, and so on. People are drawn to power.
#5: Educating users: someone has to, don't they, if they haven't learnt the thing by themselves? You can't make everyone go away if they are dumb, if you need them.
#6: Action is Better Than Inaction: This one, I think, imitates business. There's a lot of ways to make money in business, and being there early is one of them.
That said, I really enjoyed the article. Permit by default is especially dumb; it was really funny when mongo installed itself with no password and listened on a public IP on the default port. And how long it took them to patch that. And how that hasn't burned the public goodwill! So maybe these things are not really dumb after all?
> A few years ago I worked on analyzing a website's security posture as part of an E-banking security project.
Cool, so a pen test?
> One of the best ways to discourage hacking on the Internet is to ... pay them tens of thousands of dollars to do "penetration tests" against your systems, right? Wrong! "Hacking is Cool" is a really dumb idea.
Most of these are well thought out and still relevant 17 years later. #4 -- particularly the "don't learn offensive security skills as a defender" idea -- was dumb in 2005, and it's dumb now. It's also, unsurprisingly, not advice the author has himself followed.
> but the second version used what I termed "Artificial Ignorance" - a process whereby you throw away the log entries you know aren't interesting. If there's anything left after you've thrown away the stuff you know isn't interesting, then the leftovers must be interesting. This approach worked amazingly well, and detected a number of very interesting operational conditions and errors that it simply never would have occurred to me to look for.
As a sysadmin, I took this approach as well. On the local machine, the server(s) would log normally. But when I set up centralized logging, I set up a list of log entries that wouldn't normally interest me day-to-day. The server would only send to a central logging server things that weren't on this list. What was left were usually problems that I would need to pay attention to, and they got fixed faster.
The rest of the uninteresting log entries would just be audited from time to time.
On the matter of security, every user that logged in on a daily basis got logged with their IP address. Any time a user logged in from a different IP, it would get logged to the central log server and I would be notified. Most of the time it was harmless, but there were enough times I would find a compromised account in a sea of normal day-to-day login activity.
When your logs are full of normal things in it, it's easy to miss important details.
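The "artificial ignorance" filter described above is a few lines of code: enumerate the log lines you know are boring, forward only the leftovers. A minimal sketch (the patterns and sample lines are made up for illustration):

```python
import re

# Lines known to be uninteresting. Everything NOT matched here is a
# "leftover" and gets forwarded for a human to look at.
BORING = [
    re.compile(r"session opened for user \w+"),
    re.compile(r"CRON\[\d+\]"),
    re.compile(r"health-check OK"),
]

def interesting(line: str) -> bool:
    return not any(p.search(line) for p in BORING)

logs = [
    "sshd: session opened for user alice",
    "CRON[1234]: job started",
    "kernel: EXT4-fs error on sda1",   # leftover -> needs attention
    "health-check OK",
]

leftovers = [line for line in logs if interesting(line)]
print(leftovers)
```

Note this is the opposite of a grep-for-known-errors setup: you enumerate goodness, so novel failures surface by default instead of being silently missed.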
I have the idea of doing spam-detection-style Bayesian analysis on logs. The theory: you feed it your log stream, and those become your normal logs; if the stream starts deviating from normal, the statistical analysis starts to pop warnings. If it deviates for too long, that becomes the new normal.
At this point I am elbow-deep in Bayesian email code trying to work out the nuts and bolts of the operation. One important trick is that you need a location-aware hash to feed into your statistics engine. A better hash would utilize the structure of log lines, but categorizing logs is big, messy, yak-shaving sort of work. Perhaps a worse, more generic hash would be good enough.
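The core of that idea fits in a short sketch: train token frequencies on "normal" lines, then score new lines by the total surprise (negative log probability) of their tokens. The digit-masking tokenizer below plays the role of the crude "generic hash" mentioned above; everything here (training lines, smoothing) is illustrative, not a production detector.

```python
import math
from collections import Counter

def tokens(line):
    # Crude location-insensitive bucketing: mask digits so that
    # "pid=1234" and "pid=5678" land in the same bucket.
    return ["".join("N" if c.isdigit() else c for c in t) for t in line.split()]

# "Normal" training stream.
normal = [
    "accepted connection from 10.0.0.5",
    "accepted connection from 10.0.0.7",
    "request served in 12 ms",
    "request served in 9 ms",
]
counts = Counter(t for line in normal for t in tokens(line))
total = sum(counts.values())

def surprise(line):
    # Sum of -log P(token), with add-one smoothing so unseen tokens get
    # a finite but large surprise instead of infinity.
    return sum(-math.log((counts[t] + 1) / (total + 1)) for t in tokens(line))

# A line made of familiar tokens scores low; a novel one scores high.
print(surprise("request served in 11 ms") < surprise("segfault in worker thread"))
```

A real version would decay the counts over time, which is exactly the "if it deviates for too long, that becomes the new normal" behavior.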
The article is from 2005, while NemID and MitID were rolled out around 2010 and 2021, respectively. That nit-picking aside, would you be willing to elaborate on your problems with the concept of NemID/MitID as a whole?
And thank you for your work. The JS based NemID login was a huge improvement over the earlier, Java based implementation.
big unload coming - (tl;dr: maybe my NemID issues are just silly and paranoid and not really something that would actually happen, or maybe Danish criminals are not ambitious enough; and my MitID issue is that the process for handling a forgotten password is broken)
My problems with NemID: it always struck me as a security issue that a large number of people were using their person numbers as their IDs for NemID services. Sure, you could change it, but I'm not sure how many did. The passwords were case-insensitive, and it was played up that you didn't need to worry about that, that they could be real simple. So the only real line of defense was the nøgle card, and a lot of people used the paper version of that.
Personally, if I'd been a crime lord during NemID's heyday, I would have tried to get pictures of rich people's nøgle cards: have burglars hit the whiskey belt, and when you find a card, take a picture. Then the only real issue is finding the ID and password; the ID is probably the personnummer, and the password is probably simple and might be easy to find (or put some spyware on their computers). But this didn't happen as far as I know, so maybe there are reasons why it isn't that good a plan anyway and I'm just a paranoid guy.
MitID bugs me because of the process when a user forgets their login or something otherwise goes wrong, which is that you get random questions drawn from the personal register in borger.dk. My wife (who is Italian) had a problem with her MitID and had to reset it; she got asked what her address was and what her children's names were, which I submit would be real easy for an attacker to find out.
I had a problem I got asked my mother's maiden name, what age she got married at, what month she was born, where I live, what year and month we moved in our house, and what sogn I was baptised in.
Now I submit those questions are reaaaallll cool and easy to answer for any good and proper Danish family that has never had any problems for the last few generations, but as it happens I was estranged from my parents. I don't offhand know where I was baptized (I was born in Rigs but baptized somewhere in Jylland because of a trip to visit grandparents, IIRC). I'm not sure when my mother married my father: whether she was 18, 19, or even 20. I couldn't remember what month she was born, but my wife could, because it was the month before her own mother was born.
We rented our house for nearly 9 months before buying, so trying to remember what exact month we bought it in would be difficult. And of course we had transferred our address to the house before the purchase, because we were living there and intending to buy, but the person asking the questions wouldn't even clarify whether they wanted the date we registered as living there or the date we bought the house; they did, however, urge me to "take a guess".
The process, as I said, is beneficial for people with perfect families. But in a family where people got divorced, didn't talk to each other, and were drunks, like mine, I get screwed over by it. The process also seems beneficial to people from outside Danmark, as they will have a less extensive record in the borger register for random questions to be drawn from, hence the easiness of the questions my wife received.
I have requested clarification from Digitaliseringsstyrelsen as to what the background and technical discussion was related to the decision to use these randomized questions as I would like to write a longer article about how stupid it is, also because I can think of several ways in which I think malicious actors might be able to get access to that data relatively easily and answer the questions easier than an average citizen.
But they don't seem to understand what I mean when I say I want the background and technical discussion, by which I mean the kind of meeting notes that get taken when implementing a standard (such as when I worked on Efaktura: one element was considered informative, but unfortunately that did not make it into the bekendtgørelsen, though we obviously had the meeting notes to refer to as to how it was informative and not to be used in any calculation of the faktura).
on edit: I have done a mix of English and Danish here, mainly English so everyone can follow; some Danish terms because I figured not that important.
With regards to the passwords, I somehow didn't catch that they were case-insensitive back when I created my account, so I used a mixed-case password for NemID for the longest time. Boy did I feel silly when I discovered this fact by accident.
I also didn't know that this was how the recovery process went, and I can easily see it causing problems for a lot of people. I'd probably also have trouble answering those kinds of questions.
So I want notes where one senior guy says "I think we should pull randomized questions from the citizen data", and then either everyone agrees it's a great idea, or there is a bunch of discussion in which they actually bring up the points that I find painful but have smart reasons why it has to be that way anyway, or somewhere in between those two poles.
I agreed with much of the article and points made. Maybe I'm missing something (if so, would love to learn!) but I felt that the "Penetrate and Patch" section was a little naive.
> Let me put it to you in different terms: if "Penetrate and Patch" was effective, we would have run out of security bugs in Internet Explorer by now. What has it been? 2 or 3 a month for 10 years?
I agree with the point that "Penetrate and Patch" shouldn't be the primary strategy, but the author seems to write it off entirely with a viewpoint like "you should just write software and build systems that don't have security bugs". Well yes, of course that would be nice, but that's not feasible. And some software is much more difficult to get right than other kinds.
"Penetrate and Patch" is a useful piece of security in that (a) it catches what slips through the cracks, (b) it provides an incentive mechanism to get things right in the first place, and (c) since it simply isn't possible to build bug-free systems, you need some way of dealing with the bugs that inevitably ship.
The author cites "Penetrate and Patch" finding bugs every month as evidence that it's bad, but isn't it the opposite? You cannot be bug-free, so any incremental progress and fixes are in fact good.
All that said, I do agree that all of this starts with secure by design. "Penetrate and Patch" isn't a good primary strategy and cannot replace Doing It Right. But I think it complements it well.
> My prediction is that the "Hacking is Cool" dumb idea will be a dead idea in the next 10 years.
… that won't age well, and apparently, that didn't age well. It won't happen in the next 10, either.
Nor will good engineering save us: as an industry, we (a) dislike the very idea that knowledge is required for software engineering, and (b) meet every "Rust fixes this entire class of bugs, permanently" with "oh god, not the Rust evangelists" … yeah, the bugs will continue.
> My prediction is that the "Hacking is Cool" dumb idea will be a dead idea in the next 10 years.
That didn't age well. In an era of growing corruption in government and business alike, hacking becomes an important way for people to actually learn anything about their overlords' shady deals.
How about "our users can't tell the difference between a DoS attack and us having screwed something up", plus "the people who want to sue us for sucking are at war with the people who want us to look successful so they can get a promotion for hiring good vendors", etc.
This has aged poorly; nowadays, the most notable attacks are conducted by state actors (e.g., Russia and China) or for-profit criminal groups (e.g., ransomware) rather than lone hackers doing it for fun.
I guess this man's internet heaven is filled with lobotomized users who can only exchange emails with a list of approved correspondents and browse only whitelisted websites. He, of course, gets to approve the lists.
> One of the best ways to get rid of cockroaches in your kitchen is to scatter bread-crumbs under the stove, right? Wrong! That's a dumb idea. One of the best ways to discourage hacking on the Internet is to give the hackers stock options, buy the books they write about their exploits, take classes on "extreme hacking kung fu" and pay them tens of thousands of dollars to do "penetration tests" against your systems, right? Wrong! "Hacking is Cool" is a really dumb idea.
That's, like, entirely unrelated. Black hats are motivated by monetary gains, not scout badges. The proliferation of the internet made "for fun" hackers a minority and an irrelevant factor (or a benefit, as they might actually report a bug instead of sowing mayhem) when it comes to security.
Yeah, the cockroach analogy is kinda bad. A more apt analogy would be that you can either let rodents help themselves to your food supplies on their own terms, or you can set up traps with a little bit of cheese on them.
The traps with a little bit of cheese here being the offer of a viable, low-stress way for hackers to earn income and the respect of society by doing ethical work, which they'll prefer over the higher-risk (if higher-gain) illegal activity they'd otherwise contemplate and perpetrate.
Similar mechanics in many ecosystems. Carrot and stick work best together.
I think current practices would be better described in ecosystem terms as:
"If a mammal is eating your food, you can adopt a bigger one to prey on them so you share a little bit of food on your own terms instead of compromising the whole community's supply".
> A better idea might be to simply quarantine all attachments as they come into the enterprise, delete all the executables outright, and store the few file types you decide are acceptable on a staging server where users can log in with an SSL-enabled browser
An odd suggestion in an otherwise relatively uncontroversial article. It implicitly trains your users in a bunch of unpleasant things:
* clicking on some URL in an email, typing your password into whatever webpage pops up, downloading the blob it serves you and opening it (after clicking through the browser's "this was downloaded from the internet, are you sure?" warning) is a perfectly normal and legitimate part of the working day
* one needs to find ways to obfuscate documents of types that aren't on the IT whitelist so one can send them to one's colleagues so they can do their jobs (and no, the corporate whitelists never capture everything people urgently need to share in order to do their jobs)
* since everyone now does that habitually, receiving an automangled email with a link to an attachment which has its actual payload contained in several layers of archive obfuscation wrapper is perfectly normal because that's just what you have to do to share stuff with your colleagues now
These could, of course, be mitigated by suitably educating users, but since the practice is advocated in a section about user education never working, that is unlikely to happen.
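For what it's worth, the core of the quoted quarantine scheme is trivial to sketch. A toy version in Python, with hypothetical extension lists (a real filter would inspect file contents, not just names):

```python
import os

# Hypothetical whitelist/blacklist; a real deployment's lists would differ.
ALLOWED = {".pdf", ".txt", ".png", ".jpg"}
EXECUTABLE = {".exe", ".bat", ".com", ".scr", ".js", ".vbs"}

def triage_attachment(filename):
    """Decide what to do with an incoming attachment, by extension only.

    Returns "delete" for executables (dropped outright), "stage" for
    whitelisted types (moved to the staging server), and "quarantine"
    for everything else (held for review).
    """
    ext = os.path.splitext(filename.lower())[1]
    if ext in EXECUTABLE:
        return "delete"
    if ext in ALLOWED:
        return "stage"
    return "quarantine"
```

Note that a `.zip` wrapper lands in the "quarantine"/review bucket, which is exactly the dance that teaches users the obfuscation habits described above.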
I think this is a little less bad in context: in 2005 Gmail was a year old, most people used a dedicated email client such as Outlook or Mail.app, and his view was focused on corporate users. In that world the flow would be far more defensible, which makes the first point a little more reasonable:
1. Your desktop application shows a list of attachments in the navigation chrome where a message can't display content.
2. When you click on something in that list, Internet Explorer or Firefox seamlessly logs you into the server using Active Directory.
Storing things on a server was also more relevant in the era where space was limited and services like Exchange were famously difficult to scale or customize. If you didn't have good tools to retroactively yank a message out of everyone's inbox when your AV signatures were updated an hour after it arrived, storing it on a server you controlled had a certain practicality.
Your second and third points are spot-on, however, and really hit at a key principle too few security teams appreciate: normalization of deviance. This approach fails badly in the real world where IT security says “don't open attachments from people you don't know” and everyone's manager says “oh, it's totally normal to get passworded ZIP files from the HR services subcontractor. Open it, we have a deadline!”. The real lesson here should be defense in depth so your organization's security isn't jeopardized when one person opens the wrong email.
Devs want to make secure systems, but they have VERY LIMITED TIME. Security is always #1 in the bullet points of a priorities presentation, and always a distant priority for the boots on the ground shipping features and keeping shit running.
What I've noticed is that the security team doesn't want to be responsible for cleanup or for doing lots of work or engineering. They want to make presentations for upper management, pick some enterprise partners to impose on the orgs, and kick back in their offices. Most know little about cryptography or major incidents. If a great security practice, like syncing ssh keys, requires a bit of legwork, they don't want to do it.
They'd rather load down the devs. They'd rather come in and review the architecture than provide drop-in solutions. If something needs customization to interface with SSO or to obtain credentials, they drop the integration in the devs' laps. Who's supposed to be the experts here? The security team should own whatever craptastic enterprisey shit they select, and ALSO be responsible for making it useful to the dev org.
The biggest example of this is the desire for "minimum permission". Take AWS, with its explosive number of permissions, old and new permission models, and very complicated webs of "do I need this permission" and "what permission does this error message mean I'm missing". And ye gods, the dumb magic numbers in the JSON, but anyway. If the security team wants AWS roles with "minimum viable permission", THEY need to be the experts in the permission model and craft these VERY COMPLICATED permission sets FOR THE DEVS. And if the devs need more, the security team needs to provide new permissions very quickly (say, within half a day) when some new S3 bucket or AWS service is needed. But security teams don't want to do such gruntwork.
2) recognize that automated infrastructure is the rule, not the exception, aka the devs are not the enemy
It took sooo long for ssh keys to become prevalent enough in development that people weren't ssh'ing in using passwords. Like, decades. This practice represented a big leap in administration productivity and was probably more secure too.
And you could automate on top of it in shell scripts, not leave passwords in .history, lots of good things.
And the security industry wants you to undo it. It wants TOTP codes from your phone hand-typed, wants a web page to pop up to grant temporary credentials, and pretends you know how long your process will run so those temporary credentials won't expire; and if they do expire, what, you're supposed to manually re-authenticate?
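For what it's worth, the TOTP codes in question come from a simple, fully automatable algorithm (RFC 6238). A minimal stdlib sketch, using the RFC's published test secret purely for illustration:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the count of 30-second steps
    since the Unix epoch, dynamically truncated to N decimal digits."""
    key = base64.b32decode(secret_b32, casefold=True)
    now = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(now // step))
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's test secret ("12345678901234567890" as base32), used here
# only so the sketch can be checked against the RFC's test vectors.
RFC_SECRET = base64.b32encode(b"12345678901234567890").decode()
```

Nothing about this requires a human to hand-type anything; the friction is a policy choice, not a technical necessity.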
Security at my last job wanted an ssh replacement to be used (the enterprise security industry is waging war on ssh/sshd) that if I used it from the command line IT POPPED UP A BROWSER PAGE. And no way to automate this for any task.
In general, security teams seem obsessed with making devs' lives as hard as possible. Are most leaks via dev channels? In my experience the BIG leaks are "County Password Inspector", phishing, and disgruntled or angry employees selling access. Well, and credentials checked into GitHub. Most places I've worked have involved this steady slide into less and less usability for the devs, at GREAT cost to productivity, for questionable payoff in actual platform security.
Meanwhile, no joke, SSL on internal password reset sites was using such poor algorithms that Chrome refused to display them. GitHub repos that shouldn't have been were open to the public. Passwords limited to 8 characters, with restrictions on which characters you could use.
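To make the AWS "minimum viable permission" gripe concrete, here is a toy evaluator over an IAM-style policy document. The policy and the matching rules are heavily simplified assumptions on my part; real AWS evaluation adds explicit denies, conditions, resource ARN semantics, permission boundaries, and more, which is exactly why crafting these sets is expert work:

```python
from fnmatch import fnmatchcase

# A hypothetical "least privilege" policy for one small app. Even this
# trivial S3-plus-logs workload needs five distinct action strings.
POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": "arn:aws:s3:::app-bucket*",
        },
        {
            "Effect": "Allow",
            "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": "*",
        },
    ],
}

def is_allowed(policy, action, resource):
    """Toy check: default deny, allow if any Allow statement matches.

    Only illustrates the default-deny shape that produces those
    "what permission does this error mean I'm missing?" moments.
    """
    for stmt in policy["Statement"]:
        if stmt["Effect"] != "Allow":
            continue
        actions = stmt["Action"]
        if isinstance(actions, str):
            actions = [actions]
        if any(fnmatchcase(action, pat) for pat in actions) and \
                fnmatchcase(resource, stmt["Resource"]):
            return True
    return False  # default deny
```

The moment the app touches a new bucket or service, someone has to extend this by hand, and the error messages rarely say which statement is missing.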
I don't think the author intended to say that you can prevent all problems, I think they meant you can't just shrug and say "we can't help but get hacked". You can stop all problems from becoming critical, which is what airlines attempt to do.
They talk earlier about defense in depth, so it's obvious that they're not oblivious to the need for redundant safety measures:
> "We don't need a firewall, we have good host security" - no, you don't. If your network fabric is untrustworthy every single application that goes across the network is potentially a target. 3 words: Domain Naming System.
> "We don't need host security, we have a good firewall" - no, you don't. If your firewall lets traffic through to hosts behind it, then you need to worry about the host security of those systems.
Maybe I'm being too harsh, but my interpretation of that point is that they expect we'll eventually become perfect, which isn't going to happen in the software world as it hasn't happened in the airline world, even though the airline world has more incentives to be perfect in the form of more penalties when it isn't.
As long as software has bugs and accepts user input, there are going to be ways to make it do things it shouldn't. You can avoid running specific known vulnerabilities. You can avoid creating certain kinds of dumb and obvious ones. But barring, like, formal verification, it is always possible for someone smarter or more patient than the original software development team to think real hard and come up with an edge case they didn't. And operational or system-level controls can only do so much about that.
Preventing every security vulnerability is the same problem as writing bug-free code. And that is manifestly not happening, not even in the most sophisticated software development operations in the world.
Right; which is why all the things on that list are so important. We can’t seem to stop the endless flood of memory bugs in C/C++ code. IIRC, around 65% of security issues in Chrome are due to memory bugs. But we can move to Rust and friends, where those bugs are a lot harder to write.
We’ll never get the bug count to 0. That isn’t the goal. The goal is to get the number of in-the-wild exploited vulnerabilities as low as possible. And there’s all sorts of ways to move the needle on that, which don’t require humans to suddenly become infallible.
While I agree with your general point I have to disagree the particulars here.
The ground crew did not refuel the plane because the pilots did not request refuelling. No one "forgot" to refuel, least of all the ground crew.
The pilots did not request fuel because they thought they had enough. And they thought they had enough because they made a unit conversion error in their calculations. (There are even more layers and twists and turns to this cheese lasagne, but no one "forgot" to refuel, that is for sure.)
I am disagreeing because this person doesn't understand the concept of defense in depth: occasional problems will happen, will ye, nill ye, and the best you can do is, as you say, prevent them from becoming worse. Thinking airliners don't have occasional problems misses a lot of what the air industry does that we could implement in other realms.
He clearly does elsewhere, so I would suggest reading this more charitably with the assumption that you’re talking about the same idea from different perspectives. If I’m the passenger, I don’t even know about something which is caught by a checklist or redundant hardware before it progresses. If I’m the pilot or mechanic, the reverse is true. In both cases, what matters is the spirit of the point: saying something is too infrequent to prevent is defeatist.
The aviation industry mostly protects against random failures and human mistakes, not targeted attacks, so the comparison is silly from the start.
There are lessons to be learned, but they are about building resilient systems, not secure ones. All of the "security" of airplanes pretty much relies on the pilot noticing something is wrong; without that, a man with an SDR could fuck up a lot of stuff.
>> "We can't stop the occasional problem" - yes, you can. Would you travel on commercial airliners if you thought that the aviation industry took this approach with your life? I didn't think so.
Also, security is like reliability/uptime: You pay for every nine. You want to keep the server up on a best effort basis and have only the most basic security? Cheap, easy, minimal time investment. You want 1 minute down per month and good security? It'll take time and effort. You want... IIRC airplanes have a failure rate around 1 in 14 million flights, give or take? How many billions of dollars do you have? Because quality still ain't cheap.
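The "pay for every nine" point is easy to put into numbers. A quick sketch of the allowed downtime budget per availability level (pure arithmetic, assuming a 30-day month):

```python
def downtime_per_month(availability, days=30):
    """Allowed downtime in minutes per month at a given availability."""
    minutes_in_month = days * 24 * 60  # 43,200 minutes
    return minutes_in_month * (1 - availability)

# Each extra nine cuts the allowed downtime tenfold, while the cost
# of actually achieving it grows much faster than that.
for nines in (0.99, 0.999, 0.9999, 0.99999):
    print(f"{nines:.5f} -> {downtime_per_month(nines):.2f} min/month")
```

That "1 minute down per month" target sits somewhere past four nines, before you've spent anything on security at all.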
Is there any real equivalent process for tech? It seems like the majority of security certifications are a box-checking exercise, where actually being secure has very little relation to how many boxes you checked.
Also, "penetrate and patch" is definitely at work in the airline industry. Yes, airliners are well designed as safe systems, but every now and again a problem does occur, to varying degrees of seriousness, and when such a problem is identified, it is patched.
Defence in depth. Sure, design a system well. But "penetrate and patch" is another layer of protection. I mean, if you find your system is penetrated, what else can you do right now but patch it?
No billion-dollar company is anywhere near as glib as the author makes them out to be. Maybe my experience varies, but the European companies I've worked with have strict cybersecurity liability, and they take every aspect of security seriously and do not just pat themselves on the back smugly, as OP portrays. Maybe this was the case in the 90's, but it sure is not the case today.
EDIT: I deleted most of my post because I found it was repeated up and down the comments which I am so relieved to see. I kept my post because I want newcomers to hear as many voices in objection to OP's outdated essay as possible.
> My prediction is that in 10 years users that need education will be out of the high-tech workforce entirely, or will be self-training at home in order to stay competitive in the job market. My guess is that this will extend to knowing not to open weird attachments from strangers.
And yet, just yesterday I saw a TV ad explaining how not to get phished out of your money through your banking app.
I think it's a running theme in this document that the author displays a severe lack of understanding of how hard security becomes as soon as you let anyone do anything online.