> Obviously forking go’s crypto library is a little scary, and I’m gonna have to do some thinking about how to maintain my little patch in a safe way
This should really be upstreamed as an option on the ssh library. It's good to default to sending chaff in untrusted environments, but there are plenty of places where we might as well save the bandwidth.
It sort of already is. This behavior is only applied to sessions with a TTY and then the client can disable it, which is a sensible default. This specific use case is tripping it up obviously since the server knows ahead of time that the connection is not important enough to obfuscate and this isn't a typical terminal session, but in almost any other scenario there is no way to make that determination and the client expects its ObscureKeystrokeTiming to be honored.
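(For reference, a minimal sketch of that client-side switch on a stock OpenSSH client, assuming OpenSSH 9.5 or newer; the host name is a placeholder:)

    # one-off, per connection
    ssh -o ObscureKeystrokeTiming=no game.example.com

    # or persistently, in ~/.ssh/config
    Host game.example.com
        ObscureKeystrokeTiming no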
Yes, but I wouldn't be surprised if the change is rejected. The crypto library is very opinionated, you're also not allowed to configure the order of TLS cipher suites, for example.
That is a completely valid threat model analysis, though? "Just hope no bad guy ever gets into the safe" is rather the entire point of a safe. If you have a safe whose contents you use daily, does it make sense to lock everything inside the safe in 100 smaller safes in some kind of nesting doll scheme? Whatever marginal increase in security you might get by doing so is invalidated by the fact that you lose all utility of being able to use the things in the safe. We already know that overburdensome security is counterproductive: if something is so secure that it becomes impossible to use, those security measures just get bypassed completely in the name of using the thing. At some level of security you have to have the freedom to use the thing you're securing. Anything that could keep a bad guy from doing anything ever would also keep the good guy, i.e. you, from doing anything ever.
+1... Given how much SSH is used for computer-to-computer communication it seems like there really should be a way to disable this when it isn't necessary.
In practice I've never felt this was an issue. But I can see how with extremely low bandwidth devices it might be, for instance LoRa over a 40 km link into some embedded device.
Threats exist in both trusted and untrusted environments though.
This feels like a really niche use case for SSH. Exposing this more broadly could lead to set-it-and-forget-it scenarios and ultimately make someone less secure.
Very interesting, I hadn't heard of this obfuscation before so it was well worth clicking.
Another good trick for debugging ssh's exact behavior is patching in "None" cipher support for your test environment. It's about the same work as trying to set up a proxy but lets you see the raw content of the packets like it was telnet.
For terminal games where security does not matter but performance and scale does, just offering telnet in the first place can also be worth consideration.
I don't see how Claude helped the debugging at all. It seemed like the author knew what to do and it was more telling Claude to think about that.
I've used Claude a bit and it never speaks to me like that either ("Holy Cow!" etc.). It sounds more annoying than interacting with real people. Perhaps AIs are good at sensing personalities from input text and don't act this way with my terse prompts.
Even if the chatbot served only as a Rubber Ducky [1], that's already valuable.
I've used Claude for debugging system behavior, and I kind of agree with the author. While Claude isn't always directly helpful (hallucinations remain, or at least outdated information), it helps me 1) spell out my understanding of the system (see [1]) and 2) keep momentum by supplying tasks.
[1] https://en.wikipedia.org/wiki/Rubber_duck_debugging
A rubber ducky demands that you think about your own questions, rather than taking a mental back seat as you get pummeled with information that may or may not be relevant.
I assure you that if you rubber duck at another engineer that doesn't understand what you're doing, you will also be pummeled with information that may or may not be relevant. ;)
AIs are exceptional at sensing personalities from text. Claude nailed it here: the author felt so good about the "holy cow" comments that he even included them in the blog post. I'm not just poking at this, but saying that the bots are fantastic sycophants.
The reliance on LLMs is unfortunate. I bet this mystery could have been solved much quicker by simply looking at the packet capture in Wireshark. The Wireshark dissectors are quite mature, SSH is covered fairly well.
> I bet this mystery could have been solved much quicker by simply looking at the packet capture in Wireshark.
For some people who are used to using Wireshark and who know what to look for, probably yes. For the vast majority of even technical people, probably not.
In my case, I did a packet capture of a single keystroke using tcpdump and imported it into Wireshark and I get just over 200 'Client: encrypted packet' and 'Server: encrypted packet' entries. Nothing useful there at all. If I tcpdump the entire SSH connection setup from scratch I get just as much useful information - nothing - but, oddly, fewer packets than my one keystroke triggered.
So yeah, I dislike LLMs entirely and dislike the reliance on LLMs that we see today, but in this case the author learned a lot of interesting stuff and shared it with us, whereas without LLMs he might have just shrugged and moved on.
And that's a huge downside when people howl about "Encryption everywhere!".
Try debugging that shit. That's right, debugging interfaces aren't safe, by some wellakshually security goon.
You want a real fun one to debug: a SAML login to a webapp, with internal OAuth passthrough between multiple servers. Sure, I can decrypt client-server stuff with tools, but server-server is damn near impossible. The tools that work break SSL, and invalidate the SSL validation.
I used to share that opinion but after decades in industrial automation I find myself coming down much more on the "yeah, encryption everywhere" because while many vendors do not provide good tools for debugging, that's really the problem, and we've been covering for them by being able to snoop the traffic.
Having to MITM a connection to snoop it is annoying, but the alternative appears to be still using unencrypted protocols from the 1970s within the limitations of a 6502 to operate life-safety equipment.
Unfortunately with SSH specifically, the dissectors aren't very mature - you only get valid parsing up to the KEX completion messages (NEWKEYS), and after that, even if the encryption is set to `none` via custom patches, the rest of the message flow is not parsed.
That seems to be because dumping the session keys is not at all a common thing. It's just a matter of effort though - if someone put in the time to improve the SSH story for dissectors, most of the groundwork is there.
Interesting, I thought it was possible to decrypt SSH in Wireshark a la TLS, but it seems I'm mistaken. It still would have been my first go-to, likely with encryption patched out as you stated. With well documented protocols, it's generally not too difficult deciphering the raw interior bits as needed, with the orientation provided by the dissected pieces. So let me revise my statement: this probably would have been a fairly easy task with protocol-analysis-guided code review (or simply CR alone).
Not even remotely accurate. While the dissector is not as mature as I thought and there's no built-in decryption as there is for TLS, that doesn't matter much. Hint: every component of the system is attacker controlled in this scenario.
Obviously OP's empirical and analytical rigor is top notch. He applied LLMs in the best way possible: fill gaps with clumsy command line flags or protocol implementations. Those aren't things one needs to keep in their head all the time.
My thoughts exactly. The OP used AI to get a starting point to their investigation, then used their skills to improve their game, with actual (I guess according to the article itself) proof of that, as opposed to just approving changes from the LLM.
This looks like an actual productivity boost with AI.
Well, I spent a good part of my career reverse engineering network protocols for the purpose of developing exploits against closed source software, so I'm pretty sure I could do this quickly. Not that it matters unless you're going to pay me.
What are you even trying to say? I suppose I'll clarify for you: Yes, I'm confident I could have identified the cause of the mysterious packets quickly. No, I'm not going to go through the motions, because I have no particular inclination toward the work outside of banter on the internet. And what's more, it would be contrived, since the answer has already been shared.
I think the point they're making is that "I, a seasoned network security and red-team-type person, could have done this in Wireshark without AI assistance" is neither surprising nor interesting.
That'd be like saying "I, an emergency room doctor, do not need AI assistance to interpret an EKG"
I'm still waiting for a systems engineering tool that can log every layer, and handle SSL the whole pipe wide.
I'm covering everything from strace and ltrace on the machine, file reads, IO profiling, bandwidth profiling. Like, the whole thing, from beginning to end.
Real talk though, how much would such a tool be worth to you? Would you pay, say, $3,000/license/year for it? Or, after someone puts in the work to develop it, would you wait for someone else to duct tape something together approximately similar using regexps that's open source but 10% as good, and then not pay for the good proprietary tool because we're all a bunch of cheap bastards?
We have only ourselves to blame that there aren't better tools (publicly) available. If I hypothetically (really!) had such a tool, it would be an advantage over every other SRE out there that could use it. Trying to sell it directly comes with more headaches than money, selling it to corporations has different headaches, and open-sourcing it doesn't pay the bills, never mind the burnout (people don't donate for shit). So the way to do it is make a pitch deck, get VC funding so you're able to pay rent until it gets acquired by Oracle/RedHat/IBM (aka the greatest hits for Linux tool acquisition), or try and charge money for it when you run out of VC funding, leading to accusations of "rug pull" and development of alternatives (see also: docker) just to spite you.
In the best case you sell like Hashimoto and your bank account has two (three!) commas, but worst case you don't make rent and go homeless when instead you could've gone to a FAANG and made $250k/yr instead of getting paid $50k/yr as the founder, burning VC cash and eating ramen that you have to make yourself.
I agree, that would be an awesome tool! Best case scenario: a company pays for that tool to be developed internally, the company goes under, the tool gets sold off as an asset, whoever buys it forms a company and tries to sell it directly, that company goes under too, and the buyer finally open sources it because they don't want it to slip into obscurity... but it falls into obscurity anyway, because it only works on Linux 5.x kernels and can't easily be ported to the 6.x series that we're on now.
Oh wow - I've never heard of TCP_CORK before. Without disabling pings I'd still pay the cost of receiving way more packets, but maybe that'd be tolerable if I didn't have to send so many pongs. This is super handy; excited to play around with it.
I am aware of TCP_NODELAY (funny enough I recently posted about TCP_NODELAY to HN[1] when I was thinking about it for the same game that I wrote about here). But I think the latency hit from disabling it just doesn't work for me.
[1] https://news.ycombinator.com/item?id=46359120
I missed that thread originally; the post and the comments were a good read, thank you for sharing.
I got a kick out of this comment [0]. "BenjiWiebe" made a comment about the SSH packets you stumbled across in that thread. Obviously making the connection between what you were seeing in your game and this random off-hand comment would be insane (if you had seen the comment at all), but I got a smile out of it.
[0] https://news.ycombinator.com/item?id=46366291
For people who don't feel like googling it:
1. You TCP_CORK a socket
2. You put data into it and the kernel buffers it
3. If you uncork the socket, or if the buffer hits MSS, the kernel sends the packet
Basically, the kernel waits until it has a full packet worth of data, or until you say you don't have any more data to send, and then it sends. Sort of an extreme TCP_YESDELAY.
See https://catonmat.net/tcp-cork for where I learned it all from.
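If you want to experiment, here's a minimal Go sketch of that cork/uncork dance (Linux-only, since TCP_CORK is a Linux socket option; the address and payloads are made up):

    package main

    import (
        "net"
        "syscall"
    )

    // setCork toggles TCP_CORK on a connected TCP socket (Linux-only).
    func setCork(c *net.TCPConn, on bool) error {
        raw, err := c.SyscallConn()
        if err != nil {
            return err
        }
        v := 0
        if on {
            v = 1
        }
        var sockErr error
        if err := raw.Control(func(fd uintptr) {
            sockErr = syscall.SetsockoptInt(int(fd),
                syscall.IPPROTO_TCP, syscall.TCP_CORK, v)
        }); err != nil {
            return err
        }
        return sockErr
    }

    func main() {
        conn, err := net.Dial("tcp", "127.0.0.1:2222") // placeholder address
        if err != nil {
            panic(err)
        }
        tcp := conn.(*net.TCPConn)

        setCork(tcp, true)           // 1. cork: the kernel starts buffering
        tcp.Write([]byte("frame 1")) // 2. small writes accumulate...
        tcp.Write([]byte("frame 2")) //    ...until MSS worth of data is queued
        setCork(tcp, false)          // 3. uncork: the kernel flushes the rest
    }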
Can you explain how TCP_CORK helps here? The chaff packets are spaced 20ms apart and sent per socket, so I don’t see how TCP_CORK could help unless it coalesced across 20ms intervals? But coalescing is clearly not an option for the intended obfuscation effect of the original feature.
I seem to hit this logic often recently for some reason.
There are two issues with it:
- a primary is not a totality: if "security is the #1 consideration for SSH", that implies there's a #2, maybe even a #3 and so on consideration. So the question that follows becomes tautological: "but if the author doesn't need security, why use ssh?" -> surely for one or more of the #2, #3, etc. considerations, right?
- overabstraction (*): you ended up strawmanning the author. What they had issue with was keystroke timing obfuscation, which is a privacy feature. Timing attacks are (in part) a privacy concern, and privacy is a security concern, yes, but security is not just a privacy concern, and privacy concerns are not just about timing attacks; these groups are not equal. For example, they might very well want the transmitted keypresses themselves to remain confidential, or they might very well want to retain cryptographic assurance of their integrity. These are security features they can continue to utilize by sticking with SSH.
All of this is to say, it's not even necessarily them using SSH for a hypothetical #2 or #3 (...etc...) reason, but likely because they still very much want to make use of large chunks of #1, which disabling keypress obfuscation does not actually rid SSH of, only at most weakens it in ways they clearly seem to be okay with.
(*) although if I zoom out enough, this is once again just "a primary is not a totality", just implicitly
In 2023, ssh added keystroke timing obfuscation. The idea is that the speed at which you type different letters betrays some information about which letters you’re typing. So ssh sends lots of “chaff” packets along with your keystrokes to make it hard for an attacker to determine when you’re actually entering keys.
Now that's solving the problem the wrong way. If you really want that, send all typed characters at 50ms intervals, to bound the timing resolution.
Wouldn't this just change the packet interval from 20ms to 50ms? Or did you mean a constant stream of packets at 50ms intervals, nonstop?
I think the idea behind the current implementation is that the keystrokes are batched in 20ms intervals, with the optimization that a sufficiently long silence stops the chaff stream, so the keystroke timing is obfuscated with an increased error bar of 20ms multiplied by the number of chaff packets.
I assume the problem, such as it is, relates to the fact that a real human typing in 20-50ms would generate a few characters at most but a program could generate gobs of data. So automatically you know what packets to watch. Then you know if there were more the likely keys were in set X, while if there were fewer the likely keys were in set Y.
So a clock doesn't solve the problem. The amount of data sent on each clock pulse also tells you something about what was sent.
The chaff packets already fire on a timer. They inject random extra fake keystrokes so you can't tell how many keystrokes were actually made. The only other way I can think of to solve that is by using a step function: send one larger packet (fragmented or the same number of individual packets) on each clock pulse if the actual data is less than some N, where N is the maximum keystrokes ever recorded with some margin. Effectively almost every clock pulse will be one packet (or set of packets) of identical size. Of course, if you do that then you'll end up consuming more data over time than sending random amounts of packets.
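A toy Go sketch of that step-function scheme, just to make the tradeoff concrete (the tick length, the maxKeys bound, and the raw writes are all assumptions; real traffic would be encrypted SSH packets, not bare bytes):

    package chaff

    import (
        "net"
        "time"
    )

    const (
        tick    = 50 * time.Millisecond // clock pulse
        maxKeys = 8                     // assumed max keystrokes per tick
    )

    // pump sends exactly maxKeys bytes every tick, zero-padding when fewer
    // keys were typed, so neither packet timing nor size reflects typing.
    func pump(conn net.Conn, keys <-chan byte) {
        t := time.NewTicker(tick)
        defer t.Stop()
        for range t.C {
            frame := make([]byte, maxKeys) // constant size on the wire
            n := 0
        drain:
            for n < maxKeys {
                select {
                case k := <-keys:
                    frame[n] = k
                    n++
                default:
                    break drain // nothing more typed this tick
                }
            }
            // unfilled bytes stay zero: that's the padding
            if _, err := conn.Write(frame); err != nil {
                return
            }
        }
    }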
The problem is not knowing whether someone is typing, as far as I understand. But that you may extract some information about what keys are being typed, based on the small differences in timings between them.
Not related to SSH, but does the eieio.games website make anyone else's monitor flicker? When the website is fullscreen it overwhelms something. I thought my monitor's backlight was going.
> That 20ms is a smoking gun - it lines up perfectly with the mysterious pattern we saw earlier!
Speaking of smoking guns, anybody else reckon Claude overuses that term a lot? Seems anytime I give it some debugging question, it'll claim some random thing like a version number or whatever, is a "smoking gun"
Yes! While this post was written entirely by me, I wouldn't be surprised if I had "smoking gun" ready to go because I spent so much time debugging with Claude last night.
It's not just a coincidence, it's the emergence of spurious statistical correlations when observations happen across sessions rather than within sessions.
Or the "Eureka! That's not just a smoking gun, it's a classic case of LLMspeak."
Grok, ChatGPT, and Claude all have these tics, and even the pro versions will use their signature phrases multiple times in an answer. I have to wonder if it's deliberate, to make detecting AI easier?
Without knowing how LLMs' personality tuning works, I'd just hazard a guess that the excitability (tendency to use excited phrases) is turned up. "Smoking gun" must be highly rated as a term of excitability. This should apply to other phrases like "outstanding!" or "good find!" or "You're right!" etc.
They love clichés, and hate repeating the same words for something (repetition penalty), so they'll say something like "cause", then it's a "smoking gun", then it's something else.
You might see certain phrases and mdashes ;-) rather often, because … these programs are trained on data written by people (or Microsoft's spelling correction) which overused them in the last n years? So what should these poor LLMs generate instead?
smoking gun, you're absolutely right, good question, em dash, "it isn't just foo, it's also bar", real honest truth, brutal truth, underscores the issue, delves into, more em dashes, <20 different hr/corporate/cringe phrases>.
That's the point though, it doesn't reflect human usage of the word. If delve were so commonly used by humans too, we wouldn't be discussing how it's overused by LLMs.
Maybe we need a real AI which creates new phrases and teaches the poor LLMs?
Looking back we already had similar problems, when we had to ask our colleagues, students, whomever "Did you get your proposed solution from the answers part or the questions part of a stackoverflow article?" :-0
> I am working on a high-performance game that runs over ssh.
Found your problem.
But it is an interesting world where you can casually burrow into a crypto library and disable important security features more easily than selecting the right network layer solution.
Yea UDP is technically more performant, but then you need a crypto layer + reliable message delivery layer + bespoke client. Using a plain old SSH client is cool.
its not really a question of 'udp performs better'. in tcp we have to live with head-of-line blocking on losses and congestion control. if you don't care about receiving every packet, but only the most recent, then udp is a good choice.
running without congestion control means that you avoid slowstart. but at a certain rate you run into poorly defined 'fairness' issues where you can easily negatively impact other flows. past that point, you can actually self-interfere and cause excessive losses for yourself.
quic uses congestion control, but uses latency estimates and variance as a signal to back off. it still imposes an ordering on a per-stream basis. so it might not be ideal either.
sctp has a mode which supports reliable and unordered, which might be something to consider
so really - if you care about latency and have a different reliability model, its worth unpacking all these considerations and using them to select your transport layer or even consider writing a minimal one yourself
The really mysterious part is how ~10,000 packets per second costs ~20% of a core. That would mean SSH is bottlenecking in its code at ~50,000 packets per second per core which would be ~500 Mbps per core (assuming full packets) which is ludicrously slow. It is trivial to do 10x that packet per second rate. Is SSH really that poorly designed?
I do not know where people get the idea that encryption is that slow. Standard AES hardware acceleration instructions do ~25 Gbps per core (on a 2023 CPU) which is ~50x that rate [1]. I have heard modern cores can do ~40-50 Gbps, but I have not been able to find any independent benchmarks of that. Even the Intel i5-2500, a CPU from 2011, averages ~10 Gbps which is ~20x that rate. Even unaccelerated encryption can do ~2-5 Gbps in pure software which is 4-10x the SSH rate.
And in this situation, the amount of encrypted payload in each packet is 36 bytes which is ~40x less than a full packet of ~1500 bytes. You would almost surely hit packet per second limits before you hit payload throughput limits at these small sizes.
Encryption is slow when compared to the data throughput you can get with a properly designed transport stack, but that is in comparison to 100 Gbps per core even with no hardware offload. Anything less than ~10 Gbps/1 million packets per second (ignoring other bottlenecks, so only the software transport is the limit) is not merely unoptimized, it is pessimized.
[1] https://calomel.org/aesni_ssl_performance.html
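If you want to sanity-check the encryption-throughput side of this on your own machine, a rough single-core Go benchmark along these lines works (the 64 KiB buffer and the iteration count are arbitrary; the fixed nonce is fine for benchmarking, never for real use):

    package main

    import (
        "crypto/aes"
        "crypto/cipher"
        "fmt"
        "time"
    )

    func main() {
        key := make([]byte, 16) // AES-128
        block, err := aes.NewCipher(key)
        if err != nil {
            panic(err)
        }
        gcm, err := cipher.NewGCM(block)
        if err != nil {
            panic(err)
        }

        nonce := make([]byte, gcm.NonceSize()) // fixed nonce: benchmark only!
        buf := make([]byte, 64*1024)
        out := make([]byte, 0, len(buf)+gcm.Overhead())

        const iters = 20000
        start := time.Now()
        for i := 0; i < iters; i++ {
            out = gcm.Seal(out[:0], nonce, buf, nil)
        }
        secs := time.Since(start).Seconds()
        fmt.Printf("AES-128-GCM: %.1f Gbps on one core\n",
            float64(len(buf))*iters*8/secs/1e9)
    }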
>>> That makes a lot of sense for regular ssh sessions, where privacy is critical. But it’s a lot of overhead for an open-to-the-whole-internet game where latency is critical.
Switching to telnet instead of SSH might be an option.
I wonder if this is the same reason why Microsoft's Remote SSH plugin on VS Code is so flaky even with a decent internet connection. Every couple of months I try to give it another go and give up due to the poor keyboard latency I inevitably experience. And the slow reconnects whenever I glance away from my computer monitor briefly. This is on a fiber connection with a 20ms ping to the remote machine.
You surely mean the latency in its embedded terminal and not the code editor, right? I use VSCode’s remote SSH specifically so that code editing doesn’t suck. It really does not.
> Keystroke obfuscation can be disabled client-side.
please never do that (in production)
if anyone halfway serious tries, they _will_ be able to break your encryption and find what you typed
this isn't a hypothetical niche-case obfuscation mechanism, it's a "people broke SSH, then a fix was found" case. I don't even know why you can disable it tbh.
That doesn't sound right to me. This obfuscation isn't about a side-channel on a crypto implementation, this is about literally when your keystrokes happen. In the right circumstances, keystroke timing can reduce the search space for bruteforcing a password [1] but it's overstating to describe that as broken encryption.
[1] https://people.eecs.berkeley.edu/~daw/papers/ssh-use01.pdf
I'm baffled by this "security feature". Aside from this only being relevant to keystroke timing during the SSH session, not while typing the SSH password, I really don't understand how someone can eavesdrop on this. They'd have to have access to the client or server shell (root?) in order to get the keystroke typing speed. I've also never heard of keystroke typing speed being used to hack/guess keystrokes. The odds of getting that right are very low IMO.
I'd be much more scared of someone literally watching me type on my computer, where you can see/record the keys being pressed.
Anyone who can spy on the network between the client and server can see the timing. This includes basically anyone on the same LAN as you, anyone who sets up a WiFi access point with a SSID you auto-connect to, anyone at your ISP or VPN provider, the NSA and god knows who else.
And the timing is still sensitive. [1] does suggest that it can be used to significantly narrow the possible passwords you have, which could lead to a compromise. Not only that, but timing can be sensitive in other ways --- it can lead to de-anonymization by correlating with other events, it can lead to profiling of what kind of activity you are doing over ssh.
So this does solve a potentially sensitive issue, it's just nuanced and not a complete security break.
[1] https://people.eecs.berkeley.edu/~daw/papers/ssh-use01.pdf
It is to prevent timing attacks, but there are many ssh use cases that are 100% computer-to-computer communication, where no keystroke-timing attack is possible.
If:
- you are listening to an SSH session between devices
- and you know what protocol is being talked over the connection (i.e. what they are talking about)
- and the protocol is reasonably predictable
then you gain enough information about the plaintext to start extracting information about the cipher and keys.
It's a non-trivial attack by all means but it's totally feasible. Especially if there's some amount of observable state about the participants being leaked by a third party source (i.e. other services hosted by the participants involved in the same protocol).
this only works for manually typed text, not computer-to-computer communication, where you can't deduce much from what is being "typed": it's not typed but produced by a program, to which every letter is the same, so there are no different delays in sending some letters (as people have when typing by hand)
I agree it is more nuanced than a simple 'good for computer-to-computer' and 'bad for person-to-computer'. I'm sure there are cases where both are wrong but I don't think that necessarily changes that it makes a reasonable baseline heuristic.
I'd love to hear more about this kind of attack being exploited in the wild. I understand it's theoretically possible, but...good luck! :)
You're guessing a cipher key by guessing typed characters with the only information being number of packets sent and the time they were sent at. Good luck. :)
I haven't given this more than 5 seconds of thought, but wouldn't it make sense to only enable the timing attack prevention for pseudo-terminal sessions (-t)?
The fix seems kind of crazy though, adding so much traffic overhead to every ssh session. I assume there's a reason they didn't go that route, but on a first pass it seems weird they didn't just buffer password keystrokes to be sent in one packet, or just add some artificial timing jitter to each keystroke.
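For what it's worth, the jitter idea above is only a few lines; a hedged Go sketch (the 0-30ms bound is made up, and this trades added latency for timing noise):

    package jitter

    import (
        "math/rand"
        "net"
        "time"
    )

    // sendWithJitter delays each keystroke by a random 0-30ms before it
    // hits the wire, decoupling packet timing from typing cadence.
    func sendWithJitter(conn net.Conn, key byte) error {
        time.Sleep(time.Duration(rand.Intn(30)) * time.Millisecond)
        _, err := conn.Write([]byte{key})
        return err
    }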
I'm just guessing but this chaff sounds like it wouldn't actually change the latency or delivery of your actual keystrokes while buffering or jitter would.
So the "real" keystrokes are 100% the same but the fake ones which are never seen except as network packets are what is randomized.
Hey, if ECHELON snuck a listener into my house, where six devices hang out on a local router... Good for them, they're welcome to my TODO lists and vast collection of public-domain 1950s informational videos.
(I wouldn't recommend switching the option off for anything that could transit the Internet or be on a LAN with untrusted devices. I am one of those old sods who doesn't believe in the max-paranoia setting for things like "my own house," especially since if I dial that knob all the way up the point is moot; they've already compromised every individual device at the max-knob setting, so a timing attack on my SSH packet speed is a waste of effort).
One thing you notice if you have ADSL is that some services are built as if slower connections matter and others are not. Like, Google's voice and audio chat services work poorly but most of the others work well. Uploading images to Mastodon, Bluesky, Facebook, LinkedIn, Instagram and Nextdoor is reliable, but for Tumblr you have to try it twice. I don't know what they are doing wrong, but they are doing something wrong, and they're not finding out what they're doing wrong because they're not testing and they're not listening to users.
Nobody consulted me about their decision not to run fiber by my house. If some committee decides to make ssh bloated they are, together with the others, conspiring to steal my livelihood and I think it would be fair for me to sue them for the $50k it would take to run that fiber myself.
It's OK if you work for Google where there is limitless dark fiber but what about people in African countries?
It's the typical corporate attitude where latency never matters: Adobe thinks it is totally normal that it takes 1-5s for a keystroke to appear when you are typing into Dreamweaver.
I agree with your general point that most companies/projects do a terrible job optimizing for slow computers/networks, but OpenSSH is from the OpenBSD people, who are well-known for supporting ancient hardware [0]. Picking a random architecture, they fully support a system with only 64MB of memory [1], and the base install includes SSH. So I suspect that OpenSSH is fairly well tested on crappy computers/networks.
[0]: https://www.openbsd.org/plat.html
[1]: https://www.openbsd.org/landisk.html#hardware
There's a good chance you have other options. Regardless of how you feel about the company's head, Starlink would probably be one of them, with likely better performance than you're dealing with on ADSL.
But you cannot just sue a company because their network connected software doesn't work well on slow networks. Let alone a project like OpenSSH. It would be like me suing a game studio because my PC doesn't meet their listed minimum requirements to play the game.
Hey, it is one thing to buy a new computer, it is another thing to ask people to move.
A better analogy is a bank redlining neighborhoods. The cost to run fiber to difficult rural locations pays itself easily if you look at a 25-year time span and is an order of magnitude less than building a new housing unit on the West Coast.
You just opened a huge nostalgia portal; I never thought Dreamweaver would still be around. I used it somewhere around 2003, I believe. Good memories.
Frankly I wish there was an HTML editor that delivers on what it promised. I mean, markdown is almost as rife with edge cases as YAML and somehow the link syntax still eludes me. If we could “just” template by merging at the DOM level and had decent HTML editors the world would be a different place. But yeah, Adobe probably thinks Dreamweaver isn’t worth maintaining just as they seem to think Photoshop is barely worth maintaining (they keep adding AI features that sorta work but the foundations seem to be much worse than Illustrator)
> I am working on a high-performance game that runs over ssh.
Step one, run https://www.psc.edu/hpn-ssh-home/introduction/ instead
Step two, tune TCP/IP stack
Step... much later: write your own "crypto". (I'm using quotes because, before someone points out the obvious, packets-per-keystroke isn't, itself, a cryptographic algorithm, but because it's being done to protect connections from being decrypted/etc, mess with it at your own peril.)
Telnet nowadays typically isn’t available by default for security reasons, and OP wants people to be able to play the game just by typing “ssh thegamehost”.
https://github.com/openssh/openssh-portable/blob/d7950aca8ea...
Nobody is running TCP on that link, let alone SSH.
and RNode would be a better match.
https://news.ycombinator.com/item?id=37307708
The comment about Claude being pumped was a joke.
Yes, Esri products suck. Bad.
Particularly in today's political climate, encryption has only become more necessary.
> there's no built-in decryption
Is that because wireshark can't do that just from packet captures?
Consider that your expertise is atypical.
There's no tool that does that.
Hell, I can't even see good network traces within a single Linux app. The closest you'll find is https://github.com/mozillazg/ptcpdump
But especially with Firefox, good luck.
Disabling TCP_NODELAY would also reduce number of packets + be portable & simpler to implement - but would incur a latency penalty.
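In Go that's essentially a one-liner on the connection; a tiny sketch (the helper name is mine; Go enables TCP_NODELAY by default):

    package nagle

    import "net"

    // enableNagle turns TCP_NODELAY off, letting the kernel coalesce
    // small writes into fewer packets at the cost of extra latency.
    func enableNagle(conn net.Conn) {
        if tcp, ok := conn.(*net.TCPConn); ok {
            tcp.SetNoDelay(false)
        }
    }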
For example, "nc" (netcat) is pre-installed on all platforms where ssh is.
This is technically incorrect, because Windows now includes SSH too!
https://xkcd.com/3126/
Soon the Andy 3000 will finally be a reality...
I still do - but I used to, too.
Btw, is the injection of "absolutely" and "in $YEAR" prevalent in other LLMs as well, or is it just in Gemini's dialect?
https://pshapira.net/2024/03/31/delving-into-delve/
Maybe it has something to do with your profile/memories?
It's nauseating.
Considering what these LLMs bring to the table, I think a little tolerance for their cringe phrases is in order.
Oh shoot! A shooting.
So the TL;DR of this post is: don't change this setting unless you know what you're doing.
The problems you run into when doing things you shouldn't do are often really fun.
You should feel free to explore / abuse all options :)
However, there are existing libraries for exactly this use case - see https://github.com/ValveSoftware/GameNetworkingSockets
I guess QUIC libraries would also work.
Is this not a performance consideration?
Either way, using plain old SSH means a metric bajillion computers have a client for your game built in.
Also I was unfamiliar with SSH being vulnerable in the past to keystroke timing!
2023 discussion about it here.
When making this statement, are you taking into account that SSH encrypts the traffic by default?
It's actually really clever.
> And they’re sent to servers that advertise the availability of the ping@openssh.com extension. What if we just…don’t advertise ping@openssh.com?
The extension is "ping@openssh.com". It shows up in the blog reliably for me across several browsers and devices.
If you want a “1990s” mode, add it yourself or pay someone to do it for you.
This is funny to me, because ADSL used to be the fast thing, as opposed to dialup modems.
I mean, for modern versions of OpenSSH it's not exactly wrong. The failure was to tell you why that is the normal behavior.
And with good reason. This CVE is from yesterday:
https://nvd.nist.gov/vuln/detail/CVE-2026-24061
> telnetd in GNU Inetutils through 2.7 allows remote authentication bypass via a "-f root" value for the USER environment variable.
Vibe coders man...
WAT. Please no.