You can relay through any other SSH server if your target is behind a firewall or subject to NAT (for example the public service ssh-j.com). This is end-to-end encrypted (SSH inside SSH):
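A minimal sketch of that pattern with a relay host you control (hostnames, users, and port 2222 are placeholders; the public ssh-j.com service documents its own invocation):

```sh
# On the machine behind NAT/firewall: hold open a reverse tunnel to the relay.
ssh -N -R 2222:localhost:22 you@relay.example.com

# On the client: jump through the relay, then into the hidden machine's sshd.
# The relay only ever sees the outer SSH session's ciphertext.
ssh -J you@relay.example.com -p 2222 me@localhost
```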
This doesn't do most of what dumbpipe claims to do: it doesn't use QUIC, doesn't avoid using relays when possible, doesn't pick a relay for you, and doesn't keep your devices connected as network connections change. It also depends on you doing the ssh key management out-of-band, while dumbpipe appears to put the keys into random ASCII strings.
I really wish the people who feel like bringing up that comment would do a modicum of research instead of perpetuating unhelpful and wrong information.
For one, that wasn’t “HN’s reaction”. Read the thread, it’s full of praise. You’re talking about one specific comment.
For another, the commenter was incredibly respectful and even conceded some of the points after they got a reply. Again, read it, it’s right there in your link.
That is an excellent example of how to communicate, and we would be lucky if all interactions followed that example. We should be praising it, not deriding it.
You could also set up a wg server, have both clients connect to it and then pass data between the two IPs. There's still a central relay passing data around, NAT or no NAT.
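A rough sketch of that hub-and-spoke setup with wg-quick (keys, addresses, and interface names are placeholders; each client's config would point its Endpoint at the hub and set AllowedIPs = 10.8.0.0/24):

```sh
# On the relay server with a public IP: act as the hub for both clients.
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# client A
PublicKey = <clientA-public-key>
AllowedIPs = 10.8.0.2/32

[Peer]
# client B
PublicKey = <clientB-public-key>
AllowedIPs = 10.8.0.3/32
EOF

# Let the hub forward packets between the two clients, then bring it up.
sysctl -w net.ipv4.ip_forward=1
wg-quick up wg0
```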
Having run servers on OpenVPN, IPSec and Wireguard.. Wireguard is very mundane software.
I still get the chills at the deep and arcane configuration litanies you have to dictate over calls to get a tunnel configured. And sometimes, if you had to integrate different implementations of IPSec with each other, it just wouldn't work and eventually you'd figure out that one or two parameters on one side are just wrong.
And if you don't want to manage iptables/nftables manually to firewall the traffic from the VPN (which is ugly, I agree), ufw and firewalld have both recently introduced forwarding-rule management (ufw "route" rules and firewalld policies).
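For instance, with ufw the whole thing can be a couple of rules (the port and interface names here are assumptions):

```sh
# Open WireGuard's single UDP port, then allow forwarding from the tunnel outward.
ufw allow 51820/udp
ufw route allow in on wg0 out on eth0
```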
Interested to know how you've been burnt by wireguard; what did it not do that you were expecting? What failures have you experienced with it that were the fault of wireguard?
I've been using it (fairly simply, mind you) and it's been pretty solid for a number of years, and was an administrative relief in comparison to OpenVPN, which I'd been using before wireguard existed. Single UDP port usage makes me query your comment about impenetrable IP table rulesets.
(OpenVPN was great for its time too, the sales reps at the company where I introduced it loved the ability to work from the road, way back in the early 2000s)
Every time someone calls a product “dumb,” I get a little excited, because it usually means it’s actually smart. The internet is drowning in “smart” stuff that mostly just spies on you and tries to sell you socks. Sometimes, I just want a pipe that does what it says on the tin; move my bits, shut up, and don’t ask for my mother’s maiden name.
I've been writing raw POSIX networking code today. A lot of variables shorten "socket" to "sock". And my brain was like.. um, bad news! This is trying to sell us on their special sock(et)s!
I wonder why it's not standard that you can simply connect two PCs to each other with a USB cable and have them communicate/transfer files. With the same protocol in all OSes, of course. Seems like it should have been one of the first features USB could have had since the beginning, imho
I know there's something about USB A to USB A cables not existing in theory, but this would have been a good reason to have it exist, and USB C of course can do this
Also, Android to PC can sort of do it, and is arguably two computers in some form (but this was easier when Android still acted like a mass storage device). But e.g. two laptops can't do it with each other.
You actually can connect two machines via USB-C (USB4 / Thunderbolt) and you get a network connection.
You only get Link-Local addresses by default, which I recall as somewhat annoying if you want to use SSH or whatever, but if you have something that does network discovery it should probably work pretty seamlessly.
The same thing happens with two machines connected via an Ethernet cable, which appears to be what this USB4 network feature does: it presents an Ethernet NIC to software, but runs over different lower-layer protocols.
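For the link-local SSH case, on Linux it looks something like this (the interface name and address are placeholders; link-local IPv6 needs the interface as a zone ID):

```sh
# See what the USB4/Thunderbolt link came up as and grab its fe80:: address.
ip -6 addr show dev thunderbolt0

# SSH to the peer's link-local address, scoped to that interface.
ssh user@fe80::1ff:fe23:4567:890a%thunderbolt0
```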
AIUI, most NICs these days do what is called "auto-crossover"; i.e., they'll detect the situation and just do the "crossover" in the NIC itself. A normal cable works.
The incredible technology you're describing was possible on the Nintendo DS without wires and no need for a LAN either. It's a problem that's been solved in hundreds of different ways over the last 40 years but certain people don't want that problem to ever be solved without cloud services involved.
This dumb pipe thing is certainly interesting, but it will run into the same problem as the myriad other solutions that already exist. If you're trying to give a 50MB file to a Windows user, they have no way to receive it via any method a Linux user would have to send it, unless the Windows user has gone out of their way to install something most people have never heard of.
> It's a problem that's been solved in hundreds of different ways over the last 40 years
If we put the requirements of,
1. E2EE
2. Does not rely on Google. (Or ideally, any other for profit corporation.)
That eliminates like 90% of the recent trend of WebRTC P2P file transfer things that have graced HN over the last decade, as all WebRTC code seems to just copy Google's STUN/TURN servers between each other.
But as you say,
> but certain people don't want that problem to ever be solved without cloud services involved.
ISPs seem to be in that set. IPv6 would obsolete NAT, but my ISP was kind enough to ship an IPv6 firewall that by default drops incoming packets. It has four modes: drop everything, drop all inbound, a weird intermediate mode that is useless¹, and allow everything.
(¹this is Verizon fios; they claim, "This feature enables "outside-to-inside" access for IPv6 services so that an "outside" Internet service (gaming, video, etc.) can access a specific "inside" home client device & port in your local area network."; but the feature, AFAICT, requires the external peer's address. I.e., I need to know what my roaming IP will be before I leave the house, somehow, and that's obviously impossible. It seems utterly clearly slapped on to say "it comes with a firewall" but was never used by anyone at Verizon in the real world prior to shipping…)
starlink doesn't even give you publicly routable ipv6 unless you bypass the starlink router.
My starlink is such that i cannot install/set up things like pfsense/opnsense because the connection drops sometimes, and when either of those installers fail, they fail all the way back to "format the drive y/n?" Also, things like ipcop and monowall et al don't seem to support ipv6.
I looked into managing ipv6 from a "i am making my own router" perspective and no OS makes this simple. i tried with debian, and could not get it to route any packets. I literally wrote the guide for using a VM for ipcop and one of the "wall" distros; but something about ipv6 just evades me.
I mean, windows users install things they’ve never heard of all the time.
If this was a real thing you needed to do, and it is too much work to get them to install WSL, you could probably just send them the link to install Git and use git bash to run that curl install sh script for dumbpipe.
And if this seemed like a very useful thing, it couldn’t be too hard to package this all up into a little utility that gets windows to do it.
But alas, it remains “easier” to do this with email or a cloud service or a usb stick/sd card.
there are USB 2.0 (and probably 1.x) devices with usb-A on both sides and a small box in the middle that acts as a network crossover between two machines, i've seen them in stores. I've never used one because i know how to set CIDR. And, as others have mentioned, this does just work with usb-c.
"I wonder why it's not standard that you can simply connect two PC's to each other with a USB cable and them communicate/transfer files."
After TCP/IP became standard on personal computers, I used an Ethernet crossover cable to transfer large files between computers. I always have some non-networked computers. USB sticks were not yet available.
Today the Ethernet port is removed from many personal computers, perhaps in hopes that computer owners will send ("sync") their files to third party computers on the internet (renamed "the cloud") as a means of transferring files between the owner's computers.
Much has changed over the years. Expect replies about those changes. There are many, many different ways to transfer files today. Expect comments advocating those other methods. But the crossover cable method still works. With a USB-to-Ethernet adapter it can work even on computers with no Ethernet port. No special software is needed. No router is needed. No internet is needed. Certainly no third party is needed. Just TCP/IP which is still a standard.
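A sketch of the manual setup on Linux (interface names and addresses are made up; any TCP tool works once both ends can reach each other):

```sh
# Machine A: give the directly-cabled interface a static address.
ip addr add 192.168.50.1/24 dev eth0 && ip link set eth0 up
# Machine B: same subnet, different host address.
ip addr add 192.168.50.2/24 dev eth0 && ip link set eth0 up

# Machine A: wait for the file.
nc -l -p 9000 > big.iso
# Machine B: send it.
nc 192.168.50.1 9000 < big.iso
```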
One … can. I have a script for this myself, but I only set that up after wanting to do ad hoc and then realizing that it was basically impossible to do from scratch. Ad hoc requires an Internet connection to download the knowledge necessary to do ad hoc, and that utterly defeats the point of it all. (Except in how I've now cached that into a script.)
Ad hoc requires the machines be in "WiFi shouting range".
I was about to talk about how online help files are forgotten these days, and should guide you to the right information to set up an ad-hoc network, but I was disappointed three times over by macOS.
macOS does not have any offline documentation like pretty much every OS used to. When I turn off my WiFi and then open "Mac User Guide" or "Tips for your Mac", they both tell me they require an internet connection.
When I re-enable my internet connection, neither of those apps have information about how to set up an ad-hoc wifi network.
When I looked up how to create an ad-hoc network in other sources, I discovered that the ability to create an ad-hoc network was apparently removed from the GUI in macOS 11, and now requires CLI commands.
I hate how modern tech companies assume that everybody always has access to a high speed internet connection.
> Today the Ethernet port is removed from many personal computers perhaps in hopes computer owners will send ("sync") their files to third party computers on the internet (renamed "the cloud") as a means of transferring files between the owner's computers.
Oh come on, this isn't a conspiracy. For the last decade, every single laptop computer I've used has been thinner than an ethernet port, and every desktop has shipped with an ethernet port. I think the last few generations of MacBook Pros (which were famously thicker than prior generations) are roughly as thick as an ethernet port, but I'm not sure it'd practically fit.
And I know hacker news hates thin laptops, but most people prefer thin laptops over laptops with ethernet. My MacBook Air is thin and powerful and portable and can be charged with a USB-C phone charger. It's totally worth it for 99% of people to not have an ethernet port.
You can plug an ethernet cable in between machines and send files over it! So that period where this would be useful already had a pretty good solution (I vividly remember doing this like 3 times in the same day with some family members for some reason (probably nobody having a USB drive at the moment!))
I actually have a USB-A to USB-A cable. It came with proprietary Windows software on an 80mm CD-ROM. It wasn't long enough to connect two desktops in the same room if not on the same table, and I just never tried with a laptop because all my laptops have run Debian or some variant thereof since 2005 or so.
You used to be able to connect two PCs together via the parallel port. I had to do this once to re-install Windows 95 on a laptop with only a hard drive and a floppy drive. It was painfully slow but it worked.
I realize you are asking for cross-OS, but Mac OS X was doing this in 2002 (and probably earlier) for PowerBook models with an ethernet cable between them. As I recall, iBooks didn't do this even if they had the port, but PowerBooks would do the auto-crossover, then Finder/AFP would support the machines showing up for each other.
Half the time I need a dumb pipe, it's from personal to work. Regrettably, work forces me to use macOS, and macOS's bluetooth implementation is just an utter tire fire, and doesn't work 90% of the time. I usually fall back to networks, for that reason.
Of course, MBPs also have the "no port" problem above.
> Or using local WiFi (direct or not)
If I'm home, yeah. But TFA is advertising the ability to hole-punch, and if I'm traveling, that'd be an advantage.
Somewhat relevant, I have a list of (mostly browser based + a few no-setup cli) tools [1] to send files from A to B. I keep sharing this list here to fish for more tools whenever something like this comes up.
One limitation of iOS is the inability to use Bluetooth to transfer an image/video file to a Bluetooth receiver such as a Windows PC. The Apple documentation requires a wired connection. https://support.apple.com/en-ca/120267
If LocalSend is running on iOS and Windows does LocalSend have the ability to send photos?
It should work, though I haven't actually tried it. That's not a limitation of iOS, just Apple's own syncing app/protocol. LocalSend is basically an http client/server with network device discovery, as far as I know.
Recently this project caught my attention. It claims to support multiple different protocols, works in various web browsers (even IE6), and is extremely easy to set up (a single Python file). I have not given it a try, just wanted to share.
> In the iroh world, you dial another node by its NodeId, a 32-byte ed25519 public key. Unlike IP addresses, this ID is globally unique, and instead of being assigned,
ok but my network stack doesn't speak nodeID, it speaks tcp/ip -- so something has to resolve your public keys to a host and port that I can actually connect to.
this is roughly the same use case that DNS solves, except that domain names are generally human-compatible, and DNS servers are maintained by an enormous number of globally-distributed network engineers
it seems like this system rolls its own public key string to actual IP address and port mapping/discovery system, and offers a default implementation based on dns which the authors own and operate, which is fine. but the authors kind of hand-wave that part of the system away, saying hey you don't need to use this infra, you can use your own, or do whatever you want!
but like, for systems like this, discovery is basically the entire ball game and the only difficult problem that needs to be solved! if you ignore the details of node discovery and name mapping/resolution like this, then of course you can build any kind of p2p network with content-addressable identifiers or whatever. it's so easy a cave man can do it, just look at ipfs
We do use DNS, but we also have an option for node discovery that uses pkarr.org, which is using the bittorrent mainline DHT and therefore is fully decentralised.
And, as somebody else remarked, the ticket contains the direct IP addresses for the case where the two nodes are either in the same private subnet or publicly reachable. It also contains the relay URL of the listener, so as long as the listener remains in the same geographic region, dumbpipe won't have to use node discovery at all even if the listener ip changes or is behind a NAT.
In practice, the "ticket" provided by dumbpipe contains your machine's IP and port information. So I believe two machines could connect without any need for discovery infra, in situations that use tickets. (And have UPnP enabled or something.)
$ ./dumbpipe listen
...
To connect use: ./dumbpipe connect nodeecsxraxj...
that `nodeecsxraxj...` is a serialized form of some data type that includes the IP address(es) that the client needs to connect to?
forgive me for what is maybe a dumb question, but if this is the case, then what is the value proposition here? is it just the smushing together of some IPs with a public key in a single identifier?
The value proposition of the ticket is that it is just a single string that is easy to copy and paste into chats and the like, and that it has a stable text encoding which we aim to stay compatible with for some time.
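So a one-off file transfer looks roughly like this (a sketch; it assumes the default mode pipes stdin/stdout between the two ends, and `<ticket>` stands for the node… string printed by `dumbpipe listen`):

```sh
# Side A: serve a file; dumbpipe prints a ticket to share out-of-band.
dumbpipe listen < photos.tar

# Side B: paste the ticket from a chat message and receive the file.
dumbpipe connect <ticket> > photos.tar
```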
>These dumb pipes use QUIC over a magic socket. It may be dumb, but it still has all the features of a full QUIC connection: UDP-based, stream-multiplexing and encrypted.
How is multiplexing used here? On the surface it looks like a single stream. Is the file broken into chunks and the chunks streamed separately?
I wonder how much reimplementation there is between this and Tailscale, as it seems like there are many needs in common. One would think that there are already low level libraries out there to handle going through NATs, etc. (but maybe this is just the first of said libraries!)
Who cares at this point, Tailscale itself is the 600th reimplementation of the same idea, with predecessors like nebula and tinc. They came at the right time, with WireGuard being on the rise, and poured millions into advertisements that their community "competitors" didn't have, since most of them aren't riding on VC money.
Tailscale sells certificate escrow, painless SSO, high-quality integrations/co-sell with e.g. Mullvad, full-take netlogging, and "Enterprise Look and Feel" wrapped around the real technology. You can run WireGuard yourself, and sometimes I do, but certificate management is tricky to get right, the rest is a pain in the ass, and Tailscale is cheap. The hackers behind it (bfitz et al.) are world-class, and you can get it past most "Enterprise" gatekeeping.
It doesn't solve problems on my personal infrastructure that I couldn't solve myself, but it solves my work problem of getting real networking accepted by a diverse audience with competing priorities. And it's like 20 bucks a seat with all the trimmings. Idk, maybe it's 50, I don't really check because it's the cheapest thing on my list of cloud stuff by an order of magnitude or so.
It's getting more enterprise and less hackerish with time, big surprise, and I'm glad there's younger stuff in the pipe like TFA to keep it honest, but of all the necessary evils in The Cloud? I feel rather fondly towards Tailscale rather than with cold rage like most everything else on the Mercury card.
I've met a lot of people who think Tailscale invented what it does.
Prior to Tailscale there were companies -- ZeroTier and before it Hamachi -- and as you say many FOSS projects and academic efforts. Overlay networks aren't new. VPNs aren't new. Automated P2P with relay fallback isn't new. Cryptographic addressing isn't new. They just put a good UX in front of it, somewhat easier to onboard than their competitors, and as you say had a really big marketing budget due to raising a lot when money was cheap.
Very few things are totally new. In the past ten years LLMs are the only actually new thing I've seen.
Shill disclosure: I'm the founder of ZeroTier, and we've pivoted a bit more into the industrial space, but we still exist as a free thing you can use to build overlays. Still growing too. Don't have any ill will toward Tailscale. As I said nobody "owns" P2P and they're doing something a bit different from us in terms of UX and target market.
These "dumb pipe" tools -- CLI tooling for P2P pipes -- are cool and useful and IMHO aren't exactly the same thing as ZT or TS etc. They're for a different set of use cases.
The worst thing about the Internet is that it evolved into a client-server architecture. I remain very cautiously optimistic that we might fix this eventually, or at least enable the other paradigm to a much greater extent.
As one of the iroh developers I must say thank you for creating ZeroTier! It absolutely was part of the inspiration, and its seamless functioning continues to amaze me daily. It also keeps driving me to strive for an equally seamless experience in iroh.
I love that we can make different tools that learn from each other, each approaching making p2p usable in different ways.
It's good as long as everything works out of the box, but it's a nightmare when something doesn't work. Or at least that has been my experience. I'm used to always troubleshoot first when I have any issue, but with Tailscale I decided I'm done trying to fight it, next time something doesn't work I'll just open a ticket and make it the ops team problem.
This is true for all systems that hide a lot of complexity. Apple is great until something doesn't work and you get things like "Error: try again later." A car is great until it doesn't start, and there are numerous reasons that can happen.
I know it wasn't a "new" idea, but still, ZT was a paradigm shift for me. I was suddenly on the same LAN with people I cared about. Thank you for making it happen.
I remember running Hamachi and NoIP's DUC (Dynamic Update Client) as a kid in the late 2000s to expose private server addresses for games or for multiplayer through direct network addresses
NoIP was also the recommended "easy" option for configuring RAT (Trojan) host addresses at the time IIRC.
As others have said Hamachi was very popular in some gaming communities. I don't know quite how it fits technologically, but a similar user experience seems to come from playit.gg[1].
My friends and I used Hamachi in the early 2000s to play StarCraft and other games over the internet without involving online services. Worked great. I’ve got a soft spot for it.
Iroh is much better suited for the application layer. You can multiplex multiple QUIC streams over the same connection, each for a specific purpose. All you need is access to QUIC, no virtual network interface.
It’s a bit like gRPC except you control each byte stream and can use one for, say, a voice call while you use another for file transfer and yet another for simple RPC. It’s probably most similar to WebRTC but you have more options than SCTP and RTMP(?).
This is made using iroh, which aims to be a low level framework for distributed software. Involves networking but also various data structures that enable replication and consistency between networked nodes.
Does it include reconnection logic? I presume that's not considered "low level", but it does always annoyingly have to be reimplemented every time you deal with long-lived socket connections in production.
yes, to an extent. It will time out if the connection completely dies for more than the timeout interval, but all connections are designed to survive network changes like a new IP address or network interface (eg: switching from WiFi to ethernet, or cellular)
There's overlap but I can see complementary uses as well. It uses some of the same STUN-family of techniques. I have no plans to stop using Tailscale (or socat) but I think I'll be using this every day now too.
Part of the problem with libp2p is that the canonical implementations are in Go which isn’t really well-suited to use from C++, JS, or Rust. The diversity of implementations in other languages makes for varying levels of quality and features. They really should have just picked one implementation that would be well-suited to use via C FFI and provided ergonomic wrappers for it.
Question: What is the security level behind this? I guess if it is "dumb" _anyone_ can input your identifier for the pipe and connect to it?? Or even listen on it?
I attended Rüdiger's (N0) workshop 2 weeks ago at the web3 summit in Berlin and was left super inspired. The code for building something like this is available here https://github.com/rklaehn/iroh-workshop-web3summit2025 and I highly recommend checking out the slides too :)
Thank you for the praise! It is nice to hear that people enjoy these workshops.
I would love to see what people would build if they had a little bit more time with help from the n0 team. A one hour or even three hour workshop is too short.
Well pipe.pico.sh always uses a proxy server so throughput and latency are worse, but you have your own namespace for the pipes and thus don't have to synchronize random connection strings
I didn't know about this tool, that's pretty useful!
Too bad the broken nature of NAT means this approach will just ignore any firewall rules you have configured and any malicious device or program can leverage it to open inbound connections.
If you are in the mood for a slightly less dumb pipe, I’ve been building a tunnel manager CLI built on Iroh. Supports forwarding ports over TCP, UDP, and UNIX sockets. https://gitlab.com/CGamesPlay/qtm
I wonder how much different it is from Wireguard + netcat. Both run encrypted channels over UDP, but somehow differently. What does QUIC offer that Wireguard does not?
Wireguard doesn't, which is why tailscale took off so much, since it offers basically that at its core (with a bunch of auxiliary features on top).
Show me some wireguard discovery/relay servers if I'm wrong.
Also, QUIC is more language-agnostic. The canonical user-space implementation of wireguard is in Go, which can't really do C FFI bindings, and the abstractions are about dealing with "wireguard devices", not "a single dumb pipe", so wireguard's userspace library also makes it surprisingly difficult to implement this simple thing without also bringing a ton of baggage (like tun devices, gateways, ip address management, etc) along for the ride.
If you already have a robust wireguard setup, then of course you don't need this and can just use socat or whatever.
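For instance, once the tunnel is up, the "dumb pipe" is just socat or nc pointed at the peer's tunnel address (the 10.0.0.x addresses and the port are assumptions):

```sh
# On the peer at 10.0.0.2: receive into a file.
socat -u TCP-LISTEN:9000,reuseaddr OPEN:backup.tar,creat

# On the peer at 10.0.0.1: send the file across the WireGuard interface.
socat -u OPEN:backup.tar TCP:10.0.0.2:9000
```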
They both run over UDP and always encrypt data. Beyond that superficial similarity they are completely different.
QUIC is a transport protocol that provides a stream abstraction (like TCP), with some improvements over TCP (like built-in support for multiplexing streams on the same connection, without head-of-line blocking issues).
Wireguard provides a network interface abstraction that acts as a NIC. You can run TCP on top of a wireguard NIC (or QUIC, for that matter).
Wireguard is a tunneling protocol. Netcat lets you write things over a socket. But netcat in UDP mode doesn't implement any mechanism for guaranteeing that all your packets arrive, so for reliability you end up tunneling TCP over WireGuard's UDP.
QUIC is all UDP, handling the encryption, resending lost packets, and reordering packets if they arrive out of order. The whole point of QUIC is to make it so you can get files transferred quickly.
WireGuard doesn't know anything about the data you're sending, and netcat+TCP is stuck with a single ordered byte stream, where one lost packet holds up everything behind it.
Wireguard is oblivious to the independent streams inside its tunnel. So, while both can encapsulate multiple concurrent streams in one connection, QUIC can do things like mitigate head-of-line blocking and manage encryption at the transport layer. QUIC also uses connection IDs, which help make transitioning across network changes seamless.
If you set up multiple TCP connections over Wireguard, there is no head-of-line blocking either. And Wireguard also transitions across network changes.
In fact, it's one of the main reasons I use Wireguard. I can transition between mobile network and wifi without any of the applications noticing.
Does anyone know if this tech (or Iroh) is suitable for real-time networking for games? Basically, once connection is established, what's the overhead on top of UDP in terms of latency and bandwidth?
Edit: after digging a little, Iroh uses QUIC which looks like a reliable, ordered protocol as opposed to the unreliable, unordered nature of UDP which is what many games need.
Now what I'd love to figure out is if there's a way to use their relay hopping and connection management but send/receive data through a dumb UDP pipe.
> QUIC which looks like a reliable, ordered protocol as opposed to the unreliable, unordered nature of UDP which is what many games need.
This isn't right, as a sibling comment mentions. QUIC is a UDP-based protocol that handles stream multiplexing and encryption, but you can send individual, unordered, unreliable datagrams over the QUIC connection, which effectively boils down to UDP with a bit of overhead for the QUIC header. The relevant method in Iroh is send_datagram: https://docs.rs/iroh-net/latest/iroh_net/endpoint/struct.Con...
And especially don't run the script while it's downloading. The remote server can detect the timing difference (let's say the script has "sleep 30" and the buffer fills) and send a different response (really easy if using chunked encoding or HTTP2 data frames).
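The usual mitigation is to separate download from execution, so the server can't key off that timing (the URL is a placeholder):

```sh
# Fetch to disk first; the server now has no way to swap bytes mid-run.
curl -fsSL https://example.com/install.sh -o install.sh
less install.sh   # read it before trusting it
sh install.sh
```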
We (n0) are running one set of relays. They are rate limited, so they basically only help with the hole punching process.
Projects or companies that use iroh can either run their own relays or use our service https://n0des.iroh.computer/ , which among many other things allows spinning up a set of dedicated relays.
Very handy. We've developed an industrialized variant of this in RelayKit designed for fleets of fielded devices at scale with Anycast, mTLS, multiplexing of services through a single tunnel, Bring Your Own PKI and some other fleet management features that together become a somewhat smarter pipe: https://farlight.io
The surface being http is super nice to have. It's a streams-over-http general utility, quic powered.
I'm struggling to remember what, but there's a simple http service called something like patchbay that follows a store-and-forward pattern. This idea of very simple, very generic http-powered services has a high appeal to me.
Looking forward to a future version that can do WebTransport
I remember doing something like this with Skype many years ago (at least 15, I guess).
The old Skype, the one that was a real p2p app before it got bought by Microsoft, was very good at slicing through firewalls and NATs, and it offered a plugin API, so it was easy to implement a TCP tunnel with it.
Thanks for the response. This statement confuses me a bit. What is a relay? Does traffic go through it at all, or is it for connection negotiation, or some of both?
The sibling comment with links to docs is more accurate, but to summarize, it's some of both:
* all connections are always e2ee (even when traffic flows through a relay)
* relays are both for connection negotiation, and as a fallback when a direct connection isn't possible
* initial packet is always sent through the relay to keep a fast time-to-first-byte, while a direct connection is negotiated in parallel. typical connections send a few hundred bytes over the relay & the rest of the connection lifetime is direct
Dumbpipe is using our set of relays. It is meant as a standalone tool as well as a showcase for what you can do with iroh.
If you use iroh as a library, you can specify your own relays.
It is important to mention that relays are interoperable, so you don't have isolated bubbles of nodes using certain relay networks. I can have the n0 relays specified and still talk to another node that is using a different set of relays.
It'd be nice if the Getting Started link on the n0des page went here instead of immediately asking me to sign up before I know what the hell I'm signing up for
The marketing is brilliant. The name of the company (number0) is mad hackerish man, right up my alley in the words of Charlie Murphy. I'm going to try this in my GCE on bare metal "unvirtualizer" today (number0 is what a Linux kernel would call the first tuntap with number as its prefix if you had such a patch).
10 years ago I made an "encrypted voice channel" by chaining the following 3 commands together (I dont remember exactly how it looked, this is just a sketch):
I don't remember exactly which audio device I used back then. It worked okay-ish, but there was definitely lag from somewhere. Just kind of neat that you can build something so useful without a bloated app, just chaining a few commands together.
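Something in that spirit is still easy to sketch today; this is an illustrative guess, not the original pipeline (the ALSA device defaults, cipher choice, and port are all assumptions, and a passphrase on the command line is not great hygiene):

```sh
# Sender: capture mono 16 kHz raw audio, encrypt with a shared passphrase, ship it over TCP.
arecord -t raw -f S16_LE -r 16000 -c 1 \
  | openssl enc -aes-256-ctr -pbkdf2 -pass pass:sharedsecret \
  | nc <receiver_ip> 31338

# Receiver: accept the stream, decrypt, play it back.
nc -l -p 31338 \
  | openssl enc -d -aes-256-ctr -pbkdf2 -pass pass:sharedsecret \
  | aplay -t raw -f S16_LE -r 16000 -c 1
```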
Kinda related to this, but is there something that runs a daemon on your local machine, where if a "file request document" is uploaded to Mega or Google Drive or something similar, the (polling) daemon recognizes the request and pushes the document/file to the file store service?
You could describe this same project as "a smart pipe that punches through NATs & stays connected (...)" and it wouldn't be any more surprising or inaccurate than the current description. So maybe it is not that descriptive.
WireGuard is more similar.
Neither did BrandonM: https://news.ycombinator.com/item?id=42394739
I want less magic, not more impenetrable ip table rulesets (in linux at least).
It doesn't even assume ssh.
See https://christian.kellner.me/2018/05/24/thunderbolt-networki... or https://superuser.com/a/1784608
https://en.wikipedia.org/wiki/Medium-dependent_interface#Aut...
I guess now you can find the solution that you need by telling the requirements to LLMs who have now indexed a lot of the tradeoffs
The use-case of a wired connection between two PCs was already solved years before USB --- with Ethernet.
Pretty sure one can set up an ad hoc wifi network for this.
I suspect it's deliberate, especially when said company also sells cloud services.
This could be done on Amiga too, using parnet https://crossconnect.tripod.com/PARNET.HTML
I recall it being easier to set up than a dialup modem (since the latter also required installing a TCP/IP stack)
[0] https://en.wikipedia.org/wiki/Laplink
Up to 2MB/s effective throughput, better than 10M Ethernet. Likely it was slower for you due to other limitations.
It looks like MS also had one, but only on Windows CE for some reason https://www.microsoft.com/en-us/download/details.aspx?id=933...
usb probably works too if you google a bit
Or using Bluetooth? Or using local WiFi (direct or not).
If both machines have an Ethernet port.
> Or using Bluetooth?
[1]: https://gist.github.com/SMUsamaShah/fd6e275e44009b72f64d0570...
https://github.com/localsend/localsend
https://github.com/9001/copyparty
See also https://www.iroh.computer/docs/concepts/discovery
We have a tool https://ticket.iroh.computer/ that allows you to see exactly what's in a ticket.
Tailscale makes it even more convenient and adds some goodies on top. I'm a happy (free tier) user.
[0] I also managed an OpenVPN setup with a few hundred nodes a few decades back. Boy do we have it easy now...
[1] https://playit.gg/
Dumbpipe is meant to be a useful standalone tool, but also a very simple showcase for what you can do with iroh.
The real feature of Tailscale is being able to connect to devices without worrying about where they are.
Once connected, the connection is encrypted using TLS with the raw public keys in TLS extension ( https://datatracker.ietf.org/doc/html/rfc7250 ).
https://github.com/samyk/pwnat
It has more rough edges and doesn't handle all cases, but it also avoids the need for any kind of intermediary.
https://github.com/samyk/pwnat/issues/18
My tool of choice is https://github.com/hyprspace/hyprspace
QUIC can do both reliable & unreliable streams, as can iroh
[maintainer of iroh here]
It's not. Use tailscale.
we can definitely add a config argument to skip the hardcoded relays & provide custom ones!
https://www.iroh.computer/docs/concepts/relay
https://crates.io/crates/iroh
https://github.com/n0-computer/iroh/blob/main/iroh/docs/loca...
https://github.com/n0-computer/iroh/blob/main/iroh-relay/src...
Receiver (listening to port 31337):
`nc -l -p 31337`
Sender (connecting to receiver IP):
`nc <receiver_ip> 31337`
Want to send a message to the receiver:
`echo "Hello from Kocial" | nc <receiver_ip> 31337`
== if you want to send a file ==
Receiver:
`nc -l -p 31337 > hackernews.pdf`
Sender:
`nc <receiver_ip> 31337 < hackernews.pdf`
These are my kind of people!
Or whatever ftp thing they mentioned on the Dropbox show HN ;)
That's a huge assumption I wouldn't make after reading "dumb".
And from the article:
> Easy, direct connections that punch through NATs & stay connected as network conditions change.
This sounds more like a pipe that is trying to be smart. According to your principle, not something to build a secure system with.