This is tangentially related, but I feel it is very wrong that so many smaller governments (e.g. smaller US cities) host "public information" on private servers (e.g. links to PDFs from a Google Drive)... or, even worse, inside some walled garden (e.g. Facebook).
My own personal DNS does not resolve to any Google/Facebook products, reducing profiling; but by denying their ad-revenue, I also deny myself access to information which IMHO should be truly available to the public (without using a private company's infrastructure).
I absolutely understand that many people will just say "don't block them, then." My argument is that governments should not host public items on private servers.
I had body camera video sent to me over a "private" YouTube link. I would have welcomed GDrive over that. On the plus side, I took advantage of the automatic transcript generation to review the obnoxious things the officer said without having to watch it all.
I think we'll see some stratification in the self hosting community over the next few years. The current community, centered around /r/selfhosted and /r/homelab, is all about articles like this. The complexity and learning are sources of fun and an end in themselves. That's awesome.
But I think there's a large untapped market for people who would love the benefits of self hosting, without needing to learn much if any of it.
I think of it as similar to kit car builders vs. someone who just wants to buy a car to use. Right now, self hosting is dominated by kit cars.
If self hosting is ever going to be as turnkey as driving a car, I think we're going to need a new term. I've been leaning towards "indie hosting" personally.
I've wanted to do this for years, but trying to secure a server is the stuff of nightmares for me.
Are there resources out there about what I need to know about making sure my stuff is secure enough and I'm not just leaving my stuff wide open for people to hack it? I've always been interested in hosting my own email server, but the security parts have kept me from doing it.
Any resources you can point me to would be much appreciated.
A Linux server (e.g. stock Debian) on a well-reputed VPS is pretty secure by default, in my experience. Use software packages from the Linux distribution whenever possible (certainly for email software) and configure unattended security updates.
Note that you generally can’t host email from a residential IP, so you’ll probably want to use a VPS. Making services on your home network publicly accessible (i.e. not just via VPN) obviously comes with more risks; personally I wouldn’t do that.
Absolutely. I got my wife hooked on self hosting too.
I am currently writing a new web server to solve for this space that is ridiculously simple to configure for dummies like me, has proxy and TLS built in, serves http over web sockets, and can scale to support any number of servers each supporting any number of domains provided port availability. The goal is maximum socket concurrency.
I am doing this just for the love of it, to meet the self-hosting demands of my own household. Apache felt archaic to configure, and my home-grown solution is already doing things Apache struggles with. I tried nginx, but the proxy configuration wasn't simple enough for me. I just want to specify ports and expect magic to happen. The best self hosted solutions ship as Docker Compose files that anybody can install within 2 minutes.
The term is "managed vps" and/or some variation of "marketplace image", I think it's linode that has a particularly... Vibrant (not in an all positive way) selection. AWS' is pretty good, but not as diverse. I assume due to the increased technical aptitude of the average customer and the learning curve.
One thing I strongly agree with you on here is being open to the cloud. Self hosting strongly favors running on your own hardware, but indie hosting focuses more on the tangible benefits, i.e. data ownership, mobility which breeds competition, etc.
That said, I think the VPS marketplace is still too complicated. What about updates, backups, TLS certs, domains, etc?
> Self hosting strongly favors running on your own hardware
In comparison, tenant (storage, colocation, cloud, VPS) hosting contracts often encompass Terms of Service, metered quotas/billing, acceptable use definitions, and regulatory compliance.
> data ownership, mobility which breeds competition
Historically, the buyers of commodity "web hosting" and IaaS have benefited from many competing vendors. Turnkey vertical SaaS often have price premiums and vendor lock-in. If "indie hosting" gains traction with easy to deploy and manage software, there may be upward pressure on pricing and downward pressure on mobility.
I agree with Christian about pretty much everything here. We self-host for multiple reasons, and we don't necessarily need others to understand our rationale, although that'd be nice.
For me, one thing that stands out as driving the desire to self-host everything is that large corporations, given enough time, invariably let us down. Christian's experience with Contabo illustrates the one game that I will do any amount of work to avoid: people who pretend to know what they're talking about but who really only waste our time in hopes of putting off dealing with an issue until someone else actually fixes it.
The one place where I can't avoid this truly stupid game is with getting and maintaining Internet for my clients. You're not paying for "enterprise", with "enterprise" pricing of $750 a month for 200 Mbps? Then tough cookies - you'll get the same junk we force on our residential customers, and you'll never, ever be able to talk to a human who has any clue what you're talking about, but you'll be able to talk to plenty who'll pretend to know and will waste hours of your time.
The more time they waste of mine, the more energy I'll expend looking for ways to subvert or replace them, until I eventually rely on corporations for the absolute minimum possible.
> you'll get the same junk we force on our residential customers
In locations with few competing providers for wired broadband, 5G "home internet" has brought some competition to entrenched telcos. While mobile data latency is not competitive with cable/fiber, it can serve as a backup for wired connections without an SLA.
I self-host a lot of things myself. There is one scary downside I've learned in a painful way.
A friend and I figured all this out together since we met in college in the 1980s. He hosted his stuff and I hosted mine. For example, starting in 1994, we had our own domain names and hosted our own email. Sometimes we used each other for backup (e.g., when we used to host our own DNS for our domains at home as well as for SMTP relays). We also hosted for family and some friends at the same time.
Four years ago he was diagnosed with cancer and a year later we lost him. It was hard enough to lose one of the closest friends I ever had. In his last weeks, he asked if I could figure out how to support his family and friends in migrating off the servers in his home rack and onto providers that made more sense for his family's level of technical understanding. This was not simple because I had moved 150 miles away, but of course I said yes.
Years later, that migration is close to complete, but it has been far more difficult than any of us imagined. Not because of anything technical, but because every step of it is a reminder of the loss of a dear friend. And that takes me out of the rational mindset I need to be in to migrate things smoothly and safely.
But, he did have me as a succession plan. With him gone, I don't have someone who thinks enough like me to be the same for my extended family. I'm used to thinking about things like succession plans at work, but it's an entirely new level to do it at home.
So, I still host a lot, but the requirements are much more thoroughly thought through. For example, we use Paperless-ngx to manage our documents. Now there's a cron job that rsync's the collection of PDFs to my wife's laptop every hour so that she will have our important papers if something happens to me.
Thinking carefully enough to come up with reliable backups like this makes things noticeably harder because not all solutions are as obvious and simple. And it's not something that ever occurred to us in our 20s and 30s, but our families were one tragedy away from not knowing how to access things that are important soon after we were gone (as soon as the server had trouble). There is more responsibility to this than we previously realized.
I have nothing to say about the technical stuff, just that I’m sorry for your loss, and that from my perspective you were a true friend by taking that task on after they were gone.
I've given this some thought too and am doing some documenting for friends. Hard to know the answer.
I have Paperless, photos, Seafile, and a few other things copying to a USB drive nightly that my spouse may remember to grab unencrypted. I'm tempted to throw a 2TB SSD in her laptop to just mirror it too. But accessing my NAS, let alone setting it up somewhere else after a move or with new network equipment, email hosting for our domain, and domain registration are all going to be voodoo to my spouse without some guidance. I'm tempted to switch to Bitwarden proper instead of self hosted too.
Data recovery instructions can be documented on paper in the same physical location used for financial accounts, e.g. fireproof safe, trusted off-site records, estate attorney. These recovery instructions are also required for data hosted by third parties.
That's why you really need to rethink your 'if you are hearing this I musta croaked' procedure.
Thing is, 99% of the files on your NAS and whatever would never be accessed after your death. And anything of importance should be accessible even if you are alive but incapacitated, or if your NAS is dead.
So the best thing to do is to make a list of Very Important Documents and keep it in printed form in two locations, e.g. your house for immediate access and the home of someone's parents who are close enough. And update it every year, with a calendar reminder in both of your calendars. You can throw a flash drive[0] in there too, with files which can't be printed but which you think have sentimental value.
[0] Personally I don't believe SSDs are suited to long-term storage, but I've seen flash drives survive at least 5 years.
Continuity and Recovery are required by all infrastructure plans, since the number of 3rd-party suppliers is never zero, even with "self" hosted infrastructure.
A couple of points:
- proxmox hits an SSD pretty hard, continuously. I think with zfs, it probably hits even harder. A lot of it is keeping state for a cluster every second, even if you have only one machine.
- I bought mikrotik routers for openwrt. I tried out routeros, but it seemed to phone home. So I got openwrt going and didn't look back. I am switching to zyxel since you can have an openwrt switch with up to 48 ports.
- I used to run small things on a pi, but after getting proficient at proxmox, they've been moved to a vm or container.
- the most wonderful milestone in self-hosting was when I got vlans set up. Having vlans that stayed 100% in the house was huge.
- next good milestone was setting up privoxy. Basically a proxy with a whitelist. All the internal vlan machines could update, but no nonsense.
- it is also nice to browse the web with a browser pointing at privoxy. You'd be surprised at the connections your browser will make. Firefox internally phones home all. the. time.
Home labs are great. They are a good learning tool to understand systems in _isolation_.
They're terrible for understanding emergent properties of production systems and how to defend yourself against active and passive attacks. Critically you also need to know how to unwind an attack after you have been bitten by one. These are the most important parts of "self hosting."
Otherwise, you might be getting in the habit of building big Rube Goldberg machines that are never going to be possible to deploy in any real production scenario. Make it real once in a while.
Has anyone tried those Lithium Ion UPSes? ~5 years ago we removed the UPS from our dev/stg stack because in the previous 5 years we had more outages caused by the UPS than issues with the utility power. A better battery technology sounds compelling.
For production, of course, it's all dual feed, generator, UPS with 10 year batteries, N+1.
Just today I had to sign up for a service and went to the Bitwarden app on my phone to generate a password (linked to a self-hosted Vaultwarden server), but the new password entry couldn’t be saved into the app because the server was unreachable.
Then I had to go restart my VM and reconnect my VPN. I am now thinking about switching to Bitwarden Premium and opting out of self hosting for password managers.
Author here. Bitwarden (as much as I appreciate them!) isn't something I self host, since it’s too critical an application for me (similar to email). I pay for 1Password.
I went back and read some previous blog posts. He was part of the great 2023 layoff. I'm curious where such a talented guy landed. Did he find a position?
One^W two things that make self-hosting a bit more attractive:
a) besides some bootstrapping nuances, you are not forced to have a working phone number to be able to use some resource. It's usually not a problem until... well, until it becomes a problem. Just like for me yesterday, when no matter what I tried, I couldn't register a new Google account. There is just no other option than SMS confirmation.
b) there are far fewer things that change 'for your own convenience', like the quiet removal of any option to pre-pay for Fastmail.
PS: oh, and Dynadot (which I was happy using for more than 10 years) decided (for my convenience, of course) to change the security mechanism they had used for years. Of course I don't remember the answer to the security question, and now I'm forced to never, ever migrate away from them, because I literally can't.
For years I just uploaded a lump sum and it was spent as I used the service, year by year. This way I didn't need to worry about being without email when the paid period was over, and I didn't need to overpay much in case I needed to cancel early.
Now I need to be sure I'll be around with a working credit card when the purchased period is over... and what if I won't be? Do I need to jump through cancel-and-reorder hoops every couple of years? More importantly, am I sure that a couple of years later these hoops will still work, or that through some very unlucky coincidence they won't wipe my decade-plus of email? Sure, the Fastmail folks would be so sorry, but... that wouldn't help me.
I love this but I'd like to know more about the hardware.
As an aside, I find it amusing that commenters here say that they "self host" in the cloud. It ain't self hosting unless the server is under the same roof as the family!
This is more or less still accurate hardware-wise, although it predates me using proxmox: https://chollinger.com/blog/2019/04/building-a-home-server/
This is my SAS HBA setup for the zfs drives: https://chollinger.com/blog/2023/10/moving-a-proxmox-host-wi...
The last node (besides the Pi 5 I mention) is a 2019 System76 Gazelle: https://chollinger.com/blog/2023/04/migrating-a-home-server-...
All re-used / commodity hardware. Drives are mostly WD Reds with mixed-capacity zfs arrays (mirrored), as well as Costco-sourced external drives for the laptop's storage (Seagate? I'd have to check) connected via USB (not ideal, but beats $1k+ for a rack-mounted server w/ SATA or SAS drives).
I used to host:
1. My blog
2. My friends' blogs
3. BIND for all this
4. A mail-server on this
5. A MySQL database on this
All this was on a Hetzner server that was nominally set up to be correct on restart. But I was always scared of that, because I had built this up from when I was a teenager onwards, didn't trust my younger self, and couldn't find the time to audit it. 10 years later, with 10 years of uptime and no consequences of data loss or theft (it might have occurred, just that nothing affected me or my friends), Hetzner warned me they were going to decommission the underlying instance and no longer supported that VPS.
I backed everything up, copied it, and for the last 8 years have faithfully moved from home to home carefully transporting these hard-drives and doing nothing with them.
When I finally set up everything again, I did it much more manageably this time, with backups to Cloudflare R2 for the database and resources, and Dockerfiles for the code. I restarted the machine and brought everything up.
And now I use GSuite instead of my own mail. I use Cloudflare instead of my own DNS. There's a lot I outsource despite "self-hosting". It's just far more convenient.
So the answer is that I had no BCDR on the old thing. Maybe I'll train my kids and have them be my BCDR for the new thing.
The nice thing about OpenBSD is that HTTP, SMTP, DNS, and many other common services are bundled into and developed with the OS. Every 6 months a new release comes out, and each release comes with an upgrade FAQ with step-by-step instructions for any service that might require configuration changes. Sometimes major changes in popular ports/packages, like PHP or PostgreSQL, are mentioned as well. See, for example, https://www.openbsd.org/faq/upgrade74.html. Note that it's a single page, and for a release with quite a few changes, fairly easy to read and understand--major changes within and across subsystems are planned to minimize the total amount of changes per release.
This upgrade FAQ is priceless, and has accompanied each release for the past 20 years (since 3.5 in 2004). OpenBSD is deliberately developed for self-hosting all the classic network services.
The trick to minimizing complexity is to keep your system as close to stock as possible, and to keep up with the 6-month cadence. Over the long haul this is easier with OpenBSD--most of the time your self-hosting headaches are some OpenBSD developer's self-hosting headaches.
A small suggestion about resources: try using NixOS/Guix System instead of containers to deploy home services; you'll discover that with a fraction of the resources you get much more, stability, documentation, and easy replication included.
Containers today, like full-stack virtualization on x86 before them, are pushed as marketing because proprietary software vendors and cloud providers need them; others do not need them at all, and devs who work for themselves and ordinary users should learn that. If you sell VPSes et al., you obviously need them; if you build your own infra from bare metal, adding them just wastes resources and adds dependencies instead of simplifying life.
We use nix at work. I'm not a huge fan - I find it too opinionated. Appreciate it for what it is, though, and understand its fans.
Since at $work, we run K8s + containers in some shape or form (as well as in... basically all previous jobs), using the tech that I use in the "real world" at home is somewhat in line with my reasoning about why the time investment for self hosting is worth it as a learning exercise.
Containers allow running software that does not have a nix package available and that one can't be bothered to write. My lab is fully on NixOS, but a couple of services are happily chugging along as containers in podman.
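For illustration, a minimal sketch of what that can look like on NixOS using the oci-containers module (the image name, port, and volume path below are hypothetical placeholders, not from the parent comment):

  {
    # Declare one podman-managed container alongside native NixOS services.
    virtualisation.oci-containers = {
      backend = "podman";
      containers.someapp = {
        image = "ghcr.io/example/someapp:latest"; # placeholder image
        ports = [ "127.0.0.1:8080:8080" ];        # placeholder port mapping
        volumes = [ "/var/lib/someapp:/data" ];   # placeholder state directory
      };
    };
  }

NixOS turns a declaration like this into a systemd unit that pulls and runs the container, so it comes back after reboots like any other declared service.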
I agree that removing the container would be better on resources.
However, most self-hosted software is already "pre-packaged" in Docker containers. It's much easier to grab that "off-the-shelf" than have to build out something custom.
Let's say you want Jellyfin? You enable it under services and you get it. You want a more complex thing, let's say Paperless? Chromium with extensions etc?

  chromium = {
    enable = true;
    # see Chrome Web Store ext. URL
    extensions = [
      "cjpalhdlnbpafiamejdnhcphjbkeiagm" # ublock origin
      "pkehgijcmpdhfbdbbnkijodmdjhbjlgp" # privacy badger
      "edibdbjcniadpccecjdfdjjppcpchdlm" # I still don't care about cookies
      "ekhagklcjbdpajgpjgmbionohlpdbjgc" # Zotero Connector
      # ...
    ]; # extensions
    # see https://chromeenterprise.google/policies/
    extraOpts = {
      "BrowserSignin" = 0;
      "SyncDisabled" = true;
      "AllowSystemNotifications" = true;
      "ExtensionManifestV2Availability" = 3; # until 06/25
      "AutoplayAllowed" = false;
      "BackgroundModeEnabled" = false;
      "HideWebStorePromo" = false;
      "ClickToCallEnabled" = false;
      "BookmarkBarEnabled" = true;
      "SafeSitesFilterBehavior" = 0;
      "SpellcheckEnabled" = true;
      "SpellcheckLanguage" = [
        "it"
        "fr"
        "en-US"
      ];
    }; # extraOpts
  }; # chromium
Etc etc etc. You configure the entire deployment and get it generated. A custom live image? With auto-partitioning and auto-install? Same thing. A set of similar hosts on a network (NixOps/Disnix), and so on. The configuration language does it all: fetching sources and building if a pre-built binary is not there, setting up a DB, setting up NGINX plus Let's Encrypt SSL certs. There are per-derivation (package) options you can set, some you MUST set, defaults, etc. It's MUCH easier than anything else; the only issue is how many ready-made derivations exist, and in packaging terms Guix is very well placed and NixOS has more than Arch. Something will always be missing or incomplete until devs learn the system themselves and start using Nix/Guix to develop as well, so dependencies are really tested in dedicated environments and so on, and users always get a clean system and can switch to and boot a previous version, and so on.
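To make that concrete, here is a rough, untested sketch of the kind of thing the module system handles: enable Jellyfin and put nginx with a Let's Encrypt certificate in front of it (the hostname and email are placeholders):

  {
    services.jellyfin.enable = true;

    security.acme = {
      acceptTerms = true;
      defaults.email = "admin@example.org"; # placeholder contact for Let's Encrypt
    };

    services.nginx = {
      enable = true;
      virtualHosts."media.example.org" = {                  # placeholder hostname
        enableACME = true;                                   # request and auto-renew the cert
        forceSSL = true;
        locations."/".proxyPass = "http://127.0.0.1:8096";  # Jellyfin's default port
      };
    };
  }

One rebuild and the web server, certificate renewal, and the service itself are all part of the same declarative config.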
My thinking is not to get emotionally attached to the stories you submitted and the comments you wrote. I treat them as “fire and forget.”
I like stories, and I have my niche of stories I bring to Hacker News—people like them, ignore them, hate them—the list goes on. If not mine, someone else's got the attention, and it is loved—it is the same story. Have fun and continue to be curious.
Please don't chase or promote conspiracy theories here. The place manages karma better than any other site, period. It's a very hard thing to do well. The fact that it's hard to do well is not evidence of conspiracies here; it's evidence that HN is in fact exceptionally well run and managed.
No theory chased or promoted, only numeric data for the historical record.
Have you seen any HN story go missing entirely from the list of stories? This is most likely a bug. If there is more than one instance of missing stories, it should be easier to find the root cause. Currently this story is:
#276 on #new (10th page)
not present in first 500 stories (17 pages)
> HN is in fact exceptionally well run an managed.
Absolutely. Which is why invisible-except-to-new is such an anomaly.
> governments should not host public items on private servers
Some works of the US federal government are not subject to copyright and can be mirrored freely.
What licenses do city governments use to release public information?
> a quiet removal of any option to pre-pay for Fastmail
Eh?
I just purchased a new 12 month Fastmail plan for my business with no issue a few weeks ago.
Up to 36 months is still listed on their pricing page...
Your blog post inspired a discussion on "Coding on iPad using self-hosted VSCode, Caddy, and code-server" (80+ comments), https://news.ycombinator.com/item?id=41448412
I need to run Uptime Kuma. Here is the Docker Compose: https://github.com/louislam/uptime-kuma/blob/1.23.X/docker/d... What is the equivalent in Nix?
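One hedged answer, assuming a reasonably recent nixpkgs that ships a native uptime-kuma module (exact option names vary by release, so treat this as a sketch rather than a drop-in replacement for that compose file):

  {
    # Runs Uptime Kuma as a systemd service with state managed by NixOS;
    # reverse proxying and backups are configured separately.
    services.uptime-kuma.enable = true;
  }

Failing that, the oci-containers approach sketched earlier in the thread can run the same louislam/uptime-kuma image declaratively.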
> invisible-except-to-new is such an anomaly
If the behavior is expected, user understanding can be revised.
As text analysis automation improves, it will be easier to differentiate between recurring patterns and unique anomalies.