Per the ongoing Freedesktop discussion, AWS offered to host but Freedesktop is leaning towards self-hosting on Hetzner so they can control their own destiny and sponsors can contribute cash towards the bill instead of donating hardware.
I saw their original announcement, and they said that their infra (3 AMD EPYC servers from a few generations ago, 3 Intel servers from 2 generations ago, and 2 80-core ARM servers) would cost $24k/month at Equinix prices. I checked Hetzner's equivalent offerings; it would be ~$1.5k/month for newer AMD servers. It would probably be even less with older servers from their auction, and less still if they just moved their CI runners to virtual servers on Hetzner's cloud.
Seriously, Hetzner provides so much more value per dollar, sometimes I fear that one day they will find out and just jack up the prices to match the rest.
The VPS business is very different from the "cloud" space.
Yes, yes, there are cloud features now offered by VPS providers, but they are add-ons to chase demand; they aren't positioning their offering to appeal to users wanting a comprehensive suite of services on the platform: managed databases, SMTP as a service, deployment as a service, etc. For that reason, market rates are different.
For Hetzner to bump their prices significantly they would need to build a cloud platform a la AWS/GCP/Azure. Won't happen by Xmas even if they went all in. They are good at what they do and make money, so they stick to that.
Of course they are not in the hyperscaler space, but they are far from being "just" a VPS provider.
Their cloud has always had on-demand, per-hour billing of servers and block storage volumes, all very easy to manage and provision via their API. Recently they got into the object storage space. They even provide a switch to connect their cloud servers with a dedicated one, so you can have, e.g., a beefy GPU server running an LLM and your web service running on cheap cloud instances.
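Provisioning really is a couple of API calls. A minimal sketch, assuming a valid API token (the server type, image, and location names are examples and may need adjusting):

    # Create a Hetzner Cloud server via their public REST API.
    import requests

    API_TOKEN = "YOUR_HCLOUD_TOKEN"  # assumption: a read/write token from the console

    resp = requests.post(
        "https://api.hetzner.cloud/v1/servers",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={
            "name": "ci-runner-1",
            "server_type": "cx22",   # small shared-vCPU instance
            "image": "ubuntu-24.04",
            "location": "fsn1",      # Falkenstein
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["server"]["public_net"]["ipv4"]["ip"])

Billing starts when the server exists and stops when you delete it, which is what makes the scale-out patterns discussed below workable.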
I believe that the only thing that really holds Hetzner at their price levels is that the price-sensitive people can always threaten to move to OVH.
OVH or to hundreds of other less known infra providers.
The barrier to entry for these providers is simply low, so margins have got to be low.
The thing that holds Hetzner and the like where they are is that you can't purchase a package and follow setup instructions to create cross-discipline engineering departments. It is no wonder Amazon, Google and Microsoft built comprehensive clouds: they were already in the engineering business.
It doesn't imply Hetzner aren't doing a stunning job at what they do or that running an infrastructure farm is a walk in the park.
Hetzner also has the interesting choice of consumer-grade machines, which probably work fine in cases where you are constrained by CPU power rather than memory capacity/bandwidth. You'll also lose a bit of redundancy and reliability, but that might not be as big of a deal since the machines are managed by them and you can probably get things replaced quickly. For example, depending on the workload, the CCX43s might be replaceable by the AX52.
Meanwhile, for CI runners you could probably split the big bare metal servers into smaller individual machines and run fewer jobs on each. Depending on the CI load profile it might make even more sense to scale out to the cloud during demand peaks, as opposed to having a bunch of mostly idle machines; a rough sketch of that loop follows.
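A hedged sketch of that scale-out loop, with the CI query and the provisioning call left as hypothetical placeholders (every CI system exposes some queued-jobs count; the provisioning side could be the API call sketched earlier):

    # Keep roughly one cloud runner per two queued CI jobs, up to a cap.
    import math
    import time

    MAX_RUNNERS = 10
    JOBS_PER_RUNNER = 2

    def pending_job_count() -> int:
        # hypothetical: ask GitLab/Buildbot/etc. how many jobs are queued
        return 0

    def set_runner_count(n: int) -> None:
        # hypothetical: create or delete cloud servers until n runners exist
        print(f"scaling to {n} runners")

    while True:
        wanted = min(MAX_RUNNERS, math.ceil(pending_job_count() / JOBS_PER_RUNNER))
        set_runner_count(wanted)
        time.sleep(60)

The trade-off is spin-up latency on the first jobs of a burst versus paying for idle bare metal the rest of the time.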
Hetzner has great prices, but it doesn't play in the same league as AWS. It's cheap and good enough for some applications, but I wouldn't call Hetzner a professional service.
The USA being "Free" has mostly been a myth. It's "Free" if you're a member of the owner class; otherwise your freedom is subject to the whims of politicians and the wealthy. Aristocracy, really.
Yes agreed, although I think it's worth pointing out that the same is true of virtually every other country as well. Historically at least the US (generally speaking, plenty of people are exceptions throughout the entire history) aspired to freedom and equality, despite falling short.
True, but the spirit of the law died a long time ago, and the vision originally created by the founders has been distorted and "mythified" in favor of the desires and vices of monied interests.
Everybody I know is happy with what Hetzner provides, even at production level. OTOH, "arguably better" DigitalOcean sent me a "Your physical host died, so we restarted it. If it persists, we'll migrate" e-mail, which just shows the reality of the hardware.
On your question: while I do not have services on Hetzner yet, I manage a lot of servers, so I know the dynamics of a datacenter and what it takes to keep one up.
You're linking this everywhere, but it has absolutely zero relevance.
Your beef is not with Hetzner, but with whoever decided to run the service. Unless the customer violates local legislation or the hosting provider's ToS, the appropriate action is to leave the service running, be it AWS, GCP or Hetzner.
I would quite frankly have been very disappointed with them if they had done anything in response to your request.
There is no beef, just evidence that they do not run their service as professionally as other, more established providers, which was the original argument and premise of the discussion.
Hetzner is a German company and subject to German law. The website they were hosting did not have the (in Germany) mandatory legal notice (Impressum) or any contact details. This shifts the responsibility to the provider (Providerhaftung). Ignoring legal requirements is hardly professional.
Also, I would like to note that I neither requested that they give me their customer's details nor that they shut down the site. All I wanted was for them to work with their customer to have the offending image removed.
Also I am convinced the porn image was not malice but an accident. The scraper replaced all profile images with ones they probably scraped from a forum. I was just unlucky to get a very indecent one.
Had Hetzner collaborated I'm pretty sure this could have been resolved in no time.
Your expectations of what you wanted from Hetzner have nothing to do with their professionalism. They most certainly acted with the utmost professionalism given the situation you describe, and with the behavior to be expected from any hosting provider.
I get your frustration in the situation, but even if it would have been helpful to you, it would be extremely unprofessional for a hosting provider to intervene in the operations of its customers unless the customer violates the Terms of Service (an agreement you have no part in), a court order is made, or law enforcement makes a legally supported demand. Random legal notices sent by mail from strangers are not on this list, but could lead to Hetzner evaluating whether a ToS violation is in effect that must be resolved.
If the customer is a business, it could be that they have breached the Impressum laws. It would have helped you if they had an Impressum, but until legal action is taken it remains undecided whether there is a violation, and if there was, the consequence would be that the customer is liable for a written warning and in some cases a fine, as decided by the relevant authorities or a court of law. Nothing that would involve you as a third-party individual.
I disagree. For one thing I think you misinterpret the legal situation. This is Germany, the provider is on the hook. Secondly, you do not have insight into the communication and I won't share it but I assure you their conduct was unprofessional.
Single machines die (and can't be started again for a few minutes to hours) every few months, but that's acceptable for me, and similar things also happen at AWS.
I do. I just rent 2 computers from Hetzner. One is the main, the other is a failover. If the main dies, the failover kicks in while the main is restored on another machine. Still way, way cheaper than AWS, which I would not touch with a wooden pole unless required by a client.
They were hosting a StackOverflow copy, which is OK because of Creative Commons. All my answers were still under my name, as they should be.
The only difference to the original was my profile picture, which was an explicit porn image.
For several weeks everyone searching for my name or my StackOverflow answers saw that. I'm glad that I was not looking for a job at the time.
I exhausted all possibilities on all channels to rectify this situation short of using a lawyer.
I put a lot of effort into getting things in order and seriously considered going the legal route but ultimately decided against it mainly because courts are hit and miss here when it comes to reputation damage of regular individuals as opposed to companies or celebrities.
EDIT: I should have answered your question of how this is Hetzner's fault more directly. The website they were hosting did not have the (in Germany) mandatory legal notice (Impressum) or any contact details. This shifts the responsibility to the provider (Providerhaftung). Also, I would like to note that I neither requested that they give me their customer's details nor that they shut down the site. All I wanted was for them to work with their customer to have the offending image removed.
Also I am convinced the porn image was not malice but an accident. The scraper replaced all profile images with ones they probably scraped from a forum. I was just unlucky to get a very indecent one. Had Hetzner collaborated I'm pretty sure this could have been resolved in no time.
Hi there, would you mind giving me the abuse ID for this case? (When someone creates an abuse report with us, our confirmation email includes an abuse ID in the subject line.) If you still have that, and you would like me to ask a colleague about this case, please let me know.
--Katie, Hetzner Online
To the best of my recollection, I was never assigned one, or at least never told it. I was not a customer, so when my contact attempts were not ignored or answered with boilerplate, I was asked for my customer details, which I could not provide. No customer details, no support.
The channels to reach you were pretty limited and hard to find when I tried. I assure you, I tried everything short of getting a lawyer to send you paper mail.
I still have the communication history in my archives, but it would take a little effort to dig it out.
EDIT: Found the documentation. My first complaints were in March 2011. At some point I was assigned abuse ID MU-000B0F55-1:18. In May the issue was still unresolved.
Hi again. Looking at the date on that, we don't have data about that case, simply because data protection laws require us to delete data after a certain time. Does the abuse still exist? If so, perhaps you can create a new ticket (and include the old abuse ID from 2011 so that they understand you tried to report this in the past; you can also include a link to this thread on Hacker News and mention my name). When you submit your abuse report, you will receive an automated response with a new abuse ID. You do not need to be a customer to submit an abuse report with us: https://abuse.hetzner.com/en?lang=en Unfortunately, I cannot submit an abuse report on your behalf. --Katie
The WireGuard project is also in the same situation, due to Equinix Metal shutting down. If anybody would like to host us, please reach out to team at wireguard dot com. Thanks!
Saw that. Looks appealing, but I'm not particularly keen on, "We only require that you keep one sudo-enabled account on the system for us to use as needed for troubleshooting." [1] Do I want to give root access to the project's master git server to somebody I've never met, who is probably a good & nice person, but not really directly associated with the project? In general, I'm wary of places with relaxed enough informal policies that somebody could just walk over to a machine and fiddle with it. It's not that I actually intend to do some kind of top secret computing on Internet-facing machines like those, but I also don't want to have to be _as_ concerned about those edge cases when I'm deciding which things to run or host on it.
Thank you for wireguard - it's been a hugely impactful piece of software.
Do you think it would be helpful to outline what hardware resources you would need to successfully migrate the project and all the CI/CD computations to a new home? This would help people determine if they can help with hosting.
No, it's considerably more involved than that. For example, there's extensive CI: https://www.wireguard.com/build-status/ This thing builds a fresh kernel and boots it for every commit for a bunch of systems. And there's also a lot of long running fuzzing and SAT solving and all sorts of other heavy computation happening during different aspects of development. Development is a bit more than just pushing some code up to Github and hoping for the best.
Oregon State University's Open Source Lab (https://osuosl.org/) offers managed and unmanaged hosting to open source projects. They even have IBM Z and POWER10 hosting if you're into that sort of thing.
This has been brought up with freedesktop and they handwaved it away. They claim they want to DIY with donation money but they don't have a donation mechanism and I suspect they don't know how much work just handling money is.
> This has been brought up with freedesktop and they handwaved it away. They claim they want to DIY with donation money
You are making all of this sound way more definitive than the ticket. The donation money approach is the personal opinion of the sysadmin. The OSUOSL is brought up, some explanations are added that make it more attractive and remove some doubts, and beyond that it's waiting for the board to decide what's next.
Is colocation knowledge lost now? Do people no longer know how to configure a server or three, bring them to colo and run them? I don't understand how this is a story worthy of an Ars Technica article. Where's the issue?
If the issue is cost, slightly older Epyc hardware is quite affordable, and colo deals can be found for extremely reasonable costs. If it's expertise, then all they have to do is ask.
I'm sure this isn't relevant everywhere, but all my old colo hotspots within driving distance have started charging exorbitant $$ for egress, just like the cloud.
Still more economical than cloud, but it seems like this has become far too common.
Storage is MUCH cheaper when you colo, and bandwidth requirements are a large part of why you colocate instead of just running servers out of an office building that has at least two upstream connections.
I'm really curious what you think they're using now. Certainly you read the article... It says they're using bare metal servers. That's basically colo where the provider owns, but doesn't control, the hardware.
NAS is cheap - truenas will sell you 200 TB systems for under $10k, and you can run your web server in a VM. Put a second one in a different location, set them up with the right backup, and you have most of what you need. You can probably do it much cheaper depending on your needs.
What I don't know is how to make your web servers fail over gracefully if one goes down (the really hard problem is if the internet splits so both servers are active and making changes). I assume other people still know how to do this.
With the likes of AWS they tell you the above comes free with just a monthly charge. Generally open source projects like having more control over the hardware (which has advantages and disadvantages) and so want to colo. They would probably be happy running in my basement (other than they don't trust me with admin access and I don't want to be admin - but perhaps a couple admins have the ability to do this).
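For what it's worth, a minimal sketch of the single-site version of that failover, assuming a provider floating IP and a hypothetical reassign_floating_ip() wrapper around its API. It deliberately does not solve the split-brain case mentioned above; that genuinely needs a third location acting as a quorum/witness:

    # Move a floating IP to the standby after repeated failed health checks.
    import time
    import urllib.request

    PRIMARY_HEALTH = "http://203.0.113.10/healthz"  # example address (TEST-NET)

    def healthy(url: str) -> bool:
        try:
            with urllib.request.urlopen(url, timeout=5) as r:
                return r.status == 200
        except OSError:
            return False

    def reassign_floating_ip(target: str) -> None:
        # hypothetical: call the provider API to point the floating IP at `target`
        print(f"failing over to {target}")

    failures = 0
    while True:
        failures = failures + 1 if not healthy(PRIMARY_HEALTH) else 0
        if failures >= 3:  # require consecutive failures to avoid flapping
            reassign_floating_ip("standby")
            failures = 0
        time.sleep(10)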
You are aware that adding backups and redundancy increases the costs both in hardware and management, right? I mean, _of course_ you can do that, it's computing.
Perhaps the comparison should be apples to apples when thinking about price.
You're handwaving. Anything can increase cost and complexity. Using Amazon S3 increases cost and complexity.
Set up two locations. Set up rsync. It's still cheaper than cloud storage by far, and that's even after paying yourself handsomely for the script that runs rsync.
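Concretely, the whole "script that runs rsync" could be as small as this sketch (paths and host are made-up examples; run it from cron or a systemd timer):

    # Mirror a data directory to the second site; fail loudly if it doesn't finish.
    import subprocess
    import sys

    SRC = "/srv/data/"
    DEST = "backup@site-b.example.org:/srv/data/"

    result = subprocess.run(
        ["rsync", "-az", "--delete", SRC, DEST],  # archive mode, compress, mirror deletions
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        sys.exit(f"rsync failed: {result.stderr}")  # surface this to your monitoring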
I do not see any concrete data from you either. This is a forum. Typically we'd just call this a "conversation."
> Anything can increase cost and complexity. Using Amazon S3 increases cost and complexity.
Yes, and you get something in return for that cost and complexity, so do you care to map out the differences or are you just going to stick to your simple disagreement?
> Set up two locations. Set up rsync. It's still cheaper than cloud storage by far, and that's even after paying yourself handsomely for the script that runs rsync.
You forgot monitoring. You forgot that when this inevitably breaks I'm going to have to go fix it and that you can't schedule failures to only happen during working hours. You're ignoring more than you're considering.
He's not really forgetting monitoring (etc). You'll still need monitoring in place regardless of whether you're monitoring your own servers (colo, etc) or monitoring Cloud servers.
And "when stuff breaks" happens regardless of whether you've chosen Cloud or to run your own servers. It all breaks on occasion (though hopefully rarely).
Simple disagreements suffice. I don't have to make an argument for something just because you bring it up. I'm just pointing out that you bringing it up (hmmm - without backing it up!) doesn't make it a valid point.
You seem to have a bone to pick. I said owned storage is cheaper, and you made up something about it not being redundant. You're not making a salient point as much as you're trying to handwave and dismiss owned hardware as complex, expensive, not worth the "value" one might get from S3, whatever.
If you REALLY think that storage can't be cheaper than "cloud" unless it's not redundant, then show us numbers. Otherwise, you're just making shit up.
You mention all your other things as if you're saying, "Gotcha!" You still have to monitor Amazon. You still have to manage backups. You still have to monitor resource utilization. You're not being clever by trying to imagine that other admins are as bad as you are because all those things seem hard. Good admins do those things no matter where their stuff is running.
> But as we all know, RHEL/IBM only wants to take free labor and not really give back these days :(
Ludicrous. Red Hat and IBM are far from perfect, but they are absolute heroes for open source. Listing all the projects that Red Hat pays to develop would be very difficult because it's so long. They've even acquired proprietary companies and open sourced their products (while the product was still selling and highly useful!), something virtually nobody does.
Sure, they likely could. But then the complaint would be, "Arrg, I can't believe Freedesktop.org and Alpine are now effectively owned by Red Hat/IBM now, arrg!"
Also, Red Hat typically only "sponsors" open source projects that they have some business dependency on. Freedesktop.org might be a good candidate, but Alpine could be harder to justify. I don't know of any RH product that uses Alpine directly. (Most enterprises only have exposure to Alpine through container images.)
Redhat, Canonical, IBM, Oracle, Google, hell even Microsoft... There are a bunch of big actors in the Linux space that could and probably should be financing this. Also there's the Linux Foundation that is made for financing Linux projects.
Red Hat's gambit has always been to hire engineers to do good work on Linux broadly, in many areas, as opposed to just dropping crates full of money at random on projects.
Of course they still get tarred and feathered with the "Red Hat wants to control Linux!" brush because they...contribute the bulk of development to projects like GNOME.
I've nothing against Red Hat, and I love that an organization is doing work in the free software space. But it's a choice as an engineer whether you want to accept the loss of salary to do honest work.
I do open source in my free time because my family can't sustain itself otherwise. Kudos to all the devs out there working on open software.
Oh man that sucks! I wonder if we could pull Alpine into our colo, we recently upgraded to a full rack from 2U (it was cheaper than a quarter rack!) and have a ton of space. Plus all of our libvirt/KVM HVMs run Alpine.
Aside from some major examples, like most of the big tech companies funding the Linux kernel and maybe the Rust and/or Python Foundations in decent numbers, for the most part, corporations don't pay for open-source. That's why they love it so much: it costs ~$0, but generates immense business value for them (in that they don't have to write, debug, or maintain any of that, often essential, code or infra).
I can think of maybe three exceptions my entire career, and none of them were especially huge contributions.
Indeed, we donate to several open source projects on which we depend, but we're also a small two-person operation. No medium/large company I've worked for ever donated monetarily to open source projects, though one did encourage us to fix bugs and submit patches/pull requests, which is at least something!
Slackware's the same way, most donations come from individuals and very small companies.
Broader question, but whatever happened to every university with a CS department hosting mirrors of popular distros? I always assumed CDNs replaced them, but seeing this, maybe they didn't.
Maybe not every university, but plenty of distro mirrors are still hosted by universities, both in the US and internationally. Another example is Oregon State University, mentioned elsewhere in this thread, which still provides hosting + CI services; e.g., postmarketOS recently moved from gitlab.com to a self-hosted GitLab on OSU-provided and -hosted hardware.
I haven't looked into it, so there might be a good reason, but why isn't peer-to-peer technology utilized more and more for stuff like this? I had hoped that BitTorrent would have made these things a solved problem. I looked into Storj earlier, but it seemed too controlled/unpredictable/centralized. Anybody have some good insights into this?
I think it's the same reason PGP never caught on. The learning curve is just too steep.
There are 3 major concepts: understanding how to run the command, understanding the idea of public key crypto, and actually using it (i.e. NOT imaging the ISO unless the signature passes).
What it needs is something like a torrent client that 1) doesn't let you download unless you supply the expected SHA first, and perhaps 2) that it verifies that the hash came from the signed webpage where you got the torrent link. Too many people (myself included) think it's not going to happen to them (download a backdoored program/OS).
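Point (1) is a few lines in any language; a sketch with a placeholder hash, refusing to proceed on mismatch:

    # Refuse to use a download unless its SHA-256 matches the published value.
    import hashlib
    import sys

    EXPECTED_SHA256 = "replace-with-the-hash-from-the-signed-page"
    PATH = "distro.iso"

    h = hashlib.sha256()
    with open(PATH, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            h.update(chunk)

    if h.hexdigest() != EXPECTED_SHA256:
        sys.exit("checksum mismatch: do NOT image this ISO")
    print("checksum OK")

Point (2), tying the hash to a signed page, is the part no mainstream client does for you.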
After 20 years in the industry I'm just now learning how certificates work and how to work with them.
> I think it's the same reason PGP never caught on. The learning curve is just too steep.
This is what I always emphasise: usability first. If a solution is secure on paper, but confusing to use, then it's not secure - the user can get confused and do the wrong thing. Defaults matter.
> What it needs is something like a torrent client that 1) doesn't let you download unless you supply the expected SHA first, and perhaps 2) that it verifies that the hash came from the signed webpage where you got the torrent link.
This is already a solved problem. Just provide a magnet link. You already have to trust the website to provide the checksum, so why not trust the link?
As for packages, Debian experimented with a BitTorrent transport for apt a long while ago, but I suppose it didn't catch on. Perhaps this was before BitTorrent had HTTP fallback? Either way, this would be an interesting avenue for research.
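To illustrate the magnet point: the infohash rides inside the link itself, so a client can only ever accept data that hashes to it (the hash below is a made-up example):

    # Pull the infohash out of a magnet URI; the client verifies pieces against it.
    from urllib.parse import urlparse, parse_qs

    magnet = ("magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567"
              "&dn=distro.iso")

    params = parse_qs(urlparse(magnet).query)
    infohash = params["xt"][0].removeprefix("urn:btih:")
    print(f"client will only accept data hashing to {infohash}")

So the trust decision collapses to "do I trust the page that served me this link", same as with a checksum.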
The learning curve myth needs to die. It can be solved with good UX but there is no real profit in that so you will never see any company dedicate marketing dollars towards it. So truly decentralized and distributed technologies die because no one wants to spend money to market them for free.
When it comes to mirror sponsorships, we (IPinfo) offer IP location data sponsorship. We spoke with Alma Linux and they used our IP location data to route traffic for their mirror system: https://almalinux.org/blog/2024-08-07-mirrors-1-to-400/
At the moment, we operate 900 servers. We evaluated the idea of hosting mirrors on some of our servers, but our servers are not super powerful, and we have to pay for bandwidth. We use these servers in our production pipeline. Maintenance alone is a massive task, and hosting distro mirrors could be incredibly challenging. We are not at that scale yet.
We could provide IP location data sponsorship to popular distro mirror systems, which would make traffic routing and load distributions more effective.
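For the curious, the routing side is conceptually tiny. A sketch using ipinfo.io's public JSON endpoint (the mirror URLs are made-up examples; real deployments do this in the redirector layer):

    # Pick a mirror based on the client's country, falling back to a default.
    import json
    import urllib.request

    MIRRORS = {
        "DE": "https://de.mirror.example.org",
        "US": "https://us.mirror.example.org",
    }
    DEFAULT = "https://mirror.example.org"

    def pick_mirror(client_ip: str) -> str:
        url = f"https://ipinfo.io/{client_ip}/json"
        with urllib.request.urlopen(url, timeout=5) as r:
            country = json.load(r).get("country")
        return MIRRORS.get(country, DEFAULT)

    print(pick_mirror("8.8.8.8"))  # expected: the US mirror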
While I have mixed feelings about Cloudflare, I don't see why you have been downvoted. This is on topic for the discussion, already implemented for a couple distros, etc.
The question of "who is responsible": anyone is free to run their own mirror, after all this software is freely redistributable.
As for "why not", if I were to lead a project like Alpine, I would insist that the org stays in control of its own infrastructure. Mirrors are also only one chunk of the problem; you also need builder machines.
presumably because it's a silly idea, given Cloudflare isn't a colo or dedicated server company and they won't let you just rack machines in their DCs for the same reason Google won't.
>Both services have largely depended on free server resources ...
Running on 'large donations' is not a viable strategy for any long-term goal. Perhaps it's time for Linux to consider running a tiny datacenter of its very own, both to dogfood itself and to give itself extra momentum, or inertia against donation stall-out?
so, the linux foundation (or torvalds himself, whatever) should run an entire datacenter right now?
You know running an actual datacenter with all its cooling, storage, networking and power requirements is a full-time job for several people, right? Why not put bare metal someplace that already does that, far cheaper and better?
Saying it's far cheaper without actual numbers is not saying anything. 10 years in a DC is 14% cheaper than going in with a provider. This donation money can simply be better spent if long term goals are considered.
So far, the Equinix Metal shutdown affects Freedesktop, Alpine, WireGuard, and Flathub. Why can't these organizations use VMs? Is there something special about bare-metal services, or has Equinix not offered their VM service to these organizations?
VMs introduce security issues that bare metal don't have. Those security issues are mostly academic for most people and many projects, but not for software where a supply chain compromise could severely impact all users of that software.
Imagine if Wireguard were backdoored because someone working for the ISP that runs the VMs compromised their VMs through the hypervisor. How would a project audit an ISP? How could anything be trusted? Bottom line: it can't. ISPs don't give that kind of information to customers unless you're special (government, spend crazy money).
While it's still possible to compromise a machine through physical access, it's MUCH more difficult. How do you bring it into single-user mode to introduce a privileged user without people noticing that it's down, even momentarily, or that the uptime is now zero? Compromise like this is possible, but worlds more difficult to pull off than compromise through the hypervisor.
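That uptime tell is easy to automate from a separate monitor box. A sketch, with the host name as an example and the paging hook left hypothetical:

    # Alert if a host's uptime counter ever resets (an unscheduled reboot).
    import subprocess
    import time

    HOST = "build1.example.org"

    def remote_uptime() -> float:
        out = subprocess.run(
            ["ssh", HOST, "cat", "/proc/uptime"],
            capture_output=True, text=True, check=True,
        ).stdout
        return float(out.split()[0])  # first field: seconds since boot

    last = remote_uptime()
    while True:
        time.sleep(60)
        now = remote_uptime()
        if now < last:  # counter went backwards => the box rebooted
            print(f"ALERT: {HOST} rebooted")  # hypothetical: page someone here
        last = now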
It's possible I'm just not remembering the history right, but I think this is from when "Equinix Metal" was packet.com. I think this is a handshake deal they had from before they were bought, and it's going away as packet.com becomes more integrated into Equinix.
How are VMs solving this issue? You cannot just snapshot them and migrate them to another provider. You'll get a different local IPv4, a different IPv6, etc.
Old-school open source projects got hosting from hundreds of mirrors, mostly universities and ISPs, and some businesses. If you have lots of mirrors you don't need as much traffic per host.
> https://gitlab.freedesktop.org/freedesktop/freedesktop/-/iss...
US sanction laws are a legal nightmare, quicksand that constantly changes. Major global infrastructure projects like Freedesktop should avoid the US!
PS: Sorry for the off-topic discussion.
Sure, servers might die at times, but this also happens at AWS and can be avoided by using multiple servers in an HA configuration.
It's highly available, though, and the redundant replicas run on their own separate physical machines:
https://docs.hetzner.com/cloud/placement-groups/overview/
"Hey your customer is hosting a website with random pictures on it. One of them of me is unflattering. Please make them fix it"
They host quite a few open source projects there. And they seem to be one of the few that also host ARM and PowerPC projects.
[1] https://osuosl.org/services/hosting/details/
Hope you find a host soon!
The code is small and integrated into the kernel at this point.
Aren't your needs primarily for distributing Windows/Mac packages at this point?
Probably because it's not redundant or automatically backed up at any interval. The worst days of my life have been during hardware failures at colos.
Purchased, fully redundant storage is MUCH cheaper than anything in the cloud when talking about any time frame of a year or more.
Obviously you can "rent" storage for a month for less than the cost of purchasing it, but only idiot startup CTOs try to argue a comparison like that.
Two sets of storage are still cheaper, and we all have rsync.
Does it automatically repair itself when it fails? Are you sure you're making a proper comparison?
> Two sets of storage are still cheaper, and we all have rsync.
Yes, you can in fact solve 90% of the problem with 10% of the effort, what's your plan for the rest of the problem? Just call in sick?
RHEL benefits from freedesktop and X, and as a show of good faith they could support Alpine too.
But as we all know, RHEL/IBM only wants to take free labor and not really give back these days :(
Alpine Linux: https://opencollective.com/alpinelinux
Freedesktop [edit]: no crowdsource option at the moment
Now go ask your employer to donate.
I'm not sure how many mirrors are run by the university directly, though AFAIK MIT and RIT host theirs directly.
But I don't know who is responsible for that.
The question of "who is responsible": anyone is free to run their own mirror, after all this software is freely redistributable.
As for "why not", if I were to lead a project like Alpine, I would insist that the org stays in control of its own infrastructure. Mirrors are also only one chunk of the problem; you also need builder machines.
presumably because it's a silly idea, given Cloudflare isn't a colo or dedicated server company and they won't let you just rack machines in their DCs for the same reason Google won't.
Running on 'large donations' is not a viable strategy for any long-term goal. Perhaps its time for Linux to consider running a tiny datacenter of its very own to both dog-feed itself and give itself extra momentum or inertia from donation stall-out?
https://en.wikipedia.org/wiki/Eating_your_own_dog_food
Even Microsoft updates can use end-user devices for distributed hosting.
Admittedly I am rather surprised by the storage requirements from Freedesktop.
Small by on-premise enterprise standards is 4-16 cores and 128 GB of RAM.
Our big VMs have 64 cores and 1TB of RAM (db servers).