Tell HN: Cloudflare is blocking Pale Moon and other non-mainstream browsers

Hello.

Cloudflare's Browser Integrity Check/Verification/Challenge feature, used by many websites, is denying access to users of non-mainstream browsers like Pale Moon.

User reports began on January 31:

https://forum.palemoon.org/viewtopic.php?f=3&t=32045

This situation occurs at least once a year, and there is no easy way to contact Cloudflare. Their "Submit feedback" tool yields no results. A Cloudflare Community topic was flagged as "spam" by members of that community and promptly locked, with no real solution and no official response from Cloudflare:

https://community.cloudflare.com/t/access-denied-to-pale-moo...

Partial list of other browsers that are being denied access:

Falkon, SeaMonkey, IceCat, Basilisk.

A 2022 Hacker News post about the same problem, which brought attention and got Cloudflare to patch it quickly:

https://news.ycombinator.com/item?id=31317886

A Cloudflare product manager declared back then: "...we do not want to be in the business of saying one browser is more legitimate than another."

As of now, there is no official response from Cloudflare. Internet access is still denied by their tool.

374 points | by Hold-And-Modify 3 hours ago

34 comments

  • windsignaling 1 hour ago
    As a website owner and VPN user I see both sides of this.

    On one hand, I get the annoying "Verify" box every time I use ChatGPT (and now, due to its popularity, DeepSeek as well).

    On the other hand, without Cloudflare I'd be seeing thousands of junk requests and hacking attempts every day, people attempting credit card fraud, etc.

    I honestly don't know what the solution is.

    • rozap 45 minutes ago
      What is a "junk" request? Is it hammering an expensive endpoint 5000 times per second, or just somebody using your website in a way you don't like? I've also been on both sides of it (on-call at 3am getting dos'd is no fun), but I think the danger here is that we've gotten to a point where a new google can't realistically be created.

      The thing is that these tools are generally used to further entrench the power that monopolies, duopolies, and cartels already have. Example: I've built an app that compares grocery prices as you make a shopping list, and you would not believe the lengths grocers go to in order to make price comparison difficult. This thing doesn't make thousands or even hundreds of requests - maybe a few dozen over the course of a day. What I thought would be a quick little project has turned out to be wildly adversarial. But now spite-driven development is a factor, so I will press on.

      It will always be a cat and mouse game, but we're at a point where the cat has a 46 billion dollar market cap and handles a huge portion of traffic on the internet.

      • jeroenhd 35 minutes ago
        I've seen such bots on my server: a Chinese Huawei bot as well as an American one.

        They ignored robots.txt (claimed not to, but I blacklisted them there and they didn't stop) and started randomly generating image paths. At some point /img/123.png became /img/123.png?a=123 or whatever, and they just kept adding parameters and subpaths for no good reason. Nginx dutifully ignored the extra parameters and kept sending the same image files over and over again, wasting everyone's time and bandwidth.
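
        For anyone fighting the same pattern, nginx can also refuse such requests outright instead of serving them, since static images never take parameters. A minimal sketch (the /img/ prefix is illustrative; adjust to your layout):

          location /img/ {
              # Static images never carry a query string here; close the
              # connection without a response (444 is nginx's "drop" status).
              if ($args) {
                  return 444;
              }
          }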

        I was able to block these bots by just blocking the entire IP range at the firewall level (for Huawei I had to block all of China Telecom and later a huge range owned by Tencent for similar reasons).
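
        At that level it's a one-liner per range; with iptables plus ipset it stays manageable as the list grows (the CIDR below is a documentation range for illustration, not the real offender):

          # Drop a single abusive range
          iptables -I INPUT -s 198.51.100.0/24 -j DROP

          # Or keep many ranges in a set and match them with one rule
          ipset create blocklist hash:net
          ipset add blocklist 198.51.100.0/24
          iptables -I INPUT -m set --match-set blocklist src -j DROP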

        I have lost all faith in scrapers. I've written my own scrapers too, but almost all of the scrapers I've come across are nefarious. Some scour the internet searching for personal data to sell, some probe websites with hack attempts to brute-force bug bounty programs, others are just scraping for more AI content. Until the scraping industry starts behaving, I can't feel bad for people blocking these things, even if they hurt small search engines.

      • makeitdouble 36 minutes ago
        > somebody using your website in a way you don't like?

        This usually includes people making a near-realtime, perfect copy of your site and serving that copy for scams, middle-manning transactions, or straight fraud.

        Having a clear category of "good bots" from verified or accepted companies would help for these cases. Cloudflare has such a system, I think, but then a new search engine would have to go to each and every platform provider to make deals, and that also sounds impossible.

    • lynndotpy 9 minutes ago
      > On the other hand, without Cloudflare I'd be seeing thousands of junk requests and hacking attempts every day, people attempting credit card fraud, etc.
      >
      > I honestly don't know what the solution is.

      The solution is good security -- Cloudflare only cuts down on the noise. I'm watching junk requests and hacking attempts flow through to my sites as we speak.

    • boomboomsubban 43 minutes ago
      >On one hand, I get the annoying "Verify" box every time I use ChatGPT (and now due its popularity, DeepSeek as well).

      Though annoying, it's tolerable; it seems like a fair solution. Blocking doesn't.

    • markisus 49 minutes ago
      If I were hosting a web page, I would want it to be able to reach as many people as possible. So in choosing between CDNs, I would choose the one that provides greater browser compatibility, all other things equal. So in principle, the incentives are there for Cloudflare to fix the issue. But the size of the incentive may be the problem. Not too many customers are complaining about these non-mainstream browsers.
    • inetknght 1 hour ago
      > On the other hand, without Cloudflare I'd be seeing thousands of junk requests and hacking attempts everyday, people attempting credit card fraud, etc.

      Yup!

      > I honestly don't know what the solution is.

      Force law enforcement to enforce the laws.

      Or else, block the countries that don't combat fraud. That means... China? Hey, isn't there a "trade war" being "started"? It sure would be fortunate if China (and certain other fraud-friendly countries around Asia/Pacific) were blocked from the rest of the Internet until/unless they provide enforcement and/or compensation for their fraudulent use of technology.

      • marginalia_nu 1 hour ago
        A lot of this traffic is bouncing all over the world before it reaches your server. Almost always via at least one botnet. Finding the source of the traffic is pretty hopeless.
        • patrick451 52 minutes ago
          When the government actually cares, they're able to track these things down. But they don't except in high profile cases.
      • jacobr1 26 minutes ago
        Slightly more complicated, because a ton of the abuse comes from IPs located in western countries, explicitly to evade fraud and abuse detection. Now you can go after the western owners of those systems (and all the big ones do have large abuse teams to handle reports), but enforcement has a much higher latency. To be effective you would need a much more aggressive system. Stronger KYC. Changes in laws to allow for less due process and more "guilty by default" type systems that you then need to prove innocence to rebut.
      • jeroenhd 33 minutes ago
        A lot of the fake browser traffic I'm seeing is coming from American data centres. China plays a major part, but if we're going by bot traffic, America will end up on the ban list pretty quickly.
      • RIMR 47 minutes ago
        A wild take only possible if you don't understand how the Internet works.
    • gjsman-1000 1 hour ago
      Simple: We need to acknowledge that the vision of a decentralized internet as it was implemented was a complete failure, is dying, and will probably never return.

      Robots went out of control, whether the malicious kind, the AI scrapers, or the Clearview surveillance kind; users learned not to trust random websites; SEO spam ruined search, the only thing that made a decentralized internet navigable; nation-state attacks became a common occurrence; people prefer a few websites that do everything (Facebook becoming an eBay competitor). Even if it were possible to set rules banning Clearview or AI training, no nation outside your own will follow them; an issue which even becomes a national security problem (are you sure, Taiwan, that China hasn't profiled everyone on your social media platforms by now?)

      There is no solution. The dream itself was not sustainable. The only solution is either a global memorandum of understanding which everyone respectfully follows (wishful thinking, never happening); or splinternetting into national internets with different rules and strong firewalls (which is a deal with the devil, and still admitting the vision failed).

      • supportengineer 3 minutes ago
        A walled garden where a real, vetted human being is responsible for each network device. It wouldn't scale, but it could work locally.
      • benatkin 1 minute ago
        Luckily the decentralization community has always been decentralized. There are plenty of decentralized networks to support.
      • stevenAthompson 28 minutes ago
        I hate that you're right.

        To make matters worse, I suspect that not even a splinternet can save it. It needs a new foundation, preferably one that wasn't largely designed before security was a thing.

        Federation is probably a good start, but it should be federated well below the application layer.

      • Aeolun 8 minutes ago
        The great firewall, but in reverse.
        • gjsman-1000 3 minutes ago
          What other choice do we have?

          Countries, whether it be Ukraine or Taiwan, can't risk other countries harvesting their social media platforms for the mother of all purges. I never assume that anything that happened historically can never happen again - no Polish Jew would have survived the Nazis with this kind of information theft. Add AI into the mix, and wiping out any population is as easy as baking a pie.

          Countries are tired of actual or perceived intellectual property theft. Just ask my grandmother, who has had her designs stolen and mass-produced on eBay. Not just companies - many free and open source projects cannot survive such reckless competition.

          Countries are tired of bad behavior from other countries online. How many grandmothers does it take being scammed? How many educational systems containing data on minors need to be stolen?

          Startups are tired of paying Cloudflare protection money, and trying to evade the endless sea of SEO spam. How can a startup compete with Google with so much trash and no recourse?

          Now we have AI, gasoline and soon to be dynamite on the fire. For the first time ever, a malicious country can VPN into the internet of a friendly nation, track down all critics on their social media, and destroy their lives in a real world attack. We are only beginning to see this in Ukraine - are we delusional enough to believe that the world is past warfare? That the UN can continue keeping countries in line?

  • zlagen 3 hours ago
    I'm using Chrome on Linux and noticed that this year Cloudflare is very aggressive in showing the "Verify you are a human" box. Now a lot of sites that use Cloudflare show it, and once you solve the challenge it shows it again after 30 minutes!

    What are you protecting, Cloudflare?

    Also they show those captchas when going to robots.txt... unbelievable.

    • rurp 1 hour ago
      Cloudflare has been even worse for me on Linux + Firefox. On a number of sites I get the "Verify" challenge and after solving it immediately get a message saying "You have been blocked" every time. Clearing cookies, disabling UBO, and other changes make no difference. Reporting the issue to them does nothing.

      This hostility to normal browsing behavior makes me extremely reluctant to ever use Cloudflare on any projects.

      • mmh0000 36 minutes ago
        At least you can get past the challenge. For me, every-single-time it is an endless loop of "select all bikes/cars/trains". I've given up even trying to solve the challenge anymore and just close the page when it shows up.
      • nbernard 1 hour ago
        Check that you are allowing webworker scripts; that did the trick for me. I still have issues on slower computers (Raspberry Pis and the like), however, as they seem to be too slow to do whatever Cloudflare wants as verification in the allotted time.
      • lta 1 hour ago
        Yeah, same here. I've avoided it for most of my customers for that very reason already.
      • sleepybrett 1 hour ago
        Yeah, Lego and Etsy are two sites I can now only visit with Safari. It sucks. With Firefox on the same machine, it claims I'm a bot or a crawler (not even on Linux; on a Mac).
    • fcq 2 hours ago
      I have Firefox and Brave set to always clear cookies and everything when I close the browser... it is a nightmare when I come back, with the amount of captchas everywhere...

      It is either that or keep sending data back to the Meta and Co. overlords despite me not being a Facebook, Instagram, Whatsapp user...

    • progmetaldev 2 hours ago
      Whoever configures the Cloudflare rules should be turning off the firewall for things like robots.txt and sitemap.xml. You can still use caching for those resources to prevent them from becoming a front door to DDoS.
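
      In practice that's one custom rule with a "Skip" action ahead of the managed rules; the expression (a sketch in Cloudflare's rule language - check the dashboard's rule builder for the exact fields) looks like:

        (http.request.uri.path eq "/robots.txt") or
        (http.request.uri.path eq "/sitemap.xml")
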
    • viraptor 3 hours ago
      The captcha on robots.txt is a misconfiguration on the website. CF has lots of issues, but this one is on their customer. Also, they detect Google and other bots, so those may be going through anyway.
      • jasonjayr 1 hour ago
        Sure; but sensible defaults ought to be in place. There are certain "well known" URLs that are intended for machine consumption. CF should permit (and perhaps rate limit?) those by default, unless the user overrides them.
    • potus_kushner 2 hours ago
      Using Pale Moon, I don't even get a captcha that I could solve. Just a spinning wheel, and the site reloads over and over. This makes it impossible to use e.g. anything hosted on sourceforge.net, as they're behind the clownflare "Great Firewall of the West" too.
    • nerdralph 54 minutes ago
      I don't bother with sites that have Cloudflare Turnstile. Web developers supposedly know the importance of page load time, but even worse than a slow-loading page is waiting for Cloudflare's gatekeeper before I can even see the page.
      • fbrchps 6 minutes ago
        That's not turnstile, that's a Managed Challenge.

        Turnstile is the in-page captcha option, which you're right, does affect page load. But they force a defer on the loading of that JS as best they can.

        Also, Turnstile is a proof-of-work check, and is meant to slow down & verify would-be attack vectors. Turnstile should only be used on things like login, email change, "place order", etc.
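
        If you do put it on those endpoints, the widget hands your form a token and your server is expected to verify it against Cloudflare's siteverify endpoint. A rough TypeScript sketch of that server half (endpoint and field names per Turnstile's docs; Node 18+ for built-in fetch, error handling elided):

          // Verify a Turnstile token server-side before honoring the request.
          async function verifyTurnstile(token: string, secret: string): Promise<boolean> {
            const res = await fetch(
              "https://challenges.cloudflare.com/turnstile/v0/siteverify",
              { method: "POST", body: new URLSearchParams({ secret, response: token }) },
            );
            const data = (await res.json()) as { success: boolean };
            return data.success;
          }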

    • likeabatterycar 1 hour ago
      I run a honeypot and I can say with reasonable confidence many (most?) bots and scrapers use a Chrome on Linux user-agent. It's a fairly good indication of malicious traffic. In fact I would say it probably outweighs legitimate traffic with that user agent.

      It's also a pretty safe assumption that Cloudflare is not run by morons, and they have access to more data than we do, by virtue of being the strip club bouncer for half the Internet.

      • rurp 1 hour ago
        User-agent might be a useful signal but treating it as an absolute flag is sloppy. For one thing it's trivial for malicious actors to change their user-agent. Cloudflare could use many other signals to drastically cut down on false positives that block normal users, but it seems like they don't care enough to be bothered. If they cared more about technical and privacy-conscious users they would do better.
        • likeabatterycar 1 hour ago
          > For one thing it's trivial for malicious actors to change their user-agent.

          Absolutely true. But the programmers of these bots are lazy and often don't. So if Cloudflare has access to other data that can positively identify bots, and there is a high correlation with a particular user agent, well then it's a good first-pass indication despite collateral damage from false positives.

          • sangnoir 41 minutes ago
            > So if Cloudflare has access to other data that can positively identify bots

            They do not - not definitively [1]. This cat-and-mouse game is stochastic at higher levels, with bots doing their best to blend in with regular traffic, and the defense trying to pick up signals barely above the noise floor. There are diminishing returns to battling bots that are indistinguishable from regular users.

            1. A few weeks ago, the HN frontpage had a browser-based project that claimed to be undetectable

            • fbrchps 5 minutes ago
              > a browser-based project that claimed to be undetectable

              For now

          • ok_dad 1 hour ago
            I would hope Cloudflare would be way, way beyond a “first pass” at this stuff. That’s logic you use for a ten-person startup, not the company that’s managed to capture the fucking internet under their network.
        • sleepybrett 1 hour ago
          I mean, do we need to replace user agent with some kind of 'browser signing'?
      • lta 1 hour ago
        Sure, but does that mean that we, Linux users, can't go on the web anymore? It's way easier for spammers and bots to move to another user agent/system than for legitimate users. So whatever causes this is not a great solution to the problem. You can do better, CF.
        • zamadatix 1 hour ago
          I'm a Linux user as well but I'm not sure what Cloudflare is supposed to be doing here that makes everybody happy. Removing the most obvious signals of botting because there are some real users that look like that too may be better for that individual user but that doesn't make it a good answer for legitimate users as a whole. SPAM, DoS, phishing, credential stuffing, scraping, click fraud, API abuse, and more are problems which impact real users just as extra checks and false positive blocks do.

          If you really do have a better way to make all legitimate users of sites happy with bot protections then by all means there is a massive market for this. Unfortunately you're probably more like me, stuck between a rock and a hard place of being in a situation where we have no good solution and just annoyance with the way things are.

  • tibbar 1 hour ago
    This echoes the user agent checking that was prevalent in past times. Websites would limit features and sometimes refuse to render for the "wrong" browser, even if that browser had the ability to display the website just fine. So browsers started pretending to be other browsers in their user agents. Case in point - my Chrome browser, running on an M3 mac, has the following user agent:

    "'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36'"

    That means my browser is pretending to be Firefox AND Safari on an Intel chip.

    I don't know what features Cloudflare uses to determine what browser you're on, or if perhaps it's sophisticated enough to get past the user agent spoofing, but it's all rather funny and reminiscent just the same.

    • johnmaguire 1 hour ago
      As a counterpoint, I asked Claude to write a script to fetch Claude usage and expose it as a Prometheus metric. As no public API exists, Claude suggested I grab the request from the Network tab. I copied it as cURL, attempted to run it, and was denied with a 403 from CF.

      I accidentally left the script running, polling, for about 20 minutes, and suddenly it started working.

      So even sending all the same headers as Firefox, but with cURL, CF seemed to detect automated access, and then eventually allowed it through anyway after it saw I was only polling once a minute. I found this rather impressive. Are they using subtle timings? Does cURL have an easy-to-spot fingerprint outside of its headers?

      Reminded me of this attack, where they can detect when a script is running under "curl | sh" and serve alternate code versus when it is read in the browser: https://news.ycombinator.com/item?id=17636032

      • schroeding 1 hour ago
        > Does cURL have an easy-to-spot fingerprint outside of its headers?

        If it's a https URL: Yes, the TLS handshake. There are curl builds[1] which try (and succeed) to imitate the TLS handshake (and settings for HTTP/2) of a normal browser, though.

        [1] https://github.com/lwthiker/curl-impersonate
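
        Usage is just swapping the binary; the project ships wrapper scripts named for the browser they mimic (names vary by release; curl_chrome116 is one from a recent build):

          # Plain curl: distinctive TLS ClientHello, easy to flag
          curl https://example.com/
          # Wrapper: Chrome-like TLS + HTTP/2 fingerprint and headers
          curl_chrome116 https://example.com/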

    • ZeWaka 1 hour ago
      > if perhaps it's sophisticated enough to get past the user agent spoofing

      As part of some browser fingerprinting tooling I have access to at work, I've seen both commercial and free solutions that determine the actual browser being used.

      It's quite easy even if you're just going off of the browser-exposed properties: you check the values against a prepopulated table. You can see some such values here: https://amiunique.org/fingerprint

      Edit: To follow up, one of the leading fingerprinting libraries just ignores the user agent and uses functionality testing as well: https://github.com/fingerprintjs/fingerprintjs/blob/master/s...
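
      The functionality-testing idea is easy to demo: skip the UA entirely and probe engine-specific behavior. A toy TypeScript sketch (these two CSS probes are fragile illustrations; real libraries combine dozens of such quirks):

        // Guess the engine from feature probes rather than the UA string.
        // Order matters: Firefox also supports -webkit-appearance for compat.
        function guessEngine(): "gecko" | "webkit/blink" | "unknown" {
          if (CSS.supports("-moz-appearance", "none")) return "gecko";
          if (CSS.supports("-webkit-appearance", "none")) return "webkit/blink";
          return "unknown";
        }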

    • wongarsu 14 minutes ago
      They are pretending to be an ancient Mozilla version from the time after Netscape but before Firefox, KHTML (which was forked to webkit), Firefox (Gecko engine), Chrome and Safari. The only piece of browser history it's missing is somehow pretending to be IE.
  • ai-christianson 2 hours ago
    How many of you all are running bare metal hooked right up to the internet? Is DDoS or any of that actually a super common problem?

    I know it happens, but also I've run plenty of servers hooked directly to the internet (with standard *nix security precautions and hosting provider DDoS protection) and haven't had it actually be an issue.

    So why run absolutely everything through Cloudflare?

    • matt_heimer 2 hours ago
      Yes, [D]DoS is a problem. It's not uncommon for a single person with residential fiber to have more bandwidth than your small site hosted on a 1U box or VPS. Either your bandwidth is rate limited and they can deny service to your site, or your bandwidth is greater but they can still cause you to go over your allocation and run up massive charges.

      In the past you could ban IPs but that's not very useful anymore.

      The distributed attacks tend to be AI companies that assume every site has infinite bandwidth and their crawlers tend to run out of different regions.

      Even if you aren't dealing with attacks or outages, Cloudflare's caching features can save you a ton of money.

      If you haven't used Cloudflare, most sites only need their free tier offering.

      It's hard to say no to a free service that provides features you need.

      Source: I went over a decade hosting a site without a CDN before it became too difficult to deal with. Basically I spent 3 days straight banning IPs at the hosting-company level, tuning various rate-limiting web server modules, and even scaling the hardware to double the capacity. None of it could keep the site online 100% of the time. Within 30 mins of trying Cloudflare it was working perfectly.

      • johnmaguire 1 hour ago
        > It's hard to say no to a free service that provides feature you need.

        Very true! Though you still see people who are surprised to learn that CF DDoS protection acts as a MITM proxy and can read your traffic in plaintext. This is of course by design, to inspect the traffic. But admittedly, CF is not very clear about this in the admin panel or docs.

        Places one might expect to learn this, but won't:

        - https://developers.cloudflare.com/dns/manage-dns-records/ref...

        - https://developers.cloudflare.com/fundamentals/concepts/how-...

        - https://imgur.com/a/zGegZ00

        • sophacles 53 minutes ago
          How would you do DDoS protection without having something in path?
          • johnmaguire 35 minutes ago
            I hoped it was apparent from my comment that "this is of course by design, to inspect the traffic" meant I understood they are doing it to detect DDoS traffic and separate it from legitimate traffic. But many Cloudflare users are not so technical. I would simply advocate for being more upfront about this behavior.

            That said, their Magic Transit and Spectrum offerings (paid) provide L3/L4 DDoS protection without payload inspection.

            • sophacles 3 minutes ago
              Honestly, I was confused because both pages you link are full of the word proxy, have links to deeper discussions of what a proxy does (including explicit mentions of decryption/re-encryption), and are literally developer docs. Additionally Cloudflare's blog explaining these things in depth are high in search results, and also make the front page here on the regular.

              I incorrectly interpreted your comment as one of the multitude of comments claiming nefarious reasons for proxying without any thought for how an alternative would work.

              Magic Transit is interesting - hard to imagine how it would scale down to a small site though, they apparently advertise whole prefixes over BGP, and most sites don't even have a dedicated IP, let alone a whole /24 to throw around.

    • professorsnep 36 minutes ago
      I run a MediaWiki instance for an online community on a fairly cheap box (not a ton of traffic), but had a few instances of AI bots like Amazon's crawling a lot of expensive API pages thousands of times an hour (despite robots.txt disallowing those). Turned on Cloudflare's bot blocking and 50% of total traffic instantly went away. Even now, blocked bot requests make up 25% of total requests to the site. Without blocking, I would have needed to upgrade quite a bit or play a tiring game of whack-a-mole blocking new IP ranges for the dozens of bots.
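
      For reference, the kind of robots.txt rules those bots ignored (paths illustrative for a MediaWiki setup; honoring them is evidently optional for these crawlers):

        User-agent: *
        Disallow: /api.php
        Disallow: /index.php?
        Crawl-delay: 10
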
    • grishka 2 hours ago
      > How many of you all are running bare metal hooked right up to the internet?

      I do. Many people I know do. In my risk model, DDoS is something purely theoretical. Yes it can happen, but you have to seriously upset someone for it to maybe happen.

      • maples37 1 hour ago
        From my experience, if you tick off the wrong person, the threshold for them starting a DDoS is surprisingly low.

        A while ago, my company was hiring and conducting interviews, and after one candidate was rejected, one of our sites got hit by a DDoS. I wasn't in the room when people were dealing with it, but in the post-incident review, they said "we're 99% sure we know exactly who this came from".

    • nijave 2 hours ago
      Small/medium SaaS. Had ~8 hours of 100k reqs/sec last year when we usually see 100-150 reqs/sec. Moved everything behind a Cloudflare Enterprise setup and ditched AWS Client Access VPN (OpenVPN) for Cloudflare WARP

      I've only been here 1.5 years, but it sounds like we usually see one decent-sized DDoS a year, plus a handful of other "DoS" events, usually AI crawler extensions or third parties calling too aggressively.

      There are some extensions/products that create a "personal AI knowledge base"; they'll use the customer's login credentials and scrape every link once an hour. Some links are really, really resource-intensive data or report requests that are very rare in real usage.

      • gamegod 2 hours ago
        Did you put rate limiting rules on your webserver?

        Why was that not enough to mitigate the DDoS?

        • danielheath 1 hour ago
          Not the same poster, but the first "D" in "DDoS" is why rate-limiting doesn't work - attackers these days usually have a _huge_ (tens of thousands) pool of residential IPv4 addresses to work with.
        • hombre_fatal 1 hour ago
          That might have been good for preventing someone from spamming your HotScripts guestbook in 2005, but not much else.
    • rpgwaiter 2 hours ago
      It’s free unless you’re rolling in traffic, it’s extremely easy to setup, and CF can handle pretty much all of your infra with tools way better than AWS.

      Also, you can buy a cheaper IPv6-only VPS and run it through the free CF proxy to allow IPv4 traffic to your site.

      • zelphirkalt 1 hour ago
        Easy to set up, easy to screw up user experience. Easy-peasy.
    • motiejus 1 hour ago
      I've been running jakstys.lt (and subdomains like git.jakstys.lt) from my closet, a simple residential connection with a small monthly price for a static IP.

      The only time I had a problem was when gitea started caching git bundles of my Linux kernel mirror, which bots kept downloading (things like a full tar.gz of every commit since 2005). The server promptly ran out of disk space. I fixed the gitea settings to not cache those. That was it.

      Never a DDoS. Or I (and uptimerobot) did not notice it. :)

    • blablabla123 34 minutes ago
      The biggest problems I see with DDoS are metered traffic and availability. The largest cloud providers all meter their traffic.

      The availability part, on the other hand, is maybe not so business-critical for many, but for targeted long-term attacks it probably is.

      So I think for some websites, especially smaller ones, it's totally feasible to not use Cloudflare, but it involves planning the hosting really carefully.

    • uniformlyrandom 2 hours ago
      Most exploits target the software, not the hardware. CF is a good reverse proxy.
    • Puts 1 hour ago
      Most (D)DoS attacks are just either UDP floods or SYN floods that iptables will handle without any problem. Sometimes what people think is a DDoS is just their application DDoSing itself because it is making recursive calls to some back-end micro-service.
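
      The kind of rule I mean is short; e.g. a per-source cap on new SYNs with the hashlimit module (numbers illustrative, tune to your traffic):

        # Drop sources sending more than 50 SYNs/second (burst 200)
        iptables -A INPUT -p tcp --syn -m hashlimit \
          --hashlimit-name syn-cap --hashlimit-mode srcip \
          --hashlimit-above 50/second --hashlimit-burst 200 -j DROP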

      If it was actually a traffic-based DDoS, someone still needs to pay for that bandwidth, which would be too expensive for most companies anyway - even if it kept your site running.

      But you can sell a lot of services to incompetent people.

      • hombre_fatal 1 hour ago
        You need an answer to someone buying $10 of booter time and sending a volumetric attack your way. If any of the traffic is even reaching your server, you've already lost, so iptables isn't going to help you because your link is saturated.

        Cloudflare offers protection for free.

      • sophacles 49 minutes ago
        What's the iptables invocation that will let my 10Gbps connection drop a 100Gbps SYN flood while also serving good traffic?
    • codexon 2 hours ago
      It is common once your website hits a certain threshold in popularity.

      If you are just a small startup or a blog, you'll probably never see an attack.

      Even if you don't host anything offensive you can be targeted by competitors, blackmailed for money, or just randomly selected by a hacker to test the power of their botnet.

    • raffraffraff 2 hours ago
      They make it easy to delegate a DNS zone to them and use their API to create records (e.g. install external-dns on Kubernetes and let it create records automatically for ingresses).
    • buyucu 1 hour ago
      DDoS is a problem, but in most ordinary cases it's not as bad as people make it out to be. Even something very simple like fail2ban will go a long way.
    • progmetaldev 2 hours ago
      Web scraping without any kind of sleeping in between requests (usually firing many threads at once), as well as heavy exploit scanning is a near constant for most websites. With AI technology, it's only getting worse, as vendors attempt to bring in content from all over the web without regard for resource usage. Depending on the industry, DDoS can be very common from competitors that aren't afraid to rent out botnets to boost their business and tear down those they compete against.
    • dosdosdosdos 2 hours ago
      [flagged]
  • jeroenhd 41 minutes ago
    I just downloaded Pale Moon to check, and it seems the CAPTCHA straight up crashes. Once it crashes, reloading the page no longer shows the CAPTCHA, so it did pass something at least. I tried another Cloudflare Turnstile, but the entire browser crashed on a segfault, and ever since, the CAPTCHAs don't seem to come up again.

    ChatGPT.com is normally quite reliable at triggering Cloudflare prompts, but that page doesn't seem to work in Pale Moon regardless of prompts. What browser engine does it use these days? Is it still based on Firefox?

    For reference I grabbed the latest main branch of Ladybird and ran that, but Cloudflare isn't showing me any prompts for that either.

  • picafrost 1 hour ago
    Companies like Google and Cloudflare make great tools. They give them away for free. They have different reasons for this, but these tools provide a lot of value to a lot of people. I’m sure that in the abstract their devs mean well and take pride in making the internet more robust, as they should.

    Is it worth giving the internet to them? Is something so fundamentally wrong with the architecture of the internet that we need megacorps to patch the holes?

    • zamadatix 59 minutes ago
      Whether something is "wrong" is often more a matter of opinion than a matter of fact for something as large and complex as the internet. The root of problems like this on the internet is connections don't have an innate user identity associated at the lower layers. By the time you get to an identity for a user session you've already driven past many attack points. There isn't really a "happy" way to remove that from the equation, at least for most people.
  • LeoPanthera 34 minutes ago
    Blocking Falkon is especially egregious if they're not also blocking GNOME Web. Those are the default browsers for Plasma and GNOME respectively, and some of the few browsers left that are "just browsers", with no phoning home or any kind of cloud integration.
  • lapcat 55 minutes ago
    The worst is Cloudflare challenges on RSS feeds. I just have to unsubscribe from those feeds, because there's nothing I can do.
  • chr15m 13 minutes ago
    When one of my nodejs-based sites experienced a DoS, I installed & configured "express-slow-down" as middleware and it resolved the issue.
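
    The whole setup is a few lines; a sketch against express-slow-down v2 in TypeScript (option names changed between major versions, numbers illustrative):

      import express from "express";
      import { slowDown } from "express-slow-down";

      const app = express();

      // After 50 requests per IP in a 15-minute window, delay each further
      // request a little longer than the last one.
      app.use(slowDown({
        windowMs: 15 * 60 * 1000,
        delayAfter: 50,
        delayMs: (used) => (used - 50) * 200,
      }));

      app.get("/", (_req, res) => { res.send("ok"); });
      app.listen(3000);
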
  • Hold-And-Modify 1 hour ago
    Forgot to clarify: this is not about an increased amount of captchas, or an annoyance issue.

    The Cloudflare tool does not complete its verifications, resulting in an endless "Verifying..." loop and thus none of the websites in question can be accessed. All you get to see is Cloudflare.

  • jmclnx 2 hours ago
    I just went, via SeaMonkey, to a site that I think uses Cloudflare. I was able to get to the site. This is on OpenBSD.

    But if someone has a site that is failing, feel free to post it and I will give it a try.

    • matt_heimer 1 hour ago
      I tested Pale Moon on Windows with one of my Cloudflare sites and didn't see any problem either.

      It's probably dependent on the security settings the site owner has chosen. I'm guessing bot fight mode might cause the issue.

  • arielcostas 2 hours ago
    A lot of people are failing to conceive the danger posed to the open web by the fact that so much traffic runs through/to a small bunch of providers (namely Cloudflare, AWS, Azure, Google Cloud, and "smaller" ones like Fastly or Akamai) who can take these kinds of measures without (many) website owners knowing or giving a crap.

    Google itself tried to push crap like Web Environment Integrity (WEI) so websites could verify "authentic" browsers. We got them to stop (for now), but there was already code in the Chromium sources. What makes Cloudflare MITMing traffic and blocking/punishing genuine users who visit websites any different?

    Why are we trusting Cloudflare to be a "good citizen" and not unfairly block or annoy certain people for whatever reason? Or even worse, serve modified content instead of what the actual origin is serving? I mean in the cases where Cloudflare re-encrypts the data, instead of only being a DNS provider. How can we trust that no third party has infiltrated their systems and compromised them? Except "just trust me bro", of course.

    • Retr0id 2 hours ago
      > Or even worse, serve modified content instead of what the actual origin is serving?

      I witnessed this! Last time I checked, in the default config, the connection between Cloudflare and the origin server does not do strict TLS cert validation. Which, for an active-MITM attacker, is as good as no TLS cert validation at all.

      A few years ago an Indian ISP decided that https://overthewire.org should be banned for hosting "hacking" content (iirc). For many Indian users, the page showed a "content blocked" page. But the error page had a padlock icon in the URL bar and a valid TLS cert - said ISP was injecting it between Cloudflare and the origin server using a self-signed cert, and Cloudflare was re-encrypting it with a legit cert. In this case it was very conspicuous, but if the tampering was less obvious there'd be no way for an end-user to detect the MITM.

      I don't have any evidence on-hand, but iirc there were people reporting this issue on Twitter - somewhere between 2019 and 2021, maybe.

      • progmetaldev 2 hours ago
        Cloudflare recently started detecting whether strict TLS cert validation works with the origin server, and if it does, it enables strict validation automatically.
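
        For zones where that automatic upgrade hasn't kicked in, the mode can also be flipped explicitly via the zone settings API; a rough TypeScript sketch (v4 API; the token needs zone-settings edit permission):

          // Set a zone's SSL mode to Full (strict).
          const res = await fetch(
            `https://api.cloudflare.com/client/v4/zones/${process.env.CF_ZONE_ID}/settings/ssl`,
            {
              method: "PATCH",
              headers: {
                Authorization: `Bearer ${process.env.CF_API_TOKEN}`,
                "Content-Type": "application/json",
              },
              body: JSON.stringify({ value: "strict" }),
            },
          );
          console.log((await res.json()).success);
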
    • raffraffraff 2 hours ago
      It's not that people aren't aware that it's bad. They just don't care enough. And they think "I could keep all this money safely in my mattress, or I could put it into one of those three big banks!"... or something like that.
    • progmetaldev 2 hours ago
      Maybe it's the customers I deal with, or my own ignorance, but what alternatives are there to a service like Cloudflare? It is very easy to set up, and my clients don't want to pay a lot of money for hosting. With Cloudflare, I can turn on DDoS and bot protection to prevent heavy resource usage, as well as turn on caching to keep resource usage down. I built a plugin for the CMS I use (Umbraco - runs on .NET) to clear the cache for specific pages, or all pages (such as when a change is made to a global element like the header). I am able to run a website on Azure with less than the minimum recommended memory and CPU for Umbraco, due to lots of performance analysis and enhancements over the years, but also because I have Cloudflare in front of the website.

      If there were an alternative that would provide the same benefits at roughly the same cost, I would definitely be willing to take a look, even if it meant I needed to spend some time learning a different way to configure the service from the way I configure Cloudflare.

    • SpicyLemonZest 2 hours ago
      I can easily conceive the danger. But I can directly observe the danger that's causing traffic to be so centralized - if you don't have one of those providers on your side, any adversary with a couple hundred dollars to burn can take down your website on demand. That seems like a bigger practical problem for the open web, and I don't know what the alternative solution would be. How can I know, without incurring any nontrivial computation cost, that a weird-looking request coming from a weird browser I don't recognize is not a botnet trying to DDOS me?
      • hombre_fatal 1 hour ago
        Exactly. If you're going to bemoan centralization, which is fine, you also need to address the reason why we're going in that direction. And that's probably going to involve rethinking the naive foundational aspects of the internet.
      • juped 2 hours ago
        How do you know a normal-looking request coming from Google Chrome is not a botnet trying to DDoS you?
        • SpicyLemonZest 1 hour ago
          You deploy complex proprietary heuristics to identify whether incoming requests look more like an attack or more like something a user would legitimately send. If you find a new heuristic and try to deploy it, you'll immediately notice if it throws a bunch of false positives for Chrome, but you might not notice so quickly for Pale Moon or other non-mainstream browsers.

          (And if I were doing this on my own, rather than trusting Cloudflare to do it, I would almost surely decide that I don't care enough about Pale Moon users to fix an otherwise good rule that's blocking them as a side effect.)

  • hexagonwin 3 hours ago
    Same issue. I haven't been able to visit any websites powered by Cloudflare on my SeaMonkey browser recently.
  • randunel 2 hours ago
    Chromium on Linux is also frequently blocked by Cloudflare. I can't use tools such as HIBP.
    • martinbaun 2 hours ago
      Same here. I just gave up on most of these websites. When I absolutely need to use a website such as for flights, I have a clean chrome browser I spin up.
    • wkat4242 2 hours ago
      Yeah, and Firefox on Linux too. I do have the user agent set to one from Edge, because otherwise Microsoft blocks many features in Office 365. Once it thinks it's Edge, it suddenly works just fine. But it doesn't completely fix all the Cloudflare blocks and captchas.
  • garspin 1 hour ago
    I use minbrowser.org/. Some sites disallow it... Min suggests changing the user-agent setting to something like Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/121.0
  • jmbwell 2 hours ago
    I'm still in the habit of granting Cloudflare a presumption of good faith. Developers frequently make assumptions about things like browsers that can cause problems like this. Something somewhere gets over-optimized, or someone somewhere does some 80/20 calculation, or something gets copy-pasted or (these days) produced by an LLM. There are plenty of reasons why this might be entirely unintentional, or that the severity of the impacts of a change were underestimated.

    I agree that this exposes the risk of relying overmuch on a handful of large, opaque, unaccountable companies. And as long as Cloudflare's customers are web operators (rather than users), there isn't a lot of incentive for them to be concerned about the user if their customers aren't.

    One idea might be to approach web site operators who use Cloudflare and whose sites trigger these captchas more than you'd like. Explain the situation to the web site operator. If the web site operator cares enough about you, they might complain to Cloudflare. And if not, well, you have your answer.

  • graemep 2 hours ago
    Most of the sites mentioned in the forum work for me with PaleMoon.

    I do get a "your browser is unsupported" message from the forums.

  • kordlessagain 1 hour ago
    Cloudflare's proxy model solved immediate security and reliability problems but created a lasting tension between service stability and user choice. Like old telecom networks that restricted equipment, Cloudflare's approach favors their paying customers' needs over end-user freedom, particularly in browser choice. While this ensures predictable revenue and service quality, it echoes historical patterns where infrastructure standardization both enables and constrains.
  • pndy 1 hour ago
    I don't have any issues so far under Librewolf, Waterfox and Ungoogled Chromium.
  • out-of-ideas 1 hour ago
    At this point, I'm honestly surprised that all non-mainstream browsers don't emulate the same user-agent and SSL fingerprint order of a mainstream browser - or add a flag to change behavior per "tab" (or, if CLI, per call or other scope) - coupled with a JavaScript environment which also aligns with those.
  • lopkeny12ko 1 hour ago
    Cloudflare has been blocking "mainstream" browsers too, if you are generous enough to consider Firefox "mainstream." The "verify you are a human" sequence gets stuck in a never-ending loop where clicking the checkbox only refreshes the page and presents the same challenge. Certain websites (most notably archive.is) have been completely inaccessible for me for years for this reason.
    • boomboomsubban 38 minutes ago
      Do you have something that blocks some amount of scripts? I need to allow third party scripts from either Google or Cloudflare to get a lot of the web to function.
  • fishgoesblub 2 hours ago
    From 2024 to now, I've had to constantly verify that I'm human just to visit certain sites due to Cloudflare. Now it's even worse, since (sometimes) cdnjs.cloudflare.com loads infinitely unless I turn on my VPN. Infuriating that I have to use a service known for potential spam to get another service that blocks spam to bloody work.
  • reify 3 hours ago
    I use Librewolf and Zen Browser

    If I am met with the dreaded Cloudflare "Verify you are a human" box, which is very rare for me, I don't bother and just close the tab.

  • rollcat 2 hours ago
    On one hand, this is a scummy move from CloudFlare. All this has ever done is make browsers spoof their UAs. Mozilla/4.0 anyone?

    On the other, Pale Moon is an ancient (pre-quantum) volunteer-supported fork of Firefox, with boatloads of known and unfixed security bugs - some fixes might be getting merged from upstream, but for real, the codebases diverged almost a decade ago. You might as well be using IE 11.

  • juped 2 hours ago
    Things like "using Linux" or "having an adblocker at all" get you sent to captcha hell. Anything where you're in the minority of traffic. It's not going to change; why would it?
    • jeroenhd 1 hour ago
      Things are going to change. Unfortunately, they are only getting worse.

      CAPTCHAs are barely sufficient against bots these days. I expect the first sites to start implementing Apple/Cloudflare's remote attestation as a CAPTCHA replacement any day now, and after that it's going to get harder and harder to use the web without Official(tm) Software(tm).

      Using Linux isn't what's getting you blocked. I use Linux, and I'm not getting blocked. These blocks are the results of a whole range of data points, including things like IP addresses.

    • flyinghamster 1 hour ago
      For me, captcha hell is very random, and when it happens, it's things like "pick all squares with stairs" where I have to decide if that little corner of a stairway counts (and it never seems to) or "pick all squares with motorcycles" where the camera seemed to have a vision problem.

      What usually works for me is to close the browser, reload, and try again.

    • DoctorOW 2 hours ago
      I have multiple blockers (Ublock Origin, Privacy Badger, Facebook Container) in Firefox and have not experienced this issue.
      • maples37 1 hour ago
        For what it's worth, this has been my experience as well. I've seen maybe a handful of full-page Cloudflare walls over the past year, and none have gotten me stuck in any kind of loop.
    • linuxftw 2 hours ago
      I have been using Fedora + Firefox for years. I sometimes get a captcha from Cloudflare, but not frequently. Works just fine.

      I have not tried less mainstream browsers, just FF and Chrome.

  • stainablesteel 1 hour ago
    Is spoofing not a simple solution to this?
  • buyucu 1 hour ago
    Welcome to the modern world. Any deviation from the average will get you flagged as a suspicious deviant. It's not just browsers. It's everything.
  • indigodaddy 1 hour ago
    This is totally fucked if true
  • nonrandomstring 3 hours ago
    I use w3m which makes me about as popular as a fart in a spacesuit. No Cloudflare things for me.
  • zb3 1 hour ago
    Do these browsers employ any additional tracking protections? "Browser integrity checks" are browser-specific and they might rely on the "entropy" those tracking vectors provide.
    • zb3 1 hour ago
      So this would only be a "bad" move by Cloudflare if you could get around it by recompiling the browser with a spoofed UA/strings. Otherwise they'd have to support every possible engine, which is infeasible. That said, the "open web" is indeed dead.
  • lofaszvanitt 2 hours ago
    Cloudflare is slowly but surely turning the web into a walled garden.
    • thomassmith65 2 hours ago
      Pretty soon the internet will just be a vestigial thing that people use to connect to the Cloudflare.
  • ycombonator 2 hours ago
    [dead]
  • scblock 3 hours ago
    [flagged]
    • nickburns 3 hours ago
      Not helpful to an otherwise worthwhile discussion.
      • scblock 1 hour ago
        The rest of this comment section is the same sentiment mixed in with trying to make excuses for Cloudflare. So... it is helpful. Stop allowing a private company to control and MITM the entire internet.
  • slothsarecool 1 hour ago
    Cloudflare is actually pretty upfront about which browsers they support. You can find the whole list right in their developer docs. This isn't some secret they're trying to hide from website owners or users - it's right here https://developers.cloudflare.com/waf/reference/cloudflare-c... - My guess is that there is no response because not one of the browsers you listed is supported.

    Think about it this way: when a framework (many modern websites) or CAPTCHA/challenge doesn't support an older or less common browser, it's not because someone's sitting there trying to keep people out. It's more likely they are trying to balance the maintenance costs and the hassle involved in supporting the many other platforms out there (browsers in this case). At what point is a browser relevant? 1 user? 2 users? 100? Can you blame a company that accommodates probably >99% of the traffic they usually see? I don't think so, but that's just me.

    At the end, site owners can always look at their specific situation and decide how they want to handle it - stick with the default security settings or open things up through firewall rules. It's really up to them to figure out what works best for their users.

    • Hold-And-Modify 1 hour ago
      Not exactly. They say:

      "Challenges are not supported by Microsoft Internet Explorer."

      Nowhere is it mentioned that internet access will be denied to visitors not using "major" browsers, as defined by Cloudflare presumably. That wouldn't sound too legal, honestly.

      Below that: "Visitors must enable JavaScript and cookies on their browser to be able to pass any type of challenge."

      These conditions are met.

      • slothsarecool 1 hour ago
        > *If your visitors are using an up-to-date version of a major browser, they will receive the challenge correctly.*

        I'm unsure what part of this isn't clear: major browsers, as long as they are up to date, are supported and should always pass challenges. Pale Moon isn't a major browser; neither are the other browsers mentioned in the thread.

        > * Nowhere is it mentioned that internet access will be denied to visitors not using "major" browsers *

        Challenge pages are what your browser is struggling to pass; you aren't seeing a block page or a straight-up denial of the connection. Instead, the challenge isn't passing because whatever update CF has done has clearly broken compatibility with Pale Moon; I seriously doubt this was on purpose. Regarding those annoying challenge pages, they aren't meant to be used 24/7, as they are genuinely annoying. If you are seeing challenge pages more often than you do on Chrome, it's likely that the site owner is actively flagging your session to be challenged; they can undo this by adjusting their firewall rules.

        If a site owner decides to enable challenge pages for every visitor, you should shift the blame to the site owner's lack of interest in properly tuning their firewall.

        • Hold-And-Modify 4 minutes ago
          Fair enough, but... if Cloudflare's challenge bugs out, who is going to fix it? Aren't they responsible for their own critical tools?

          Because in the end, the result is connection denial. I don't want to connect to Cloudflare, I want to connect to the website.

          I read that part. They still do not indicate what may happen, or what their responsibility is - if any - for visitors with non-major browsers.

          Not claiming this is "on purpose" or a conspiracy, but if these legitimate protests keep getting ignored then yes, it becomes discrimination. If they can't be bothered, they should clearly state that their tool is only compatible with X browsers. Who is to blame for "an incorrectly received challenge"? The website? The user who chooses a secure, but "wrong" browser not on their whitelist?

          Cloudflare is there for security, not to hand out a "major browser" approval pass. They have the resources to improve response times, provide better support, and deal with these incompatibility issues. But do they want to? Until now, they did.

        • ricardobeat 1 hour ago
          So... no new browsers should ever be created? Or only created by people with enough money to get Cloudflare onboard from the start? Nothing new will ever become major if they're denied access to half the web.
          • slothsarecool 56 minutes ago
            You can create a new browser; there are plenty of modern new browsers that aren't considered major and work just fine, because they run on top of recent releases of Chromium.

            There are actually hundreds of smaller chromium forks that add small features, such as built-in adblock and have no issues with neither Cloudflare nor other captchas.

            • ricardobeat 55 minutes ago
              I think it's pretty clear this is about browser engines. If your view holds then Servo (currently #3 story in front page) will never make it.
    • megous 17 minutes ago
      They do not support major browsers. They support "major browsers in default configuration without any extensions" (which is of course a ridiculous proposition), forcing people to either abandon any privacy/security-preserving measures they use, or abandon the websites covered by CF.

      I use up-to-date Firefox, and was blocked from using the company GitLab for months on end simply because I had disabled some useless new web API in about:config way before CF started silently requiring it, without any feature testing or meaningful error message for the user. Just a redirect loop. The GitLab support forum was completely useless for this, just blaming the user.

      So we dropped GitLab at the company and went with basic git-over-https hosting + cgit, rather than pay some company that will happily block us via some user-hostile intermediary without any resolution. I figured out what was "wrong" (lack of feature testing for the web API features CF uses, and lack of meaningful error message feedback to the user) after the move.