Cloudflare was down
814 points by mektrik 20 hours ago | 518 comments
  • kaliqt 20 hours ago |
    NPM is down as a result.
    • chokominto 19 hours ago |
      Craaazzzyy
  • mercurialsolo 20 hours ago |
    As is supabase
  • Andugal 20 hours ago |
    Notion is also down as a result
  • arunaugustine 20 hours ago |
    Shopify is down.
  • Geep5 20 hours ago |
    Claude RIP
  • dinoqqq 20 hours ago |
    LinkedIn, Perplexity as well
  • mercurialsolo 20 hours ago |
    shopify.com
  • xyproto 20 hours ago |
    Yes.

    Weird that https://www.cloudflarestatus.com/ isn't reporting this properly. It should be full of red blinking lights.

    • csomar 20 hours ago |
      > In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary. Dec 05, 2025 - 07:00 UTC

      Something must have gone really wrong.

      • shafyy 20 hours ago |
        Life hack: Announce bug that brings your entire network down as scheduled maintenance.
      • headmelted 20 hours ago |
        It's 1AM in San Francisco right now. I don't envy the person having to call Matthew Prince and wake him up for this one. And I feel really bad for the person that forgot a closing brace in whatever config file did this.
        • csomar 20 hours ago |
          > And I feel really bad for the person that forgot a closing brace in whatever config file did this.

          If a closing brace takes your whole infra down, my guess is that we'll see more of this.

        • artlovecode 20 hours ago |
          Agreed, I feel bad for them. But mostly because Cloudflare's workflows are so bad that you're seemingly set up for repeated, very public failures. How does this keep happening without leadership's heads rolling? The culture clearly is not fit for their level of criticality.
          • esseph 19 hours ago |
            > The culture clearly is not fit for their level of criticality

            I don't think anyone's is.

            • everfrustrated 19 hours ago |
              How often do you hear of Akamai going down? And they host a LOT more enterprise/high-value sites than Cloudflare.

              There's a reason Cloudflare has been really struggling to get into the traditional enterprise space, and it isn't price.

              • inferiorhuman 18 hours ago |
                A quick Google turned up an Akamai outage in July that took Linode down, and two more in 2021. At that scale nobody's going to come up smelling like roses. I mostly dealt with Amazon crap at megacorp, but nobody that had to deal with our Akamai stuff had anything kind to say about them as a vendor.

                At first blush it's getting harder to "defend" use of Cloudflare, but I'll wait until we get some idea of what actually broke. For the time being I'll save my outrage for the AI scrapers that drove everyone into Cloudflare's arms.

              • esseph 11 hours ago |
                The last place I heard of someone deploying anything to Akamai was 15 years ago in FedGov.

                Akamai was historically only serving enterprise customers. Cloudflare opened up tons of free plans, new services, and basically swallowed much of that market during that time period.

        • viraptor 19 hours ago |
          > I don't envy the person having to call Matthew Prince

          They shouldn't need to do that unless they're really disorganised. CEOs are not there for day to day operations.

    • mikkom 20 hours ago |
      Company internal status pages are always like this. When you don't report problems they don't exist!
    • chironjit 20 hours ago |
      Yeah, their status site reports nothing, but then clicking on some of the links on that site brings you to a 500 error.
    • 63stack 20 hours ago |
      This is just business as usual, status pages are 95% for show now. The data center would have to be under water for the status page to say "some users might be experiencing disruptions".
      • csomar 20 hours ago |
        They just did an update, and it is bad (in the sense that they don't seem to realize their clients are down?)

        > Investigating - Cloudflare is investigating issues with Cloudflare Dashboard and related APIs.

        > These issues do not affect the serving of cached files via the Cloudflare CDN or other security features at the Cloudflare Edge.

        > Customers using the Dashboard / Cloudflare APIs are impacted as requests might fail and/or errors may be displayed.

        • Eikon 20 hours ago |
          > (in the sense that they are not realizing their clients are down?)

          Their own website seems down too https://www.cloudflare.com/

          --

          500 Internal Server Error

          cloudflare

          • mikkom 19 hours ago |
            >Customers using the Dashboard / Cloudflare APIs are impacted as requests might fail and/or errors may be displayed.

            "Might fail"

      • yapyap 19 hours ago |
        well it does say that now, so…

        which datacenter got flooded?

        • rvnx 19 hours ago |
          > In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary. Dec 05, 2025 - 09:00 UTC

          It's a scheduled maintenance, so the SLA should not apply, right?

    • tommek4077 20 hours ago |
      Yes, it’s really ‘weird’ that they refuse to share any details. Completely unlike AWS, for example. As if being open about issues with their own product wouldn’t be in their best interest. /s
    • darccio 20 hours ago |
      https://updog.ai/status/cloudflare reported the incident 13 minutes ago (at the moment of writing this).
    • jonathanlydall 20 hours ago |
      Now showing a message, posted at 08:56 UTC.
    • fxd123 20 hours ago |
      > Investigating - Cloudflare is investigating issues with Cloudflare Dashboard and related APIs.

      They seem to now, a few minutes after your comment.

      • redm 20 hours ago |
        I'm much more concerned with customer sites being down, which the status page indicates are not impacted. They are... :/
    • javier2 20 hours ago |
      Yeah. I only work for a small company, but you can be certain we will not update the status page if only a small portion of customers are affected. And if we are fully down, rest assured there will be no hands available to keep the status page updated.
      • s_dev 19 hours ago |
        >rest assured there will be no available hands to keep the status page updated

        That's not how status pages work if implemented correctly. The real reason status pages aren't updated is SLAs. If you agree in a contract to 99.99% uptime, your status page better reflect that, or it invalidates many contracts. This is why AWS also lies about its uptime and status page.

        These services rarely experience outages according to their own figures, but rather 'degraded performance' or some other language that talks around the issue rather than acknowledging it.

        It's like buying a house: you need an independent surveyor, not the one offered by the developer/seller, to check for problems with foundations or rotting timber.

        • 8cvor6j844qw_d6 19 hours ago |
          I imagine there will be many levels of "approvals" to get the status page to actually show down, since SLA uptime contracts are involved.
        • lucianbr 19 hours ago |
          Are the contracts so easy to bypass? Who signs a contract with an SLA knowing the service provider will just lie about the availability? Is the client supposed to sue the provider any time there is an SLA breach?
          • netdevphoenix 19 hours ago |
            Anyone who doesn't have any choice, financially or gnostically. Same reason why people pay Netflix despite the low quality of most of their shows and the constant termination of TV series after one season. Same reason why people put up with Meta not caring about moderating harmful content. The power dynamics resemble a monopoly.
            • ozim 19 hours ago |
              Most services are not really critical, but customers want to have 99.999% on paper.

              Most of the time people will just get by and write off even a full day of downtime as a minor inconvenience. Loss of revenue for the day - well, you will most likely have to eat that, because going to court and having lawyers fight over it will probably cost you as much as just forgetting about it.

              If your company goes bankrupt because AWS/Cloudflare/GCP/Azure is down for a day or two - guess what - you won't have the money to sue them ¯\_(ツ)_/¯ and will most likely have a bunch of more pressing problems on your hands.

            • lucianbr 17 hours ago |
              Why bother to put the SLA in the contract at all, if people have no choice but to sign it?

              Netflix doesn't put in the contract that they will have high-quality shows. (I assume; I don't have a contract to read right now.)

          • immibis 19 hours ago |
            The company that is trying to cancel its contract early needs to prove the SLA was violated, which is very easy if the company providing the service also provides a page that says their SLA was violated. Otherwise it's much harder to prove.
          • heipei 19 hours ago |
            The client is supposed to monitor availability themselves, that is how these contracts work.
        • laurent123456 19 hours ago |
          This is weird - at this level contracts are supposed to be rock solid, so why wouldn't they require accurate status reporting? That's trivial to implement, and you can even require it to be hosted by a neutral third party like UptimeRobot and be done with it.

          I'm sure there are gray areas in such contracts but something being down or not is pretty black and white.

          • remus 19 hours ago |
            > I'm sure there are gray areas in such contracts but something being down or not is pretty black and white.

            Is it? Say you've got some big geographically distributed service doing some billions of requests per day with a background error rate of 0.0001%, what's your threshold for saying whether the service is up or down? Your error rate might go to 0.0002% because a particular customer has an issue so that customer would say it's down for them, but for all your other customers it would be working as normal.

          • franga2000 19 hours ago |
            > something being down or not is pretty black and white

            This is so obviously not true that I'm not sure if you're even being serious.

            Is the control panel being inaccessible for one region "down"? Is their DNS "down" if the edit API doesn't work, but existing records still get resolved? Is their reverse proxy service "down" if it's still proxying fine, just not caching assets?

            • laurent123456 18 hours ago |
              I understand there are nuances here, and I may be oversimplifying, but if part of the contract effectively says "You must act as a proxy for npmjs.com" yet the site has been returning 500 Cloudflare errors across all regions several times within a few weeks while still reporting a shining 99.99% uptime, something doesn't quite add up. Still, I'm aware I don't know much about these agreements, and I'm assuming the people involved aren't idiots and have already considered all of this.
          • javier2 12 hours ago |
            > something being down or not is pretty black and white

            it really isn't. We often have degraded performance for a portion of customers, or are down just for the customers of a small part of the service. It has basically never happened that our service is 100% down.

        • javier2 19 hours ago |
          I work for a small company. We have no written SLA agreements.
        • redm 19 hours ago |
          SLAs usually just give you a small credit for the exact period of the incident, which is asymmetric to the impact. We always have to negotiate for termination rights for failing to meet SLA standards but, in reality, we never exercise them.

          Reality is that in an incident, everyone is focused on fixing the issue, not updating status pages; automated checks fail or have false positives often too. :/

          • korm 18 hours ago |
            Yep, every SLA I've ever seen only offers credit. The idea that providers are incentivized to fudge uptime % due to SLAs makes no sense to me. Reputation and marketing maybe, but not SLAs.

            The compensation is peanuts. $137 off a $10,000 bill for 10 hours of downtime, or 98.68% uptime in a month, is well within the profit margins.
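
            As a rough sketch of how such a credit can pencil out (my own back-of-the-envelope numbers, assuming a credit prorated to the outage window; real credit schedules and month lengths vary):

                fn main() {
                    let monthly_bill = 10_000.0_f64; // dollars
                    let hours_in_month = 730.0_f64;  // roughly 365 * 24 / 12
                    let downtime_hours = 10.0_f64;

                    let uptime_pct = 100.0 * (hours_in_month - downtime_hours) / hours_in_month;
                    // Assumed model: refund only the slice of the bill covering the outage window.
                    let credit = monthly_bill * downtime_hours / hours_in_month;

                    // Prints roughly "uptime: 98.6%, credit: $137".
                    println!("uptime: {uptime_pct:.1}%, credit: ${credit:.0}");
                }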

      • lawnchair 19 hours ago |
        I have to say that if an incident becomes so overwhelming that nobody can spare even a moment to communicate with customers, that points to a deeper operational problem. A status page is not something you update only when things are calm. It is part of the response itself. It is how you keep users informed and maintain trust when everything else is going wrong.

        If communication disappears entirely during an outage, the whole operation suffers. And if that is truly how a company handles incidents, then it is not a practice I would want to rely on. Good operations teams build processes that protect both the system and the people using it. Communication is one of those processes.

      • GoblinSlayer 19 hours ago |
        You won't be able to update the status page due to failures anyway.
        • PhilippGille 18 hours ago |
          Why not? A good status page runs on a different cloud provider in a different region, specifically to not be affected at the same time.
      • onion2k 19 hours ago |
        if we are fully down, rest assured there will be no available hands to keep the status page updated

        There is no quicker way for customers to lose trust in your service than for it to be down and for them not to know that you're aware and trying to fix it as quickly as possible. One of the things Cloudflare gets right is the frequent public updates when there's a problem.

        You should give someone the responsibility for keeping everyone up to date during an incident. It's a good idea to give that task to someone quite junior - they're not much help during the crisis, and they learn a lot about both the tech and communication by managing it.

    • hinkley 20 hours ago |
      They were intending to start a maintenance window 6 minutes ago, but they were already down by then.
    • dinoqqq 20 hours ago |
      There is an update:

      "Cloudflare Dashboard and Cloudflare API service issues"

      Investigating - Cloudflare is investigating issues with Cloudflare Dashboard and related APIs.

      Customers using the Dashboard / Cloudflare APIs are impacted as requests might fail and/or errors may be displayed. Dec 05, 2025 - 08:56 UTC

    • rollulus 19 hours ago |
      Not weird, that’s tradition by now.
    • jachee 19 hours ago |
      Management is always going to take too long (in an engineer’s opinion) to manually throw the alerts on. They’re pressing people for quick fixes so they can claim their SLAs are intact.
    • rvz 19 hours ago |
      The AI agents can't help out this time.
      • rifycombine1 19 hours ago |
        maybe we can go back to Stack Overflow :)
    • Havoc 19 hours ago |
      It's wild how none of the big corporations can make a functional status page.
      • dncornholio 19 hours ago |
        They can. They don't want to though.
      • javier2 19 hours ago |
        They could, but accurate reporting is not good for their SLAs
    • jbuild 19 hours ago |
      Interesting, I get a 500 if I try to visit coinbase.com, but my WebSocket connections to advanced-trade-ws.coinbase.com are still live with no issues.
      • emakarov 19 hours ago |
        probably these websockets are not going through cloudflare
    • tjpnz 19 hours ago |
      They have enough data to at least automate yellow.
    • devmor 19 hours ago |
      Yes, the incident report claims this was limited to their client dashboard. It most certainly was not. I have the PagerDuty alerts to prove it...
  • headmelted 20 hours ago |
    Claude offline too. 500 errors on the web and the mobile app has been knocked out.
    • lionkor 20 hours ago |
      I had to switch to Gemini for it to help me form a thought so I could type this reply. It's dire.
  • mercurialsolo 20 hours ago |
    claude code works tho
  • sammy2255 20 hours ago |
    500 internal server error on most things:

    500 Internal Server Error cloudflare

  • pzs 20 hours ago |
    Just experienced this and came here to check, because even their website is down. The referenced link also returns with 500.
  • headmelted 20 hours ago |
    "In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary. Dec 05, 2025 - 07:00 UTC"

    No need. Yikes.

  • imperfectfourth 20 hours ago |
    downdetector is also down
    • maxlin 20 hours ago |
      it being the first google result and serving the exact same error as the pages one is trying to get info from is too funny
  • AmateurAlert 20 hours ago |
    • 26d0 20 hours ago |
      hmm... https://downdetectorsdowndetector.com/

      (edit: it's working now (detecting downdetector's down))

      • aurareturn 20 hours ago |
        Ehh, so down detector for down detector is up but it is inaccurate.
      • xyproto 20 hours ago |
        A wrong downdetectorsdowndetector is worse than a 500 one. :D
      • deveesh_shetty 20 hours ago |
        You had one job.
      • Andugal 20 hours ago |
        So DownDetector is down, but DownDetectorDownDetector does not detect it... We probably need one more DownDetector. (no)
        • halgir 20 hours ago |
          We have one. But according to Down Detector's Down Detector's Down Detector's Down Detector, that's also down.
          • Dilettante_ 20 hours ago |
            Well Down Detector's Down Detector isn't down...What we might need is a Down Detector's Down Detector Validator
        • namjh 20 hours ago |
          Yes, we do have one[1], but unfortunately it looks like it only checks reachability, not integrity.

          [1]: https://downdetectorsdowndetectorsdowndetector.com/

      • manyaoman 20 hours ago |
        So down²detector was fake all along?
      • vanyauhalin 20 hours ago |
        • Recursing 20 hours ago |
        • mrducksy 19 hours ago |
          It’s down detectors all the way down!
        • superdisk 19 hours ago |
          Lol. The fact that the 4x one actually works and is correctly reporting that the 3x one is down makes this a lot funnier to me.
        • altmanaltman 19 hours ago |
          it's like they didn't fully think it through/expect people to actually use it so soon
        • ssolarsystem1 19 hours ago |
          downdetectorsdowndetectors didn't detect breakdown of downdetectors with 500 Error
      • aroman 20 hours ago |
        great news, schrodingersdetector.com is available!
      • maxlin 20 hours ago |
        >half the internet is down
        >downdetector is down
        >downdetector down detector reports everything is fine

        software was a mistake

      • O4epegb 20 hours ago |
        This is a fake detector that just has frontend logic for mocking realistic data, you can easily see it in the source code.
    • xx_ns 20 hours ago |
      At least it's still right in spite of being down.
  • strangeness 20 hours ago |
    Who knows, maybe it will be because of C or C++ this time. Or something else.
    • etyhhgfff 15 hours ago |
      They rewrote some of their core components from nginx+LuaJIT to Rust recently for better perf and latency. I guess there are some bugs in the new codebase.
  • piker 20 hours ago |
    At least the 500 error announces ownership.

    Imagine how productive we'll be now!

  • rvz 20 hours ago |
    Round 2 of Cloudflare outages.

    We can now see which companies have failed in their performative systems design interviews.

    Looking forward to the post-mortem.

  • sharts 20 hours ago |
    HaHa -Nelson
  • csomar 20 hours ago |
    Interestingly, my site running on workers https://codeinput.com is still functioning. Worth mentioning that I don't use Cloudflare firewall/caching (directly exposed workers)
  • c16 20 hours ago |
    CloudFlare: You can't go down if you're never up.
  • pm90 20 hours ago |
    This is not good. One major outage? Something exceptional. Several outages in a short time? As someone that's worked in operations, I have empathy; there are so many "temp hacks" that are put in place for incidents. But the rest of the world won't… they're gonna suffer a massive reputation loss if this goes on as long as the last one.
    • karmakurtisaani 20 hours ago |
      Probably fired a lot of their best people in the past few years and replaced them with AI. They have a de-facto monopoly, so we'll just accept it and wait patiently until they fix the problem. You know, business as usual in the grift economy.
      • 5d41402abc4b 19 hours ago |
        >They have a de-facto monopoly

        On what? There are lots of CDN providers out there.

        • immibis 19 hours ago |
          There's only one that lets everyone sign up for free.
        • esseph 19 hours ago |
          They do far more than just CDN. It's the combination of service, features, reach, price, and the integration of it all.
      • rvz 19 hours ago |
        The "AI agents" are on holiday when an outage like this happens.
      • mvdtnz 11 hours ago |
        This didn't happen at all. You're just completely making shit up.
    • PlotCitizen 20 hours ago |
      This is a good reminder for everyone to reconsider making all of their websites depend on a single centralized point of failure. There are many alternatives to the different services which Cloudflare offers.
      • koakuma-chan 19 hours ago |
        My Cloudflare Pages website works fine.
      • coffeebeqn 19 hours ago |
        We just love to merge the internet into single points of failure
        • phatfish 19 hours ago |
          This is just how free markets work; on the internet, with no "physical" limitations, it is simply accelerated.

          Left alone, corporations that rival governments emerge, and they are completely unaccountable. At least there is some accountability of governments to the people, depending on your flavour of government.

        • mschuster91 19 hours ago |
          no one loves the need for CDNs other than maybe video streaming services.

          the problem is, below a certain scale you can't operate anything on the internet these days without hiding behind a WAF/CDN combo... with the cut-off mark being "we can afford a 24/7 ops team". even if you run a small niche forum no one cares about, all it takes is one disgruntled donghead that you ban to ruin the fun - ddos attacks are cheap and easy to get these days.

          and on top of that comes the shodan skiddie crowd. some 0day pops up, chances are high someone WILL try it out in less than 60 minutes. hell, look into any web server log, the amount of blind guessing attacks (e.g. /wp-admin/..., /system/login, /user/login) or path traversal attempts is insane.

          CDN/WAFs are a natural and inevitable outcome of our governments and regulatory agencies not giving a shit about internet security or about punishing bad actors.

      • berkes 19 hours ago |
        But a CDN, and most other products CF offers, is centralized by nature.

        If you switch from CF to the next CF competitor, you've not improved this dependency.

        The alternative here is complex or even non-existent. Complex would be some system that allows you to hot-swap a CDN, or to have fallback DDoS protection services, or to build your own in-house. Which, IMO, is the worst thing to do if your business is elsewhere. If you sell, say, pet food online, the dependency risk that comes with a vendor like CF is quite certainly less than the investment needed for - and risk associated with - building DDoS protection or a CDN on your own; all investment that's not directed at selling more pet food or getting higher margins doing so.

        • agnivade 19 hours ago |
          You can load-balance between CDN vendors as well
          • otikik 19 hours ago |
            Then your load balancer becomes the single point of failure.
            • roryirvine 18 hours ago |
              BGP Anycast will let you dynamically route traffic into multiple front-end load balancers - this is how GSLB is usually done.

              Needs an ASN and a decent chunk of PI address space, though, so not exactly something a random startup will ever be likely to play with.

            • DaanDL 17 hours ago |
              Then add a load balancer in front of your load balancer, duh. /s
          • sofixa 19 hours ago |
            With what? The only (sensible) way is DNS, but then your DNS provider is your SPOF. Amazon used to run 2 DNS providers (separate NS from 2 vendors for all of AWS), but when one failed, there was still a massive outage.
        • altmanaltman 19 hours ago |
          yeah there is no incentive to do a CDN in-house, esp for businesses that are not tech-oriented. And the cost of the occasional outage has not really been higher than the cost of doing it in-house. And I'm sure other CDNs get outages as well, just CF is so huge everyone gets to know about it and it makes the news.
      • inferiorhuman 18 hours ago |

          There are many alternatives
        
        Of varying quality depending on the service. Most of the anti-bot/captcha crap seems to be equivalently obnoxious, but the handful of sites that use PerimeterX… I've basically sworn off DigiKey as a vendor since I keep getting their bullshit "press and hold" nonsense even while logged in.

        I don't like that we're trending towards a centralized internet, but that's where we are.

    • rvz 20 hours ago |
      We are now seeing which companies do not consider the third-party risk of single points of failure in systems they do not control as part of their infrastructure, and what their contingency plan is.

      It turns out that so far there isn't one, other than contacting the CEO of Cloudflare rather than switching on a temporary mitigation measure to ensure minimal downtime.

      Therefore, many engineers at affected companies would have failed their own systems design interviews.

      • cryptonym 19 hours ago |
        Sometimes it's not worth it. Your plan is just to accept you'll be off for a day or two, while you switch to a competitor.
        • rvz 17 hours ago |
          Can't say that when it is a time critical service such as hospitals, banks, financial institutions or air-traffic control services.
          • cryptonym 14 hours ago |
            Only a fool would build an architecture for critical air-traffic with Cloudflare as a SPoF.
        • creamyhorror 15 hours ago |
          If there's a fitting competitor worth switching to.

          Plus most people don't get blamed when AWS (or to a lesser extent Cloudflare) goes down, since everyone knows more than half the world is down, so there's not an urgent motivation to develop multi-vendor capability.

      • throwaway42346 19 hours ago |
        Alternative infrastructure costs money, and it's hard to get approval from leadership in many cases. I think many know what the ideal solution looks like, but anything linked to budgets is often out of the engineer's hands.

        In some cases it is also a valid business decision. If you have 2 hours of downtime every 5 years, it may not have a significant revenue impact. Most customers think it's too much bother to switch to a competitor anyway, and even if it were simple the competition might not be better. Nobody gets fired for buying IBM.

        The decision was probably made by someone else who moved on to a different company, so they can blame that person. It's only when downtime significantly impacts your future ARR (and bonus) that leadership cares (assuming that someone can even prove that they actually lose customers).

      • formerly_proven 19 hours ago |
        On the other thread there were comments claiming it's unknowable what IaaS some SaaS is using, but SaaS vendors need to disclose these things one way or another, e.g. in DPAs. Here, for example, is Render's list of subprocessors: https://render.com/security

        It’s actually fairly easy to know which 3rd party services a SaaS depends on and map these risks. It’s normal due diligence for most companies to do so before contracting a SaaS.

    • berkes 20 hours ago |
      At least this warrants a good review of anyone's dependency on cloudflare.

      If it turns out that this was really just random bad luck, it shouldn't affect their reputation (if humans were rational, that is...)

      But if it is what many people seem to imply, that this is the outcome of internal problems/cuttings/restructuring/profit-increase etc, then I truly very much hope it affects their reputation.

      But I'm afraid it won't. Just like Microsoft continues to push out software that, compared to competitors, is unstable, insecure, frustrating to use, lacks features, etc., without it harming their reputation or even their bottom line too much. I'm afraid Cloudflare has a de-facto monopoly (technically: a big moat) and by now can get away with offering poorer quality at increasing prices.

      • coffeebeqn 19 hours ago |
        Vibe infrastructure
        • rvz 19 hours ago |
          So that is the best-case definition of what "Vibe Engineering" is.
      • MrAureliusR 19 hours ago |
        well that's the thing, such a huge number of companies route all their traffic through Cloudflare. This is at least partially because for a long time, there was no other company that could really do what Cloudflare does, especially not at the scales they do. As much as I despise Cloudflare as a company, their blog posts about stopping attacks and such are extremely interesting. The amount of bandwidth their network can absorb is jaw-dropping.

        I've said to many people/friends that use Cloudflare to look elsewhere. When such a huge percentage of the internet flows through a single provider, and when that provider offers a service that allows them to decrypt all your traffic (if you let them install HTTPS certs for you), not only is that a hugely juicy target for nation-states but the company itself has too much power.

        But again, what other companies can offer the insane amount of protection they can?

      • zelphirkalt 19 hours ago |
        Microsoft's reputation couldn't be much lower at this point, that's their trick.

        The issue is the uninformed masses being led to use Windows when they buy a computer. They don't even know how much better a system could work, and so they accept whatever is shoved down their throats.

      • rsynnott 18 hours ago |
        > Just like Microsoft continues to push out software, that, compared to competitors, is unstable, insecure, frustrating to use, lacks features, etc, without it harming their reputation or even bottomlines too much.

        Eh.... This is _kind_ of a counterfactual, tho. Like, we are not living in the world where MS did not do that. You could argue that MS was in a good place to be the dominant server and mobile OS vendor, and simply screwed both up through poor planning, poor execution, and (particularly in the case of server stuff) a complete disregard for quality as a concept.

        I think someone who'd been in a coma since 1999 waking up today would be baffled at how diminished MS is, tbh. In the late 90s, Microsoft practically _was_ computers, with only a bunch of mostly-dying UNIX vendors for competition. And one reasonable lens through which to interpret its current position is that it's basically due to incompetence on Microsoft's part.

      • gbrindisi 17 hours ago |
        The crowdstrike incident taught us that no one is going to review any dependency whatsoever.
        • ezst 16 hours ago |
          Yep, that's what late stage capitalism leaves you with: consolidation, abuse, helplessness and complacency/widespread incompetence as a result
    • pyuser583 19 hours ago |
      Lots of big sites are down
    • belter 19 hours ago |
      This will be another post-mortem of...config file messed...did not catch...promise to be doing better next....We are sorry.

      The problem is architectural.

      • lucyjojo 5 hours ago |
        cloudflare is a huge system in active development.

        it will randomly fail. there is no way it cannot.

        there is a point where the cost to not fail simply becomes too high.

    • jcmfernandes 18 hours ago |
      Absolutely. I wouldn’t be surprised if they turned the heat up a little after the last incident. The result? Even more incidents.
    • bluerooibos 17 hours ago |
      I'm quite sure the reputational damage has already been done.

      How do they not have better isolation of these issues, or redundancy of some sort?

      • brandensilva 15 hours ago |
        The seed has been planted. It will take a while for others to fill the void. Still, the big players see this as an opportunity to steal market share if Cloudflare cannot live up to its reputation.
    • wooque 17 hours ago |
      Two days ago they had an outage that affected Europe; Cloudflare seems to be going down the drain. I removed it from my personal sites.
  • dev0p 20 hours ago |
    Isn't it happening a little too often now? Did someone .unwrap in production again?
  • tovej 20 hours ago |
    Internet-level companies are having more outages recently. Is the exposed surface area increasing or is the quality of service suffering?
  • timvdalen 20 hours ago |
    Wow, just plain 500s on customer sites. That's a level of down you don't see that often.
    • disillusioned 20 hours ago |
      At least they branded it!
    • ablation 20 hours ago |
      Yeah that's a hard 500 right? Not even Cloudflare's 500 branded page like last time. What could have caused this, I wonder.
      • mckirk 20 hours ago |
        "A cable!"

        "How do you know?"

        "I'm holding it!"

      • Hamuko 19 hours ago |
        I hope it’s not another Result.unwrap().
        • singularity2001 19 hours ago |
          maybe this would cause rust to adopt exception handling, and by exception I mean panic
    • willtemperley 20 hours ago |
      Yes Claude is down with a 500 (cloudflare).
    • ransom1538 20 hours ago |
      So. I don't understand the 5 nines they promote. One bad day and those nines are gone. So the next year you are pushing 2 nines.
      • kingstnap 19 hours ago |
        It's just fabricated bullshit. It's how all the companies do it. 99.999% over a year is literally 5 minutes, or under an hour in a decade; that's wildly unrealistic.

        Reddit was once down for a full day, and that month they reported 99.5% uptime instead of the 99.99% they normally claimed for most months.

        There is this amazing combination of nonsense going on to achieve these kinds of numbers:

        1. Straight-up fraudulent information on the status page, reporting incidents as more minor than any internal monitor would claim.

        2. If it's working for at least a few percent of customers it's not down. Degraded is not counted.

        3. If any part of anything is working then it's not down. In the Reddit case, even if the site was dead, as long as the image server was still 1% functional with some internal ping, the status was good.
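
        As a rough sketch of that arithmetic (my own back-of-the-envelope numbers, nothing from any status page), here is how little downtime each tier of nines actually budgets per year:

            fn main() {
                // Minutes in a (non-leap) year.
                let minutes_per_year = 365.0 * 24.0 * 60.0;

                for uptime_pct in [99.9_f64, 99.99, 99.999] {
                    let allowed_min = minutes_per_year * (100.0 - uptime_pct) / 100.0;
                    // 99.9% -> ~526 min/yr, 99.99% -> ~53 min/yr, 99.999% -> ~5 min/yr
                    println!("{uptime_pct}% uptime allows ~{allowed_min:.0} min of downtime per year");
                }
            }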

        • zelphirkalt 12 hours ago |
          Funnily enough, an hour in a decade on a good hoster, with a stable service running on it, occasionally updated by version number ... it might even be possible. Maybe not quite, but close, if one tries. Whereas it seems completely impossible with Cloudflare, AWS, and whatnot, which are having outages every other week these days.
    • maxekman 20 hours ago |
      A precious glimpse of the less seen page renders.
    • jondot 20 hours ago |
      it's like someone-shut-down-the-power 500s
    • gwd 19 hours ago |
      Unlike the previous outage, my server seems fine, and I can use Cloudflare's tunnel to ssh to the host as well.
    • Eikon 19 hours ago |
      Mine [0] seems to have very high latency but no 500s. But yes, most Cloudflare-proxied websites I tried seem to just return 500s.

      [0] https://www.merklemap.com/

  • sushidev 20 hours ago |
    Are you serious?
  • moralestapia 20 hours ago |
    Ooof, this one looks like a big one!

    canva.com

    chess.com

    claude.com

    coinbase.com

    kraken.com

    linkedin.com

    medium.com

    notion.so

    npmjs.com

    shopify.com (!)

    and many more I won't add bc I don't want to be spammy.

    Edit: Just checked all my websites hosted there (~12), they're all ok. Other people with small websites are doing well.

    Only huge sites seem to be down. Perhaps they deal with them separately, the premium-tier of Cloudflare clients, ... and those went down, dang.

    • reddalo 19 hours ago |
      My small websites are also up. I wonder if they're going to go down soon, or if we're safe.
    • shultays 19 hours ago |
      zoom
    • otherme123 19 hours ago |
      readthedocs down is hurting me the most. My small websites are doing OK.
  • atraac 20 hours ago |
    All those enterprise architects must be fuming now
  • aurareturn 20 hours ago |
    My company's services went down as well.
  • SherryWong 20 hours ago |
    LinkedIn and Medium are also down as a result
  • chinathrow 20 hours ago |
    Looks like (some) sites behind Cloudflare still work if they do not have caching on.
    • jonathanlydall 20 hours ago |
      It's not simply about caching as we have CDN and reverse proxying which are still running without issue.
  • songtianlun1 20 hours ago |
    yes...
  • jonathanlydall 20 hours ago |
    It seems regular reverse proxying and R2 still work, as we use those and they still seem to be working fine.

    Can't get to the Dashboard though.

  • thiscatis 20 hours ago |
    Somebody at Cloudflare is stretching that initial investigation time as much as possible to avoid having to update their status to being down and losing that Christmas bonus.
  • nabla9 20 hours ago |
    It's a configuration error or something related to configuration. It always is with these big things.

    Nice thing about Cloudflare being down is that almost everything is down at once. Time for peace and quiet.

    • norskeld 19 hours ago |
      Damn, I wish CloudFlare being down also affected local development, so I could take a break from doing frontend… :'(
  • da_grift_shift 20 hours ago |
    https://www.cloudflarestatus.com/incidents/hlr9djcf3nyp

    >We will be performing scheduled maintenance in ORD (Chicago) datacenter

    >Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region.

    Looks like it's not just Chicago that CF brought down...

    • yessferatu 19 hours ago |
      South African here. Down on our side. Huge sites, like our primary news site, are down - medical services, emergency services/information etc... all down. It's been like this since 11:00am our time, so about 13 minutes now.
  • maxlin 20 hours ago |
    >Go to <social media page> - 500 error from cloudflare
    >Google is <social media page> down -> click first link - literally the exact same 500 cloudflare error html from downdetector

    I thought we were meant to learn something ... ?

  • theginger 20 hours ago |
    I don't want to criticize Cloudflare, I love what they do and understand the scale of the challenge, but most people don't, and 2 incidents like this in a month or so is going to hit their reputation.
    • The_President 6 hours ago |
      After being overly critical of Matrix on here the other day, I have come around to another conclusion: talent issues are industry-wide, and it sucks making a bad hire where competence issues arise that don't match the resume.
  • 3xstphvs 20 hours ago |
    aw, i can't go on rateyourmusic
  • phartenfeller 20 hours ago |
    Wow, three times in a month is really crushing their trust.
    • dabeeeenster 20 hours ago |
      3?! When was the second?
    • 8cvor6j844qw_d6 19 hours ago |
      I'll need to check up on DigitalOcean's uptime; it may be better than Cloudflare's.
      • phartenfeller 19 hours ago |
        My Hetzner servers have been running fine for years. Okay, there were times when I broke something, but at least I was able to fix it quickly and never felt dependent on others.
        • iso1631 18 hours ago |
          CxOs want to be dependent on someone else, specifically suppliers with pieces of paper saying "we are great, here's a 1% discount on next year's renewal".

          If the in-house tech team breaks something and fixes it, that's great from an engineer's point of view - we like to be useful - but the person at the top is blamed.

          If an outsourced supplier (one which the consultants recommend, look at Gartner Quadrants etc) fails, then the person at the top is not blamed, even though they are powerless and the outage is 10 times longer and 10 times as frequent.

          Outsourcing is not about outcome, it's about accountability, and specifically avoiding it.

  • countWSS 20 hours ago |
    Everything I use depends on Cloudflare operating perfectly; practically 99% of these services go down. What magical qualities does it have that no competitors have emerged for its services?
  • nicolailolansen 20 hours ago |
    They had a few good weeks.
  • arunaugustine 20 hours ago |
    They had scheduled maintenance between 7am and 11am UTC in Chicago. But that should have re-routed traffic, not taken down the internet, right?
    • PrayagS 20 hours ago |
      I'm in India and we're affected as well.
      • J4PJ1T 19 hours ago |
        Oceania here, gang, and I think it is a global issue
  • makkoncept 20 hours ago |
    https://downdetectorsdowndetector.com/ is up :) but the status is not correct.
  • matt3210 20 hours ago |
    Ooof status 500 someone’s getting fiiiiired!
  • Hashversion 20 hours ago |
    How long does cloudflarestatus.com usually take to detect this?
  • valdemarrolfsen 20 hours ago |
    No engineers from Cloudflare reading hackernews these days? Should update your status page!
  • Palmik 20 hours ago |
    This is the second time this week: https://news.ycombinator.com/item?id=46140145

    The previous one affected European users for >1h and made many Cloudflare websites nearly unusable for them.

  • 0xfedcafe 20 hours ago |
    Funny how even safe Rust isn't able to stop vibe coding without proper validation. And the fact that it's a monopoly isn't so funny anymore.
    • dkdbejwi383 20 hours ago |
      There is no language that makes it impossible to have any kind of bug ever. The safety that languages like Rust offer is around memory, not bad configuration or faulty business logic.
    • lionkor 20 hours ago |
      Rust is one of the few languages where I found AI to be very well checked. The type system can enforce so many constraints that you do avoid lots of bugs, and the AI will get caught writing shit code.

      Of course, vibe coding will always find a way to make something horribly broken but pretty.

      • nromiun 20 hours ago |
        I have noticed LLMs tend to generate very verbose code. What an average human might do in 10 LoC, LLMs will stretch that to 50-60 lines. Sometimes with comments on every line. That can make it hard to see those bugs.
      • 0xfedcafe 20 hours ago |
        Yep, that's what I wrote. It wasn't sarcasm.
  • m078 20 hours ago |
    Cloudflare is investigating issues with Cloudflare Dashboard and related APIs.
  • wildcard1210 20 hours ago |
    my shopify store is down
  • ojm 20 hours ago |
    Turnstile seems up still.
  • aroman 20 hours ago |
    looks like a big one. interestingly, our site, which uses a TON of Cloudflare services[0] — yet not their front-line proxy — is doing fine: https://magicgarden.gg.

    So it seems like it's just the big ol' "throw this big orange reverse proxy in front of your site for better uptime!" is what's broken...

    [0] Workers, Durable Objects, KV, R2, etc

    • bpye 20 hours ago |
      Moving off of Cloudflare for my personal domain is on my todo list for the holidays...
    • reassess_blind 19 hours ago |
      My sites that use their main proxy are seemingly up and working? Could be a regional PoP issue.
  • wildcard1210 20 hours ago |
    My Shopify store is down. My competitor stores are also down.
  • asmor 20 hours ago |
    That's the 30% vibe code they promised us.

    Cynicism aside, something seems to be going wrong in our industry.

    • nlitened 20 hours ago |
      Also “Rewrite it in Rust”.

      P.S. it’s a joke, guys, but you have to admit it’s at least partially what’s happening

      • koakuma-chan 19 hours ago |
        No, it has nothing to do with Rust.
        • zwnow 19 hours ago |
          The first one had something to do with Rust :-)
          • kortilla 19 hours ago |
            Not really. In C or C++ that could have just been a segfault.

            .unwrap() literally means “I’m not going to handle the error branch of this result, please crash”.
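
            A tiny illustrative sketch of the two choices (not Cloudflare's actual code; the file name is just a placeholder):

                use std::fs;

                fn main() {
                    // Choice 1: .unwrap() - if reading the file fails, the whole process panics.
                    // let data = fs::read_to_string("features.json").unwrap();

                    // Choice 2: handle the Err branch and degrade instead of crashing.
                    match fs::read_to_string("features.json") {
                        Ok(data) => println!("loaded {} bytes", data.len()),
                        Err(e) => eprintln!("failed to load file, keeping the old data: {e}"),
                    }
                }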

            • mike_hearn 19 hours ago |
              Indeed, but fortunately there are more languages in the world than Rust and C++. A language that performed decently well and used exceptions systematically (Java, Kotlin, C#) would probably have recovered from a bad data file load.
              • koakuma-chan 19 hours ago |
                There is nothing that prevents you from recovering from a bad data file load in Rust. The programmer who wrote that code chose to crash.
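
                A minimal sketch of what that recovery could look like (my own illustration with a hypothetical loader, not the code in question): keep the last good data and only swap it out when the new load succeeds.

                    fn load_data_file(path: &str) -> Result<Vec<String>, std::io::Error> {
                        // Hypothetical loader: one entry per line.
                        Ok(std::fs::read_to_string(path)?
                            .lines()
                            .map(str::to_owned)
                            .collect())
                    }

                    fn main() {
                        // Whatever was loaded last time (empty here for brevity).
                        let mut current: Vec<String> = Vec::new();

                        match load_data_file("data.txt") {
                            // Swap in the new data only if the load succeeded.
                            Ok(new_data) => current = new_data,
                            // Otherwise log and keep serving the previous data.
                            Err(e) => eprintln!("bad data file, keeping {} old entries: {e}", current.len()),
                        }

                        println!("serving {} entries", current.len());
                    }
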
                • mike_hearn 19 hours ago |
                  That's exactly my point. There should be no such thing as choosing to crash if you want reliable software. Choosing to crash is idiomatic in Rust but not in managed languages in which exceptions are the standard way to handle errors.
                  • koakuma-chan 19 hours ago |
                    I am not a C# guy, but I wrote a lot of Java back in the day, and I can authoritatively tell you that it has so-called "checked exceptions" that the compiler forces you to handle. However, it also has "runtime exceptions" that you are not forced to handle, and they can happen anywhere and at any time. Conceptually, it is the same as error versus panic in Rust. One such runtime exception is the notorious `java.lang.NullPointerException`, a.k.a. the billion-dollar mistake. So even software in "managed" languages can and does crash, and it is way more likely to do so than software written in Rust, because "managed" languages do not have all the safety features Rust has.
                    • GoblinSlayer 16 hours ago |
                      When dotnet has an unhandled exception, it terminates with abort.
                    • mike_hearn 12 hours ago |
                      In practice, programs written in managed languages don't crash in the sense of aborting the entire process. Exceptions are usually caught at the top level (both checked and unchecked) and then logged, usually aborting the whole unit of work.

                      For trapping a bad data load it's as simple as:

                          try {
                              data = loadDataFile();
                          } catch (Exception e) {
                              LOG.error("Failed to load new data file; continuing with old data", e);        
                          }
                      
                      This kind of code is common in such codebases and it will catch almost any kind of error (except out of memory errors).
                      • koakuma-chan 11 hours ago |
                        Here is the Java equivalent of what happened in that Cloudflare Rust code:

                          try {
                            data = loadDataFile();
                          } catch (Exception e) {
                            LOG.error("Failed to load new data file", e);
                            System.exit(1);
                          }
                        
                        So the "bad data load" was trapped, but the programmer decided that either it would never actually occur, or that it is unrecoverable, so it is fine to .unwrap(). It would not be any less idiomatic if, instead of crashing, the programmer decided to implement some kind of recovery mechanism. It is that programmer's fault, and has nothing to do with Rust.

                        Also, if you use general try-catch blocks like that, you don't know if that try-catch block actually needs to be there. Maybe it was needed in the past, but something changed, and it is no longer needed, but it will stay there, because there is no way to know unless you specifically look. Also, you don't even know the exact error types. In Rust, the error type is known in advance.

                        • mike_hearn 7 hours ago |
                          Yes, I know. But nobody writes code like that in Java. I don't think I've ever seen it outside of top level code in CLI tools. Never in servers.

                          > It is that programmer's fault, and has nothing to do with Rust.

                          It's Rust's fault. It provides a function in its standard library that's widely used and which aborts the process. There's nothing like that in the stdlibs of Java or .NET

                          > Also, if you use general try-catch blocks like that, you don't know if that try-catch block actually needs to be there.

                          I'm not getting the feeling you've worked on many large codebases in managed languages to be honest? I know you said you did but these patterns and problems you're raising just aren't problems such codebases have. Top level exception handlers are meant to be general, they aren't supposed to be specific to certain kinds of error, they're meant to recover from unpredictable or unknown errors in a general way (e.g. return a 500).

        • gwd 19 hours ago |
          But it might have something to do with the "rewrite" part:

          > The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. There’s nothing wrong with it. It doesn’t acquire bugs just by sitting around on your hard drive.

          > Back to that two page function. Yes, I know, it’s just a simple function to display a window, but it has grown little hairs and stuff on it and nobody knows why. Well, I’ll tell you why: those are bug fixes. One of them fixes that bug that Nancy had when she tried to install the thing on a computer that didn’t have Internet Explorer. Another one fixes that bug that occurs in low memory conditions. Another one fixes that bug that occurred when the file is on a floppy disk and the user yanks out the disk in the middle. That LoadLibrary call is ugly but it makes the code work on old versions of Windows 95.

          > Each of these bugs took weeks of real-world usage before they were found. The programmer might have spent a couple of days reproducing the bug in the lab and fixing it. If it’s like a lot of bugs, the fix might be one line of code, or it might even be a couple of characters, but a lot of work and time went into those two characters.

          > When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work.

          From https://www.joelonsoftware.com/2000/04/06/things-you-should-...

          • windward 19 hours ago |
            A lot of words for a 'might'. We don't know what caused the downtime.
            • gwd 19 hours ago |
              Not this time; but the rewrite was certainly implicated in the previous one. They actually had two versions deployed; in response to unexpected configuration file size, the old version degraded gracefully, while the new version failed catastrophically.
              • perching_aix 18 hours ago |
                Both versions were caught off guard by the defective configuration they fetched; it was not a case of a fought-and-eliminated bug reappearing, like in the blog post you quoted.
      • kenonet 19 hours ago |
        it's never the technology, it's the implementation
      • MegaThorx 19 hours ago |
        Did you consider rewriting your joke in Rust?
      • rifycombine1 19 hours ago |
        cc: @oncall then trigger pagerduty :)
    • joenada 20 hours ago |
      Going? I think we got there a long time ago. I'm sure we all try our best but our industry doesn't take quality seriously enough. Not compared to every other kind of engineering discipline.
      • asmor 19 hours ago |
        Always been there. But it seems to be creeping into institutions that previously cared over the past few years, accelerating in the last.
    • themafia 19 hours ago |
      Salaries are flat relative to inflation and profits. I've long felt that some of the hype around "AI" is part of a wage suppression tactic.
    • iso1631 19 hours ago |
      > Cynicism aside, something seems to be going wrong in our industry.

      Started after the GFC and the mass centralisation of infrastructure

  • nromiun 20 hours ago |
    I wonder if it is another bug, like the unwrap, in their rewritten code.

    Also, I don't think every one of their services got affected. I am using their proxy and Pages services and both are still up.

  • davidcheungo123 20 hours ago |
    wtf, cannot work now
  • paweladamczuk 20 hours ago |
    I noticed this when my Claude iPhone app stopped working.
  • Hashversion 20 hours ago |
    cloudflare pages seems to be working!
  • odie5533 20 hours ago |
    How is Hacker News still up?
    • sunbum 20 hours ago |
      Because it doesn't use Cloudflare, duh?
      • otherme123 19 hours ago |
        I have a handful of sites with DNS/NS through Cloudflare, with their certificates, and they are working OK.
      • PrayagS 19 hours ago |
        From their response headers, it seems like the request is coming from NGINX directly. How do they defend themselves against DOS attacks?
        • sunbum 18 hours ago |
          Big server. And if it goes down it goes down? Who cares, it's hackernews.
    • grundrausch3n 19 hours ago |
      I thought they were running classic FreeBSD servers like in ye olde times.
  • yread 20 hours ago |
    Hah even Linkedin is showing 500 for me
  • computersuck 20 hours ago |
    waaay too soon
  • b_bloch 20 hours ago |
    That's quite unfortunate xD
  • SCdF 20 hours ago |
    Really disappointed that down detectors down detector[1] isn't detecting that down detector[2] is down

    [1] https://downdetectorsdowndetector.com/

    [2] https://downdetector.com/

  • meindnoch 20 hours ago |
    Maybe they should stop vibe coding and vibe reviewing their PRs?
  • Dilettante_ 20 hours ago |
    "I warned you about Cloudflare bro!!!! I told you dog!"
  • grim_io 20 hours ago |
    I wonder how many uptime SLAs will be violated this year.
  • virtualritz 20 hours ago |
    Yeah, and because of this, for example, Claude Code is down too, because the auth goes through CF. F*cking marvelous, the decentralized web ...
  • reneberlin 20 hours ago |
    I can imagine the horrible pressure on the people responsible for resolution. At that scale of impact it is very hard to keep calm - but still the hive mind has to cooperate and solve the puzzle while the world is basically halted and ready to blame the company you work for.
  • ricardo81 20 hours ago |
    Their uptime over the year is likely faring worse than your average hosting company, DNS provider or CDN.
    • cryptonym 20 hours ago |
      Some may experience more downtime due to their outages than they'd have from DDoS.
    • iso1631 18 hours ago |
      Their uptime over the year is faring worse than one of my pi holes, let alone the resilient service.
  • igleria 20 hours ago |
    Heads will roll at Cloudflare. E-commerce customers must be furious.

    Impossible not to feel bad for whoever is tasked to cleanup the mess.

    • zppln 19 hours ago |
      Especially around Christmas. I was about to buy a pair of Birkenstocks. Nope, site is down. Went on to buy a microphone holder - nope, that site is down as well. :) Sure, I'll still get around to it eventually.
  • domysee 20 hours ago |
    I'm just realizing how much we depend on Cloudflare working. Every service I use is unreachable. Even worse than last time. It's almost impossible to do any work atm.
  • yoctosec 20 hours ago |
    I use Cloudflare Tunnel to protect my Raspberry Pi, but I think I'll just run the site without the Tunnel now. My main concern is privacy, but I'm not ready to accept such frequent downtime and dependence on them. The whole reason to self-host was to be independent anyway. Does anyone have a recommendation for that (that is free)? Should I worry about privacy? My name and my city are on the website anyway.
    • runeb 19 hours ago |
      Check out Tailscale
      • unixfox 19 hours ago |
        Tailscale's control plane uses Cloudflare.
        • runeb 11 hours ago |
          Thanks, I did not know this. My Tailscale was unaffected by the outage.
      • yoctosec 19 hours ago |
        And what about a website I want to make public? I'm just concerned about my IP being visible, like for my personal website or my SearXNG instance.
        • iso1631 18 hours ago |
          Personally I'd just proxy it through a VM running on Hetzner, Linode, Rackspace, etc.
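
          A rough sketch of that idea, assuming a throwaway VM whose only job is to forward raw TCP to the hidden origin (the origin address below is a placeholder; TLS passes through untouched because the bytes are copied verbatim):

            use std::io;
            use std::net::{Shutdown, TcpListener, TcpStream};
            use std::thread;

            const ORIGIN: &str = "203.0.113.10:443"; // placeholder: your real origin's IP:port

            // Copy bytes one way until either side closes, then shut the pair down.
            fn pipe(mut from: TcpStream, mut to: TcpStream) {
                let _ = io::copy(&mut from, &mut to);
                let _ = to.shutdown(Shutdown::Both);
            }

            fn main() -> io::Result<()> {
                // Binding port 443 on the VM needs root or CAP_NET_BIND_SERVICE.
                let listener = TcpListener::bind("0.0.0.0:443")?;
                for client in listener.incoming() {
                    let client = client?;
                    match TcpStream::connect(ORIGIN) {
                        Ok(origin) => {
                            let (c2, o2) = (client.try_clone()?, origin.try_clone()?);
                            // Shuttle traffic in both directions on two threads.
                            thread::spawn(move || pipe(client, origin));
                            thread::spawn(move || pipe(o2, c2));
                        }
                        Err(e) => eprintln!("origin unreachable: {e}"),
                    }
                }
                Ok(())
            }

          In practice you'd probably reach for nginx, HAProxy or Caddy instead, but the principle is the same: only the VM's IP is ever public.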
        • runeb 11 hours ago |
          Tailscale Funnel, but might need a paid account
    • DocJade 19 hours ago |
      my tunnels are still working, oddly
      • yoctosec 19 hours ago |
        Now mine works again too, I guess it was a short outage
    • The_President 14 hours ago |
      You can basically do this yourself without punching holes out to the public internet. Create a VPN between the Pi and your clients and access it over that private network.
  • LeonenTheDK 20 hours ago |
    Nice, just got woken up by my outage alarms, just for it to be Cloudflare again. At least it's _my_ problem!

    But my goodness, they're really struggling over the last couple weeks... Can't wait to read the next blog post.

    • koakuma-chan 19 hours ago |
      Outage alarms?
  • bytejanitor 20 hours ago |
    gitlab.com hasn't noticed yet.
    • alex_suzuki 20 hours ago |
      it has now, for me. can't access web UI (SaaS, not self-hosted, obviously)
  • erikbye 20 hours ago |
    This is getting embarrassing.
  • hasperdi 20 hours ago |
    Even LinkedIn is now down. Opening linkedin.com gives me a 500 server error with "Cloudflare" at the bottom. Quite embarrassing.
    • asmor 19 hours ago |
      At least they were available when Front Door was down!
  • Oras 20 hours ago |
    Went to Ahrefs to check a domain, saw a 500, and came here to check.

    I have a few domains on Cloudflare and all of them are working with no issues, so it might not be a global issue.

  • basisword 20 hours ago |
    I'm sure everybody learnt their lesson from last month's outage and built in redundancy or stopped relying on Cloudflare.
  • matt3210 20 hours ago |
    Everyone says vibe coding, but people are perfectly capable of being incompetent without AI's help
    • koolba 19 hours ago |
      Sure, but with AI we can automate that incompetence.
  • jondot 20 hours ago |
    LinkedIn is down
    • CodinM 19 hours ago |
      came here for this thx
  • xingwu 20 hours ago |
    • alextingle 19 hours ago |
      "Content not available in your region."

      Please avoid Imgur.

      • sebzim4500 19 hours ago |
        Use a vpn or avoid the UK
  • rgun 20 hours ago |
    https://registry.npmjs.org/ is down, affecting our builds
  • erikbye 20 hours ago |
    Cloudflare uptime has worsened a lot lately, AI coding has increased exponentially, hmm
  • meerab 20 hours ago |
    It is up now!
    • dale1110 19 hours ago |
      You sure?
      • dale1110 19 hours ago |
        Just checked. It's up!!
        • wyboy86110 19 hours ago |
          nope... order page is still 500
  • Ueland 20 hours ago |
    Interestingly enough, some MS/Azure services are also down. For example, https://www.office.com/ just returns:

    > We are sorry, something went wrong. Please try refreshing the page in a few minutes. If the problem persists, please visit status.cloud.microsoft for updates regarding known issues.

    The status page of course says nothing

    • GeertVL 20 hours ago |
      Linkedin -> the same
      • nikanj 19 hours ago |
        For me LinkedIn returns the Cloudflare 500 error
    • codeisforever 19 hours ago |
      Seems all of Shopify.com is down. Every store
  • vinskabun 20 hours ago |
    pixiv.net
  • jazzyjackson 20 hours ago |
    Is it at all achievable to be fronted by a CDN but fall back to the raw server in case the front falls over? Better to be vulnerable to DDoS than unreachable altogether.
    • calyhre 19 hours ago |
      But then you potentially end up exposing the origin server. It could be opt-in, though.
    • koolba 19 hours ago |
      With Cloudflare specifically, probably not. IIRC, they require your domain's DNS to resolve through them in order to operate, so if they're down I don't see how you'd change it to route directly to the underlying site.

      Even if you could, having two sets of TLS termination is going to be a pain as well.
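
      One partial workaround, sketched below, rests on a big assumption: that Cloudflare's API and authoritative DNS keep answering while only the proxy layer is broken (true in some of their outages, not guaranteed). It flips the record's "orange cloud" off via the v4 API so the name resolves straight to the origin. The zone/record IDs and token are placeholder environment variables, and the reqwest/serde_json crates are assumed.

        // Cargo.toml (assumed): reqwest = { version = "0.12", features = ["blocking", "json"] }, serde_json = "1"
        use std::env;

        fn main() -> Result<(), Box<dyn std::error::Error>> {
            // Placeholders - set these to your own zone ID, record ID and API token.
            let zone = env::var("CF_ZONE_ID")?;
            let record = env::var("CF_RECORD_ID")?;
            let token = env::var("CF_API_TOKEN")?;

            let url = format!("https://api.cloudflare.com/client/v4/zones/{zone}/dns_records/{record}");

            // Turn the proxy off so the record resolves to the origin IP directly.
            let resp = reqwest::blocking::Client::new()
                .patch(&url)
                .bearer_auth(token)
                .json(&serde_json::json!({ "proxied": false }))
                .send()?;

            println!("Cloudflare API responded: {}", resp.status());
            Ok(())
        }

      Even then, the trade-offs above apply: the origin IP becomes public, it has to terminate TLS itself, and clients only switch over once the record's TTL expires.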

  • justmarc 20 hours ago |
  • ianberdin 20 hours ago |
    NPM is down. The world is collapsing, thanks to Cloudflare.
  • w4zz 20 hours ago |
    GitLab down as well
  • neo_tokyo 20 hours ago |
    Someone's been vibe coding the scheduled maintenance.
  • samwreww 20 hours ago |
    claude.ai is down because of it :( Good for OpenAI, as they're using something else, maybe Vercel?
  • ThalesX 20 hours ago |
    I just started getting npm errors while developing something; I was like, hmm, strange... Then I tried to go to isitdown. That was also down. I was like, oh, this must be something local to me (I'm in a remote place visiting my gramps).

    Then I go to Hacker News to check. Lo and behold, it's Cloudflare. This is sort of worrying...

  • nlstitch 20 hours ago |
    Whatever happened to "no deploys on Fridays"? haha
    • kenonet 19 hours ago |
      haha for real
  • ianberdin 20 hours ago |
    I have a $10B idea: a Cloudflare that doesn't fail so often.
    • reddalo 20 hours ago |
      It exists and it's called Bunny.net
    • biql 19 hours ago |
      How about: an internet that is actually decentralized.
      • ianberdin 19 hours ago |
        Yes, on one hand it was so wonderful. Cloudflare came along and said, "Yeah, now we'll save everyone from DDoS, everything's perfect, we'll speed up your site," and bam, they became a bottleneck for the entire internet. It's some kind of nightmare. Why didn't several other startups like it appear, with more money invested in them, so the points of failure were spread out? I don't understand this. Or at least Cloudflare itself should have had some backup mechanism, so that in case of a failure something still works, even if slowly, or at least so they could redirect traffic directly, bypassing their proxies. They just didn't do that at all. Something is definitely wrong.
        • viraptor 19 hours ago |
          > Why didn't several other such popular startups appear

          bunny.net

          fastly.com

          gcore.com

          keycdn.com

          Cloudfront

          Probably some more I forgot now. CF is not the only option and definitely not the best option.

          > Yeah, now we'll save everyone from DDoS, everything's perfect, we'll speed up your site,

          ... and host the providers selling DDoS services. https://privacy-pc.com/articles/spy-jacking-the-booters.html

          • ianberdin 19 hours ago |
            Thanks for sharing these alternatives, they look good. Of course, the most important thing is that Cloudflare is free, while these alternatives cost money - hundreds of dollars at my traffic volume of tens of terabytes. Of course, I really don't want to pay. So, as the saying goes, the mice cried and pricked themselves, but kept on eating the cactus.
            • viraptor 19 hours ago |
              Nothing's free - one day they will come knocking. Better be prepared to serve at an affordable level.
      • iso1631 19 hours ago |
        Nobody got fired for choosing clownflare
    • SoKamil 17 hours ago |
      Looking at their market cap, it's a $71.5B idea
  • MildlySerious 20 hours ago |
    I can't update DNS entries for my domains with Porkbun, because it's "Powered by Cloudflare".
  • blackhaz 20 hours ago |
    Anyone shorting the damn stock?
  • nish__ 20 hours ago |
    Just in time for the London work day :)
  • kinensake 20 hours ago |
    Every time Cloudflare is down I'm not sure if it's really down or not because most down detector websites use Cloudflare. Lmao