• gruez 4 days ago |
    For people who want IP certificates, keep in mind that certbot doesn't support it yet, with a PR still open to implement it: https://github.com/certbot/certbot/pull/10495

    I think acme.sh supports it though.
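
    If it does, the invocation would presumably look something like this (a rough sketch, not checked against acme.sh's IP-cert support; the standalone HTTP-01 mode and the placeholder address 203.0.113.10 are my assumptions):

        acme.sh --issue --standalone --server letsencrypt -d 203.0.113.10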

    • mcpherrinm 4 days ago |
      Some ACME clients that I think currently support IP addresses are acme.sh, lego, traefik, acmez, caddy, and cert-manager. Certbot support should hopefully land pretty soon.
      • sgtcodfish 4 days ago |
        cert-manager maintainer chiming in to say that yes, cert-manager should support IP address certs - if anyone finds any bugs, we'd love to hear from you!

        We also support ACME profiles (required for short lived certs) as of v1.18 which is our oldest currently supported[1] version.

        We've got some basic docs[2] available. Profiles are set on a per-issuer basis, so it's easy to have two separate ACME issuers, one issuing longer lived certs and one issuing shorter, allowing for a gradual migration to shorter certs.

        [1]: https://cert-manager.io/docs/releases/ [2]: https://cert-manager.io/docs/configuration/acme/#acme-certif...

  • ivanr 4 days ago |
    As already noted on this thread, you can't use certbot today to get an IP address certificate. You can use lego [1], but figuring out the exact command line took me some effort yesterday. Here's what worked for me:

        lego --domains 206.189.27.68 --accept-tos --http --disable-cn run --profile shortlived
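
    To sanity-check what was issued, a generic openssl one-liner (nothing lego-specific; needs a reasonably recent openssl for -ext) does the job:

        echo | openssl s_client -connect 206.189.27.68:443 2>/dev/null \
            | openssl x509 -noout -dates -ext subjectAltName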
    
    [1] https://go-acme.github.io/lego/
    • Svoka 4 days ago |
      I wonder if the support made it to Caddy yet

      (seems to be WIP https://github.com/caddyserver/caddy/issues/7399)

      • jsheard 4 days ago |
        IPv4 certs are already working fine for me in Caddy, but I think there are some kinks to work out with IPv6.
      • mholt 4 days ago |
        It works, but as another comment mentioned there may be quirks with IP certs, specifically IPv6, that I hope will be fixed by v2.11.
    • btown 3 days ago |
      Work for this in Certbot is ongoing here, with some initial work already merged, but much to go. https://github.com/certbot/certbot/issues/10346

      https://github.com/certbot/certbot/pull/10370 showed that a proof of concept is viable with relatively few changes, though it was vibe coded and abandoned (but at least the submitter did so in good faith and collaboratively) :/ Change management and backwards compatibility seem to be the main considerations at the moment.

    • certchecksh 3 days ago |
      Thank you for posting the lego command!

      It allowed me to quickly obtain a couple of IP certificates to test with. I updated my simple TLS certificate checker (https://certcheck.sh) to support checking IP certificates (IPv4 only for now).

  • iamrobertismo 4 days ago |
    This is interesting. I'm guessing the use case for IP address certs is so your ephemeral services can do TLS, but without having to depend on provisioning a name server record as well for something you might start hundreds or thousands of, each of which will only last for an hour or a day.
    • iamrobertismo 4 days ago |
      Yeah actually seems pretty useful to not rely on the name server for something that isn't human facing.
    • axus 4 days ago |
      No dependency on a registrar sounds nice. More anonymous.
      • organsnyder 4 days ago |
        IP addresses also are assigned by registrars (ARIN in the US and Canada, for instance).
        • buckle8017 4 days ago |
          Arguably neither is particularly secure, but you must have an IP so only needing to trust one of them seems better.
        • traceroute66 4 days ago |
          > IP addresses also are assigned by registrars (ARIN in the US and Canada, for instance).

          To be pedantic for a moment, ARIN etc. are registries.

          The registrar is your ISP, cloud provider etc.

          You can get a PI (Provider Independent) allocation for yourself, usually with the assistance of a sponsoring registrar. Which is a nice compromise way of cutting out the middleman without becoming a registrar yourself.

          • immibis 4 days ago |
            You can also become a registrar yourself - at least, RIPE allows it. However, fees are significantly higher and it's not clear why you'd want to, unless you were actually providing ISP services to customers (in which case it's mandatory - you're not allowed to use a PI allocation for that)
            • traceroute66 4 days ago |
              > and it's not clear why you'd want to

              The biggest modern-era reason is direct access to update your RPKI entries.

              But this only matters if you are doing stuff that makes direct access worthwhile.

              If your setup is mostly "set and forget" then you should just accept the lag associated with needing to open a ticket with your sponsor to update the RPKI.

      • traceroute66 4 days ago |
        > No dependency on a registrar sounds nice.

        Actually the main benefit is no dependency on DNS (both direct and root).

        IP is a simple primitive, i.e. "is it routable or not ?".

        • saltcured 4 days ago |
          The popular HTTP validation method has the same drawback whether using DNS or IP certificates? Namely, if you can compromise routes to hijack traffic, you can also hijack the validation requests. Right?
          • zinekeller 4 days ago |
            Yes, there have been cases where this has happened (https://notes.valdikss.org.ru/jabber.ru-mitm/), but it's really now into the realm of

            1) How to secure routing information: some say RPKI, some argue that's not enough and are experimenting with something like SCION (https://docs.scion.org/en/latest/)

            2) Principal-agent problem: jabber.ru's hijack relied on (presumably) Hetzner being forced to do it by German law enforcement, using powers provided under the German Telecommunications Act (TKG)

            • traceroute66 3 days ago |
              > some say RPKI

              Part of the issue with RPKI is that it's taking time to fully deploy. Not as glacial as IPv6, but slower than it should be.

              If there was 100% coverage then RPKI would have a good effect.

    • pdntspa 4 days ago |
      Maybe you want TLS but getting a proper subdomain for your project requires talking to a bunch of people who move slowly?
      • iamrobertismo 4 days ago |
        Very very true, never thought about orgs like that. However, I don't think someone should use this as a bandaid like that. If the idea is that you want to have a domain associated with a service, then organizationally you probably need to have systems in place to make that easier.
        • pdntspa 4 days ago |
          Ideally, sure. But in some places what you're proposing is like trying to boil the ocean to make a cup of tea.

          VBA et al succeeded because they enabled workers to move forward on things they would otherwise be blocked on organizationally

          Also - not seeing this kind of thing could be considered a gap in your vision. When outsiders accuse SV of living in a high-tech ivory tower, blind to the realities of more common folk, this is the kind of thing they refer to.

          • iamrobertismo 4 days ago |
            Bruh, I'm not from SV lol. I just don't work at massive orgs.
    • traceroute66 4 days ago |
      > I am guessing the use case for ip address certs is so your ephemeral services can do TLS communication

      There's also this little thing called DNS over TLS and DNS over HTTPS that you might have heard of ? ;)

      • iamrobertismo 4 days ago |
        I don't quite understand how this relates?
        • traceroute66 3 days ago |
          > I don't quite understand how this relates?

          Erm? Do I have to spell out that I was pointing out there's more than just the "ephemeral services" being guessed at that could take advantage of IP certs?

        • patmorgan23 3 days ago |
          Currently when you configure DNS over TLS/HTTPS you have to set the IP address AND the hostname of the SSL certificate used to secure the service. Getting IP address certs makes the configuration simpler.
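
          For example, with systemd-resolved today the validation hostname is appended to the IP after a '#' (sketch of the current-style config; resolvers accepting an IP-only entry validated against an IP cert is my assumption about where this could go, not something they do yet):

              # /etc/systemd/resolved.conf
              [Resolve]
              DNS=9.9.9.9#dns.quad9.net
              DNSOverTLS=yes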
    • jeroenhd 4 days ago |
      One thing this can be useful for is encrypted client hello (ECH), the way TLS/HTTPS can be used without disclosing the server name to any listening devices (standard SNI names are transmitted in plaintext).

      To use it, you need a valid certificate for the connection to the server which has a hostname that does get broadcast in readable form. For companies like Cloudflare, Azure, and Google, this isn't really an issue, because they can just use the name of their proxies.

      For smaller sites, often not hosting more than one or two domains, there is hardly a non-distinct hostname available.

      With IP certificates, the outer TLS connection can just use the IP address in its readable SNI field and encrypt the actual hostname for the real connection. You no longer need to be a third party proxying other people's content for ECH to have a useful effect.

      • agwa 4 days ago |
        That doesn't work, as neither SNI nor the server_name field of the ECHConfig are allowed to contain IP addresses: https://www.ietf.org/archive/id/draft-ietf-tls-esni-25.html#...

        Even if it did work, the privacy value of hiding the SNI is pretty minimal for an IP address that hosts only a couple domains, as there are plenty of databases that let you look up an IP address to determine what domain names point there - e.g. https://bgp.tools/prefix/18.220.0.0/14#dns

      • jsheard 4 days ago |
        I don't really see the value in ECH for self-hosted sites regardless. It works for Cloudflare and similar because they have millions of unrelated domains behind their IP addresses, so connecting to their IPs reveals essentially nothing, but if your IP is only used for a handful of related things then it's pretty obvious what's going on even if the SNI is obscured.
      • buzer 4 days ago |
        As far as I understand you cannot use IP address as the outer certificate as per https://www.ietf.org/archive/id/draft-ietf-tls-esni-25.txt

        > In verifying the client-facing server certificate, the client MUST interpret the public name as a DNS-based reference identity [RFC6125]. Clients that incorporate DNS names and IP addresses into the same syntax (e.g. Section 7.4 of [RFC3986] and [WHATWG-IPV4]) MUST reject names that would be interpreted as IPv4 addresses.

    • medmunds 4 days ago |
      The July announcement for IP address certs listed a handful of potential use cases: https://letsencrypt.org/2025/07/01/issuing-our-first-ip-addr...
      • iamrobertismo 4 days ago |
        Thanks! This is helpful to read.
  • zamadatix 4 days ago |
    Does anyone know when Caddy plans on supporting this?
  • meling 4 days ago |
    If I can use my DHCP assigned IP, will this allow me to drop having to use self-signed certificates for localhost development?
    • michaelt 4 days ago |
      No, they will only give out certificates if you can prove ownership of the IP, which means it must be publicly routable.
      • inetknght 4 days ago |
        A lot of publicly routable IP addresses are assigned by DHCP...
      • wongarsu 4 days ago |
        Finally a reason to adopt IPv6 for your local development
        • greyface- 4 days ago |
          Yes, please publish the location of your dev servers in Cert Transparency logs for everyone to see.
      • toast0 4 days ago |
        It's just control isn't it, not ownership? I can't prove ownership of the IPs assigned to me, but I can prove control.
        • einsteinx2 4 days ago |
          Yes that’s correct
      • meling 4 days ago |
        Sorry, I wasn’t precise enough. I’m at a university and our IP addresses are publicly routable, I think.
        • undersuit 2 days ago |
          Ask Google "what is my IP" and compare it to your DHCP assigned address. If they are different, your DHCP address isn't publicly routable.
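
          For example (ifconfig.me is just one of many such what-is-my-IP services):

              curl -4 https://ifconfig.me    # the address the internet sees
              ip -4 addr show                # addresses assigned locally (DHCP etc.)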
    • wolttam 4 days ago |
      Browsers consider ‘localhost’ a secure context without needing https

      For local /network/ development, maybe, but you’d probably be doing awkward hairpin natting at your router.

      • treve 4 days ago |
        it's nice to be able to use https locally if you're doing things with HTTP/2 specifically.
    • Sohcahtoa82 4 days ago |
      What's stopping you from creating a "localhost.mydomain.com" DNS record that initially resolves to a public IP so you can get a certificate, then copying the certificate locally, then changing the DNS to 127.0.0.1?

      Other than basically being a pain in the ass.

      • cpach 4 days ago |
        One can also use the DNS-01 challenge in that scenario.
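
        Something along these lines, using certbot's manual DNS mode (localhost.mydomain.com is the hypothetical name from the parent comment):

            # point localhost.mydomain.com at 127.0.0.1 in your zone, then:
            certbot certonly --manual --preferred-challenges dns -d localhost.mydomain.com
            # certbot prints a TXT value to publish at _acme-challenge.localhost.mydomain.com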
  • hojofpodge 4 days ago |
    Something about a 6-day IP-address-based token brings me back to the question of why we are wasting so much time on utterly wrong TOFU authorization.

    If you are supposed to have an establishable identity, I think there is DNSSEC back to the registrar for a name, and (I'm not quite sure what?) back to the AS for the IP.

    • ycombinatrix 4 days ago |
      Domains map one-to-one with registrars, but multiple AS can be using the same IP address.
      • hojofpodge 4 days ago |
        Then it would be a grave error to issue an IP cert without active insight into BGP. (Or it doesn't matter which chain you have.. But calling a website from a sampling of locations can't be a more correct answer.)
        • ycombinatrix 4 days ago |
          >it would be a grave error to issue an IP cert without active insight into BGP

          Why? Even regular certs are handed out via IP address.

          • hojofpodge 3 days ago |
            > why we are wasting so much time on utterly wrong TOFU authorization? If you are supposed to have an establishable identity I think there is DNSSEC back to the registrar

            They retire challenges that were once acceptable. What happens if they require a real chain of trust? They'd retire HTTP, and domain names would keep working via DNS/DNSSEC.

            Making IP certs available with only HTTP challenges is going backwards.

  • bflesch 4 days ago |
    This sounds like a very good thing, like a lot of stuff coming from letsencrypt.

    But what risks are attached with such a short refresh?

    Is there someone at the top of the certificate chain who can refuse to give out further certificates within the blink of an eye?

    If yes, would this mean that within 6 days all affected certificates would expire, like a very big Denial of Service attack?

    And after 6 days everybody goes back to using HTTP?

    Maybe someone with more knowledge about certificate chains can explain it to me.

    • iso1631 4 days ago |
      With a 6 day lifetime you'd typically renew after 3 days. If Lets Encrypt is down or refuses to issue then you'd have to choose a different provider. Your browser trusts many different "top of the chain" providers.

      A 30-day cert with renewal 10-15 days in advance gives you breathing room.

      Personally I think 3 days is far too short unless you have your automation pulling from two different suppliers.

      • bflesch 4 days ago |
        Thank you, I missed the part with several "top of the chain" providers. So all of them would need to go down at the same time for things to really stop working.

        How many "top of chain" providers is letsencrypt using? Are they a single point of failure in that regard?

        I'd imagine that other "top of chain" providers want money for their certificates and that they might have a manual process which is slower than letsencrypt?

        • cpach 4 days ago |
          “Are they a single point of failure in that regard?”

          It depends. If the ACME client is configured to only use Let’s Encrypt, then the answer is yes. But the client could fall-back to Google’s CA, ZeroSSL, etc. And then there is no single point of failure.

          • bflesch 4 days ago |
            Makes sense. I assume each of them is in control and at the whims of US president?
            • mholt 4 days ago |
              They are not in control of the US president.
              • bflesch 4 days ago |
                I'm pretty sure that the .org TLD can be shut off by the US at any point in time.
                • cpach 4 days ago |
                  That’s not relevant though. These CAs will gladly give you a .se/.dk/.in/whatever cert as long as validation passes.
                  • bflesch 4 days ago |
                    I hope so, but can we really be sure that .se or .de would still work in such a scenario? Is the TLD root management really split up vertically or is the (presumably US-based) TLD parent organization also the final authority for every country TLD?

                    It would be nice to at least have a very high level contingency plan because in worst case I won't be able to google it.

                    • cpach 4 days ago |
                      Not sure what the exact concern is here. So far, virtually all countries on Earth are still represented in DNS. Venezuela, Iran, Somalia, etc etc.

                      You can also read a lot of anti-Trump articles and comments on countless web-sites, some under .com and some under other top-domains. As lunatic as Trump is, he hasn’t shut that down.

                      “Is the TLD root management really split up vertically”

                      AFAIK, yes, it is.

                      But if the global DNS somehow broke down, I guess you’d either have to find an alternative set of root servers, or communicate outside of the regular Internet. Such an event would surely shock the global economy.

                      • bflesch 4 days ago |
                        That's actually a really good point. Totally missed it.
                      • iso1631 3 days ago |
                        Global DNS servers are spread across the world. Most are operated by America but three are operated by Sweden, Japan and Netherlands.

                        The majority of people use their own ISP or an anycast address from a US company (cloudflare, google, opendns). Quad9 is European.

                        However any split in the root dns servers signals the end of an interconnected global network. Any ISP can advertise anycast addresses into its own network, so if the US were to be cut off from the world that wouldn't be an issue per-se, but the breakdown of the internet in the western world would be a massive economic shock.

                        It wouldn't surprise me if it happens in the next decade or two though.

                • iso1631 4 days ago |
                  Lets Encrypt do not control the US president.

                  You could argue that The Don in charge of the US is in control of letsencrypt

                  • bflesch 4 days ago |
                    Yeah, it's a bit far-fetched, but after the Cloudflare CEO basically threatened to cut off Italy I was wondering what would happen if the US really invades Greenland.

                    A simple Windows to Linux migration is not enough. If certificates expire without a way to refresh them, you'd either need to manually touch every machine to swap root certificates or have some other contingency plan.

                    • cpach 4 days ago |
                      Remember that there are lots of CAs, and quite many of them are based outside of the US. Those CAs currently do not offer ACME services for free, but there’s nothing stopping them from doing so.

                      I would say that the WebPKI system seems to be quite resilient, even in the face of strong geopolitical tension.

                    • iso1631 3 days ago |
                      Windows (and apple, google, mozilla) trust dozens of root certificates. I've got 148 pems in my /etc/ssl/certs directory on my laptop. 59 are from the US and thus 89 aren't. 10 are from China, 9 Germany, 7 UK. Others are India, Japan, Korea etc.

                      The far bigger problem is the American government forcing Microsoft/Apple/Google to push out a windows/iphone|mac/android|chrome update which removes all CAs not approved by the American government.

                      Canonical/Suse may be immune to such overt pressure, but once you get to that point you're way past the end of the international internet and it doesn't really matter anyway.

                  • alwillis 4 days ago |
                    > You could argue that The Don in charge of the US is in control of letsencrypt

                    He's not in control of letsencrypt or any other US-based CA.

                    It may not be well known, but Trump's administration loses about 80% of the time when they've been sued by companies, cities and states.

                    There's much more risk of state-sponsored cyber attacks against US companies.

            • cpach 4 days ago |
              It seems that currently most free CAs have a big presence in the US, and employ quite a few US employees.

              ZeroSSL/HID Global seems to be quite multi-national though, and it’s owned by a Swedish company (Assa Abloy).

              I don’t know what kind of mitigations these orgs have in place if the shit really hits the fan in the US. It’s an interesting question for sure.

              • iso1631 4 days ago |
                Fundamentally, Microsoft, Google and Apple are all run by American citizens living in America. Firefox is pretty much the same.

                The US has strong institutions which prevent the President or Government at large controlling these on a whim. If those institutions fail then they could all push out an update which removes all "top of chain" trusted certificate authorities other than ones approved by the US government.

                In that situation the internet is basically finished as it stands now, and the OSes would be non-trustworthy anyway.

                Fixing the SSL problems is the easy part, the free world would push its own root certificate out -- which people would have to manually install from a trusted source, but that's nothing compared to the real problem.

                Sure, Ubuntu, Suse etc aren't based in the US, but the number of phones without a US based OS is basically zero, you'd have to start from scratch with a forked version of android which likely has NSA approved backdoors in it anyway. Non-linux based machines would also need to be wiped.

            • alwillis 4 days ago |
              > Makes sense. I assume each of them is in control and at the whims of US president?

              Absolutely not.

              If the president attempted to force a US-based CA to do something bad they don't want to do, they would sue the government. So far, this administration loses 80% of the lawsuits brought against it.

              • iso1631 3 days ago |
                You're putting a lot of trust in US institutions (courts etc). The rest of the world is starting to see them as not as strong and independent as they were once assumed to be.

                And that's before more overt issues. Microsoft/Google/etc could sue to stop the US ordering them to do what they should. Is the CEO really willing to risk their life to do that? Be a terrible shame if their kids got caught up in a traffic accident.

                • alwillis 4 hours ago |
                  > You're putting a lot of trust in US institutions (courts etc)

                  I don't have a lot of trust in US institutions actually. The most powerful universities, corporations and law firms have capitulated to him.

                  So far, the tech companies have placated Trump by contributing to his causes and heaping praise upon him and not speaking out regarding the tariffs. That's enough for now.

                  > Is the CEO really willing to risk their life to do that?

                  We're not at that point; at least not so far. Besides, it's much easier to blackmail them for more money or for the Department of Justice to open an investigation or to stop a merger they want to do.

                  Also these companies aren't just sitting around doing nothing. Apple reworked their supply chain; all iPhones sold in the US are now made in India.

        • mholt 4 days ago |
          LE has 2 primary production data centers: https://letsencrypt.status.io/

          But in general, one of the points of ACME is to eliminate dependence on a single provider, and prevent vendor lock-in. ACME clients should ideally support multiple ACME CAs.

          For example, Caddy defaults to both LE and ZeroSSL. Users can additionally configure other CAs like Google Trust Services.

          This document discusses several failure modes to consider: https://github.com/https-dev/docs/blob/master/acme-ops.md#if...
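
          A rough Caddyfile sketch of configuring more than one issuer for a site (check the current Caddy docs for the exact syntax and options):

              example.com {
                  tls {
                      issuer acme
                      issuer zerossl
                  }
              }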

  • qwertox 4 days ago |
    I have now implemented a 2-week renewal interval to test the change to 45-day certificates, and now they come out with a 6-day certificate?

    This is no criticism, I like what they do, but how am I supposed to do renewals? If something goes wrong, like the pipeline that triggers certbot failing, I won't have time to fix it. So I'd be at a two-day renewal with a 4-day "debugging" window.

    I'm certain there are some who need this, but it's not me. Also the rationale is a bit odd:

    > IP address certificates must be short-lived certificates, a decision we made because IP addresses are more transient than domain names, so validating more frequently is important.

    Are IP addresses more transient than a domain within a 45 day window? The static IPs you get when you rent a vps, they're not transient.

    • bigstrat2003 4 days ago |
      The push for shorter and shorter cert lifetimes is a really poor idea, and indicates that the people working on these initiatives have no idea how things are done in the wider world.
      • Sohcahtoa82 4 days ago |
        It's really security theater, too.

        Though if I may put on my tinfoil hat for a moment, I wonder if current algorithms for certificate signing have been broken by some government agency or hacker group and now they're able to generate valid certificates.

        But I guess if that were true, then shorter cert lives wouldn't save you.

        • wang_li 4 days ago |
          My browser on my work laptop has 219 root certificates trusted. Some of those may be installed from my employer, but I suspect most of them come from MS as it's Edge on Windows 11. I see in that list things like "Swedish Government Root Authority", "Thailand National Root Certification Authority", "Staat der Nederlanden Root CA" and things like "MULTICERT Root Certification Authority" and "ACCVRAIZ1". I don't think there is any reason to trust any certificate. If a government wants a cert for a given DNS name they will get it, either because they directly control a trusted root CA, or because they will present a warrant to a company that wants to do business in their jurisdiction and said company will issue the cert.

          TLS certs should be treated much more akin to SSH host keys in the known hosts file. Browsers should record the cert the first time they see it and then warn me if it changes before it's expiration date, or some time near the expiration date.

          • londons_explore 4 days ago |
            Certificate transparency effectively means that if any government actually uses a false certificate on the wider web, their root cert will get revoked.

            Obviously you might still be victim #1 of such a scheme... But in general the CA's now aren't really trusted anymore - the real root of trust is the CT logs.

            • PunchyHamster 4 days ago |
              > Certificate transparency effectively means that if any government actually uses a false certificate on the wider web, their root cert will get revoked.

              the ENTIRE reason the short lifetime is used for the LE certs is that they haven't figured out how to make revoking work at scale.

              Now if you're on the latest browser you might be fine, but any and every embedded device has its root CAs updated only on software updates, which means a compromise of a CA might easily get access to hundreds of thousands of devices.

              • Dylan16807 4 days ago |
                > the ENTIRE reason the short lifetime is used for the LE certs is that they haven't figured out how to make revoking work at scale.

                And 200 is not "at scale". The list of difficulties in revoking roots is a very different list from the problem you're citing.

                > any and every embedded device

                Yes it's flawed but it's so much better than the previous nothing we had for detecting one of the too-many CAs going rogue.

          • jofla_net 4 days ago |
            >> TLS certs should be treated much more akin to SSH host keys in the known hosts file. Browsers should record the cert the first time they see it and then warn me if it changes before it's expiration date, or some time near the expiration date.

            This is great, and actually constructive!

            I use a hack I put together, http://www.jofla.net/php__/CertChecker/, to keep a list (in JSON) of a bunch of machines (both HTTPS and SSH) and the last fingerprints/date it sees. Every time it runs I can see if any server has changed; it's just a heads-up for any funny business. Sure it's got shortcomings, it doesn't mimic headers and such, but it's a start.
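
            For anyone wanting to do the same kind of spot check by hand, the generic one-liners (nothing specific to the linked tool) are roughly:

                echo | openssl s_client -connect example.com:443 2>/dev/null \
                    | openssl x509 -noout -fingerprint -sha256
                ssh-keyscan example.com 2>/dev/null | ssh-keygen -lf -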

            It would be great if browsers could all, you know, have some type of distributed protocol, i.e. a DHT, whereby there's at least some consensus about whether this cert has been seen by me or enough peers lately.

            Having a ton of CAs and the ability to have any link in that chain sign for ANY site is crazy, and until you've seen examples of abuse you assume the foundations are sound.

        • NoahZuniga 4 days ago |
          > broken by some government agency or hacker group

          Probably not. For browsers to accept this certificate it has to be logged in a certificate transparency log for anyone to see, and no such certificates have been seen to be logged.

        • vbezhenar 4 days ago |
          I'm not sure it is about security. For security, CRLs and OCSP were a thing from the beginning. Short-lived certificates allow CAs to drop CRLs, or at least reduce their size, so the CA can save some expenses (I guess it's quite a bit of traffic for every client to download CRLs for the whole of Let's Encrypt).
        • woodruffw 4 days ago |
          One of the ideas behind short-lived certificates is to put certificate lifetimes within the envelope of CRL efficacy, since CRLs themselves don’t scale well and are a significant source of operational challenges for CAs.

          This makes sense from a security perspective, insofar as you agree with the baseline position that revocations should always be honored in a timely manner.

      • alibarber 4 days ago |
        Well they offer a money-back guarantee. And other providers of SSL certificates exist.
        • jsheard 4 days ago |
          For better or worse the push down to 47-day certificates is an industry-wide thing, in a few years no provider will issue certificates for longer than that.

          Nobody is being forced to use 6-day certs for domains though, when the time comes Let's Encrypt will default to 47 days just like everyone else.

          • singpolyma3 4 days ago |
            > Nobody is being forced to use 6-day certs for domains though

            Yet

            • einsteinx2 4 days ago |
              Nobody is being forced to use Let’s Encrypt either.
              • singpolyma3 4 days ago |
                It doesn't matter. Google makes sure every CA has the same rules.
          • hungryhobbit 4 days ago |
            And you don't think that years ago people would have said "of course you'll be able to keep your security cert for more than two months"?

            The people who innovate in security are failing to actually create new ways to verify things, so all that everyone else in the security industry can do to make things more secure is shorten the cert expiration. It's only logical that they'll keep doing it.

            • themafia 4 days ago |
              ALPN per transaction certificates. Why take the chance?
      • jdsully 4 days ago |
        At some point it makes sense to just let us use self signed certs. Nobody believes SSL is providing attestation anyways.
        • vimda 4 days ago |
          A lot of corporate environments load their own root cert and MITM you anyway.
          • sgjohnson 4 days ago |
            A lot of applications implement cert pinning for this exact reason
        • woodruffw 4 days ago |
          What does attestation mean in this context? The point of the Web PKI is to provide consistent cryptographic identity for online resources, not necessarily trustworthy ones.

          (The classic problem with self-signed certs being that TOFU doesn’t scale to millions of users, particularly ones who don’t know what a certificate fingerprint is or what it means when it changes.)

        • cpach 4 days ago |
          Then you might as well get rid of TLS altogether.
          • jdsully 4 days ago |
            You'd still want in transit encryption. There are other methods than centralized trust like fingerprinting to detect forgeries.
            • cpach 4 days ago |
              Haven’t seen any such system that scales to billions of users.
      • jofla_net 4 days ago |
        Rule by the few, us little people don't matter.

        Thing is, NOTHING, is stopping anyone from already getting short lived certs and being 'proactive' and rotating through. What it is saying is, well, we own the process so we'll make Chrome not play ball with your site anymore unless you do as we say...

        The CA system has cracks, that short lived certs don't fix, so meanwhile we'll make everyone as uncomfortable as possible while we rearrange deck chairs.

        awaiting downvotes in earnest.

      • akerl_ 4 days ago |
        Which wider world?

        These changes are coming from the CAB forum, which includes basically every entity that ships a popular web browser and every entity that ships certificates trusted in those browsers.

        There are use cases for certificates that exist outside of that umbrella, but they are by definition niche.

        • michaelt 4 days ago |
          About 99.99% of people and organisations are neither CAs nor Browsers. Hence they have no representation in the CAB Forum.

          Hardly 'by definition niche' IMHO.

          • akerl_ 4 days ago |
            The pitch here wasn't that only a few people get a vote, it was that the people making the decisions aren't aware of how "the wider world" works. And they are, clearly. The people making Chrome/Firefox and the people running the CAs every publicly-trusted site uses are aware of what their products do, and how they are used.
            • themafia 4 days ago |
              They're aware of the major use cases. I doubt the minority cases are even on their radar.

              So great for E-Commerce, not so great for anyone else.

        • nottorp 4 days ago |
          >which includes basically every entity that ships a popular web browser and every entity that ships certificates trusted in those browsers.

          So no one that actually has to renew these certificates.

          Hey! How long does a root certificate from a certificate authority last?

          10 to 25 years?

          Why don't those last 120 minutes? They're responsible for the "security" of the whole internet aren't they?

          • akerl_ 4 days ago |
            It's almost like the threat models for CA and leaf certs are different.
            • LunaSea 4 days ago |
              Yes, root certs are much more sensitive than leaf certs.
              • akerl_ 3 days ago |
                Which is why root certs are stored in HSMs, there’s a well defined total set of them, and if the owner violates any of the rules around handling of them, the CAB can put them out of business.
          • cpach 4 days ago |
            It’s capped at 15 years.

            In another comment someone linked to a document from the Chrome team.

            Here’s a quote that I found interesting:

            “In Chrome Root Program Policy 1.5, we landed changes that set a maximum ‘term-limit’ (i.e., period of inclusion) for root CA certificates included in the Chrome Root Store to 15 years.

            While we still prefer a more agile approach, and may again explore this in the future, we encourage CA Owners to explore how they can adopt more frequent root rotation.”

            https://googlechrome.github.io/chromerootprogram/moving-forw...

            • nickf 3 days ago |
              It’ll be 5 years soon.
          • codys 4 days ago |
            > So no one that actually has to renew these certificates.

            I believe google, who maintain chrome and are on the CAB, are an entity well known for hosting various websites (iirc, it's their primary source of income), and those websites do use https

        • dvfjsdhgfv 3 days ago |
          You're kidding, right? You've never seen a server completely inaccessible just because the owner had trouble renewing the cert? A lot of websites went down this way. And they served static content. Shortening that window is just asking for trouble.
          • akerl_ 3 days ago |
            > You're kidding, right? You've never seen a server completely inaccessible just because the owner had trouble renewing the cert?

            I am not kidding, but also the rest of your comment isn’t at all related to what I said.

      • JackSlateur 3 days ago |
        How are things done in the wider world ?

        In your answer (and excluding those using ACME): is this a good behavior (that should be kept) or a lame behavior (that we should aim to improve) ?

        Shorter and shorter cert lifetimes are a good idea because they are the only way to effectively handle a private key leak. A better idea might exist, but nobody has found one yet.

    • alibarber 4 days ago |
      If you are doing this in a commercial context and the 4 day debugging window, or any downtime, would cause you more costs than say, buying a 1 year certificate from a commercial supplier, then that might be your answer there...
      • mxey 4 days ago |
        There will be no certificates longer than 45 days by any CA in browsers in a few years.
    • charcircuit 4 days ago |
      >I won't have time to fix this

      Which should push you to automate the process.

      • buckle8017 4 days ago |
        He's expressly talking about broken automation.
        • charcircuit 4 days ago |
          You can have automation to fix the broken automation.
          • buckle8017 4 days ago |
            Are you serious? real question
            • charcircuit 4 days ago |
              Yes, as expiration times get smaller people will increase automation and robustness to deal with it. One way to increase robustness is to automatically diagnose why something failed and try and repair it.
    • kevincox 4 days ago |
      The short-lived requirement seems pretty reasonable for IP certs as IP addresses are often rented and may bounce between users quickly. For example if you buy a VM on a cloud provider, as soon as you release that VM or IP it may be given to another customer. Now you have a valid certificate for that IP.

      6 days actually seems like a long time for this situation!

      • sgjohnson 4 days ago |
        Cloud providers could check the transparency lists, and if there’s a valid cert for the IP, quarantine it until the cert expires. Problem solved.
        • greyface- 4 days ago |
          That's leaving money on the table, unless they continue to charge the previous tenant for the duration of quarantine.
          • nkmnz 4 days ago |
            Charging for an IP until a cert is expired is free money for cloud providers. They gonna love it.
    • Sohcahtoa82 4 days ago |
      > Are IP addresses more transient than a domain within a 45 day window?

      If I don't assign an EIP to my EC2 instance and shut it down, I'm nearly guaranteed to get a different IP when I start it again, even if I start it within seconds of shutdown completing.

      It'd be quite a challenge to use this behavior maliciously, though. You'd have to get assigned an IP that someone else was using recently, and the person using that IP would need to have also been using TLS with either an IP address certificate or with certificate verification disabled.

      • qwertox 4 days ago |
        Ok, though if you're in that situation, is an IP cert the correct solution?
        • toast0 4 days ago |
          It's probably not a good solution if you're dealing with clients you control.

          Otoh, if you're dealing with browsers, they really like WebPKI certs, and if you're directing load to specific servers in real time, why add DNS and/or a load balancer thing in the middle?

    • mholt 4 days ago |
      It's less about IP address transience, and more about IP address control. Rarely does the operator of a website or service control the IP address. It's to limit the CA's risk.
    • PunchyHamster 4 days ago |
      What worries me more about the push for shorter and shorter cert terms, instead of making revocation actually work, is that if a provider fails you now have very little time to switch to a new one.
      • jsheard 4 days ago |
        Some ACME clients can failover to another provider automatically if the primary one doesn't work, so you wouldn't necessarily need manual intervention on short notice as long as you have the foresight to set up a secondary provider.
      • cpach 4 days ago |
        People have tried. Revocation is a very hard problem to solve on this scale.
      • mcpherrinm 4 days ago |
        This is a two-sided solution: one significant reason for shorter certificate lifetimes is that they help make revocation work better.
    • compumike 4 days ago |
      > If something goes wrong, like the pipeline triggering certbot goes wrong, I won't have time to fix this. So I'd be at a two day renewal with a 4 day "debugging" window.

      I think a pattern like that is reasonable for a 6-day cert:

      - renew every 2 days, and have a "4 day debugging window"

      - renew every 1 day, and have a "5 day debugging window"

      Monitoring options: https://letsencrypt.org/docs/monitoring-options/

      This makes me wonder if the scripts I published at https://heyoncall.com/blog/barebone-scripts-to-check-ssl-cer... should have the expiry thresholds defined in units of hours, instead of integer days?
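
      (For what it's worth, openssl can already express the threshold in seconds, so hours are easy; the path below assumes certbot's default layout:)

          # exit non-zero if the cert expires within 48 hours
          openssl x509 -checkend $((48*3600)) -noout \
              -in /etc/letsencrypt/live/example.com/fullchain.pem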

    • andrewaylett 4 days ago |
      You should probably be running your renewal pipeline more frequently than that: if you had let your ACME client set itself up on a single server, it would probably run every 12h for a 90-day certificate. The ACME client won't actually give you a new certificate until the old one is old enough to be worth renewing, and you have many more opportunities to notice that the pipeline isn't doing what you expect than if you only run when you expect to receive a new certificate.
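
      A minimal sketch of that kind of schedule with certbot (it only actually renews once the cert is close enough to expiry, so running it often is cheap):

          # cron: attempt renewal twice a day, at an arbitrary minute to spread load
          17 3,15 * * *  certbot renew --quiet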
    • cortesoft 4 days ago |
      > Are IP addresses more transient than a domain within a 45 day window? The static IPs you get when you rent a vps, they're not transient.

      They can be as transient as you want. For example, on AWS, you can release an elastic IP any time you want.

      So imagine I reserve an elastic IP, then get a 45 day cert for it, then release it immediately. I could repeat this a bunch of times, only renting the IP for a few minutes before releasing it.

      I would then have a bunch of 45 day certificates for IP addresses I don't own anymore. Those IP addresses will be assigned to other users, and you could have a cert for someone else's IP.

      Of course, there isn't a trivial way to exploit this, but it could still be an issue and defeats the purpose of an IP cert.

      • veber88 2 days ago |
        You could do the same trick even with a 6-day certificate.
  • charcircuit 4 days ago |
    Next, I hope they focus on issuing certificates for .onion addresses. On the modern web many features and protocols are locked behind HTTPS. The owner of a .onion has a key pair for it, so proving ownership is more trustworthy than even DNS.
    • londons_explore 4 days ago |
      But isn't it unnecessary to use https, since tor itself encrypts and verifies the identity of the endpoint?
      • rnhmjoj 4 days ago |
        Yes, but browsers moan if you connect to a website without https, no matter if it's on localhost or an onion service.
        • creatonez 4 days ago |
          Tor Browser handles this, it treats `.onion` as a secure context.
        • tucnak 4 days ago |
          Well, you're not supposed to use Tor from browsers that don't explicitly support it. Tor Browser, Brave, and I'm sure some others really wouldn't mind HTTP hidden service traffic.
      • charcircuit 4 days ago |
        For example HTTP/2 and HTTP/3 require HTTPS. While technically HTTPS is redundant, .onion sites should avoid requiring browsers to add special casing for them due to their low popularity compared to regular web sites.
        • tucnak 4 days ago |
          What are benefits of HTTP/2 and HTTP/3 for Tor hidden service traffic?
          • charcircuit 4 days ago |
            Considerably faster page load times due to being able to continue to use the same connection for each request.
      • gizmo686 4 days ago |
        It would give you a certificate chain which may authenticate the onion service as being operated by whom it purports to be. Of course, depending on context, a certificate that is useful for that purpose might itself be too much of an information leak.
        • huhhuh 4 days ago |
          DV certificates (which Let's Encrypt provides) offer no verification of the owner. EV certificates for .onion could actually be useful though, but one generally has to pay for an EV cert.
          • andrewaylett 4 days ago |
            A certificate that's valid for both a regular domain and an onion domain gives you a degree of confidence of common ownership.
    • throw0101d 4 days ago |
      'Automated Certificate Management Environment (ACME) Extensions for ".onion" Special-Use Domain Names'

      * https://datatracker.ietf.org/doc/html/rfc9799

      * https://acmeforonions.org

      * https://onionservices.torproject.org/research/appendixes/acm...

  • xg15 4 days ago |
    IP addresses must be accessible from the internet, so still no way to support TLS for LAN devices without manual setup or angering security researchers.
    • progbits 4 days ago |
      I mean if it's not routable how do you want to prove ownership in a way nobody else can? Just make a domain name.
      • alibarber 4 days ago |
        Also, I don't see what problem TLS is supposed to solve here. If you and I (and everyone else) can legitimately get a certificate for 10.0.0.1, then what are you proving exactly over using a self-signed cert?

        There would be no way of determining that I am connecting to my organisation's 10.0.0.1 and not bad-org's 10.0.0.1.

        • londons_explore 4 days ago |
          Perhaps by providing some identifier in the URL?

          ie. https://10.0.0.1(af81afa8394fd7aa)/index.htm

          The identifier would be generated by the certificate authority upon your first request for a certificate, and every time you renew you get to keep the same one.

          • alibarber 4 days ago |
            I see what you're getting at - but to me this sounds almost exactly like just using DNS, even if the (A/AAAA) record you want to use resolves to an un-routable address: https://letsencrypt.org/docs/challenge-types/#dns-01-challen... - you just create a DNS TXT record instead of them trying to access a server at the address for verification.
        • cpach 4 days ago |
          A public CA won’t give you a cert for 10.0.0.1
          • alibarber 4 days ago |
            Exactly - no one can prove they own it (on purpose because it's reserved for private network use, so no one can own it)
        • Latty 4 days ago |
          This is assuming NAT, with IPv6 you should be able to have globally unique IPs. (Not unique to IPv6 in theory, of course, but in practice almost no one these days is giving LAN devices public IPv4s).
      • arianvanp 4 days ago |
        For IPv6, proof of ownership could easily be done with an outbound connection instead. That would work great for provisioning certs for internal-only services.
    • cpach 4 days ago |
      One can also use a private CA for that scenario.
      • bigfishrunning 4 days ago |
        Exactly -- how many 192.168.0.1 certs do you think LetsEncrypt wants to issue?
        • tialaramex 4 days ago |
          The BRs have specifically forbidden issuing such a certificate since 2015. So, slightly before CAs were required to stop using SHA-1, and slightly after they were forbidden from issuing certificates for nonsense like .com or .ac.uk which obviously shouldn't be available to anybody even if they do insist they somehow "own" these names.
          • tialaramex 4 days ago |
            I can't edit it now, but that comment should have said *.com or *.ac.uk -- that is wildcards in which the suffix beyond the wildcard is an entire TLD or an entire "Public Suffix" which the rules say don't belong to anyone as a whole, they're to be shared by unrelated parties and so such a wildcard will never be a reasonable thing to exist.
    • johannes1234321 4 days ago |
      I recently migrated to a wildcard (*.home.example.com) certificate for all my home network. Works okay for many parts. However requires a public DNS server where TXT records can be set via API (lego supports a few DNS providers out of the box, see https://go-acme.github.io/lego/dns/ )
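
      As a sketch, with Cloudflare as the (assumed) DNS provider, the lego invocation looks roughly like:

          CLOUDFLARE_DNS_API_TOKEN=... \
              lego --dns cloudflare --domains "*.home.example.com" \
                   --email you@example.com --accept-tos run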
    • stackghost 4 days ago |
      >so still no way to support TLS for LAN devices without manual setup or angering security researchers.

      Arguably setting up letsencrypt is "manual setup". What you can do is run a split-horizon DNS setup inside your LAN on an internet-routable tld, and then run a CA for internal devices. That gives all your internal hosts their own hostname.sub.domain.tld name with HTTPS.

      Frankly: it's not that much more work, and it's easier than remembering IP addresses anyway.
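
      The "run a CA" step can be as small as a few openssl commands for a homelab (a bare-bones sketch only; a real setup wants name constraints, sane lifetimes, and the CA key kept somewhere safe):

          # one-time: create the CA key and certificate
          openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
              -keyout ca.key -out ca.crt -subj "/CN=Home Lab CA"

          # per host: key + CSR, then sign it with a SAN that clients will accept
          openssl req -newkey rsa:2048 -nodes -keyout nas.key -out nas.csr \
              -subj "/CN=nas.sub.domain.tld"
          printf "subjectAltName=DNS:nas.sub.domain.tld\n" > san.ext
          openssl x509 -req -in nas.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
              -days 90 -extfile san.ext -out nas.crt
          # then trust ca.crt on your devices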

      • cpach 4 days ago |
        There’s also the DNS-01 challenge that works well for devices on private networks.
      • tosti 4 days ago |
        > run a CA

        > easier than remembering IP addresses

        idk, the 192.168.0 part has been around since forever. The rest is just a matter of .12 for my laptop, .13 for the one behind the telly, .14 for the pi, etc.

        Every time I try to "run a CA", I start splitting hairs.

        • stackghost 4 days ago |
          No, what I'm saying is

          1. Running a CA is more work than just setting up certbot for IP addresses, but not that much more

          And that enables you to

          2. Remember only domain names, which is easier than ip addresses.

          I guess if you're ipv4 only and small it's not much benefit but if you have a big or bridged network like wonderLAN or the promised LAN it's much better.

    • sgjohnson 4 days ago |
      IPv6? You wouldn’t even need to expose the actual endpoints out on the open internet. DNAT on the edge and point inbound traffic on a VM responsible for cert renewals, then distribute to the LAN devices actually using those addresses.
    • arisudesu 3 days ago |
      What do you mean by 'LAN', everything should be routable globally with IPv6 decade ago anyway /s
    • oddly 3 days ago |
      I recently found this, might help someone here. Genius solution. https://sslip.io/
    • patmorgan23 3 days ago |
      If you have non-public IPs you need certs for you should set up a non-public certificate authority and issue your own certs for them.
  • cedws 4 days ago |
    I guess IP certs won't really be used for anything important, but isn't there a bigger risk due to BGP hijacking?
    • toast0 4 days ago |
      No additional risk IMHO. If you can hijack my service IPs, you can establish control over the IPs or the domain names that point to them. (If you can hijack my DNS IPs, you can often do much more... even with DNSSEC, you can keep serving the records that lead to IPs you hijacked)
  • razakel 4 days ago |
    Has anyone actually given a good explanation as to why TLS Client Auth is being removed?
    • cryptonector 4 days ago |
      One reason is that the client certificate with id-kp-clientAuth EKU and a dNSName SAN doesn't actually authenticate the client's FQDN. To do that you'd have to do something of a return routability check at the app layer where the server connects to the client by resolving its FQDN to check that it's the same client as on the other connection. I'm not sure how seriously to take that complaint, but it's something.
    • dextercd 4 days ago |
      It's a requirement from the Chrome root program. This page is probably the best resource on why they want this: https://googlechrome.github.io/chromerootprogram/moving-forw...
      • 0xbadcafebee 4 days ago |
        I get why Chrome doesn't want it (it doesn't serve Chrome's interests), but that doesn't explain why Let's Encrypt had to remove it. The reason seems to be "you can't be a Chrome CA and not do exactly what Chrome wants, which is... only things Chrome wants to do". In other words, CAs have been entirely captured by Chrome. They're Chrome Authorities.

        Am I the only person that thinks this is insane? All web security is now at the whims of Google?

        • dextercd 4 days ago |
          All major root store programs (Chrome, Apple, Microsoft, Mozilla) have this power. They set the requirements that CAs must follow to be included in their root store, and for most CAs their certs would be useless if they aren't included in all major ones.

          I don't think the root programs take these kind of decisions lightly and I don't see any selfish motives they could have. They need to find a balance between not overcomplicating things for site operators and CAs (they must stay reliable) while also keeping end users secure.

          A lot of CAs and site operators would love if nothing ever changed: don't disallow insecure signature/hash algorithms, 5+ year valid certs, renewals done manually, no CT, no MPIC, etc. So someone else needs to push for these improvements.

          The changes the root programs push for aren't unreasonable, so I'm not really concerned about the power they have over CAs.

          That doesn't mean the changes aren't painful in the short term. For example, the move to 45 day certificates is going to cause some downtime, but of course the root programs/browsers don't benefit from that. They're still doing this because they believe that in the long term it's going to make WebPKI more robust.

          There's also the CA/Browser Forum where rule changes are discussed and voted on. I'm not sure how root programs decide on what to make part of their root policy vs. what to try to get voted into the baseline requirements. Perhaps in this case Chrome felt that too many CAs would vote against for self-interested reasons, but that's speculation.

          • mcpherrinm 4 days ago |
            The "client cert" requirements were specifically not a CABF rule because that would rule it out for everyone complying with those rules, which is much broader than just the CAs included in Chrome.

            Some CAs will continue to run PKIs which support client certs, for use outside of Chrome.

            In general, the "baseline requirements" are intended to be just that: A shared baseline that is met by everyone. All the major root programs today have requirements which are unique to their program.

            • dextercd 4 days ago |
              Thanks for chiming in! I remember now that you also said this on the LE community forum.

              Right, that explains it. So the use would be for things other than websites or for websites that don't need to support Chrome (and also need clientAuth)?

              I guess I find it hard to wrap my head around this because I don't have experience with any applications where this plus a publicly trusted certificate makes sense. But I suppose they must exist, otherwise there would've been an effort to vote it into the BRs.

              If you or someone else here knows more about these use cases, then I'd like to hear about it to better understand this.

              • 0xbadcafebee a day ago |
                Are you asking why an HTTPS server would need to use client auth outside of the browser? The answer is mTLS. If you want to use one cert for your one domain to serve both "normal" browser content and HTTPS APIs with mTLS, your cert needs to be able to do it all.
                • dextercd a day ago |
                  The server that wants to authenticate clients via mTLS doesn't need the clientAuth EKU on its certificate, only the clients do.

                  Most of the time you set up mTLS by creating your own self-signed certificate and verifying that the client has that cert (or one that chains up to it). I'm wondering what systems exist that need a publicly trusted cert with clientAuth.

                  Only thing I've heard of so far is XMPP for server-to-server auth, but there are alternative auth methods it supports.
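
                  For reference, that private-PKI flavour of mTLS is only a few lines in most stacks. A minimal Go sketch (file names are placeholders), with no publicly trusted clientAuth cert anywhere in sight:

                      package main

                      import (
                          "crypto/tls"
                          "crypto/x509"
                          "log"
                          "net/http"
                          "os"
                      )

                      func main() {
                          // Trust only our own CA for client certs; clients present certs
                          // issued by it, not anything from the WebPKI.
                          caPEM, err := os.ReadFile("client-ca.pem")
                          if err != nil {
                              log.Fatal(err)
                          }
                          pool := x509.NewCertPool()
                          pool.AppendCertsFromPEM(caPEM)

                          srv := &http.Server{
                              Addr: ":8443",
                              TLSConfig: &tls.Config{
                                  ClientCAs:  pool,
                                  ClientAuth: tls.RequireAndVerifyClientCert,
                              },
                          }
                          // The server's own cert can still be a normal publicly trusted one.
                          log.Fatal(srv.ListenAndServeTLS("server.pem", "server-key.pem"))
                      }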

    • singpolyma3 4 days ago |
      Because Google doesn't want anyone using PKI for anything but simple websites
    • greyface- 4 days ago |
      It competes with "Sign in with Google" SSO.
    • JackSlateur 3 days ago |
      Because using a public key infrastructure for client certificates is terrible.

      mTLS is probably the only sane situation where a private key infrastructure should be used.

  • cryptonector 4 days ago |
    How are IP address certificates useful?
    • SahAssar 4 days ago |
      * DoT/DoH

      * An outer SNI name when doing ECH perhaps

      * Being able to host secure http/mail/etc without being beholden to a domain registrar

      • cryptonector 4 days ago |
        Oh nice! I hadn't considered DoT/DoH. The ECH angle is interesting. Thanks.
      • miladyincontrol 4 days ago |
        IP addresses aren't valid in SNI at all (RFC 6066 forbids literal IPs there), so they can't serve as the outer name for ECH either. On paper I do agree though, it would be a decent option should things one day change there.
        • tialaramex 3 days ago |
          I think that would have been an alternate present rather than a plausible future.

          ECH needs the outer (unencrypted) SNI to be somewhat plausible as a destination. With ECH GREASE, the outer SNI is real, and what looks like encrypted inner ECH data is just random noise.

          For non-GREASE ECH we want to look as much like GREASE as we can, except that the payload isn't noise: it's the encrypted ClientHello, carrying the real inner SNI among other things.

      • 12_throw_away 4 days ago |
        To save others a trip to Kagi: DoT / DoH = DNS over TLS [1] / DNS over HTTPS [2]

        E.g.:

        [1] https://developers.cloudflare.com/1.1.1.1/encryption/dns-ove...

        [2] https://developers.cloudflare.com/1.1.1.1/encryption/dns-ove...

  • cryptonector 4 days ago |
    I wonder if transport mode IPsec can be relevant again if we're going to have IP address certificates. Ditto RFC 5660 (which -full disclosure- I authored).
    • PunchyHamster 4 days ago |
      IPsec is a terrible, huge, and messy standard; the companies implementing it took 20 years to stop getting a CVE every year
      • cryptonector 4 days ago |
        But the very nice thing about ESP (over UDP or not) is that it's much simpler to build HW offload for than for TLS.

        Using the long ago past as FUD here is not useful.

        • TwoNineFive 3 days ago |
          > IPsec is a terrible, huge, and messy standard; the companies implementing it took 20 years to stop getting a CVE every year

          This is fact, not FUD.

          Microsoft has had multiple RCE vulns in their IPsec stack in the last two years.

          Big vendors like Cisco have had IPsec vulns for decades.

          These days the issues are pretty well known and documented, but it really is a bad standard.

    • reincarnate0x14 4 days ago |
      Maybe, but probably not. Various always-on, SDN, or wide-scale site-to-site VPN schemes have been deployed widely enough, for long enough, that they're expected infrastructure at this point.

      Even getting people to use certificates on IPsec tunnels is a pain. Which reminds me: I think the smallest models of either Palo Alto or Checkpoint still have bizarre authentication failures if the certificate chain is too long, which was always weird to me because the control planes have had way more memory than necessary for well over a decade.

      • cryptonector 4 days ago |
        You're not thinking creatively enough. I'm only interested in ESP, not IKE. Consider having the TLS handshake negotiate the use of ESP, and when selected the system would plumb ESP for this connection using keys negotiated by TLS (using the exporter). Think ktls/kssl but with ESP. Presto -- no orchestration of IKE credentials, nothing -- it should just work.

        The real key is getting ESP HW offload.
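
        To make that concrete, here's a rough sketch (purely illustrative, in Go) of pulling key material for a would-be ESP SA pair out of an ordinary TLS connection via the RFC 5705 exporter. The label is made up; a real mechanism would need a negotiated extension plus kernel plumbing to actually install the SAs:

            package main

            import (
                "crypto/tls"
                "fmt"
                "log"
            )

            func main() {
                conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{})
                if err != nil {
                    log.Fatal(err)
                }
                defer conn.Close()

                state := conn.ConnectionState()
                // 2 x (32-byte AES-GCM key + 4-byte salt) for an outbound/inbound SA pair.
                keymat, err := state.ExportKeyingMaterial("EXPERIMENTAL esp keying", nil, 2*(32+4))
                if err != nil {
                    log.Fatal(err)
                }
                fmt.Printf("outbound SA key material: %x\n", keymat[:36])
                fmt.Printf("inbound SA key material:  %x\n", keymat[36:])
            }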

        • reincarnate0x14 4 days ago |
          Oh, I agree it would be nice; I'm just imagining more socially-driven resistance to implementation, and both large organizations and hobbyists already have answers that mostly cover the use cases, even if not as cleanly. Moving node-to-node encryption to an accelerated implementation of transport mode would be great, but if you're already using TLS I can see people just sticking with TLS rather than hoping both ends have the necessary handshake->ESP path working; plus people are more experienced with existing troubleshooting, etc.
          • cryptonector 4 days ago |
            It's still "TLS" as far as the application is concerned, which is why this could work, but yes, there are a few roadblocks, not the least of which is the absence of compelling HW. Another thing is that I/O is faster than compute nowadays, so making it faster may not be helpful :joy:
    • JackSlateur 3 days ago |
      Is IPsec still relevant ?
      • cryptonector 3 days ago |
        It's not. What I have in mind is TLS handshake mediated ESP SA pair keying and policy. Why? Because ESP is much much simpler to implement in silicon than TCP+TLS.

        ESP is stateless if using IPv6 (no fragmentation), or even if using IPv4 (fragmented packets -> let the host handle them; PMTUD should mean no need for fragmentation the vast majority of the time). Statelessness makes HW offload easy to implement.

  • notepad0x90 4 days ago |
    It's a huge ask, but I'm hoping they'll implement code-signing certs some day, even if they charge for it. It would be nice if app stores then accepted those certs instead of directly requiring developer verification.
    • duskwuff 4 days ago |
      1) For better or worse, code signing certificates are expected to come with some degree of organizational verification. No one would trust a domain-validated code signing cert, especially not one which was issued with no human involvement.

      2) App stores review apps because they want to verify functionality and compliance with rules, not just as a box-checking exercise. A code signing cert provides no assurances in that regard.

      • notepad0x90 4 days ago |
        They can just do ID verification instead of domain verification, either in-house or outsourced.

        App store review isn't what I was talking about; I meant not having to verify your identity with the app store, and instead using your own signing cert which can be used across platforms. Moreover, it would make developing signed Windows apps less costly; it costs several hundred dollars today.

        • briHass 4 days ago |
          Azure has a service ('Artifact Signing') which is $10/month for signing Windows executables (not Windows Store apps, which don't need it.)

          That's pretty reasonable, considering it is built in to all the major code signing tools on Windows, they perform the identity verification, and the private keys are fully managed by Azure. Code signing certs are required to be on HSMs, so you're most likely going to be paying some cloud CA anyway.

          • notepad0x90 3 days ago |
            This is wild, thank you so much!! I was struggling with these costs for a long time!! Why is this not more well known? I researched this a lot and it was going to cost me at minimum $500~ over 3 years with the cheapest providers. Let me see if my specific use case can work with them.

            I owe you one @briHass :)

    • cpach 4 days ago |
      Would be cool. But since they’re a non-profit, they would need some way to make it scalable.
      • notepad0x90 4 days ago |
        I see no problem with outsourcing id verification to a trusted partner. Or they could verify payment by charging you $1 to verify you control the payment card, and combine that with address verification by paper-mailing a verification code.
    • pona-a 3 days ago |
      I see how this would be useful once we take binary signing for granted. It would probably even be quite unobjectionable if it were simply a domain binding.

      However, the very act of trying to make this system less impractical is a concession in the war on general purpose computing. To subsidize its cost would be to voluntarily lose that non-moral line of argument.

      • notepad0x90 3 days ago |
        I don't understand where the argument is. Being able to publish content that others can authenticate and then trust sounds like a huge win to me. I don't even see why it has to be restricted to code. It's just verifying who the signer is. More trusted systems and more progress happens when we trust the foundations we're building. I don't think that's a war on general purpose computing. I feel like there is this older way of thinking where insecurity is considered a right of some sort. Being able to do things insecurely should be your right, but being able to reach lots of people and force them to use insecure things sounds exactly like a war on general purpose computing.
        • pona-a 3 days ago |
          Technologies cannot be normatively evaluated without considering the power structures they facilitate.

          Consider secure boot: assuming it's properly implemented, it could defend against an entire class of attacks, the evil maid: if a third party physically compromises your machine while you're away to install malware, you'd be alerted or stopped from booting the modified image. This is a technical statement. Now, whose keys are actually trusted to sign these images? The answer is whatever power dominates the supply chain: Microsoft on desktop devices, and the vendor on mobile.

          In the case of Microsoft, public indignation eventually forced them to open this system up, letting the power user delegate trust freely and without the manufacturer's coercion. But what about Android, where the natural market forces did get the upper hand? Most phones remain locked against disabling secure boot, and even fewer let you enroll your own keys. The result is that most Android phones cease getting security updates only a few years after manufacture, with the vendor's own software riddled with obvious faults (like filling a user-inaccessible partition with logs that never get wiped, even after a factory reset) and known CVEs, yet they nevertheless remain attested as secure for high-assurance applications like banking, as determined by Google. This hypocrisy isn't accidental: the system's real aim was not to secure the user, but to secure its monopoly, instrumented by privileged Google Play Services harvesting data beyond what any SDK can.

          I myself regularly rely on attestation—my phone runs Graphene OS and my laptop self-signs its kernel for secure boot—but I recognize that these technologies in themselves are predisposed to misuse by anti-competitive corporations and repressive regimes.

          Imagine government-ID-backed app signing became the norm for app stores. There would no longer be open-source utilities, like scientific calculators, notes, and budget planners, as nobody would bear the certification fee for what is effectively volunteer work. They'd be replaced by ad-ridden copycats mass-produced in software sweatshops, featured alongside (or, through malicious ads, directing to) assorted malware, still just as prominent as before, signed using the passport details of random people off the street and taken down as late as possible, because Google enjoys a steady revenue stream from their repeated publisher verifications and AdSense spots. And that's to say nothing of censorship circumvention tools and other politically inexpedient software.

          • notepad0x90 a day ago |
            I think you're changing the topic here. But I'll bite a bit: we're talking about Let's Encrypt here, so for every argument you made, it would be Let's Encrypt issuing the certificates. All the "open source" use cases you have could also be supported by them.

            The whole point of Let's Encrypt doing this would be to reduce the fees for open source devs and poor devs in general. But ultimately, software published to the public is a matter of consumer safety and welfare. To that end, if you have a solution that enables operating systems to authenticate and review software before consumers are exposed to it, feel free to suggest an alternative; short of that, too bad for the open source dev. Nothing stops you from using alternative devices. You don't have any entitlement over operating systems or hardware sold to the public. The needs of software developers as a whole aren't important in the slightest when it comes to consumer devices and software, just as the plumber's needs are irrelevant when evaluating the safety of water and sewage pipes, or the construction worker's needs are irrelevant when evaluating the safety of the building they're working on.

            If a construction worker claims they don't need regulator-certified construction materials because that would mean random people building cabins in the woods can't sell their houses, too bad, right? They can still build their own cabin and live in it, but to sell the cabin it must pass inspection (fees), zoning requirements, accessibility and fire safety requirements, etc. Why is the software dev industry so special?

            And yes, Microsoft and Google get to police things, just like every other regulated industry has professional certification boards. You need to pass the bar to be a lawyer, and you need to pass your licensing boards to practice medicine on the public; those boards are made up of industry leaders. Nothing prevents you from going to medical school and treating yourself without passing the boards. Nothing stops you from writing your own software and using it. But when other people use it, they expect the government to keep them safe from malpractice and harm, and that supersedes any needs or desires you may have for open source. You can even argue that it should be free, and that's the whole point of this: Let's Encrypt made TLS certs free, maybe it can make code signing/dev auth free too! But I'd consider it gross incompetence and dereliction of duty if the government doesn't require software signing and secure boot on every consumer-accessible software system.

  • rsync 4 days ago |
    IP address certificates are particularly interesting for iOS users who want to run their own DoH servers.

    A properly configured DoH server (perhaps running unbound), with a properly constructed configuration profile that included a DoH FQDN with a proper certificate, would not work on iOS.

    The reason, it turns out, is that iOS insisted that both the FQDN and the IP have proper certificates.

    This is why the configuration profiles from big organizations like dns4eu and NextDNS would work properly when installed on, for instance, an iPhone ... but your own personal DoH server (and profile) would not.

    • fuomag9 4 days ago |
      I use DoH behind a reverse proxy with my own domain daily without any kind of issue
    • hypeatei 4 days ago |
      OpenSSL is quite particular about the IP address being included in the SAN field of the cert when making a TLS connection, fwiw. iOS engineers may not have explicitly added this requirement and it might just be a side effect of using a crypto library.
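
      That requirement is easy to see from the client side. A small Go sketch (using 1.1.1.1 only because it's a well-known IP that already serves a cert) that dials a bare IP and prints the SANs on the leaf, which is what the verifier matches the dialed IP against:

          package main

          import (
              "crypto/tls"
              "fmt"
              "log"
          )

          func main() {
              // crypto/tls, like OpenSSL, only accepts the cert for a bare IP if that
              // IP appears as an iPAddress SAN on the leaf certificate.
              conn, err := tls.Dial("tcp", "1.1.1.1:443", &tls.Config{})
              if err != nil {
                  log.Fatal(err)
              }
              defer conn.Close()

              leaf := conn.ConnectionState().PeerCertificates[0]
              fmt.Println("DNS SANs:", leaf.DNSNames)
              fmt.Println("IP SANs: ", leaf.IPAddresses)
          }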
  • midtake 4 days ago |
    Why 6 days and not 8?

    - 8 is a lucky number and a power of 2

    - 8 lets me refresh weekly and have a fixed day of the week to check whether there was some API 429 timeout

    - 6 is the value of every digit in the number of the beast

    - I just don't like 6!

    • bayindirh 4 days ago |
      Because it allows you to work for six days, and rest on the seventh. Like God did.
      • batisteo 4 days ago |
        I don't think He worked after the 6th day. Went on doing other pet projects
        • ithkuil 3 days ago |
          6 days to write a prompt. One day to unleash the agents in yolo mode
      • kibwen 4 days ago |
        ² By the seventh day God had finished the work He had been doing; so on the seventh day He rested from all His work. ³ Then the on-call tech, Lucifer, the Son of Dawn, was awoken at midnight because God did not renew the heavens' and the earths' HTTPS certificate. ⁴ Thusly Lucifer drafted his resignation in a great fury.
        • JoBrad 4 days ago |
          Is this the TLS version of the Bible?
          • MobiusHorizons 4 days ago |
            I’m pretty sure that has been hidden from our eyes
          • ithkuil 3 days ago |
            I misread that as the LTS version of the bible
        • mindcrime 4 days ago |
          Gilfoyle?
        • GTP 4 days ago |
          This made my day :D
        • encrypted_bird 4 days ago |
          I just got home from a stressful day in retail (oh who am I kidding; every day is stress in retail) and this gave me a chuckle I really needed. Thank you.
      • encrypted_bird 4 days ago |
        Not my god. My god meant to go into work but got wasted and eventually passed out in the bathtub, fully clothed and holding a bowl of riceroni.
      • Hamuko 4 days ago |
        Didn't the Garden of Eden have a pretty massive vulnerability where eating one apple would give you access to all data on good and evil?
        • pona-a 3 days ago |
          Standard memory disclosure: the apple when eaten would be freed, but it would still be read, leaking its contents. Luckily its volume was low, so they couldn't exfiltrate all of it. But still, the heavens are closed for maintenance, pending a rewrite in Rust.
    • halifaxbeard 4 days ago |
      > 8 lets me refresh weekly and have a fixed day of the week to check whether there was some API 429 timeout

      There’s your answer.

      6 days means that on a long enough timeframe the load will end up evenly distributed across the week.

      8 days would result in things getting hammered on specific days of the week.

      • PunchyHamster 4 days ago |
        > 6 days means that on a long enough timeframe the load will end up evenly distributed across the week.

        people will put */5 in cron and the result will be the same, because that's an obvious, easy, and nice number.

        • bayindirh 4 days ago |
          ACME doesn't renew certificates when there's enough time, so it'll always renew around 6 days, even if you check more aggressively.

          Currently ACME sets its cron job to 12 days on 90 day certificates.

          • akerl_ 4 days ago |
            Which ACME client are you referring to?
        • phantom784 4 days ago |
          I'd have it renew Monday and Thursday to avoid weekend outages.
        • Dylan16807 4 days ago |
          If they put */5 in cron, a single error response will break their site and the beginning of March will also break their site.
          • PunchyHamster 3 days ago |
            and they will replace it with * and just do it every day just in case
            • teaearlgraycold 3 days ago |
              I’d expect most will do this. I wouldn’t be surprised if LE expects this.
            • Dylan16807 3 days ago |
              Running an update script every day is good. Certbot defaults to running twice a day. Just use something with similar logic, waiting to renew short-lived certificates until halfway through their validity period. That way the actual load is nice and spread out. And you should get that logic by default if you do a normal setup.
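
              A minimal sketch of that logic (the numbers are assumptions, not any particular client's defaults): run it from a frequent timer and it stays a no-op until the cert has used up roughly half of its lifetime, with a bit of jitter so a fleet doesn't renew in lockstep.

                  package main

                  import (
                      "fmt"
                      "math/rand"
                      "time"
                  )

                  // shouldRenew reports whether a certificate is past the midpoint of its
                  // validity, plus a small random jitter (up to 5% of the lifetime) so
                  // that many hosts checking on the same schedule don't all renew at once.
                  func shouldRenew(notBefore, notAfter, now time.Time) bool {
                      lifetime := notAfter.Sub(notBefore)
                      jitter := time.Duration(rand.Int63n(int64(lifetime / 20)))
                      return now.After(notBefore.Add(lifetime/2 + jitter))
                  }

                  func main() {
                      notBefore := time.Now().Add(-100 * time.Hour)
                      notAfter := notBefore.Add(160 * time.Hour) // a 160-hour short-lived cert
                      fmt.Println("renew now?", shouldRenew(notBefore, notAfter, time.Now()))
                  }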
        • cpach 3 days ago |
          If I were to use short-lived certs, I would make sure to choose an ACME client that supports ARI (ACME Renewal Information). Then the CA will tell the client when it's time to renew.
      • blibble 4 days ago |
        so now people that want humans around will renew twice a week instead of once?
        • Dylan16807 4 days ago |
          Oh definitely not. They don't want humans doing any renewals.
      • nojs 4 days ago |
        I thought people generally run it daily? It’s a no-op if it doesn’t need renewal.
    • zja 4 days ago |
      Those are some great points
    • hamdingers 4 days ago |
      It's actually 6 and 2/3rds! I'm trying to figure out a rationale for 160 hours and similarly coming up empty, if anyone knows I'd be interested.

      200 would be a nice round number that gets you to 8 1/3 days, so it comes with the benefits of weekly rotation.

      • dtech 4 days ago |
        It's less than 7 exactly so you cannot set it on a weekly rotation
        • tensegrist 4 days ago |
          biweekly rotation?
          • saintfire 4 days ago |
            Or is it semi-weekly?
          • UqWBcuFx6NV4r 4 days ago |
            We say pan-weekly these days
      • mcpherrinm 4 days ago |
        I chose 160 hours.

        The CA/B Forum defines a "short-lived" certificate as 7 days, which has some reduced requirements on revocation that we want. That time, in turn, was chosen based on previous requirements on OCSP responses.

        We chose a value that's under the maximum, which we do in general, to make sure we have some wiggle room. https://bugzilla.mozilla.org/show_bug.cgi?id=1715455 is one example of why.

        Those are based on a rough idea that responding to any incident (outage, etc) might take a day or two, so (assuming renewal of certificate or OCSP response midway through lifetime) you need at least 2 days for incident response + another day to resign everything, so your lifetime needs to be at least 6 days, and then the requirement is rounded up to another day (to allow the wiggle, as previously mentioned).

        Plus, in general, we don't want to align to things like days or weeks or months, or else you can get "resonant frequency" type problems.

        We've always struggled with people doing things like renewing on a cronjob at midnight on the 1st monday of the month, which leads to huge traffic surges. I spend more time than I'd like convincing people to update their cronjobs to run at a randomized time.
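
        For anyone reading along: one low-effort way to get that randomization is a stable per-host splay rather than literal randomness, so each machine picks its own (but consistent) offset. A sketch of the idea; hashing the hostname is just one way to do it, and systemd timer users can get the same effect with RandomizedDelaySec:

            package main

            import (
                "fmt"
                "hash/fnv"
                "os"
                "time"
            )

            func main() {
                // Derive a stable offset within the day from the hostname, so every host
                // renews at a different time but the same host always uses the same time.
                host, _ := os.Hostname()
                h := fnv.New64a()
                h.Write([]byte(host))
                splay := time.Duration(h.Sum64() % uint64(24*time.Hour))
                fmt.Printf("run the renewal check %s after midnight on %s\n", splay.Round(time.Minute), host)
            }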

        • mike_d 3 days ago |
          I have always been a bit puzzled by this. By issuing fixed length certificates you practically guarantee oscillation. If you have a massive traffic spike from, say, a CDN mass reissuing after a data breach - you are guaranteed to have the same spike [160 - $renewal_buffer] hours later.

          Fuzzing the lifetime of certificates would smooth out traffic, encourage no hardcoded values, and most importantly statistical analysis from CT logs could add confidence that these validity windows are not carefully selected to further a cryptographic or practical attack.

          A https://en.wikipedia.org/wiki/Nothing-up-my-sleeve_number if you will.

          • cpach 3 days ago |
            There is a solution for smoothing out the traffic: RFC 9773, the ACME Renewal Information (ARI) extension.

            https://datatracker.ietf.org/doc/rfc9773/

            • mike_d 3 days ago |
              That only addresses half the problem and is just a suggestion vs something clients can't ignore.
    • 6thbit 4 days ago |
      Worry not, cause it's not 6 days (144 hours), it is 6-ish days: 160 hours

      And 160 is the sum of the first 11 primes, as well as the sum of the cubes of the first three primes!

      • nine_k 4 days ago |
        Mr Ramanujan, I presume?
        • abdullahkhalids 4 days ago |
          I was hoping Wolfram|Alpha would spit out the above, but on just entering 160 [1], we get

          > A regular 160-gon is constructible with straightedge and compass.

          > 160 has a representation as a sum of 2 squares: 160 = 4^2 + 12^2

          > 160 is an even number.

          > 160 has the representation 160 = 2^7 + 32.

          > 160 divides 31^2 - 1.

          > 160 = aa_15 repeats a single digit in base 15.

          [1] https://www.wolframalpha.com/input?i=160

        • themafia 4 days ago |
          Every K-Paxian knows this.
    • raegis 4 days ago |
      Six is the smallest perfect number. Perfection is key here.
    • rswail 4 days ago |
      Why not refresh daily?
  • 6thbit 4 days ago |
    This comment used to say that this was in staging only. (Never mind, I was confused by following the links from the original article.)
    • iancarroll 4 days ago |
      That is a very old article that seems to be outdated now.
  • rubatuga 4 days ago |
    Honestly not a big fan of IP address certs in the context of dynamic IP address generation
  • apitman 4 days ago |
    Very excited about this. IP certs solve an annoying bootstrapping problem for selfhosted/indiehosted software, where the software provides a dashboard for you to configure your domain, but you can't securely access the dashboard until you have a cert.

    As a concrete example, I'll probably be able to turn off bootstrap domains for TakingNames[0].

    [0]: https://takingnames.io/blog/instant-subdomains

  • josephernest 4 days ago |
    Do I understand correctly: does someone have a concrete example of a URL which is both an IP address and HTTPS, widely accessible from the global internet? e.g. https://<ipv4-address>/ ?
    • elpasi 4 days ago |
      The websites for DNS servers known by IP? https://1.1.1.1/ presents a valid cert although it redirects.
      • josephernest 3 days ago |
        Out of curiosity, any other example without redirect, in which the URL stays https://<ip> in the browser?
  • nkmnz 4 days ago |
    What is a good use case for an IP address certificate for the average company? Say, e-commerce or SaaS-startup?
    • superkuh 3 days ago |
      The Internet is for End Users https://datatracker.ietf.org/doc/html/rfc8890

      >Successful specifications will provide some benefit to all the relevant parties because standards do not represent a zero-sum game. However, there are sometimes situations where there is a conflict between the needs of two (or more) parties.

      >In these situations, when one of those parties is an "end user" of the Internet -- for example, a person using a web browser, mail client, or another agent that connects to the Internet -- the Internet Architecture Board argues that the IETF should favor their interests over those of other parties.

      Incorporated entities are just secondary users.

      • nkmnz 3 days ago |
        Can you elaborate on the context of your answer, please? I cannot connect it to anything the original post or I wrote.
        • superkuh 3 days ago |
          I was trying to explain that human people have uses for this and that should be enough. Even if there aren't a ton of for-profit uses.
          • nkmnz 3 days ago |
            I'm a human and I'm interested in how I could use this for my side projects. Please stop dehumanising me.
  • Already__Taken 2 days ago |
    I'm confused about what you'd want an IP certificate for when DNS A records update so easily and cheaply. Is this a case where you'd want both, not either, like setting up SPF/DKIM/DMARC?
  • greatgib 2 days ago |
    "forced" short lived certificates sucks so much.

    Now you will have an American entity be controlling all your assets that you want online on a very regular basis. No way than calling home regularly. Impossible to manage your own local certificate authority as sub CA without a nightmarish constant process of renewal and distribution.

    For security this means that everything will be expected to have almost constant external traffic, RW servers to overwrite the certificates, keys spreaded for that...

    And maybe I miss something but would IP address certificate be a nightmare in term of security?

    Like when using mobile network or common networks like university networks, it might be very easy to snap certificates for ip shared by multiple unrelated entities. No?

  • Khalequzzaman 17 hours ago |
    thank you.