jcalvinowens6 hours ago
Really happy to see this.

In the meantime, if you use bind as your authoritative nameserver, you can limit an hmac-secret to one TXT record, so each webserver that uses rfc2136 for certificate renewals is only capable of updating its specific record:

  key "bob.acme." {
    algorithm hmac-sha512;
    secret "blahblahblah";
  };
  
  key "joe.acme." {
    algorithm hmac-sha512;
    secret "blahblahblah2";
  };

  zone "example.com" IN {
   type master;
   file "/var/lib/bind/example.com.zone";
   update-policy {
    grant bob.acme. name _acme-challenge.bob.acme.example.com. TXT;
    grant joe.acme. name _acme-challenge.joe.acme.example.com. TXT;
   };
   key-directory "/var/lib/bind/keys-acme.example.com";
   dnssec-policy "acme";
   inline-signing yes;
  };
I like this because it means an attacker who compromises "bob" can only get certs for "bob". The server part looks like this:

  export LE_CONFIG_HOME="/etc/acme-sh/"
  export NSUPDATE_SERVER="${YOUR_NS_ADDR}"
  export NSUPDATE_KEY="/var/lib/bob-nsupdate.key"
  export NSUPDATE_KEY_NAME="bob.acme."
  export NSUPDATE_ZONE="acme.example.com."

  acme.sh --issue --server letsencrypt -d 'bob.example.com' \
        --certificate-profile shortlived \
        --days 6 \
        --dns dns_nsupdate
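For completeness, the file referenced by NSUPDATE_KEY above is a BIND-style key statement, the same shape as the server-side `key` blocks (the secret here is a placeholder):

```
key "bob.acme." {
  algorithm hmac-sha512;
  secret "base64-encoded-hmac-secret==";
};
```

acme.sh's dns_nsupdate hook hands this file to nsupdate, so the key name and algorithm must match what's configured on the nameserver.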
TrueDuality7 hours ago
I think this is solving a real operational pain point, definitely one that I've experienced. My biggest hesitation here is the direct exposure of the managing account's identity. It's not about needing to protect the account's key material; I already have to do that.

While "usernames" are not generally protected to the same degree as credentials, they do matter and act as an important gate to even know about before a real attack can commence. This also provides the ability to associate random found credentials back to the sites you can now issue certificates for if they're using the same account. This is free scope expansion for any breach that occurs.

I guarantee sites like Shodan will start indexing these IDs on all domains they look at to provide those reverse lookup services.

liambigelow5 hours ago
CAA records including an accounturi already expose the account identity in the same manner, so I feel like that ship has already sailed somewhat (and I would prefer that the CAA and persist record formats match).
krunck7 hours ago
Exactly. They should provide the user with a list of UUIDs (or any other random-ish ID tied to the actual account) that can be used in the accounturi URL for these operations.
gsich7 hours ago
The account is the same as you create in any acme client. I don't see potential for a reverse lookup.
Ayesh6 hours ago
I think the previous post is talking about a search that will find the sibling domain names that have obtained certificates with the same account ID. That is a strong indication that those domains are in the same certificate renewal pipeline, most likely on the same physical/virtual server.
mschuster915 hours ago
Run ACME inside a Docker container, one instance (and credentials) for each domain name. It doesn't consume many resources. The real problem is IP addresses anyway; CT logs "thankfully" feed information to every bad actor in real time, which makes data mining trivially easy.
cortesoft3 hours ago
You don't even need a Docker container to do that.
mschuster913 hours ago
Agreed, that's just a personal preference of mine. Harder to mess up and easier to route.
gerdesj28 minutes ago
My LE experience (post HTTP-01 and now DNS-01): it's a bit of a palaver. I don't have to open port 80, which is nice for ... security audits, but gains zero security benefit.

I have a PowerDNS server running locally with a static IPv4 address via NAT and I have created a DNS domain and enabled dynamic DNS updates from certain IPv4 addresses with a pre-shared key.

For each cert you need a DNS CNAME pointing to my DNS domain in a specific format. Then we have to get to grips with software to do the deed. acme.sh is superb for !Windows; simple-acme is fine for Windows. I still set up each one by hand instead of using ansible/Zenworks/whatever, because I'm a sucker for punishment and still small enough for now.
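As an illustration of that CNAME convention (all names here are hypothetical), the delegation looks something like:

```
; Challenges for www.example.com are answered by the dynamically
; updatable zone acme-auth.example.net, so the main zone never changes.
_acme-challenge.www.example.com.  IN  CNAME  www.acme-auth.example.net.
```

The ACME client then only needs update credentials for the acme-auth zone, never for the production zone.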

DNS-Persist-01 is not something I think I will ever need but clearly someone does.

Ajedi324 hours ago
This is going to make it way easier to get publicly trusted certs for LAN servers that aren't internet facing.

I'm looking forward to every admin UI out there being able to generate a string you can just paste into a DNS record to instantly get a Let's Encrypt cert.

kami231 hour ago
Just experienced this with my heavily networked-off openclaw setup. I gave up and will do manual renewals until I have more time to figure out a good way of doing it. I was trying to get a cert for some headscale MagicDNS setups, but I think that's way more complicated than I thought it would be.
bob10295 hours ago
I've changed my mind about the short-lived cert stuff after seeing what is enabled by IP address certificates with the HTTP-01 verification method. I don't even bother writing the cert to disk anymore. There is a background thread that checks whether the current instance of the cert is null or older than 24h. The cert selector in aspnetcore just looks at this reference and blocks until it's not null.
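A rough Python sketch of that pattern (the original is aspnetcore; the class and names here are invented for illustration):

```python
import threading
import time
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=24)

class CertHolder:
    """Keeps the current cert in memory only; nothing is written to disk."""

    def __init__(self):
        self._lock = threading.Lock()
        self._cert = None
        self._fetched_at = None

    def needs_refresh(self, now=None):
        # True when no cert has been fetched yet or it's older than 24h;
        # a background thread would poll this and re-issue via ACME.
        now = now or datetime.now(timezone.utc)
        with self._lock:
            return self._cert is None or now - self._fetched_at > MAX_AGE

    def store(self, cert_pem):
        with self._lock:
            self._cert = cert_pem
            self._fetched_at = datetime.now(timezone.utc)

    def get_blocking(self, poll_interval=0.05):
        # Mirrors the "cert selector blocks until it's not null" behaviour.
        while True:
            with self._lock:
                if self._cert is not None:
                    return self._cert
            time.sleep(poll_interval)
```

If issuance fails, the stale-but-valid cert keeps serving until the next successful renewal, which is why persisting a copy somewhere (as suggested below) is still worth considering.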

Being able to distribute self-hostable software to users that can be deployed onto a VM and made operational literally within 5 minutes is a big selling point. Domain registration & DNS are a massive pain to deal with at the novice end of the spectrum. You can combine this with things like https://checkip.amazonaws.com to build properly turnkey solutions.

inahga4 hours ago
You should persist certs somewhere. Otherwise your availability is heavily tied to LE’s uptime.
tialaramex2 hours ago
Technically, because Let's Encrypt always publishes all requested certificates to the logs (this isn't mandatory; it's just easier for most people, so Let's Encrypt always does it), your tool can go look in the logs to get the certificate. You do need to know your private key; nobody else ever knew that, so if you don't have it, you're done.
xyzzy_plugh31 minutes ago
Now you depend on CT log providers uptime, which as far as I can tell is worse than LE.
cube005 hours ago
Pretty risky given the rate limits of Let's Encrypt are non negotiable with no choice but to wait them out.
muvlon4 hours ago
They are quite literally negotiable: https://isrg.formstack.com/forms/rate_limit_adjustment_reque...

There are also a bunch of rate limit exemptions that automatically apply whenever you "renew" a cert: https://letsencrypt.org/docs/rate-limits/#non-ari-renewals. That means whenever you request a cert and there already is an issued certificate for the same set of identities.

dextercd3 hours ago
Your comment is 100% correct, but I just want to point out that this doesn't negate the risks of bob's approach here.

LE wouldn't see this as a legitimate reason to raise rate limits, and such a request takes weeks to handle anyway.

Indeed, some rate limits don't apply for renewals but some still do.

zamadatix5 hours ago
Yeessss! This should finally make certificates for internal only web services actually easier to orchestrate than before ACME. This closes probably the biggest operational pain point I've had with letsencrypt/modern web certificates.

Thank you so much to all involved!

CaliforniaKarl2 hours ago
For folks who use certbot, here is where they are tracking work on support for this feature: https://github.com/certbot/certbot/issues/10549
jmholla5 hours ago
There's a missing part here, and that's validating your ACME account ownership.

I think most users depend on automation that creates their accounts, so they never have to deal with it. But now, you need to propagate some credential to validate your account ownership to the ACME provider. I would have liked to see some conversation about that in this announcement.

I'm not familiar with Let's Encrypt's authentication model, but if they don't have token creation that can be limited by target domain, I expect you'll need to create separate accounts for each of your target domains, or else anything with that secret can create a cert for any domain your account controls.

mschuster915 hours ago
> There's a missing part here, and that's validating your ACME account ownership.

Why? ACME accounts have credentials so that the ACME client can authenticate against the certificate issuer, and ACME providers require the placement of a DNS record or a .well-known HTTP endpoint to verify that the account is authorized to act upon the demands of whoever owns the domain.

If either your ACME credentials leak or, even worse, someone manages to place DNS records or hijack your .well-known endpoint, you've got far bigger problems at hand than someone being able to mis-issue SSL certificates under your domain name.

1vuio0pswjnm749 minutes ago
Is it false that DNS requests sent from LE to authoritative nameservers are unencrypted?
IgorPartola3 hours ago
Am I just stupidly missing something or does this in theory allow anyone who controls a DNS server for my domain or anyone who controls traffic between LE and the DNS server for my domain to get a TLS certificate they can use to impersonate my domain?

I suppose the same is true for DNS-01 but this would make it even easier because the attacker can just put up their LE account instead of mine into the DNS response and get a certificate.

At this point why not just put my public cert into a DNS record and be done with it?

gurjeet2 hours ago
If you don't trust your DNS provider to _not_ do malicious acts against you, you shouldn't be in that relationship.

If someone can perform a MITM attack between Let's Encrypt and a DNS server, we've got bigger problems than just certificate issuance.

msmith1 hour ago
To mitigate the threat from an attacker who controls the network between the cert issuer and the DNS server, CAs will check the DNS records from multiple vantage points.

Let's Encrypt has been doing this for several years, and it's a requirement for all CAs as of 2024.

[1] https://cabforum.org/2024/08/05/ballot-sc067v3-require-domai...

echoangle2 hours ago
If I control your DNS, I can also just do the HTTP ACME challenge. Whoever controls the DNS basically owns the domain anyway.
bombcar3 hours ago
Yes, anyone who controls your DNS can get a TLS certificate from anyone who offers them - because, uh, they control your DNS!

Try to figure out a way to block me from getting a TLS certificate if I can modify your DNS.

IgorPartola2 hours ago
That’s fair but I also have to trust every provider between my DNS server and LE’s servers to not intercept DNS responses. Since DNS isn’t encrypted anyone anywhere between them can modify the traffic and get a certificate if I understand correctly.
mcpherrinm2 hours ago
Two current mitigations and one future:

DNSSEC prevents any modification of records, but isn’t widely deployed.

We query authoritative nameservers directly from at least four places, over a diverse set of network connections, from multiple parts of the world. This (called MPIC) makes interception more difficult.

We are also working on DNS over secure transports to authoritative nameservers, for cases where DNSSEC isn’t or won’t be deployed.

IgorPartola1 hour ago
Ah, that makes sense. I was wondering why I hadn't heard of cases of successful attacks like this. Thank you for the info!
rmoriz48 minutes ago
I would have loved to see mandatory DNSSEC requirements
mscdex5 hours ago
After having to deal with VM hosts that do GeoIP blocking, which unintentionally blocks Let's Encrypt and others from properly verifying domains via http-01/tls-alpn-01, I settled on a DIY solution that uses CNAME redirects and a custom, minimal DNS server for handling the redirected dns-01 challenges. It's essentially a greatly simplified version of the acme-dns project tailored to my project's needs (and written in node.js instead of Go).

Unfortunately with dns-persist-01 including account information in the DNS record itself, that's a bit of a show stopper for me. If/when account information changes, that means DNS records need changing and getting clients to update their DNS records (for any reason) has long been a pain.

basilikum5 hours ago
> The timestamp is expressed as UTC seconds since 1970-01-01

That should be TAI, right? Is that really correct, or do they actually mean unix timestamps (those shift with leap seconds, unlike TAI, which is actually just the number of seconds that have passed since 1970-01-01T00:00:00Z)?

wtallis5 hours ago
Do leap seconds even matter here? Doing anything involving DNS or certificates in a way that requires clock synchronization down to the second would seem to be asking for trouble.
tialaramex2 hours ago
Abolition of the Leap Second is basically a done deal. So, the differences caused by leap seconds will become frozen as arbitrary offsets, GPS time versus UTC for example.

Basically when it was invented leap seconds seemed like a good idea because we assumed the inconvenience versus value was a good trade, but in practice we've discovered the value is negligible and the inconvenience more than we expected, so, bye bye leap seconds.

The body responsible has formal treaty promises to make UTC track the Earth's spin, and replacing those treaties is a huge pain. So the "hack" proposed is to imagine into existence a leap minute or even a leap hour that could correct for the spin. In practice those will never be used either, because they're even less convenient than a leap second; but by the time anyone is asked to set a date for these hypothetical changes, the signatory countries likely won't exist, and their successors can just sign a revised treaty. Countries only tend to last a few hundred years; look at the poor US, which is preparing 250th anniversary celebrations while also approaching civil war.

toast02 hours ago
Probably yeah, seconds don't really matter here. You would have to work hard for the 27 second difference to be material. But precision is nice.

unixtime is almost certainly what the standard means, but it is not the count of UTC seconds since 1970; unix time is the number of seconds since 1970 as if all days had 86400 seconds. UTC, TAI, and GPS seconds are all the same length, and the same number of them have happened since 1970, but TAI appears 37 seconds ahead of UTC because TAI days always have exactly 86400 seconds, while UTC has some days with 86401 seconds, and TAI was already 10 seconds ahead of UTC in 1970. unixtime and UTC stay in sync because unixtime allows some days to encompass 86401 UTC seconds while only counting 86400 of them.
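A quick Python check of the point about unix time: the leap second at the end of 2016 never gets its own timestamp, so epoch arithmetic in the stdlib simply skips it:

```python
from datetime import datetime, timezone

# These two instants straddle the leap second 2016-12-31T23:59:60Z
# (TAI-UTC went from 36 s to 37 s at that moment).
before = datetime(2016, 12, 31, 23, 59, 59, tzinfo=timezone.utc)
after = datetime(2017, 1, 1, 0, 0, 0, tzinfo=timezone.utc)

# Two SI seconds elapsed between them, but unix time, which pretends
# every day has exactly 86400 seconds, says only one did.
print(after.timestamp() - before.timestamp())  # 1.0
```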

newsoftheday6 hours ago
Today I do the following:

/usr/bin/letsencrypt renew -n --agree-tos --email me@example.com --keep-until-expiring

Will I need to change that? Will I need to manually add custom DNS entries to all my domains?

PS To add, compared to dealing with some paid certificate services, LetsEncrypt has been a dream.

dextercd6 hours ago
This adds a new validation method that people can use if they want. The existing validation methods (https://letsencrypt.org/docs/challenge-types/) aren't going away, so your current setup will keep working.
jsheard6 hours ago
And to elaborate, the reasons you might want to use a DNS challenge are to acquire wildcard certificates, or to acquire regular certificates on a machine or domain which isn't directly internet-facing. If neither of those apply to you then the regular HTTP/TLS methods are fine.
newsoftheday6 hours ago
OK I was sort of thinking that might be the case but wanted to make sure in case I had to start prepping now, thanks. We use no wildcard domains today, maybe down the road.
bombcar3 hours ago
Wildcard domains are a great way to get certs for all your "internal systems" with only having to expose one (or a bit of one on DNS) to the Internet at large.

This is going to greatly simplify some of my scripts.

newsoftheday6 hours ago
This is good news, not sure I got that from reading the article but even if I had to do it, it wouldn't be the end of the world I guess.
qwertox6 hours ago
This will make things so much easier.

Here, certbot runs in Docker on the intranet, and on a VPS I have a custom-built nameserver to which all the _acme-challenge records are delegated via NS records.

The system in the intranet starts certbot and passes it the token-domain pairs from Let's Encrypt; it then sends those pairs to the nameserver, which attaches each token to a TXT record for that domain, so that the DNS reply can serve it to Let's Encrypt when they request it.
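For context, the TXT value those token-domain pairs turn into is fixed by RFC 8555: the base64url SHA-256 digest of the key authorization. A stdlib sketch (the token and thumbprint below are made-up inputs, just to show the shape):

```python
import base64
import hashlib

def dns01_txt_value(token: str, jwk_thumbprint_b64url: str) -> str:
    """Compute the TXT record value for a dns-01 challenge (RFC 8555).

    The key authorization is token || "." || base64url(JWK thumbprint);
    the record holds the unpadded base64url SHA-256 digest of that string.
    """
    key_auth = f"{token}.{jwk_thumbprint_b64url}".encode("ascii")
    digest = hashlib.sha256(key_auth).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Hypothetical token/thumbprint, purely illustrative:
print(dns01_txt_value("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA",
                      "9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI"))
```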

All that will be gone and I thank you for that! You add as much value to the internet as Wikipedia or OpenStreetMap.

itintheory6 hours ago
I'm really excited for this. We moved 120+ hand-renewed certs to ACME, but still manually validate the domains annually. Many of them are on private/internal load balancers (no HTTP-01 challenge possible), and our DNS host doesn't support automation (no DNS-01 challenges either). While manually renewing the DCV for ~30 domains once a year isn't too bad, when the lifetime of that validity shrinks, ultimately to 9 days, it'd become a full-time job. I just hope Sectigo implements this as quickly as LE.
9dev4 hours ago
For the love of god, switch to a DNS provider with an API. Whatever legacy behemoth you’re working with doesn’t justify a gap this wide.
amluto3 hours ago
Name one that doesn’t have an AWS-style per-query cost.

(There might well be a nice one, but I haven’t found it yet.)

toast03 hours ago
If it's for a business, I would contact them to see if they have a commercial offering, but I think the Hurricane Electric Free DNS might actually fit.

https://dns.he.net/

nfredericks3 hours ago
Might be obvious, but Cloudflare
amluto3 hours ago
No. Cloudflare will give a key scoped to an entire administrative domain in the Cloudflare sense like “a.com”. They will not give you a key scoped to a single entry within that domain. (That entry would be a domain in the RFC 9499 sense, but do you really expect anyone to agree on the terminology?)

In particular, there is no support for getting a key scoped to _acme-challenge.a.b.c or, even better, to a particular RR.

Maybe if you have an enterprise plan you can very awkwardly fudge it using lots of CNAMEs and subdomains.

Some DNS hosts that support old-school dynamic DNS can do this. dns.he.net is an example, but they have a login system that is very much stuck in the nineties.

dboreham1 hour ago
Cloudflare DNS isn't fully functional (at least for me). Can't be used for general purpose DNS hosting imho.
radiator3 hours ago
Hetzner DNS
micw7 hours ago
I wonder why they switched from a super-secure, super-complex (in terms of operations) way of doing DNS auth to a super-simple method with no cryptography involved that just relies on the account ID.

Why not use some public/private key auth, where the DNS contains a public key and the requesting server uses the private key to sign the cert request? This would decouple the authorization from the actual account. It would not reveal the account's identity, and it could be used with multiple accounts (useful for a wildcard on the DNS plus several independent systems requesting certs for subdomains).

tptacek7 hours ago
The most common vector for DNS-based attacks on issuance is compromised registrar accounts, and no matter how complicated you make the cryptography, if you're layering it onto the DNS, those attacks will preempt the cryptography.
Spivak6 hours ago
Because LE keeps a mapping of account ids to emails and public keys. You have to have the private key to the ACME account to issue a cert. The cryptography is still there but the dance is done by certbot behind the scenes.

Prior to this, accounts were nearly pointless, as proof of control was checked every time, so people (rightfully) just threw away the account key LE generated for them. Now, if you use PERSIST, you have to keep it around and deploy it to the servers you want to be able to issue certs.

csense6 hours ago
To get a Let's Encrypt wildcard cert, I ended up running my own DNS server with dnsmasq and delegating the _acme-challenge subdomain to it.

Pasting a challenge string once and letting its continued presence prove continued ownership of a domain is a great step forward. But I agree with others that there is absolutely no reason to expose account numbers; it should be a random ID associated with the account in Let's Encrypt's database.

As a workaround, you should probably make a new account for each domain.

bombcar2 hours ago
Your account ID is exposed in the certificate generated; what's the real difference?
Spivak6 hours ago
You bothered to manage your LE accounts? I only ask because, when using the other two challenge types in most deployment scenarios, you were generating a new account per cert, so your account ID was just a string of random numbers.
mmh00007 hours ago
I really like and hate this at the same time.

Years ago, I had a really fubar shell script for generating the DNS-01 records on my own self-run (non-cloud) authoritative nameserver. It "worked," but its reliability was highly questionable.

I like that DNS-PERSIST fixes that.

But I don't understand why they chose to include the account as a plain-text string in the DNS record. Seems they could have just as easily used a randomly generated key that wouldn't mean anything to anyone outside Let's Encrypt, and without exposing my account to every privacy-invasive bot and hacker.

Ajedi324 hours ago
> they could have just as easily used a randomly generated key

Isn't that pretty much what an accounturi is in the context of ACME? Who goes around manually creating Let's Encrypt accounts and re-using them on every server they manage?

ragall7 hours ago
Those who choose to use DNS-PERSIST-01 should fully commit to automation and create one Let's Encrypt account per FQDN (or at least per load balancer), using a UUID as the username.
mcpherrinm6 hours ago
There is no username in ACME besides the account URI, so the UUID you're suggesting isn't needed. The account URIs themselves just contain a number (a db primary key).

If you’re worried about correlating between domains, then yes just make multiple accounts.

There is an email field in ACME account registration but we don’t persist that since we dropped sending expiry emails.

9dev4 hours ago
It’s still a valid point IMHO - why not just use the public key directly? It seems like the account URI just adds problems instead of resolving any.
mcpherrinm2 hours ago
It has these primary advantages:

1. It matches what the CAA accounturi field has

2. It's consistent across an account, making it easier to set up new domains without needing to make any API calls

3. It doesn't pin a user's key, so they can rotate it without needing to update DNS records; this method assumes that updating DNS is nontrivial, otherwise you'd use the classic DNS validation method

glzone14 hours ago
Interesting.

I didn't realize the email field wasn't persisted. I assumed it could be used in some type of account recovery scenario.

bflesch2 hours ago
> But I don't understand why they chose to include the account as a plain-text string in the DNS record.

Simple: it's for tracking. Someone paid for that.

chaz65 hours ago
Is it possible to create an ACME account without requesting a certificate? AFAICT it is not, so you cannot use this method unless you have first requested a certificate with some other method. I hope I am wrong!
dextercd5 hours ago
An account needs to be created before you can request a certificate. Some ACME clients might create the account for you implicitly when you request the first certificate, but in the background it still needs to start by registering an account.

`certbot register` followed by `certbot show_account` is how you'd do this with certbot.

chaz64 hours ago
Great, thank you!
dangoodmanUT1 hour ago
Love this, such a better method
Havoc6 hours ago
Interesting. Think a lot of the security headaches went away for me when I discovered providers like CF can restrict the scope of tokens to a single domain and lock it to my IP.
amluto6 hours ago
Even CF cannot restrict the scope of a token to a single host.
cube005 hours ago
Or a single DNS record.
infogulch4 hours ago
This is a nice increment in ACME usability.

Once again I would like to ask CA/B to permit name constrained, short lifespan, automatically issued intermediate CAs. Last year's request: https://news.ycombinator.com/item?id=43563676

ocdtrekkie6 hours ago
This might be the first time in ten years that a certificate proposal intends to make issuing certificates more reasonable and not less. More of this, less of 7-day-lifetime stupidity.
CqtGLRGcukpy6 hours ago
"Support for the draft specification is available now in Pebble, a miniature version of Boulder, our production CA software. Work is also in progress on a lego-cli client implementation to make it easier for subscribers to experiment with and adopt. Staging rollout is planned for late Q1 2026, with a production rollout targeted for some time in Q2 2026."
aaomidi5 hours ago
This is significantly better than my draft of DNS-ACCOUNT-01. Thank you Let's Encrypt team!
cyberax7 hours ago
Ah, the next step towards True DANE!

We then can just staple the Persist DNS key to the certificate itself.

And then we just need to cut out the middleman and add a new IETF standard for browsers to directly validate the certificates, as long as they confirm the DNS response using DNSSEC.

tptacek7 hours ago
This decreases the salience of DANE/DNSSEC by taking DNS queries off the per-issuance critical path. Attackers targeting multitenant platforms get only a small number of bites at the apple in this model.
NoahZuniga6 hours ago
DNS queries are still part of the critical path, as let's encrypt needs to check that the username is still allowed to receive a cert before each issuance.
cyberax6 hours ago
Sure. It's yet another advantage of doing True DANE. But it still requires DNS to be reliable for the certificate issuance to work, there's no way around it.

So why not cut out the middleman?

(And the answer right now is "legacy compatibility")

tptacek6 hours ago
I mean, the reason not to do DANE is that nobody will DNSSEC-sign, because DNSSEC signing is dangerous.
cyberax5 hours ago
Come on. It's not dangerous, it's just inconvenient and clumsy. So nobody is really using it.
akerl_5 hours ago
Ok, it's inconvenient and clumsy in ways that make it easy to shoot oneself in the foot. But that's not dangerous?
cyberax4 hours ago
When you shoot yourself in the foot with DNSSEC, you typically end up with a non-working setup.

The biggest problem is that DNS replies are often cached, so fixes for the mistakes can take a while to propagate. With Let's Encrypt you typically can fix stuff right away if something fails.

tptacek4 hours ago
When you shoot yourself in the foot with DNSSEC, your entire domain falls off the Internet, as if it had never existed in the first place. It's basically the worst possible failure case, and it's happened to multiple large shops; Slack being the most notorious recent example.
cyberax3 hours ago
Yes, and it'd be great if DNSSEC added an "advisory" signature level. So it can be deployed without doing a leap of faith.

But let's not pretend that WebPKI is perfect. More than one large service failed at some point because of a forgotten TLS certificate renewal. And more than one service was pwned because a signing key leaked. Or a wildcard certificate turned out to be more wildcard than expected.

I understand the failures of DNSSEC and DNS in general. And we need to do something about it because it's really showing signs of its age as we continue to pile on functionality onto it.

I don't have an idea for a good solution for everything, but I just can't imagine us piling EVERYTHING onto WebPKI either.

akerl_2 hours ago
> But let's not pretend that WebPKI is perfect.

You're commenting on a post about LetsEncrypt working with other entities in the industry to make improvements to WebPKI. It's safe to say that nobody's claiming it's perfect.

But you can't go from ~"WebPKI isn't perfect" and ~"DNSSEC/DANE exist" and draw a magic path where using DNSSEC or DANE is actually a good thing for people to roll out. They'd need to be actually a good fit, and for DANE we have direct evidence that it isn't: a rollout was attempted and it was walked back due to multiple issues.

tptacek2 hours ago
I don't really understand most of this comment but you opened up this subthread with "Come on. It's not dangerous", and, as you're acknowledging here, it clearly is quite dangerous.
cyberax5 minutes ago
DNSSEC is not dangerous. Pretty much the worst thing is breakage, not an accidental compromise.

It's also more secure, compared to ACME. An on-path attacker can impersonate the site operator and get credentials. DNSSEC is immune to that.

Ayesh6 hours ago
I'm surprised the ballot passed, unanimously even! I get that storing the DNS credentials in the certificate renewal pipeline is risky, but many DNS providers have granular API access controls, so it is already possible to limit the surface area in case the keys get leaked. Plus, you can revoke the keys easily.

The ACME account credentials are also accessible by the same renewal pipelines that have the DNS API credentials, so this does not provide any new isolation.

~It's also not quite clear how to revoke this challenge, or how domain expiration deals with this. The DNS record contents should have been at least the HMAC of the account key, the FQDN, and something that invalidates if the domain is transferred somewhere else. The leaf DNSSEC key would have been perfect, but DNSSEC key rotation is also quite broken, so it wouldn't play nice.~

Is there a way to limit the challenge types with CAA records? You can limit them by an account number, and I believe that is the tightest control you have so far.

---

Edit: thanks to the replies to this comment, I learned that this would provide invalidation simply by removing the DNS record, and that the DNS records are checked at renewal time with a much shorter validation TTL.

amluto6 hours ago
> but many DNS providers have granular API access controls

And many providers don't. (Even big ones that are supposedly competent like Cloudflare.)

And basically everyone who uses granular API keys is storing a cleartext key, which is no better, and possibly worse, than storing a credential for an ACME account.

agwa6 hours ago
> It's also not quite clear how to revoke this challenge, and how domain expiration deal with this

CAs can cache the record lookup for no longer than 10 days. After 10 days, they have to check it again. If the record is gone, which would be expected if the domain has expired or been transferred, then the authorization is no longer valid.

(I would have preferred a much shorter limit, like 8 hours, but 10 days is a lot better than the current 398 day limit for the original ACME DNS validation method.)

mcpherrinm6 hours ago
We (Let’s Encrypt) also agree 10 days seems too long, so we are migrating to 7 hours, aligning with the restrictions on CAA records.
mcpherrinm6 hours ago
This wasn’t the first version of the ballot, so there was substantial work to get consensus on a ballot before the vote.

CAs were already doing something like this (CNAME to a dns server controlled by the CA), so there was interest from everyone involved to standardize and decide on what the rules should be.

mcpherrinm6 hours ago
Yes, you can limit both challenge types and account URIs in CAA records.

To revoke the record, delete it from DNS. Let’s Encrypt queries authoritative nameservers with caches capped at 1 minute. Authorizations that have succeeded will soon be capped at 7 hours, though that’s independent of this challenge.

UltraSane1 hour ago
I use AWS Route53 and you can get incredibly granular with API permissions

Key condition keys for this purpose include:

    route53:ChangeResourceRecordSetsActions: Limits actions to CREATE, UPDATE, or DELETE.

    route53:ChangeResourceRecordSetsRecordTypes: Limits actions to specific DNS record types (e.g., A, CNAME, TXT).

    route53:ChangeResourceRecordSetsRecordValues: Limits actions based on the specific value of the DNS record.

    route53:ChangeResourceRecordSetsResourceRecords: For more complex scenarios, this can be used to control access based on the full record set details.
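Sketched as an IAM policy (the hosted zone ID is a placeholder, and the exact set of supported condition keys should be verified against the current Route 53 documentation):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AcmeTxtOnly",
    "Effect": "Allow",
    "Action": "route53:ChangeResourceRecordSets",
    "Resource": "arn:aws:route53:::hostedzone/Z0123456789EXAMPLE",
    "Condition": {
      "ForAllValues:StringEquals": {
        "route53:ChangeResourceRecordSetsRecordTypes": ["TXT"],
        "route53:ChangeResourceRecordSetsActions": ["UPSERT", "DELETE"]
      }
    }
  }]
}
```

A credential scoped like this can rotate _acme-challenge TXT records but can't touch A/AAAA/CNAME records even within the same zone.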