When you make an API request, you validate the certificate of the system you're calling and, in doing so, establish a secure connection. You've usually also authenticated yourself over that secure connection, and you include some sort of token that attests to that authentication and provides your authorization as well.
When you are accepting a call in your web hook, you need to ensure that the call came from the authenticated source which the signature provides. The web hook caller connects using the same certificate validation and secure connection infrastructure. They won't connect if your certificate doesn't validate or they can't establish a secure connection. The signature is their mechanism of authenticating with your API except that they are the authority of their identity.
That last bit is where the contradiction falls away: the webhook implementer retains authentication authority and infrastructure (whether you call them or they call you) rather than asking the client to provide an authentication system for them to validate themselves against.
[edit: there's an additional factor. If you move authentication to the web hook implementer you lose control of what authentication mechanisms are in use. Having to implement everyone's authentication systems would be a nightmare full of contradictions. You also open yourself up to having to follow client processes, attend meetings, and otherwise not be able to automate the process of setting up a webhook.]
nit: not sure if that's just me but I was confused by this wording; with "authenticated yourself" you're referring to an initial permanent-token/login ⇒ session-token step? I initially thought you were implying something on the same connection the API call is made on, which would have to be TLS client certificates (HTTP bearer auth is already the token itself.)
TL;DR: I was thinking of bearer token auth flows and not intentionally excluding other forms of authentication.
Part of the problem is reverse ordering. When calling an API, you generally authenticate yourself first, often to obtain a temporary token, though as you note it can happen in the same call via certificate. Only then do you make the API call you actually wanted to make. I first wrote about making the API call and only afterwards discussed the authentication. There, I was thinking of the permanent-token-to-session-token model, but you're absolutely right that mutual auth could bypass that stage. The certificate-based authentication would still precede the API call processing, but would obviate sending a token. However, I haven't seen that used in automated APIs because of the management overhead and the increased barrier for the more entry-level end of the customer base. I have absolutely seen it in use for internal service interfaces.
Sorry that my words were a tangle, thank you very much for helping me clarify (or at least hopefully do so).
[edit: side note that with mutual auth, I've seen that as a gate to even open a socket paired with further authentication using some sort of a permanent token to session token protocol so one doesn't have to preclude the other.]
I don't understand where you are coming from. The article is comparing shared secret vs signing. In both those auth methods the "control" remains in the same place. The webhook consumer has to do the auth verification. The webhook sender mandates what authentication method is used.
Under none of these scenarios is the webhook consumer providing their own "authentication system".
There is a bit of switching back and forth between speaking from an API provider and an API consumer standpoint and it is possible that in not reading sufficiently carefully I misunderstood. I was also previously oblivious to the context of the product they are marketing here that turns out to include API SDK generation (including a webhook acceptance sdk for validating requests? I don't know). The nuance that the implementer is an API provider providing both direct API access and a webhook delivery API could easily have caused me to misunderstand properly written words that flew over my head in the midst of getting bored with the content in my failure to understand it.
A great question from the article, in its current state[3], is: are API keys secure enough for webhooks? In theory, you could validate a webhook request by confirming that the API key matches your in-memory copy of that key. Either that comparison or signature validation is usually supplied by the service SDK or example code that you cut and paste anyway.
Sadly, for a long time it was too much to ask some clients to provide HTTPS security for connections, and those requests had to be sent over HTTP, which required an alternative mechanism for proving the content had not been altered in transit. This may be a forcing factor behind the convention; thank you to Let's Encrypt and others for helping the industry shift (though I still find stragglers occasionally).
In the current environment of mostly ubiquitous HTTPS, a webhook sender could reject any non-secure URL configuration, and an API key comparison could be authentication enough while retaining the sender-controlled nature of the authentication. However, that newly assumes confidentiality of the API key by two parties, which a webhook implementer may not be sufficiently competent to guarantee. This could leave a webhook implementer processing malicious requests without the valid sender having any mechanism for detecting the breach, and expose the sender to messy, relationship-damaging, potentially legal arguments about whether they leaked a key. The signing process keeps private signing materials under the custody of the sender, simplifying the threat model and reducing its surface.
So, while you're right that the webhook consumer isn't providing their own authentication system (setting aside the case where the webhook implementer's own authentication is used), they are providing security for the authentication system when an API key is used. Perhaps this is a version of Postel's law? Personally, as a business generating SDKs for APIs, I would be inclined to go the other way and encourage customers to generate SDKs with signing rather than encouraging webhooks without signing. Demand is what it is, and so are clients, so... I can't fault them either.
Thank you for helping me understand that I had missed context (or at least giving me an opportunity to respond to the latest version of the page, though my making an error seems more probable).
[0] https://web.archive.org/web/20250526130006/https://www.speak...
[1] https://web.archive.org/web/20250526184520/https://www.speak...
[2] https://www.speakeasy.com/_gh-redirect/src/content/blog/webh...
[3] https://web.archive.org/web/20250527215821/https://www.speak...
My pleasure!
> The signing process retains private signing materials under the custody of the sender
In the signing process, with a symmetric key, the signing materials do not remain under the custody of the sender. Both parties need access to the signing key. If the consumer leaks the key they have to notify the sender and vice versa.
Asymmetric signing is used very sparingly in the context of webhooks.
Sorry if I'm misinterpreting your sentence a little too literally here.
When using symmetric signing, the threat model advantage I was advocating for disappears. Finding examples like Stripe and GitHub using symmetric signing was easy. Given this it seems far more likely this is an artifact of a time that you couldn't expect customers to host using HTTPS.
I'll chalk this up as another one of those "oh god, really?" moments with this industry.
That is, surely you've worked on a web app where you receive requests from users. Those requests are authenticated (and authorized) in various ways, from OAuth tokens to session cookies to API keys. When you're handling those requests, do you require that they're signed as well? I've rarely seen such a thing (the article points out that AWS does, for example), and most web apps I've worked on don't. We simply take the request for granted (assuming it's come over a TLS connection) and then check the credential.
The article is asking: if that logic is good enough for a web app, why not in reverse? A server handling customer requests generally doesn't know their provenance either, and simply relies on the credential (unless you have IP allow-listing or other measures like that).
I actually work on webhooks as well, and we sign them (and offer mTLS and various other security measures) but I sort of took all those best practices for granted. Now I'm trying to think through what the actual threat model is here, and why it doesn't apply in reverse to the REST API endpoints that we also maintain. I can see the point of signing rather than an included credential if you allow webhooks to http endpoints, but is that it? Probably better to just not allow non-https delivery URLs anyway.
My best guess: Maybe signing the webhooks assumes the TLS-terminating middlewares might not be trusted? Or some other middleware between that and the final handler.
To the best of my understanding, the two options mentioned in the article require a shared secret: API keys include that secret verbatim in the request, while the signing uses the secret in an HMAC function.
If asymmetric cryptography were somehow involved, I would somewhat buy the arguments about validating the origin of the request, because only one party would be able to create a valid signature. But that's not the case here, because with HMAC both parties have access to the same secret used to create a "signature" (which is more like a salted hash, so creating and validating a signature are the same process).
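A minimal sketch of that symmetry using Python's stdlib `hmac` (the secret value and header name are hypothetical): the "verify" step is literally just re-running the "sign" step, so anyone who can validate a signature can also forge one.

```python
import hashlib
import hmac

# Hypothetical shared secret -- with HMAC, BOTH parties must hold this.
SECRET = b"shared-webhook-secret"

def sign(payload: bytes) -> str:
    """Either party can run this; HMAC uses the shared secret symmetrically."""
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    """Verification just recomputes the same HMAC -- there is no verify-only key."""
    return hmac.compare_digest(sign(payload), signature)

body = b'{"event": "invoice.paid"}'
sig = sign(body)          # the sender attaches this, e.g. in an X-Signature header
assert verify(body, sig)  # the receiver -- or anyone holding SECRET -- can produce it too
```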
So, if both parties can produce the hash for a valid signature, and the secret is known to both ends, and there's no advantage over API keys when using TLS (assuming TLS is not broken), then I can only think that the problem is what happens outside TLS.
That's why I think the threat model would be a compromised TLS-terminating proxy, or some compromised component in between TLS-terminating proxy and the final application handling the request.
Sounds like zero-trust shenanigans.
If I'm misunderstanding anything, I'm more than happy to be corrected.
[edit: obviously once a credential is handed out it can be misused but any such attack would put signing materials in the hands of an attacker too.]
In either case the server authenticates with TLS and PKI.
But for APIs, the client (usually) authorizes with tokens. And for webhooks, the client (usually) authorizes with signed requests.
---
You could just as easily imagine API authorization with signed requests (e.g. AWS). Or webhook authorization with tokens (e.g. JIRA can do this).
Or where either one uses mTLS (e.g. CMS HETS).
That's not how it's usually done.
But the client-server requirements for confidentiality and authenticity are the same. The only difference is who is the "client" and who is the "server."
In that context, you're right that I was more confused than now but I disagree that the client-server requirements for confidentiality and authenticity are the same.
In the case of webhooks the sender does not want to include a breach of the receiver in their threat model. The consequences of, and ability to detect, malicious API calls to a service are different from those of malicious calls claiming to be from the service. A shared secret removes the potential for detection by the service, while asymmetric-key signing does not.
I would approximate that 95% of the time when a webhook sender discusses signatures they are referring to HMAC (symmetric-key signing). There is a clear benefit to asymmetric-key signatures but that's not the focus of this article. It's discussing the industry convention of using symmetric-key signing.
The key difference isn't with the mechanism of authentication (although that is different) but who specifies, implements, and maintains the authentication. Webhook providers do a lot of work to avoid putting that on their clients and to keep centralized control over their implementation.
The article is comparing the use of a shared secret vs HMAC. For shared secret: Who specifies auth? The webhook producer. Who implements auth? The webhook consumer. For HMAC / signing it's exactly the same parties who do those things.
Discussions about mutual TLS and public keys are out of scope.
In summary of my other comment, it is the case that the implementation and execution roles are the same but the threat surface is very different.
Yes, I agree.
- API Keys are much, _much_ easier to use from the command line. CURL with HMAC is finicky at best and turns a one-liner into a big script
- Maintaining N client libraries for your REST API is hard and means you'll likely deprioritize non-mainstream languages. If a customer needs to write their own library to interact with your service, needing to incorporate their own HMAC adds even more friction.
- Tools have gotten much better in recent years: it is much easier to configure a logger to ignore sensitive fields now compared to ~10 years ago
There's so many better options than just dumping the secret on the wire.
But in practice things get logged, people mess up their DNS and send the request to a different party (potentially after their CDN decrypts it) or some other blunder. With HMAC as long as the recipient is validating properly (which is a whole different can of worms) the worst the attacker can do is replay requests that they have observed.
Assume that things will come out of order, may be repeated, may come in giant rushes if there's a misconfiguration or traffic spike, and may have payload details change at any time in hard-to-replicate ways (unless you're archiving every payload and associating it with errors). If you make the "signal" be nothing more than an idempotent flag-set, then many of these challenges go away. And even if someone tries to send unauthenticated requests, the worst they can do is change the order in which your objects are reprocessed. Signature verification is important, but it becomes less critical.
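A sketch of that flag-set approach (all names hypothetical; in real code the set would be a durable table and the worker would re-fetch authoritative state from the sender's API rather than trusting the payload):

```python
# Hypothetical sketch: treat each webhook as an idempotent "reprocess this
# object" flag rather than trusting its payload, ordering, or uniqueness.
needs_reprocess: set = set()   # stand-in for a durable queue or table
processed: list = []           # stand-in for the real reprocessing work

def handle_webhook(event: dict) -> None:
    # Duplicates, reordering, and stale payload details all collapse
    # into the same single flag per object.
    needs_reprocess.add(event["object_id"])

def drain() -> None:
    while needs_reprocess:
        object_id = needs_reprocess.pop()
        # In real code: re-fetch the object's current state from the
        # sender's API here, instead of using what the webhook claimed.
        processed.append(object_id)

handle_webhook({"object_id": "ord_1", "status": "possibly-stale"})
handle_webhook({"object_id": "ord_1", "status": "duplicate-delivery"})
drain()
```

The worst an unauthenticated or replayed request can do here is re-queue an object that was already consistent.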
Stripe and Twilio do it best, with signatures that verify they're the ones sending the hook, but I'd even settle for HTTP basic auth. So many of them seem to say "hey, here are the IP addresses we'll be sending raw POSTs to your provided URL from, btw these IPs can change at any time without warning."
> However, webhooks aren’t so different from any other API request. They’re just an HTTP request from one server to another. So why not use an API key just like any other API request to check who sent the request?
Because it's still you requesting the event to happen, not the origin of the webhook. It makes no sense for the webhook to use normal API key mechanisms that are designed to control access to an API; the API is accessing you. (To be clear, of course it wouldn't use the same API key as inbound, that's a ridiculous suggestion. I'm saying the mechanics of API keys don't match this use.)
The real issue is that the webhook receiver should authenticate itself to the sender of the webhook, and the only widespread way that currently happens is HTTPS certificate checks. As the article kind of points out for the other direction, that's an auxiliary function and it's a bit questionable to rely on it. One way to do this properly would be to add another layer of encryption that only the intended webhook receiver has the keys for, e.g. the entire payload could be put into an encrypted PKCS#7 container. This would guard against attackers who get hold of the webhook target in some external manner, e.g. by hijacking DNS (which is enough to issue new valid certificates these days, with ACME).
> Signing requests does give extra security points, but why do we collectively place higher security requirements on webhook requests than API requests?
And now the article gets really confused, because it's misidentifying the problem. The point of signing a request that already carries an API key would be integrity protection, except that is indeed a function HTTPS can reasonably be relied on for in this scenario. Would a more "complex" key reduce the risk of leaking it in log files or some such? Sure, but that's an aspect of API keys frequently being "loggable" strings. X.509 keys as multi-line PEM text might show up less frequently in leaks due to their formatting, but that's not a statement about where and how to use them cryptographically.
Usually you'd want to use a method that prevents timing attacks for this check. Even PHP provides hash_equals for this use case.
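In Python, the stdlib counterpart to PHP's `hash_equals` is `hmac.compare_digest`; a minimal sketch:

```python
import hmac

def check_api_key(provided: str, expected: str) -> bool:
    # compare_digest takes time independent of where the inputs first differ,
    # unlike `provided == expected`, which can short-circuit at the first
    # mismatched byte and leak the key one byte at a time to a timing attacker.
    return hmac.compare_digest(provided.encode(), expected.encode())
```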
A vendor’s customers aren’t distributing software. They’re only sending messages via API calls to the vendor. This is many-to-one instead of one-to-many. The key distribution problem is solved differently: each customer saves a different API key to a file. There’s no key distribution problem that would be made easier by publishing a public key.
(That is, on the sending side. The receiving side is handled via TLS.)
It’s a web request either way, but this isn’t peer-to-peer communication, so the symmetry is broken.
In what way doesn't it?
Signatures prevent proxies, good and bad, from doing that without consequence.
This is not unique to Cloudflare: for a CDN to do anything that involves seeing the payload, you have to make a browser-trusted key available to their nodes. Traditionally, you did this by giving them a browser-trusted X.509 certificate and private key (now it's common to authorize them to obtain one from a service like Let's Encrypt) so they could handle the TLS handshake on any node for maximum performance. Some CDNs like Cloudflare also allow you to use your own key server, so they don't have access to the private key but do see the session key, which gives you more control: https://blog.cloudflare.com/keyless-ssl-the-nitty-gritty-tec...
The other way this can work is using the CDN at a lower level, where it proxies TCP connections back to the origin servers. That loses a lot of performance and security features since they can't see the traffic, which is why most people don't use CDNs this way, but it's an option, and it's useful if you need to deal with custom protocols or things like accepting traffic on ports Cloudflare doesn't support for their normal proxy. If you really didn't trust Cloudflare, you typically wouldn't use them, but if you had some kind of compliance requirement you could still use a CDN for things like better network performance and lower-level DDoS protection without giving the CDN operator visibility into your traffic payloads.
(You'd need to stick the DN in a trusted header, similar to the original IP address in X-Forwarded-For:)
I never got a satisfactory answer, but I think the article is correct.
Signed requests are hard for clients, and there are more API clients (consumers) than webhook clients (services).
I suppose you can just add a bearer token into the address, if you need that. A different address per association, containing a bearer token, with HTTPS, provides the same security as if the bearer token was sent in a separate header.
And with signed webhook requests, the recipient can simply ignore the signature if they deem the additional security it grants unnecessary.
For me it seems clear that the reason for this different approach is that API requests are already authenticated. Signing them would yield little additional security. It's diminishing returns, like the debate over long-lived (manually refreshed) API keys versus short-lived access tokens with long-lived refresh tokens, or, annoyingly, single-use refresh tokens that you have to keep track of along with the access token.
Webhooks are unauthenticated post requests that anyone could send if they know the receiving url, so they inherently need sender verification.
TFA is exploring the juxtaposition of signed web-hook requests vs bearer token api requests, both of which provide authentication but one of which is arguably superior and in common enough use to question why it hasn't become common practice at large.
To flip the question: if there aren’t meaningful benefits to signing requests, why don’t web-hooks just use bearer token authentication?
With API requests the customer takes the client role. The endpoint is the same for everyone, e.g. api.stripe.com. This means an API key (shared secret) is the minimal config needed to avoid impersonation. You could sign with a private key too, but it would also require configuration (uploading the public key to Stripe), so there's not much security gained.
With webhooks, the vendor is the client and needs to authenticate itself. But since it’s always the same vendor, no shared secret is needed. They can sign it with the same private key for all customers. You can bake the public key into client libraries and avoid the extra config. Thus, it’s reasonable to believe the use of public key cryptography is not because it’s more secure, but simply more convenient. Signing is kind of beautiful for these types of problems.
Signing alone creates a potential security issue (confused deputy? Not sure if it has a name): if Eve creates a stripe account and tells stripe that her webhook lives on alice.example.com, ie Alice’s server, stripe could send real verified webhook events to Alice, and if she doesn’t check which account it belongs to, she might provision resources (eg product purchases) if Eve is able to replicate the product ids etc that Alice uses.
Edit: now that I think of it, Eve doesn't even need to point Stripe to Alice's server. She can just store and replay the same signed messages from Stripe and directly attack Alice's server, since the HTTPS connections are not authenticated (only the contents are). To mitigate, the client library should contain some account id in the configuration, in order to correctly discard messages intended for someone else.
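A sketch combining both mitigations: the HMAC covers a timestamp (bounding the replay window) and the handler discards events addressed to another account. All names and the message layout here are hypothetical, though Stripe's published scheme is similar in shape:

```python
import hashlib
import hmac
import time

SECRET = b"per-endpoint-secret"  # hypothetical shared secret for this endpoint
MY_ACCOUNT = "acct_alice"        # hypothetical account id baked into local config
TOLERANCE = 300                  # seconds of staleness tolerated before rejecting

def verify_event(timestamp: str, body: bytes, signature: str, account_id: str) -> bool:
    # 1. The signature covers the timestamp, so an attacker cannot
    #    re-stamp an old captured message with a fresh time.
    expected = hmac.new(SECRET, timestamp.encode() + b"." + body,
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False
    # 2. Reject stale messages to bound the replay window.
    if abs(time.time() - float(timestamp)) > TOLERANCE:
        return False
    # 3. Discard verified-but-misaddressed messages (Eve's events sent to Alice).
    return account_id == MY_ACCOUNT
```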
That said, you can still benefit from pub keys by having good infra and key rotations to prevent some attacks like message replay after months. Putting such a requirement on customers is pretty doomed because of the workload, processes and infra required.
As designed, the webhook receiver only has to implement the one endpoint.
[edit: in addition, bearer tokens are not the only authentication system. By moving authentication onto the webhook holder, the caller now has to satisfy any authentication system and have implementations for all of them. Some authentication systems are manual and thereby introduce friction. By providing the authentication materials themselves, they reduce friction and reduce their implementation to having only one mechanism.]
Nevertheless, your question would have yielded a better article.
> but why do we collectively place higher security requirements on webhook requests than API requests?
We really don’t; signing is just more convenient in the webhook scenario. And it’s also completely optional to check a signature, which leads to many implementations not doing so.