Launch HN: Reality Defender (YC W22) – API for Deepfake and GenAI Detection
56 points
6 hours ago
| 12 comments
| realitydefender.com
Hi HN! This is Ben from Reality Defender (https://www.realitydefender.com). We build real-time multimodal and multi-model deepfake detection for Fortune 100s and governments all over the world. (We even won the RSAC Innovation Showcase award for our work: https://www.prnewswire.com/news-releases/reality-defender-wi...)

Today, we’re excited to share our public API and SDK, allowing anyone to access our platform with 2 lines of code: https://www.realitydefender.com/api

Back in W22, we launched our product to detect AI-generated media across audio, video, and images: https://news.ycombinator.com/item?id=30766050

That post kicked off conversations with devs, security teams, researchers, and governments. The most common question: "Can we get API/SDK access to build deepfake detection into our product?"

We’ve heard that from solo devs building moderation tools, fintechs adding ID verification, founders running marketplaces, and infrastructure companies protecting video calls and onboarding flows. They weren’t asking us to build anything new; they simply wanted access to what we already had so they could plug it in and move forward.

After running pilots and engagements with customers, we’re finally ready to share our public API and SDK. Now anyone can embed deepfake detection with just two lines of code, starting at the low price of free.

https://www.realitydefender.com/api

Our new developer tools support detection across images, voice, video, and text, with images and voice available on the free tier. If your product touches KYC, UGC, support workflows, communications, marketplaces, or identity layers, you can now embed real-time detection directly in your stack. It runs in the cloud, and longstanding clients using our platform have also deployed on-prem, at the edge, or on fully airgapped systems.

SDKs are currently available in Python, Java, Rust, TypeScript, and Go. The first 50 scans per month are free, with usage-based pricing beyond that. If you’re working on something that requires other features or streaming access (like real-time voice or video), email us directly at yc@realitydefender.com.
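
Here’s roughly what an integration looks like in Python (the client and method names below are illustrative rather than the exact SDK surface, so check the API docs for the real calls):

    # Sketch only: names are assumptions, not copied from the SDK reference.
    import os
    from realitydefender import RealityDefender  # hypothetical import path

    client = RealityDefender(api_key=os.environ["REALITY_DEFENDER_API_KEY"])
    result = client.detect_file("suspicious_selfie.png")  # image, audio, or video
    print(result.status, result.score)                    # e.g. "MANIPULATED", 0.97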

Much has changed since 2022. The threats we imagined back then now show up in everyday support tickets and incident reports. We’ve seen voice deepfakes targeting bank call centers to commit real-time fraud; fabricated documents and AI-generated selfies slipping through KYC and IDV onboarding systems; and fake dating profiles, AI-generated marketplace sellers, and “verified” influencers impersonating real people. Political disinformation videos and synthetic media leaks have triggered real-world legal and PR crises. Even reviews, support transcripts, and impersonation scripts are increasingly being generated by AI.

Detection is harder than we expected when we began in 2021. New generation methods emerge every few weeks and invalidate prior assumptions. This is why we are committed to building every layer of this ourselves: we don’t license or white-label detection models; everything we deploy is built in-house by our team.

Since our original launch, we’ve worked with tier-one banks, global governments, and media companies to deploy detection inside their highest-risk workflows. But we always believed the need wasn’t limited to large institutions; it was everywhere. It showed up in YC office hours, in early bug reports, and in group chats after our last HN post.

We’ve taken our time to make sure this was built well, flexible enough for startups, and battle-tested enough to trust in production. The API you can use today is the same one powering many of our enterprise deployments.

Our goal is to make Reality Defender feel like Stripe, Twilio, or Plaid — an invisible, trusted layer that you can drop into your system to protect what matters. We feel deepfake detection is a key component of critical infrastructure, and like any good infrastructure, it should be modular, reliable, and boring (in the best possible way).

Reality Defender is already in the Zoom marketplace and will be on the Teams marketplace soon. We will also power deepfake detection for identity workflows, support platforms, and internal trust and safety pipelines.

If you're building something where trust, identity, or content integrity matter, or if you’ve run into weird edge cases you can’t explain, we’d love to hear from you.

You can get started here: https://realitydefender.com/api

Or you can try us for free two different ways:

1) 1-click add to Zoom / Teams to try in your own calls immediately.

2) Email us up to 50 files at yc@realitydefender.com and we’ll scan them for you — no setup required.

Thanks again to the HN community for helping launch us three years ago. It’s been a wild ride, and we’re excited to share something new. We live on HN ourselves and will be here for all your feedback. Let us know what you think!

primitivesuave
4 hours ago
[-]
First want to say that I sincerely appreciate you working on this problem. The proliferation of deepfakes is something that virtually every technology industry is dealing with right now.

Suppose that deepfake technology progressed to the point where it is still detectable by your technology, but impossible to detect with the naked eye. In that scenario (which many would call an eventuality), wouldn't you also be compelled to serve as an authoritative entity on the detection of deepfakes?

Imagine a future politician who is caught on video doing something scandalous, or a court case where someone is questioning the veracity of some video evidence. Are the creators of deepfake detection algorithms going to testify as expert witnesses, and how could they convince a human judge/jury that the output of their black box algorithm isn't a false positive?

reply
bpcrd
1 hour ago
[-]
Thank you. As an inference-based detection platform, our models go into every scan assuming that the file is not the original/ground truth and that it has likely been transcoded. We never say something is 0% or 100% fake because we don’t have that ground truth. Instead, our award-winning models return a confidence score from 1-99% (higher meaning more likely manipulated), which is sent to the team using the detection to act on as they see fit. Some use it as one of many signals to make an informed decision manually. Others have chosen to moderate or label accordingly. There are experts who’ve been called to testify on matters like this one, and some of them work on these very models.
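
To make that concrete, a downstream team might map the score to actions roughly like this (the thresholds are invented for illustration, not our guidance):

    # Illustrative only: thresholds are hypothetical, not Reality Defender recommendations.
    def action_for(confidence: int) -> str:
        if confidence >= 90:
            return "block_and_review"        # very likely manipulated
        if confidence >= 60:
            return "flag_for_manual_review"  # use as one signal among many
        return "allow"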

As for synthetic content that is undetectable to the naked eye or ear, we are already there.

reply
taneq
5 hours ago
[-]
Yeah but does it actually work, though? There have been a lot of online tools claiming to be "AI detectors" and they all seem pretty unreliable. Can you talk us through what you look for, the most common failure modes, and (at a suitably high level) how you dealt with those?
reply
bpcrd
3 hours ago
[-]
We've actually deployed to several Tier 1 banks and large enterprises already for various use cases (verification, fraud detection, threat intelligence, etc.). The feedback we've gotten so far is that our technology is highly accurate and a useful signal.

In terms of how our technology works, our research team has trained multiple detection models to look for specific visual and audio artifacts that the major generative models leave behind. These artifacts aren't perceptible to the human eye / ear, but they are actually very detectable to computer vision and audio models.

Each of these expert models gets combined into an ensemble system that weighs all the individual model outputs to reach a final conclusion.
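
Very roughly (and leaving out the real details), the combination step looks like a weighted vote; the weights and model names below are purely illustrative:

    # Purely illustrative: real weights are learned, and the model set differs per modality.
    def ensemble_score(model_outputs: dict[str, float]) -> float:
        weights = {"visual_artifacts": 0.4, "frequency_artifacts": 0.35, "audio_artifacts": 0.25}
        present = {name: score for name, score in model_outputs.items() if name in weights}
        total_weight = sum(weights[name] for name in present)
        # Weighted average of per-model "likely manipulated" scores in [0, 1].
        return sum(weights[name] * score for name, score in present.items()) / total_weight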

We've got a rigorous process of collecting data from new generators, benchmarking them, and retraining our models when necessary. Often retrains aren't needed though, since our accuracy seems to transfer well across a given deepfake technique. So even if new diffusion or autoregressive models come out, for example, the artifacts tend to be similar and are still caught by our models.

I will say that our models are most heavily benchmarked on convincing audio/video/image impersonations of humans. While we can return results for items outside that scope, we've tended to focus training and benchmarking on human impersonations since that's typically the most dangerous risk for businesses.

So that's a caveat to keep in mind if you decide to try out our Developer Free Plan.

reply
asail77
4 hours ago
[-]
Give it a try for yourself. It's free!

We have been working on this problem since 2020 and have created and trained an ensemble of AI detection models working together to tell you what is real and what is fake!

reply
seanw265
3 hours ago
[-]
How do you prevent bad actors from using your tools as a feedback loop to tune models that can evade detection?
reply
lja
3 hours ago
[-]
You would need thousands to tens of thousands of images, not just 50, to produce an adversarial network that could use the API as a check.

If someone wanted to buy it, I'm sure Reality Defender has protections, especially because you can predict adversarial guesses.

It would be trivial for them to build a check for "this user is sending progressively more realistic content at a rapid rate" if they haven't built that already.

reply
bpcrd
3 hours ago
[-]
We see who signs up for Reality Defender and can instantly notice traffic patterns and other abnormalities that show us when an account is in violation of our terms of service. Also, our free tier is capped at 50 free scans a month, which will not allow attackers to discern any tangible learnings or tactics they could use to bypass our detection models.
reply
BananaaRepublik
5 hours ago
[-]
Won't this just become the fitness function for training future models?
reply
bee_rider
4 hours ago
[-]
Just based on the first post, where they talk about their API a bit, it sounds like a system hosted on their machines(?). So, I assume AI trainers won’t be able to run it locally, to train off it.

Although, I always get a bad smell from that sort of logic, because it feels vaguely similar to security through obscurity in the sense that it relies on the opposition now knowing what you are doing.

reply
chrisweekly
4 hours ago
[-]
now -> not
reply
bee_rider
4 hours ago
[-]
True, haha. Although “the opposition now knowing what you are doing” is the big danger for this sort of scheme!
reply
Grimblewald
4 hours ago
[-]
I feel like a much easier solution is enforcing data provenance. SSL for media: sign a hash, attach it to the metadata. The problem with AI isn't that it's AI, it's that people can invest little effort to sway things with undue leverage. A single person can look like hundreds with significantly less effort than previously. The problem with AI content is that it makes abuse of public spaces much easier. Forcing people to take credit for work produced makes things easier (not solved), kind of like email. Being able to block media by domain would be a dream, but spam remains an issue.

So, tie content to domains. A domain vouching for content works like that content having been a webpage or email from said domain. A signed hash in metadata is backwards compatible, and it's easy to make browsers etc. display warnings on unsigned content, content from new domains, blacklisted domains, and so on.
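
Something like this, as a sketch (key publication and discovery are hand-waved here; think DKIM for media):

    # Sketch: a domain signs the SHA-256 of the media file; the signature plus the domain
    # name ride along in metadata, and clients verify against the domain's published key.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()  # in practice: the domain's long-lived key pair
    digest = hashlib.sha256(open("clip.mp4", "rb").read()).digest()
    signature = private_key.sign(digest)        # embed signature + "example.com" in the file's metadata
    # A browser would fetch example.com's public key, verify the signature, and warn on
    # unsigned or mismatched content.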

The benefit here is that while we'll have more false negatives, unlike something like this tool it does not cause real harm on false positives, which will be numerous if it wants to be better than simply making someone accountable for media.

AI detection cannot work, will not work, and will cause more harm than it prevents. Stuff like this is irresponsible and dangerous.

reply
bpcrd
1 hour ago
[-]
I understand the appeal of hashing-based provenance techniques, though they’ve faced some significant challenges in practice that render them ineffective at best. While many model developers have explored these approaches with good intentions, we’ve seen that they can be easily circumvented or manipulated, particularly by sophisticated bad actors who may not follow voluntary standards.

We recognize that no detection solution is 100% accurate. There will be occasional false positives and negatives. That said, our independently verified and internal testing shows we’ve achieved the lowest error rates currently available for deepfake detection.

I’d respectfully suggest that dismissing AI detection entirely might be premature, especially without hands-on evaluation. If you’re interested, I’d be happy to arrange a test environment where you could evaluate our solution’s performance firsthand and see how it might fit your specific use case.

reply
darenfrankel
4 hours ago
[-]
I worked in the fraud space and could see this being a useful tool for identifying AI generated IDs + liveness checks. Will give it a try.
reply
m4tthumphrey
5 hours ago
[-]
I feel like this will be the next big cat-and-mouse saga after ad-blockers:

1) Produce AI tool
2) Tool gets used for bad
3) Use anti-AI/AI detection to avoid/check for AI tool
4) AI tool introduces anti-anti-AI/detection tools
5) Repeat

reply
coeneedell
2 hours ago
[-]
This is definitely a concern, but this is more or less how the cybersecurity space already works. Having dedicated researchers and a good business model helps a lot for keeping detectors like RD on the forefront of capabilities.
reply
AlecSchueler
2 hours ago
[-]
It's sadly not often that I see a young company doing work that I feel only benefits society, but this is one of those times, so thank you and congratulations.
reply
bpcrd
1 hour ago
[-]
Thank you! We’ve been working on this since 2021 (and some of us a bit before that), and we’re reminded every day that we are ultimately working on something that helps people at both the macro and micro level. We want a world free of the malevolent uses of deepfakes for ourselves, our loved ones, and everyone beyond, and we feel everyone should have access to such protection.
reply
abhisek
5 hours ago
[-]
About time. Much needed. I just wish this was open source and built in public.

It's on my todo list to build a bot that finds sly AI responses for engagement farming.

reply
DonHopkins
1 hour ago
[-]
How easy is it to fool Reality Defender into making false positives?

Whenever I'm openly performing nefarious illegal acts in public, I always wear my Sixfinger, so if anyone takes a photo of me, I can plausibly deny it by pointing out (while not wearing it) that the photo shows six fingers, and obviously must have been AI generated.

In support of said nefarious illegal acts, the Sixfinger includes a cap-loaded grenade launcher, gun, fragmentation bomb, ballpoint pen, code signaler, and message missile launcher. It's like a Swiss Army Finger! You can 3d print a cool roach clip attachment too.

"How did I ever get along with five???"

https://www.youtube.com/watch?v=ElVzs0lEULs

https://www.museumofplay.org/blog/sixfinger-sixfinger-man-al...

https://www.museumofplay.org/app/uploads/2010/11/Sixfinger-p...

reply
bpcrd
23 minutes ago
[-]
I understand this is in jest, but unfortunately AI generation tools more or less solved the six-finger issue a couple of years ago. We are decidedly not a model for the express detection of finger abnormalities, but a multi-model and multimodal detection platform, driven by our public API (which you can try for free right now, btw), that uses many different techniques to differentiate between content that is likely manipulated and content that likely is not.

That said, neat gag.

reply
candiddevmike
5 hours ago
[-]
On a 2k desktop using Chrome, your website font/layout is way too big, especially your consent banner--it takes up 1/3 of the screen.
reply
viggity
5 hours ago
[-]
I think most companies consider that a feature, not a bug. People are more likely to hit accept if they can only see a small chunk of the content.
reply
colehud
5 hours ago
[-]
And the scrolling behavior is infuriating
reply
caxco93
4 hours ago
[-]
please do not hijack the scroll wheel
reply
jihadjihad
2 hours ago
[-]
It's like scrolling through molasses.
reply