The problem: investors want to see your actual product working with real data, but showing real dashboards means exposing credentials, API keys, client data, or internal systems on a shared screen.
The usual options all have problems:
- Demo environment with fake data → looks staged, kills credibility
- Real product with real data → security risk, one screenshot away from an incident
- Pre-recorded walkthrough → can't answer specific questions or show interactivity
Curious how others handle this. Do you just accept the risk? Build sophisticated demo infrastructure? Something else entirely?
Then you can run Mockaton with those mocks. You'll have to manually anonymize the sensitive parts, though.
Also, you can compile your frontend(s) and copy their assets, so you can deploy a standalone demo server. See the last section of: https://mockaton.com/motivation
Mocks don't have to be fully static; it supports function mocks, which are HTTP handlers.
For demoing, the dashboard has a feature for bulk-selecting mocks by a comment tag.
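For reference, spinning it up against a folder of captured responses is small. A minimal sketch (option names like mocksDir and port are taken from the README examples, so double-check them there):

  // serve-demo.js — minimal sketch; verify the option names against the Mockaton README
  import { resolve } from 'node:path'
  import { Mockaton } from 'mockaton'

  Mockaton({
    mocksDir: resolve('./mocks'), // anonymized responses captured from the real API
    port: 2345                    // point the frontend's API base URL at this port
  })

The files in that mocks folder are what you'd hand-anonymize before the demo.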
The challenge I kept running into was the frontend side during live screen shares. Even with mocked APIs, I'd have credentials visible in browser tabs, notifications popping up with client names, or sidebar elements showing sensitive info.
Did you find Mockaton solved the full screen-share exposure problem, or did you combine it with other approaches?
1. If the frontend is directly fetching from a third-party API: maybe you could add an env var with the base URL, so it points to the mock server (there's a small sketch of this at the end of this comment).
2. If it’s a third-party auth service
2a. If the auth service sets a cookie with a JWT, you could inject that cookie with Mockaton like this: https://github.com/ericfortis/mockaton/blob/354d97d6ea42088b...
2b. If it doesn't set a cookie (some SSO providers set it in `sessionStorage`), and assuming it's a React app with an <AuthProvider>, you might need to refactor the entry component (<App/>) so you can bypass it, e.g.:

  SKIP_AUTH // env var
    ? <MyApp/>
    : <AuthProvider><MyApp/></AuthProvider>
Then, instead of using the third-party hook directly (e.g., useAuth), create a custom hook that falls back to a mocked object when there's no AuthContext. Something like:

  import { useContext } from 'react'
  // AuthContext is whatever your auth library (or your own context module) exports

  function useUser() {
    const context = useContext(AuthContext)
    if (!context)
      return {
        id_token: 'aa',
        profile: { name: 'John' }
      }
    return {
      id_token: context.id_token ?? '',
      profile: context.profile ?? {},
    }
  }
How much overhead did that add to your development workflow? I'm curious if building and maintaining that parallel demo infrastructure became its own project, or if it stayed lightweight.
Also, did you use this for investor demos specifically, or more for development/QA?
This let it be “simple” in terms of how it generated content, and “complicated” only in terms of what content it needed to create and how it was interconnected. Patient profiles were simple to define, but they were completely different from, say, the medications they were prescribed or the appointments that had been scheduled, or the connections between appointments and prescriptions.
So yeah, generating data was simple; defining what data to generate, and in what patterns, was a lot more difficult. Sometimes things that should have been related could only be generated in isolation from each other because of how that part of the generation tooling was assembled.
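A toy sketch of that interconnection problem (nothing like the real schema — made-up entity shapes, using @faker-js/faker):

  import { faker } from '@faker-js/faker'

  // Generating a patient in isolation is the easy part
  function makePatient() {
    return {
      id: faker.string.uuid(),
      name: faker.person.fullName(),
      dob: faker.date.birthdate(),
    }
  }

  // The hard part is consistency across related records: an appointment must
  // reference a real patient, and a prescription must reference both that
  // patient and the appointment where it was written
  function makeEncounter(patient) {
    const appointment = {
      id: faker.string.uuid(),
      patientId: patient.id,
      date: faker.date.soon({ days: 30 }),
    }
    const prescription = {
      id: faker.string.uuid(),
      patientId: patient.id,
      appointmentId: appointment.id, // the cross-link isolated generators tend to lose
      drug: faker.helpers.arrayElement(['lisinopril', 'metformin', 'atorvastatin']),
    }
    return { appointment, prescription }
  }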
This was almost 100% used by developers and QA. Outside demos had a special DB used by sales with much more consistent data, albeit much smaller. The generator was meant to create _large_ data sets, just not very _pretty_ data sets.
Where it broke down for me: investors with technical backgrounds would ask edge case questions ("show me how this handles 10K records" or "what does error handling look like with real load?"). The fake environment couldn't simulate that complexity authentically.
The other issue was muscle memory. When I'm demoing something I use daily, I'm fast and fluent. In a fake environment, I'd hesitate or click wrong because it's not my real workflow. Investors noticed.
Have you found ways around those issues?
Presumably the issue here is that you have customers with >10k records, but can't show them. Why not take their data and anonymize it, then put it under a fake customer?
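Concretely, something like this (file and field names are made up; the point is to keep volume and relationships intact while scrubbing anything identifying):

  import { readFileSync, writeFileSync } from 'node:fs'

  // Raw export from the production DB (hypothetical file name)
  const realRecords = JSON.parse(readFileSync('customer-export.json', 'utf8'))

  const demoRecords = realRecords.map((record, i) => ({
    ...record,                      // keep counts, timestamps, and foreign keys
    customerId: 'demo-customer',    // re-home everything under one fake customer
    name: `Contact ${i + 1}`,       // scrub identifying fields
    email: `contact${i + 1}@example.com`,
  }))

  writeFileSync('demo-seed.json', JSON.stringify(demoRecords, null, 2))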
> "what does error handling look like with real load?"
I find it hard to believe that anyone is making an investment decision off of this question, but how would you demo this with a real customer anyway? Intentionally introduce a bug so that you can show them how errors are handled? Wouldn't the best course of action here be to just describe the error handling?