Over the years I kept running into the same problems when testing repository code:

- real database tests are slow
- Docker/Testcontainers add friction and startup time
- mocks don't validate SQL behavior
- SQLite introduces dialect differences
- integration tests come too late for fast feedback
So I built an in-memory SQL engine focused on validating SQL behavior during unit tests.
It plugs in through IDbConnection, so repository code (Dapper or raw SQL) runs unchanged.
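As a sketch of the intended usage: since the engine sits behind IDbConnection, a Dapper-based repository test needs no mocking and no server. Note that `DbSqlLikeMemConnection` and `SqlDialect` are hypothetical names I'm using for illustration, not necessarily the project's real API; only the Dapper calls (`Execute`, `QuerySingle`) are standard.

```csharp
using System.Data;
using Dapper;
using Xunit;

public class UserRepositoryTests
{
    [Fact]
    public void Insert_then_query_roundtrips()
    {
        // Hypothetical: an IDbConnection implementation from the engine.
        // The type name and dialect argument are assumptions, not the real API.
        using IDbConnection conn = new DbSqlLikeMemConnection(SqlDialect.MySql);

        conn.Execute("CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(50) NOT NULL)");
        conn.Execute("INSERT INTO users (id, name) VALUES (@Id, @Name)",
                     new { Id = 1, Name = "Ada" });

        // Ordinary Dapper code runs unchanged against the in-memory engine.
        var name = conn.QuerySingle<string>("SELECT name FROM users WHERE id = @Id",
                                            new { Id = 1 });

        Assert.Equal("Ada", name);
    }
}
```

Because the test only depends on IDbConnection, the same repository code can later run against a real provider in integration tests.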
What it does
- emulates SQL dialect behavior
- validates constraints and query logic
- supports multiple providers (MySQL, SQL Server, PostgreSQL, Oracle, SQLite, DB2)
- simulates version-specific behavior
- provides execution metrics and plan insights
- runs entirely in memory (no infrastructure)
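If constraint validation works as described, a unit test can assert that invalid writes fail, without a database server. Again, the connection type and the exact exception the engine throws are my assumptions here, which is why the test only asserts that *some* exception is raised:

```csharp
using System;
using System.Data;
using Dapper;
using Xunit;

public class ConstraintTests
{
    [Fact]
    public void Duplicate_primary_key_is_rejected()
    {
        // Hypothetical connection type; the real API may differ.
        using IDbConnection conn = new DbSqlLikeMemConnection(SqlDialect.SqlServer);

        conn.Execute("CREATE TABLE orders (id INT PRIMARY KEY)");
        conn.Execute("INSERT INTO orders (id) VALUES (1)");

        // A second insert with the same key should fail, just as the target
        // database would reject it. The concrete exception type is an assumption.
        Assert.ThrowsAny<Exception>(() =>
            conn.Execute("INSERT INTO orders (id) VALUES (1)"));
    }
}
```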
The goal is not to replace integration tests.
It’s to catch SQL logic issues earlier and keep tests fast, deterministic, and portable.
Why not SQLite in-memory?
SQLite is great, but testing against it means testing SQLite semantics.
Teams often hit:
- type mismatches
- dialect drift
- query compatibility differences
- adaptation layers in test code
This approach keeps behavior aligned with the target database's rules instead.
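To make the dialect-drift point concrete, here are a few well-known places where SQL that passes on SQLite diverges from what the target server would accept or do:

```sql
-- Paging: different syntax per dialect
SELECT TOP 10 * FROM users;            -- SQL Server
SELECT * FROM users LIMIT 10;          -- SQLite, MySQL, PostgreSQL

-- String concatenation
SELECT 'a' + 'b';                      -- SQL Server
SELECT 'a' || 'b';                     -- SQLite, PostgreSQL, Oracle
SELECT CONCAT('a', 'b');               -- MySQL

-- Typing: SQLite uses dynamic typing (type affinity), so this INSERT
-- succeeds even though the column is declared INT, where a server like
-- SQL Server would reject the conversion.
CREATE TABLE t (n INT);
INSERT INTO t (n) VALUES ('not a number');
```

A test suite green on SQLite can therefore still hide queries that break, or silently behave differently, on the production database.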
Current work
I’m working on tooling to reduce friction further:
- Visual Studio and VS Code extensions
- mapping tables, views, and procedures locally
- generating in-memory schemas automatically
- keeping test schemas aligned with real databases
Background
The core originated from a previous internal project and includes extensive compatibility tests to validate behavior across database versions and frameworks.
Looking for feedback
Curious how others handle SQL testing trade-offs.
If this problem space resonates with you, contributions and real-world scenarios are very welcome. The project is open source and evolving; additional dialect behaviors, edge cases, and tooling improvements would make it more useful for everyone.
Repo: https://github.com/christianulson/DbSqlLikeMem
NuGet: https://www.nuget.org/packages/DbSqlLikeMem