Works with any PG version today. Each branch is a fully isolated PostgreSQL container with its own port. Branching takes ~2-5 seconds for a 100GB database.
https://github.com/elitan/velo
Main difference from PG18's approach: you get complete server isolation (useful for testing migrations, different PG configs, etc.) rather than databases sharing one instance.
Also agentic coding detractors: "How dare you use AI to help build a new open source project."
I'm joking and haven't read the comments you're referring to, but whether or not AI was involved is irrelevant per se. If anyone finds themselves having a gut reaction to "AI", just mentally replace it with "an intern" or "a guy from Fiverr". Either way, the buck stops with whoever is taking ownership of the project.
If the code/architecture is buggy or unsafe, call that out. If there's a specific reason to believe no one with sufficient expertise reviewed and signed off on the implementation, call that out. Otherwise, why complain that someone donated their time and expertise to give you something useful for free?
The issue of quality makes sense, since it's so easy to build these days, but when the product is open source, these "vibe coded" comments make no sense. Users can literally go read the code, or (my favorite) Repomix it, pop it into AI Studio, and ask Gemini what this person has built, what value it brings, and whether it solves the problem they have.
For vibe-coded proprietary apps you can't do that, so there the comments are sort of justified.
Don't worry about these trolls.
Mind you, I'm not saying it's bad per se. But shouldn't we be open and honest about this?
I wonder if this is the new normal. Somebody says "I built Xyz" but then you realize it's vibe coded.
In such cases the person still says, "I have built this building." People who found companies say they have built companies. It's commonly accepted in our society.
So even if Claude built it for GP, as long as GP designed it, paid for the tools (Claude) to build it, and tested it to make sure that it works, I personally think he has the right to say he built it.
If you don't like it, you are not required to use it.
But here's the problem. Five years ago, when someone on here said, "I wrote this non-trivial software", the implication was that a highly motivated and competent software engineer put a lot of effort into making sure that the project meets a reasonable standard of quality and will probably put some effort into maintaining the project.
Today, it does not necessarily imply that. We just don't know.
People using LLMs to code these days is similar to how the majority of people stopped using assembly and moved to C and C++, then to garbage-collected languages and dynamically typed languages. People were always looking for ways to make programmers more productive.
Programming is evolving. LLMs are just the next generation of programming tools. They make programmers more productive, and in the majority of cases people and companies are going to use them more and more.
I'm just saying that we don't know how much effort was put into making this and we don't know whether it works.
The existence of a repository containing hundreds of files, thousands of SLOC, and a folder full of tests tells us less today than it used to.
There's one thing in particular that I find quite astonishing sometimes. I don't know about this particular project, but some people use LLMs to generate both the implementation and the test cases.
What does that mean? The test cases are supposed to be the formal specification of our requirements. If we do not specify formally what we expect a tool to do, how do we know whether the tool has done what we expected, including in edge cases?
> The test cases are supposed to be the formal specification of our requirements
Formal methods folks would strongly disagree with this statement. Tests are informal specifications in the sense that they don't provide a formal (mathematically rigorous) description of the full expected behavior of the system. Instead, they offer a mere glimpse into what we hope the system would do.
And that's an important part, which is where your main point stands. The test is what confirms that the thing the LLM built behaves the way the human expected in the cases they cared about. That's why the human needs to provide the tests.
(The human could take the help of an LLM to write the tests, as in giving an even-more-informal natural-language description of what the test should do. But the human then needs to make sure that the test really does that, and maybe fill in some gaps.)
You never knew. There are plenty of intelligent, well-intentioned software engineers who publish FOSS that is buggy and doesn't meet some arbitrary quality standards.
That is entirely an assumption on the part of the reader. Nothing about someone saying "I built this complicated thing!" implies competence, or any desire to maintain it beyond building it.
The problem you're facing is survivorship bias. You can think of lots of examples of where that has happened, and very few where it hasn't, because when the author of the project is incompetent or unmotivated the project doesn't last long enough for you to hear about it twice.
I disagree. The fact that someone has written a substantial amount of non-trivial code does imply a higher level of competence and motivation compared to not having done that.
Most of the vibe-code I’ve seen so far appears functional to the point that people will defend it, but if you take a closer look it’s a massively over complicated rat’s nest that would be difficult for a human to extend or maintain. Of course you could just use more AI, but that would only further amplify these problems.
If someone puts weeks and months of their time into building something, then I'm willing to take that as proof of their motivation to create something good.
I'm also willing to take the existence of non-trivial code that someone wrote manually as proof of some level of competence.
The presence of motivation + competence makes it more likely that the result could be something good.
> No human expert involved
You don’t know this, you are just hating.
Besides the close review and specification that may be conducted with agents, even if you handwrite / edit code, it will say that it was co-authored by the agent if you have the agent do the commit for you.
When you order a website on Upwork - you didn't build it. You bought it.
Meanwhile this vibe coded nonsense is provided “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. We don’t even know if he read it before committing and pushing.
Quality of the software comes from testing. Humans and LLMs both make mistakes while coding.
There are of course projects that operate at higher development specification standards, often in the military or banking. This should be extended to all vehicles and invasive medical devices.
If someone used AI, it is a good discussion whether they should explicitly disclose it, but people have been using assistive tools - auto-complete, text expanders, IDE refactoring tools - for a while, and you wouldn't comment that they didn't build it. The lines are becoming more blurry over time, but it is ridiculous to claim that someone didn't build something just because they used AI tools.
Do you take issue with a CNC machinist stating that they made something, rather than stating that they did the CAD and CAM work but that it was the CNC machine that made the part?
Non-zero delegation doesn’t mean that the person(s) doing the delegating have put zero effort into making something, so I don’t think that delegation makes it dishonest to say that you made something. But perhaps you disagree. Or, maybe you think the use of AI means that the person using AI isn’t putting any constructive effort into what was made — but then I’d say that you’re likely way overestimating the ability of LLMs.
> Nowhere have I claimed that they didn't put work into this.
There's some mental gymnastics.
> please counter the opinion that I gave
The reply you're responding to did exactly that, and you just gave more snarky responses.
Everybody in the industry is vibecoding right now; the things that stick are the ones that had sufficient quality pushed onto them. Having a pessimistic, judgmental surface reaction to everything as "AI slop" is not a behavior I'm going to look up to.
Why is good faith a requirement for commenting but not for submissions? I would argue the good-faith assumption should be disproportionately more important for submissions, given the one-to-many relationship. You're not lying: it indeed is toxic and rapidly spreading. I'm glad this is the case.
Most came here for discussion and enlightenment, only to be bombarded by heavily biased, low-effort marketing bullshit. Presenting something that has no value to anyone besides the builder is the opposite of good faith. These submissions bury and crowd out useful discussion; it's difficult to claim they are harmless, merely not useful.
Not everyone in the industry is vibe coding; that is simply not true. But that's not the point I want to make. You don't need to be defensive about your generative-tool usage; it is ok to use whatever, nobody cares. Just be ready to maintain your position and defend your ideals. Nothing is more frustrating than giving honest attention to a problem, considering someone else's perspective, only to then realize it was just words, words, words spewed by a slop machine. Nobody would give it a second thought if that was disclosed. You are responsible for your craft. The moment you delegate that responsibility, into the trash you belong. If the slop machine is so great, why in hell would I need you to ask it to help me? Nonsensical.
The reason this discussion is pathetic is that it shifts the conversation from the main topic (here, a database implementation) to a reactionary, emotive reflex with no grace or eloquence, one that is mostly driven by pop culture at this point, with a justification that mostly serves the ego.
There is no point in putting yourself above someone else just to justify your behavior; in fact, it only tells me what kind of person you were in the first place. And as I said, this is not the kind of attitude that I look up to.
no, ‘everybody’ is not. a lot of us are using zero LLMs and continuing to build (quality) software just fine
https://github.com/elitan/velo/blame/12712e26b18d0935bfb6c6e...
And are we really doing this? Do we need to admit how every line of code was produced? Why? Are you expecting to see "built with the influence of Stack Overflow answers" or "Google searches" on every single piece of software ever? It's an exercise in pointlessness.
> We would like to acknowledge the open source people, who are the traditional custodians of this code. We pay our respects to the stack overflow elders, past, present, and future, who call this place, the code and libraries that $program sits upon, their work. We are proud to continue their tradition of coming together and growing as a community. We thank the search engine for their stewardship and support, and we look forward to strengthening our ties as we continue our relationship of mutual respect and understanding
Then if you would kindly say that a Brazilian invented the airplane that would be good too. If you don’t do this you should be cancelled for your heinous crime.
lol, good one!
last I checked, the Wright brothers used a catapult while Santos-Dumont made a plane that took off by itself.
We wouldn’t have called it reviewed in the old world, but in the AI coding world we’re now in it makes me realise that yes, it is a form of reviewing.
I use Claude a lot btw. But I wouldn’t trust it on mission critical stuff.
Again, I love Claude, I use it a ton, but a topic like database cloning requires a certain rigour in my opinion. This repo does not seem to have it. If I had hired a consultant to build a tool like this and would receive this amount of vibe coding, I’d feel deceived. I wouldn’t trust it on my critical data.
This is where it belongs, at best. He doesn't even have to disclose it. Prompting so that the AI writes the code faster than you would is okay.
Or at least I cannot come up with a use case for prod.
From that perspective, it feels like it'd be a perfect use case to embrace the LLM-guided development jank.
App migrations that may fail and need a rollback have the problem that you may not be allowed to wipe any transactions, so you may want to be writing data to a parallel world that didn't migrate.
This is why migrations are supposed to be backwards compatible
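A minimal illustration (hypothetical table and column names): the compatible version adds alongside instead of renaming in place, so app code deployed before the migration keeps working:
-- backwards-compatible: old app code never sees the new column
ALTER TABLE users ADD COLUMN email_verified boolean DEFAULT false;
-- NOT backwards-compatible: old app code still reads "email" and breaks
-- ALTER TABLE users RENAME COLUMN email TO email_address;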
You can certainly bet that you followed that advice correctly, but what are the odds you could test a what-if like that in sufficient depth?
> Eh, DB branching is mostly only necessary for testing - locally
For local DBs, when I break them, I stop the Docker image and wipe the volume mounts, then restart and apply the "migrations" folder (minus whatever new broken migration caused the issue).
The steps were basically:
1. Clone the AWS RDS DB, or spin up a new instance from a fresh backup.
2. Get the ARN and, from that, the CNAME or public IP.
3. Plug that into the DB connection in your app.
4. Run the migration on pseudo-prod.
This helped us catch many bugs that were specific to the production DB or its data quirks and would never have been caught locally or even in CI.
Then I created a simple Ruby script to automate the above and threw it into our integrity checks before any deployment. Last I heard, they were still using that script I wrote in 2016!
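For reference, the automated version could look roughly like this with today's AWS CLI (identifiers made up; the final migrate command depends on your stack):
# steps 1+2: restore a throwaway instance from a snapshot and grab its endpoint
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier pseudo-prod \
    --db-snapshot-identifier prod-nightly
aws rds wait db-instance-available --db-instance-identifier pseudo-prod
HOST=$(aws rds describe-db-instances --db-instance-identifier pseudo-prod \
    --query 'DBInstances[0].Endpoint.Address' --output text)
# steps 3+4: point the app at it and run the migration
DATABASE_URL="postgres://app:secret@$HOST:5432/app" bin/rails db:migrate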
Back in the day (2013?) I worked at a startup where the resident Linux guru had set up "instant" staging-environment databases with btrfs. Really cool to see the same idea show up over and over with slightly different implementations. Speed and ease of cloning/testing is a real advantage for Postgres and SQLite; I wish it were possible to do similar things with ClickHouse, MySQL, etc.
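If anyone wants to try the btrfs trick, it's essentially one command, assuming the data directory sits on its own subvolume (paths made up):
# copy-on-write snapshot: near-instant regardless of database size
# (take it with the source stopped, or accept crash recovery on startup)
btrfs subvolume snapshot /srv/pg/main /srv/pg/staging
# run a second Postgres on the snapshot, on its own port
pg_ctl -D /srv/pg/staging -o "-p 5433" start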
I'm wondering why anyone would want to use anything else at this point (for SQL).
* MySQL has a much easier story of master-master replication.
* Mongo has a much easier story of geographic distribution and sharding. (I know that Citus exists, and I have used it.)
* No matter how you tune Postgres, columnar databases like ClickHouse are still faster for analytics / time series.
* Write-heavy applications still may benefit from something like Cassandra, or more modern solutions in this space.
(I bet Oracle has something to offer in the department of cluster performance, too, but I did not check it out for a long time.)
Any non-trivial amount of data and you’ll run into non-trivial problems.
For example, some of our pg databases got into such a state that we had to write a custom migration tool, because we couldn't copy the data to a new instance using standard tools. We had to rewrite the schema to use custom partitions because perf on built-in partitioning degrades as the number of partitions gets high, and so on.
It will take years to call this mature. Certainly not "soon"
Personally, I wouldn't use any SQL DB other that PostgreSQL for the typical "database in the cloud" use case, but I have years of experience both developing for and administering production PostgreSQL DBs, going back to 9.5 days at least. It has its warts, but I've grown to trust and understand it.
Raised an issue in my previous pet project for doing concurrent integration tests with real PostgreSQL DBs (https://github.com/allaboutapps/integresql) as well.
There's nothing technically that should prevent this if they are using HAMTs underneath, so I'm guessing they just didn't care about the feature. With HAMT, cloning any part of the data structure, no matter how nested, is just a pointer copy. This is more useful than you'd think but hardly any database makes it possible.
Also, the Docker link seems to be broken.
Something we've been trying to solve for a long time is having instant DB resets between acceptance tests (in CI or locally) back to our known fixture state, but right now it takes decently long (like half a second to a couple seconds, I haven't benchmarked it in a while) and that's by far the slowest thing in our tests.
I just want fast snapshotted resets/rewinds to a known DB state, but I need to be using MariaDB since it's what we use in production; we can't switch DB tech at this stage of the project, even though Postgres' grass looks greener.
The only detail is that autoincrements (SEQUENCEs, for PostgreSQL folks) get bumped even if the transaction rolls back.
So tables tend to get large IDs quickly. But it's just a dev database, so no problem.
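Easy to demonstrate in psql (sketch):
CREATE TABLE t (id serial PRIMARY KEY);
BEGIN;
INSERT INTO t DEFAULT VALUES;  -- consumes nextval(); would be id 1
ROLLBACK;                      -- the row disappears, the sequence bump doesn't
INSERT INTO t DEFAULT VALUES;  -- gets id 2; id 1 is a permanent gap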
Then spin up the DB using that image instead of an empty one for every test run.
This implies starting the DB through Docker is faster than what you're doing now, of course.
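A hedged sketch of that approach (untested; note the stock postgres image declares its data dir as a VOLUME, so PGDATA has to point somewhere docker commit can actually capture):
# one-time: seed a container, keeping PGDATA outside the declared volume
docker run -d --name seed -p 5432:5432 \
    -e POSTGRES_PASSWORD=pg -e PGDATA=/pgdata postgres:16
PGPASSWORD=pg psql -h localhost -U postgres -f fixtures.sql
docker commit seed pg-fixtures:seeded
# per test run: containers start with the fixture data already in place
docker run -d --rm -e PGDATA=/pgdata -p 5433:5432 pg-fixtures:seeded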
1. Have a local data dir with initial state
2. Create an overlayfs with a temporary directory
3. Launch your job in your docker container with the overlayfs bind mount as your data directory
4. That’s it. Writes go to the overlay and the base directory is untouched
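Concretely, something like this (root needed for the mount; /srv/pg-base is a hypothetical pre-seeded data dir):
mkdir -p /tmp/pg-upper /tmp/pg-work /tmp/pg-merged
mount -t overlay overlay \
    -o lowerdir=/srv/pg-base,upperdir=/tmp/pg-upper,workdir=/tmp/pg-work \
    /tmp/pg-merged
# writes go to the upper dir; /srv/pg-base stays pristine
docker run --rm -v /tmp/pg-merged:/var/lib/postgresql/data postgres:16
# reset = stop the container, umount, wipe only the upper/work dirs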
If your db is small enough to fit in tmpfs, then sure, that is hard to beat. But then xfs and zfs are overkill too.
EDIT: I see you mentioning that starting the db is slow due to wiping and filling at runtime. But the idea of a snapshot is that you don't have to do that, unless I misunderstand you.
There's actually a potential solution here, but I haven't personally tested it: transportable tablespaces in either MySQL [1] or MariaDB [2].
The basic idea is it allows you to take pre-existing table data files from the filesystem and use them directly for a table's data. So with a bit of custom automation, you could have a setup where you have pre-exported fixture table data files, which you then make a copy of at the filesystem level, and then import as tablespaces before running each test. So a key step is making that fs copy fast, either by having it be in-memory (tmpfs) or by using a copy-on-write filesystem.
If you have a lot of tables then this might not be much faster than the 0.5-2s performance cited above though. iirc there have been some edge cases and bugs relating to the transportable tablespace feature over the years as well, but I'm not really up to speed on the status of that in recent MySQL or MariaDB.
[1] https://dev.mysql.com/doc/refman/8.0/en/innodb-table-import....
[2] https://mariadb.com/docs/server/server-usage/storage-engines...
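The per-test flow is roughly this (MySQL-flavored sketch, table name made up; see [1]/[2] for the exact procedure and caveats):
ALTER TABLE fixtures DISCARD TABLESPACE;
-- copy the pre-exported fixtures.ibd (and .cfg) into the schema directory;
-- keep that copy cheap via tmpfs or a copy-on-write filesystem
ALTER TABLE fixtures IMPORT TABLESPACE;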
Obligatory mention of Neon (https://neon.com/) and Xata (https://xata.io/) which both support “instant” Postgres DB branching on Postgres versions prior to 18.
and that most of my data is either:
- business entities (users, projects, etc)
- and "event data" (sent by devices, etc)
where most of the database size is in the latter category, and that I'm fine with "subsetting" those (eg getting only the last month's "event data")
what would be the best strategy to create a kind of "staging clone"? ideally I'd like to tell the database (logically, without locking it expressly): do as though my next operations only apply to items created/updated BEFORE "currentTimestamp", and then:
- copy all my business tables (any update to those after currentTimestamp would be ignored magically even if they happen during the copy)
- copy a subset of my event data (same constraint)
what's the best way to do this?
Something like:
psql <db_url> -c "\copy (SELECT * FROM event_data ORDER BY created_at DESC LIMIT 100) TO 'event-data-sample.csv' WITH CSV HEADER"
https://www.postgresql.org/docs/current/sql-copy.html
It'd be really nice if pg_dump had a "data sample"/"data subset" option, but unfortunately nothing like that is built in that I know of.
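Until then, a rough approximation can be stitched together by hand; a sketch (untested, assuming the big table has a created_at column and $SRC/$DST are connection URLs):
# schema for everything, full data for the small tables, a subset for the big one
pg_dump --schema-only "$SRC" | psql "$DST"
pg_dump --data-only --exclude-table=event_data "$SRC" | psql "$DST"
psql "$SRC" -c "\copy (SELECT * FROM event_data WHERE created_at > now() - interval '1 month') TO STDOUT" \
    | psql "$DST" -c "\copy event_data FROM STDIN"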