An autopsy of AI-generated 3D slop
48 points | 8 hours ago | 14 comments | aircada.com | HN
dudeinhawaii
8 hours ago
Somehow this article explains perfectly, visually, how AI-generated code differs from human-generated code as well.

You see the exact same patterns. AI uses more code to accomplish the same thing, less efficiently.

I'm not even an AI hater. It's just a fact.

The human then has to go through and clean up that code if you want to deliver a high-quality product.

Similarly, you can slap that AI-generated 3D model right into your game engine, terrible topology and all, and have it perform "ok". As you add more of these terrible models, you end up with crap performance, but who cares, you delivered the game on time, right? A human can then slave away fixing the terrible topology and textures, taking longer than they would have if the object had been modeled correctly to begin with.

The comparison of edge-loops to "high quality code" is also one that I mentally draw. High quality code can be a joy to extend and build upon.

Low quality code is like the dense mesh pictured. You have a million cross interactions and side-effects. Half the time it's easier to gut the whole thing and build a better system.

Again, I use AI models daily but AI for tools is different from AI for large products. The large products will demand the bulk of your time constantly refactoring and cleaning the code (with AI as well) -- such that you lose nearly all of the perceived speed enhancements.

That is, if you care about a high quality codebase and product...

sech8420
6 hours ago
"High-quality code can be a joy to extend and build upon." I love the analogy here. It is a perfect parallel to how a good 3D model is a delight to extend. Some of the better modelers we've worked with return a model that is so incredibly lightweight, easily modifiable, and looks like the real thing that I am amazed each time.

The good thing about 3D slop vs. code slop is that it is so much easier to spot at first glance. A sloppy model immediately looks sloppy to nearly any untrained eye. But on closer look at the mesh, UVs, and texture, a trained eye is able to spot just how sloppy it truly is. Whereas with code, the untrained eye will have no idea how bad that code truly is. And as we all know now, this is creating an insane amount of security vulnerabilities in production.

cadamsdotcom
55 minutes ago
Everyone needs to quit trying to one-shot, and quit assuming AI can’t do it because it can’t one-shot it.

Since the author can enumerate the problems and describe them, it’d be interesting to just use the one-shot pickleball racket model as a starting point. Generate it, look at the problems, then ask an agent to build “fixers” for each problem - small scripts (that they don’t need to build themselves!) which address each problem in turn. Then send the first pass AI output through a pipeline of fix scripts to get something far better but not quite there - and do final human tuneups on the result.
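A minimal sketch of what such a fixer pipeline could look like. Everything here is illustrative: the fixer names are hypothetical, and the mesh-as-dict representation is a stand-in for real mesh data (in practice you'd operate on something like trimesh objects or Blender's Python API):

```python
# Sketch of a "fixer pipeline": each fixer is a small script that addresses
# one known defect in the generated mesh, applied in sequence.
# Mesh is a plain dict of vertices and faces here; fixer names are hypothetical.

def remove_duplicate_vertices(mesh):
    # Merge vertices that share identical coordinates and remap faces.
    seen, remap, verts = {}, {}, []
    for i, v in enumerate(mesh["vertices"]):
        key = tuple(v)
        if key not in seen:
            seen[key] = len(verts)
            verts.append(v)
        remap[i] = seen[key]
    faces = [tuple(remap[i] for i in f) for f in mesh["faces"]]
    return {"vertices": verts, "faces": faces}

def drop_degenerate_faces(mesh):
    # Remove faces that reference the same vertex twice (zero-area slivers).
    faces = [f for f in mesh["faces"] if len(set(f)) == len(f)]
    return {"vertices": mesh["vertices"], "faces": faces}

FIXERS = [remove_duplicate_vertices, drop_degenerate_faces]

def run_pipeline(mesh, fixers=FIXERS):
    # Send the first-pass AI output through each fix script in turn.
    for fix in fixers:
        mesh = fix(mesh)
    return mesh
```

The point isn't these particular fixers - it's that each defect the author enumerates becomes one small, testable script, and the pipeline composes them before a human does the final tuneup.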

LarsDu88
1 hour ago
Trellis is like a year old and practically free. There are already better models to make comparisons to.

Because they all use latent diffusion, and many techniques use voxelized intermediate representations of 3D models (often generated from images), the topology is bound to be bad.

There is a lot of ongoing research into getting better topology. I expect these critiques to still be valid as much as 2 years from now, but the economics of modeling will change drastically as the models get better.

cthalupa
28 minutes ago
This article is pretty disingenuous in the parts where it focuses on topology. CAD files with awful topology, looking very similar to that mess, are imported into CG software all the time.

There's lots of software and tooling, automated and otherwise, to significantly improve topology. This is a very common problem in this space and not acknowledging that is silly. It's not perfect, and remodeling things is indeed a common solution - but retopo addons and software are big business because they're good enough for a whole lot of use cases.
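As a toy illustration of the crudest end of that tooling, vertex clustering: snap vertices to a coarse grid, merge whatever lands in the same cell, and drop triangles that collapse. This is a deliberately simplistic sketch - real retopo and quadric-decimation tools are far more careful about preserving shape, normals, and UVs:

```python
# Toy vertex-clustering decimation: quantize vertices to a grid, merge
# vertices that land in the same cell, and discard collapsed faces.
# Illustrative only; production retopo tools are much smarter than this.

def decimate(vertices, faces, cell=0.5):
    cells, remap, new_verts = {}, {}, []
    for i, (x, y, z) in enumerate(vertices):
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in cells:
            cells[key] = len(new_verts)
            new_verts.append((x, y, z))   # keep the first vertex per cell
        remap[i] = cells[key]
    new_faces = []
    for f in faces:
        g = tuple(remap[i] for i in f)
        if len(set(g)) == 3:              # skip triangles that collapsed
            new_faces.append(g)
    return new_verts, new_faces
```

Even something this crude trades triangle count for fidelity; the commercial tools exist because doing that trade well, while keeping the mesh animatable, is the hard part.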

SXX
1 hour ago
Now they need to compare it with Hunyuan 3D 3.0 or other SOTA 3D generators.

Obviously it's not spewing out $10,000 3D models, but the results are much better than what you would have gotten for under $500 from a human 5 years ago.

So yeah, you still need a human art director to make sure the actual source material used for generation fits your art style, but otherwise "good enough" models are 1000 times cheaper and 10000 times faster to get.

hagbard_c
5 hours ago
The most important two words in this article are the last two: for now.

Indeed, for now generative models generate triangle soup without much thought. The same was true for 2D illustrations, where generative models like Deep Dream came up with horrendous images with eyes all over, dogs with multitudes of heads, and oh, did I mention the eyes? That was about 10 years ago. Things changed, models improved, the eyes were tamed. Yes, people had too many or too few fingers, but that also changed. Getting from nightmare-fuelling imagery with many-eyed dog heads sticking out where you don't want them to fully animated hi-res video took only a decade, and things are still speeding up. The triangle soup of current 3D generative models is like the eye soup of Deep Dream: something to remember somewhat fondly that is no longer relevant.

Miraste
8 hours ago
Trellis isn't and has never been state of the art. It's not a good choice for comparison; there has been progress on a lot of these problems. There are models that can do clean topo and PBR textures, for example.
edflsafoiewq
8 hours ago
Such as?
Miraste
8 hours ago
Luma, Rodin, and Tripo are a few. Meshy has some of these features too.

Unfortunately they are all proprietary, but 3D models are sort of a side area in AI research, so most of the effort is from small startups.

sech8420
6 hours ago
In no capacity do these create clean topo, textures, and UVs. If you do not believe me, take the reference image from the post, upload it to Meshy or Tripo, and see what happens. Yes, it's slightly better than the open-source Trellis, but still nearly impossible to work with, and a model you would never put on any halfway serious eCommerce site.

We've tried them all. If one existed, it would save us money and speed up our pipeline - trust me, we'd be using it.

maipen
7 hours ago
The close-but-not-good-enough is what gives us the illusion of productivity in these tools.

That’s why you see a lot of hype around setups and benchmarks but not a lot of well-polished products.

This article makes it clear for 3D modeling, but it also applies to code. The human touch is necessary for a commercial product. Otherwise it’s nothing more than a prototype.

It is actually much more difficult to maintain AI code and 3D models than to just make your own.

Either AI can one-shot it without human intervention, or it becomes a pain really quickly.

sech8420
6 hours ago
Precisely. Until the AI can 'one-shot' the topology and the UVs, it’s not a shortcut but rather a more power-intensive way to generate technical debt.
GaggiX
1 hour ago
The article should analyze Rodin, which in my opinion is probably the best at generating 3D assets.
efilife
2 hours ago
Don't complain about tangential annoyances, I know, I know... but how the hell am I supposed to judge the difference between the images in the post if you disabled zoom and the images are incredibly small? And when I open them in a new tab, they automatically download?

On the plus side, I like the informal writing of the post. You can be serious about business and still be human

Edit: firefox reader mode works wonders on this article

TheTriunePrism
8 hours ago
"The 'autopsy' of 3D slop highlights a critical failure in the current AI supply chain: The Illusion of Completeness.

We are living in an era of 'Statistical Harvest' where models prioritize a 'good enough' surface over structural integrity. In the spiritual supply chain of value, this is called Cutting Corners. A 3D model that breaks down upon closer inspection lacks what I call Internal Agency—it doesn't understand the 'Seed' of its own geometry. As we move towards an agent-centric world, we must distinguish between 'Generative Noise' and 'Authentic Creation'. True value definition requires a 'Watchman' who can see beyond the first-glance polish to the underlying breakdown of utility."

sech8420
6 hours ago
I really like this framing of 'Internal Agency.' In 3D, that lack of a 'Seed' is exactly why a model fails when you try to animate it. A human modeler understands that a joint needs extra edge loops to bend correctly. It has 'intent' for the model's future. The AI, performing a 'Statistical Harvest,' only cares that the surface looks right in a static frame. It provides the 'Illusion of Completeness' but none of the functional DNA required for a production environment.
TheTriunePrism
5 hours ago
"Spot on. The 'edge loops' analogy is the perfect physical manifestation of what I mean by functional DNA.

It proves that without 'Intent for the future' (the Seed), any output is just a static corpse. In my broader framework of the Spiritual Life Archiving System, we see this everywhere: systems that look complete at a glance but lack the underlying logic to survive 'animation' or real-world pressure.

This is exactly why we need to move from Generative Slop toward Architectural Stewardship. Glad to see the 'Internal Agency' framing resonates in the 3D space."

nicebyte
1 hour ago
I've found Trellis specifically to be very "over-promise and under-deliver".

Nothing I tried with it got even close to the level of quality that they were advertising - felt like the examples were hand-picked, at best.

coldtea
8 hours ago
>Why AI 3D Generation Fails eCommerce Standards

I wish I had his confidence (in eCommerce Standards)

sech8420
6 hours ago
Touché. Though if the current 'eCommerce Standard' is 'dropshipped junk that looks slightly better than a hallucination,' then I’ll happily die on the hill of being over-confident.
Keyframe
8 hours ago
Nice copium. These things are going to get there fast. Even what has been shown can be a good start with a decimator at hand; we've seen this with photogrammetry before. The irony is not lost on me that the text complaining about this went through AI itself.
coldtea
7 hours ago
>We've seen this with photogrammetry before.

Have we? It's still not that good.

Keyframe
6 hours ago
It's not fully automated, where you come up with a bunch of photos and have production assets - it never has been. It serves its purpose though, and so will this, if it doesn't already.
sech8420
6 hours ago
"We've seen this with photogrammetry before" - I do not believe we have. It's progressed but even a good scan is still not close to being something you would put on a legitimate eCommerce product page.

I honestly hope you are right and that I'm full of copium. Truly. But the progression has been nowhere near as fast as code, text, image, or video generation. And the conclusion now is the same as it was 2 years ago - unusable slop for most use cases.

Keyframe
6 hours ago
Listen, I agree it's unusable, or at best somewhat usable. As I said in another comment, the Will Smith video was exactly three years ago. 3D has been a bit neglected, but it will come. I was a denier initially, but these things move real fast. Photogrammetry was never at the level of point-and-shoot and you have a production asset. However, it did and does serve a need, and you can't deny it's useful. It's not painless though.
sech8420
6 hours ago
That’s a fair point. I know a few foremen who use photogrammetry religiously for site surveys and volume tracking, where 'lumpy' geometry doesn't matter. It’s a huge win for that niche. But yes, 3D has been lagging behind, and I'm having a really hard time guesstimating when it will be good enough for high-quality product models.
dilDDoS
7 hours ago
> Nice copium. These things are going to get there fast.

Nice copium. I've been hearing how fast these things are going to get there for a few years now.

Keyframe
6 hours ago
And it hasn't? Will Smith spaghetti video was exactly three years ago.
selridge
5 hours ago
0 to “we are even talking about this” is an astonishing leap. Acting like this stuff has been standing still is an active choice.