Ask HN: What's your biggest pain point when joining a new developer team?
5 points
7 hours ago
| 6 comments
I'm planning to build an AI tool that lets an organisation's developers search all the files and trace references/calls whenever they have questions. New coders in an org usually have plenty of questions about the org's framework or operations in general. That forces them to ask their seniors, who may not appreciate the time it takes. A custom AI-based platform where they could ask all their questions instead would eliminate this workflow.
ccosky
1 hour ago
[-]
This isn't "THE" biggest pain point, but one I dealt with today: we have new devs who ask the whole team, "Can someone do my code review?" This leads to the bystander effect: nobody volunteers to do the code review, because we all think someone else will do it. (Alternatively: the team leads get asked to do every code review, because they're the leads and know the most.)

So today I wrote a Claude skill that does a git diff against master to determine what files were changed, looks at the git history of those files (most recent commits and who committed the most lines of code), filters out the people who don't work here anymore, and suggests 3 devs who could be good matches for their MR. Hopefully that will get some of the load off the team leads and staunch the "can someone do a code review for me?" requests.
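The same idea can be sketched in a few lines of Python. This is my own rough approximation, not the actual Claude skill: the ex-employee list, the 20-commit history window, and scoring authors by commit count are all assumptions.

```python
"""Sketch of a reviewer-suggestion workflow like the one described above.

Assumptions (mine, not the actual skill): the FORMER_EMPLOYEES list,
the 20-commit history window, and scoring authors by commit count.
"""
import subprocess
from collections import Counter

FORMER_EMPLOYEES = {"alice@example.com"}  # hypothetical: people who left

def git(*args):
    """Run a git command and return its stdout."""
    return subprocess.run(["git", *args], capture_output=True,
                          text=True, check=True).stdout

def changed_files(base="master"):
    """Files that differ between `base` and the current branch."""
    return [f for f in git("diff", "--name-only", base).splitlines() if f]

def suggest_reviewers(base="master", top_n=3):
    """Score recent authors of the changed files; return top candidates."""
    scores = Counter()
    for path in changed_files(base):
        # Author emails of the file's 20 most recent commits.
        for email in git("log", "--format=%ae", "-20", "--", path).splitlines():
            if email and email not in FORMER_EMPLOYEES:
                scores[email] += 1
    return [email for email, _ in scores.most_common(top_n)]
```

Counting lines changed per author (e.g. via `git log --numstat`) instead of raw commit counts would be closer to the weighting described above, at the cost of a bit more parsing.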

So there's my suggestion to you: something that will let new devs know 1) who is the best person to do their code review and maybe even 2) who the SME for a particular area of the system is.

reply
buggy6257
50 minutes ago
[-]
You should take a look at CODEOWNERS file specs
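For anyone unfamiliar: a CODEOWNERS file (supported by GitHub and GitLab) maps path patterns to default reviewers, who then get auto-requested on matching MRs. A minimal example, with hypothetical teams and usernames; on GitHub, the last matching pattern takes precedence:

```
# Fallback owners for everything in the repo
*            @org/platform-team

# More specific patterns override the fallback (last match wins)
/docs/       @org/docs-team
*.tf         @org/infra-team
/src/auth/   @alice @bob
```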
reply
ahmed-fathi
4 hours ago
[-]
The problem isn't finding code; it's inheriting judgment. New engineers don't struggle to locate files. They struggle to know which files to trust, which patterns are intentional vs accidental, and which senior engineer silently owns what. That's not searchable. You're building a map. What they need is a compass.
reply
lowenbjer
5 hours ago
[-]
Cursor, Sourcegraph Cody, and honestly just Claude with a well-maintained project README already solve the "ask questions about the codebase" part reasonably well. The actual hard part of joining a new team isn't the code (not anymore, at least). It's getting to know the people. Understanding their strengths, figuring out where you fit in, getting past imposter syndrome, not stepping on toes. No AI for that yet.
reply
PaulHoule
7 hours ago
[-]
1 click at most to install what I need to build, 1 click to build.

I ask the dev manager how long the build takes and get an answer that is within 20% of the ground truth.

reply
KevStatic
6 hours ago
[-]
That's a great point.

The setup friction is real. Once you're past that... do you find understanding the codebase itself (where things live, why decisions were made) is also painful, or does that come naturally after a few days?

reply
PaulHoule
5 hours ago
[-]
I think most places I work haven’t had good documentation or particularly good processes for onboarding people. For instance, at some startups we were always chasing a demo for customer B this week and customer C next week, and people were fuzzy about what path got us there. Other places saw software as secondary or tertiary to their main business, or had a lot of turnover, etc. Or maybe we bought a web site from so-and-so and nobody has any idea how it works. Or the guy who made the product was pretty smart but wasn’t a good finisher and his wife just had a baby.

So I am used to looking at a mysterious code base and gradually figuring out how to safely change it. When I run into something that’s particularly dangerous (e.g. how does the auth system work?) I will document it myself, and I love writing “how do I?” runbook/procedural documentation if it is likely I’ll need it in six months or will need to hand something off to somebody else.

In the AI age there is a lot to say for just loading up a project in an agent-enabled IDE and asking questions like “When I do X, the system does Y, why is that?” and “How would I do Z?” and having extended conversations: “well, I like D but I am concerned about E” or “What if we did F instead?” Even if you write every line of code yourself, you might find an AI coding buddy is more talkative than your coworkers.

reply
AnimalMuppet
6 hours ago
[-]
If I were given an electrical design, I would expect a schematic, a parts list, a board layout, and a "theory of operations" - a prose document explaining why the rest of the stuff is the way it is.

My last job, there was the code itself, and there were UML class and sequence diagrams. But there wasn't anything like a theory of operations. That made it very difficult to learn, because it was so object oriented that you couldn't tell what anything actually did. Or, more to the point, when you needed to make a modification, you couldn't find where to make it without heroic feats of exploration.

So I think that's the great need. A human needs to sit down and write out why the code does what it does, and why it's organized the way it is, and where the parts are that are most likely to be needed, and where to make the most likely changes. I'm not sure an AI can write that - certainly AIs at the current level cannot.

reply
nonameiguess
5 hours ago
[-]
Totally depends on the org. I've worked as the 3rd employee of a months-old startup, where the biggest pain point was that we owned no infrastructure and I did everything on my own computer. That was easy but limiting, because we were heavy into statistical modeling and needed more powerful compute, which we got, but it took a few months.

I've worked on a military/intelligence project that had been in operation for over 40 years. The oldest code had no commit history because it predated any modern VCS. Whoever wrote it was long retired if not dead. Pieces were spread across thousands of repos on many different forges.

With large old-school Java OSGi, and now with Kubernetes (which is basically Java written in Go), it's the abstraction and dependency injection. There is no possible way to tell until runtime what code is actually going to be used. Knowing the codebase means nothing if you can't mentally track where data flows, because you don't know from the code alone what the implementation is actually going to be.

With extreme HPC, it was damn near literally everything being bespoke. Custom filesystems. Custom network stacks. Expectations from running workloads on regular computers and software everyone else uses were always being subverted.

With cloud environments, it's not having physical access to the machines. With on-prem labs, it's needing to provision and troubleshoot the physical layer. Either way has its drawbacks.

With customer-facing software, it's customers not knowing what they want, and not even knowing what they don't want until they have it, which turns out to be what they didn't want. With software for modeling and understanding the real world, it's that physics turns out to be really f'ing complicated.
reply