The fact of the matter is, I am not even using AI features much in my editor anymore. I've tried Copilot and friends over and over and it's just not _there_. It needs to be in a different location in the software development pipeline (Probably code reviews and RAG'ing up for documentation).
- I can kick out some money for a settings sync service.
- I can kick out some money to essentially "subscribe" for maintenance.
I don't personally think that an editor is going to return the kinds of ROI VCs look for. So.... yeah. I might be back to Emacs in a year with IntelliJ for powerful IDE needs....
It can also encourage laziness: If the AI reviewer didn't spot anything, it's easier to justify skimming the commit. Everyone says they won't do it, but it happens.
For anything AI-related, having manual human review as the final step is key.
LLMs are fundamentally text generators, not verifiers.
They might spot some typos and stylistic discrepancies based on their corpus, but they do not reason. It’s just not what the basic building blocks of the architecture do.
In my experience you need to do a lot of coaxing and setting up guardrails to keep them even roughly on track. (And maybe the LLM companies will build this into the products they sell, but it’s demonstrably not there today)
In reality they work quite well for text analysis and, via tools, numeric analysis too. I've found them to be powerful tools for "linting" a codebase against adequately documented standards and architectural guidance, especially when given the use of type checkers, static analysis tools, etc.
Code quality improvement is the reason to do it, so *yes*. Of course, anyone using AI for analysis is probably leveraging AI for the "fix" part too (or at least I am).
Link to the ticket. Hopefully your team cares enough to write good tickets.
So if the problem is defined well in the ticket, do the code changes actually address it?
For example, for a bug fix it can check the tests and see if the PR is testing the conditions that caused the bug. It can check the changed code to see if it fits the requirements.
I think the goal with AI for creative stuff should be to make things more efficient, not necessarily to replace people. Whoever code reviews can get up to speed fast. I’ve been on teams where people would code review a section of the code they weren’t too familiar with.
In this case if it saves them 30 minutes then great!
I don't mind the AI stuff. It's been nice when I used it, but I have a different workflow for those things right now. But all the stuff besides AI? It's freaking great.
I wouldn't sing their praises for being FOSS. All contributions are signed away under their CLA, which will allow them to pull the plug when their VCs come knocking and the FOSS angle is no longer convenient.
The way it otherwise works without a CLA is that you own the code you contributed to your repo, and I own the code I contributed to your repo, and since your code is open-source licensed to me, that gives me the ability to modify it and send you my changes, and since my code is open-source licensed to you, that gives you the ability to incorporate it into your repo. The list of copyright owners of an open source repo without a CLA is the list of committers. You couldn't relicense that because it includes my code and I didn't give you permission to. But a CLA makes my contribution your code, not my code.
[^0]: In this case, not literally. You instead grant them a proprietary free license, satisfying the 'because I didn't give you permission' part more directly.
Please note that even GNU themselves require you to do this; see e.g. GNU Emacs, which requires copyright assignment to the FSF when you submit patches. So there are legitimate reasons to do this other than being able to close the source later.
So yes, I trust a non-profit, and a collective with nearly 50 years of history supporting copyleft, implicitly more than I will ever trust a company or project offering software while requiring THEY be assigned the copyright rather than a license. Even your statement contains a difference: they require assignment to the FSF, not the project or its maintainers.
That’s just listening to history, not really a gotcha to me.
Some GNU projects require this; it’s up to the individual maintainers of each specific GNU project whether to require this or not. Many don’t.
But like I said, it has been decades since I've seen any of their paperwork, and memory is fallible.
Yes, you're right, AI cannot be a senior engineer with you. It can take a lot of the grunt work away, though, which is still part of the job for many devs at all skill levels. Or it's useful for technologies you're not as well versed in. Or simply an inertia breaker if you're not feeling very motivated to get to work.
Find what it's good for in your workflows and try it for that.
I've tried throwing LLMs at every part of the work I do, and they've been entirely useless at everything beyond explaining new libraries or being a search engine. Any time one tries to write any code at all, the result has been worthless.
But then I see so many praising all it can do and how much work they get done with their agents and I'm just left confused.
My experience with AI tooling:
- React/similar webdev where I "need" 1000 lines of boilerplate to do what jquery did in half a line 10 years ago: Perfect
- AbstractEnterpriseJavaFactorySingletonFactoryClassBuilder: Very helpful
- Powershell monstrosities where I "need" 1000 lines of Verb-Nouning to do what bash does in three lines: If you feed it a template that makes it stop hallucinating nonexisting Verb-Nouners, perfect
- Abstract algorithmic problems in any language: Eh, okay
- All the `foo,err=…;if err…` boilerplate in Golang: Decent
- Actually writing well-optimized business logic in any of those contexts: Forget about it
Since I spend 95% of my time writing tight business logic, it's mostly useless.
I'm blown away.
I'm a very senior engineer. I have extremely high standards. I know a lot of technologies top to bottom. And I have immediately found it insanely helpful.
There are a few hugely valuable use-cases for me. The first is writing tests. Agentic AI right now is shockingly good at figuring out what your code should be doing and writing tests that test the behavior, all the verbose and annoying edge cases, and even find bugs in your implementation. It's goddamn near magic. That's not to say they're perfect, sometimes they do get confused and assume your implementation is correct when the test doesn't pass. Sometimes they do misunderstand. But the overall improvement for me has been enormous. They also generally write good tests. Refactoring never breaks the tests they've written unless an actually-visible behavior change has happened.
Second is trying to figure out the answer to really thorny problems. I'm extremely good at doing this, but agentic AI has made me faster. It can prototype approaches that I want to try faster than I can and we can see if the approach works extremely quickly. I might not use the code it wrote, but the ability to rapidly give four or five alternatives a go versus the one or two I would personally have time for is massively helpful. I've even had them find approaches I never would have considered that ended up being my clear favorite. They're not always better than me at choosing which one to go with (I often ask for their summarized recommendations), but the sheer speed in which they get them done is a godsend.
Finding the source of tricky bugs is one more case that they excel in. I can do this work too, but again, they're faster. They'll write multiple tests with debugging output that leads to the answer in barely more time than it takes to just run the tests. A bug that might take me an hour to track down can take them five minutes. Even for a really hard one, I can set them on the task while I go make coffee or take the dog for a walk. They'll figure it out while I'm gone.
Lastly, when I have some spare time, I love asking them what areas of a code base could use some love and what are the biggest reward-to-effort ratio wins. They are great at finding those places and helping me constantly make things just a little bit better, one place at a time.
Overall, it's like having an extremely eager and prolific junior assistant with an encyclopedic brain. You have to give them guidance, you have to take some of their work with a grain of salt, but used correctly they're insanely productive. And as a bonus, unlike a real human, you don't ever have to feel guilty about throwing away their work if it doesn't make the grade.
That's a red flag for me. Having a lot of tests usually means that your domain is fully known, so now you can specify it fully with tests. But in a lot of settings, the domain is a bunch of business rules that product decides on the fly. So you need to be pragmatic and only write tests against valuable workflows. Or you'll find yourself changing one line and having 100+ tests break.
This also is ignoring that ideally business logic is implemented as a combination of smaller, stabler components that can be independently unit tested.
Having a lot of tests is great until you need to refactor them. I would rather have a few e2e tests for smoke testing and valuable workflows, integration tests for business rules, and unit tests when they actually matter. As long as I can change implementation details without touching the tests much.
Code is a liability. Unless you don’t have to deal with it (assembly and compilers), reducing the amount of code is a good strategy.
Not having this is very indicative of a spaghetti soup architecture. Hard pass.
But in my day to day I'm just writing pure Go, highly concurrent and performance-sensitive distributed systems, and AI is just so wrong on everything that actually matters that I have stopped using it.
But now that I’ve been using it for a while, it’s absolutely terrible with anything that deals with concurrency. It’s so bad that I’ve stopped using it for any code generation and am going to completely disable autocomplete.
I'm auto-completing crazy complex Rust match branches for record transformation. 30 lines of code, hitting dozens of fields and mutations, all with a single keystroke. And then it knows where my next edit will be.
I've been programming for decades and I love this. It's easily a 30-50% efficiency gain when plumbing fields or refactoring.
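To give a concrete flavor of the kind of branch it completes in one keystroke, here's a minimal sketch (these record types and field names are invented for illustration, not my actual codebase):

```rust
// Invented types standing in for "record transformation" plumbing; the
// match arms below are the sort of thing the completion writes wholesale.
enum SourceRecord {
    Person { first: String, last: String, age: u32 },
    Company { name: String, employees: u32 },
}

struct Normalized {
    display_name: String,
    headcount: u32,
}

fn normalize(rec: SourceRecord) -> Normalized {
    match rec {
        SourceRecord::Person { first, last, age: _ } => Normalized {
            display_name: format!("{last}, {first}"),
            headcount: 1,
        },
        SourceRecord::Company { name, employees } => Normalized {
            display_name: name,
            headcount: employees,
        },
    }
}

fn main() {
    let n = normalize(SourceRecord::Person {
        first: "Ada".into(),
        last: "Lovelace".into(),
        age: 36,
    });
    println!("{} ({})", n.display_name, n.headcount);
}
```

Multiply that across dozens of fields and a handful of variants, and the 30-50% figure stops sounding crazy.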
Really is game changing
I would prefer an off-by-default telemetry, but if there's a simple opt-out, that's fine?
I've landed on using it as part of my code review process before asking someone to review my PR. I get a lot of the nice things that LLMs can give me (a second set of eyes, a somewhat consistent reviewer) but without the downsides (no waiting on the agent to finish writing code that may not work, costs me personally nothing in time and effort as my Org pays for the LLM, when it hallucinates I can easily ignore it).
- generate new modules/classes in your projects
- integrate module A into module B or entire codebase A into codebase B?
- get someone's github project up and running on your machine, do you manually fiddle with cmakes and npms?
- convert an idea or plan.md or a paper into working code?
- Fix flakes, fix test<->code discrepancies or increase coverage etc
If you do all this manually, why?
If it's formulaic enough, I will use the editor templates/snippets generator. Or write a code generator (if it involves a bunch of files). If it's not, I probably have another class I can copy and strip out (especially in UI and CRUD).
> integrate module A into module B
If it can't be done easily, that's the sign of a less than optimal API.
> entire codebase A into codebase B
Is that a real need?
> get someone's github project up and running on your machine, do you manually fiddle with cmakes and npms
If the person can't be bothered to provide proper documentation, why should I run the project? But actually, I will look into AUR (archlinux) and Homebrew formulae to see if someone has already done the first job of figuring out dependency versions. If there's a Dockerfile, I will use that instead.
> convert an idea or plan.md or a paper into working code?
Iteratively. First have a hello world or something working, then mow down the task list.
> Fix flakes, fix test<->code discrepancies or increase coverage etc
Either the test is wrong or the code is wrong. Figure out which and rework it. The figuring part always takes longer, as you will need to ask around.
> If you do all this manually, why?
Because when something happens in prod, you really don't want that feeling of being the last one that interacted with that part, but with no idea of what has changed.
When I was younger, I had to memorize how to drive to work/the grocery store/new jersey. I still remember those routes but I haven't learned a single new route since getting a smartphone.
Are we ready to stop learning as programmers? I certainly am not and it sounds like you aren't either. I'll let myself plateau when I retire or move into management. Until then, every night debugging and experimenting has been building upon every previous night debugging and experimenting, ceaselessly progressing towards mastery.
The worst is when I get inclined to go to a specific restaurant I haven't been to in years and it's completely gone. I've started to look online to confirm before driving half an hour or more.
Disastrous? Quite possibly, but my concerns are different ones.
Almost everything changes, so isn’t it better to rephrase these statements as metrics to avoid fixating on one snapshot in an evolving world?
As the metrics get better, what happens? Do you still have objections? What objections remain as AI capabilities get better and better without limit? The growth might be slow or irregular, but there are many scenarios where AIs reach the bar where they are better at almost all knowledge work.
Stepping back, do you really think of AI systems as stochastic parrots? What does this metaphor buy you? Is it mostly a card you automatically deal out when you pattern match on something? Or does it serve as a reusable engine for better understanding the world?
We’ve been down this road; there is already much HN commentary on the SP metaphor. (Not that I recommend HN for this kind of thing. This is where I come to see how a subset of tech people are making sense of it, often imperfectly with correspondingly inappropriate overconfidence.)
TLDR: smart AI folks don’t anchor on the stochastic parrots metaphor. It is a catchy phrase and helped people’s papers get some attention, but it doesn’t mean what a lot of people think it means. Easily misunderstood, it serves as a convenient semantic stop sign so people don’t have to dig into the more interesting aspects of modern AI systems. For example: (1) transformers build conceptual models of language that transcend any particular language. (2) They also build world models with spatial reasoning. (3) Many models are quite resilient to low-quality training data. And more.
To make this very concrete: under the assumption of universal laws of physics, people are just following the laws of physics, and to a first approximation, our brains are just statistical pattern matchers. By this definition, humans would also be “stochastic parrots”. I go to all this trouble to show that this metaphor doesn’t cut to the heart of the matter. There are clearer questions to ask; they require getting a lot more specific about various forms and applications of intelligent behavior. For example:
- under what circumstances does self play lead to superhuman capability in a particular domain?
- what limits exist (if any) in the self supervised training paradigm used for sequential data? If the transformer trained in this way can write valid programs then it can create almost any Turing machine; limited only by time and space and energy. What more could you want? (Lots, but I’m genuinely curious as to people’s responses after reflecting on these.)
Which of the following would you agree to... ?
1. There is no single bar for intelligence.
2. Intelligence is better measured on a scale than with 1 bit (yes/no).
3. Intelligence is better considered as having many components instead of just one. When people talk about intelligence, they often mean different things across domains, such as emotional, social, conceptual, spatial, kinetic, sensory, etc.
4. Many researchers have looked for -- and found -- in humans, at least, some notions of generalized intellectual capability that tends to help across a wide variety of cognitive tasks.
If some of these make sense, I suggest it would be wise to conclude:
5. Reasonable people accentuate different aspects and even definitions of intelligence.
6. Expecting a yes/no answer for "is X intelligent?" without considerable explanation is approximately useless. (Unless it is a genuinely curious opener for an in-depth conversation.)
7. Asking "is X intelligent?" tends to be a poorly framed question.
This confuses intelligence with memory (or state) which tends to enable continuous learning.
This is just semantics, but you brought it up. The very first definition of intelligence provided by Webster:
1.a. the ability to learn or understand or to deal with new or trying situations : reason also : the skilled use of reason
My guess? The tail is wagging the dog here -- you are redefining the term in service of other goals. Many people naturally want humanity to remain at the top of the intellectual ladder and will distort reality as needed to stay there.
My point is not to drag anyone through the mud for doing the above. We all do it to various degrees.
Now, for my sermon. More people need to wake up and realize machine intelligence has no physics-based constraints to surpassing us.
A. Businesses will boom and bust. Hype will come and go. Humanity has an intrinsic drive to advance thinking tools. So AI is backed by huge incentives to continue to grow, no matter how many missteps economic or otherwise.
B. The mammalian brain is an existence proof that intelligence can be grown / evolved. Homo sapiens could have bigger brains if not for birth-canal size constraints and energy limitations.
C. There are good reasons to suggest that designing an intelligent machine will be more promising than evolving one.
D. There are good reasons to suggest silicon-based intelligence will go much further than carbon-based brains.
E. We need to stop deluding ourselves by moving the goalposts. We need to acknowledge reality, for this is reality we are living in, and this is reality we can manipulate.
Let me know if you disagree with any of the sentences below. I'm not here to preach to the void.
Corrected to:
A. Businesses will boom and bust. Hype will come and go. Nevertheless, humanity seems to have an intrinsic drive to innovate, which means pushing the limits of technology. People will seek more intelligent machines, because we perceive them as useful tools. So AI is pressurized by long-running, powerful incentives, no matter how many missteps economic or otherwise. It would take a massive and sustained counter-force to prevent a generally upwards AI progression.
1. the ability to learn or understand or to deal with new or trying situations
“The final goal of any engineering activity is some type of documentation. When a design effort is complete, the design documentation is turned over to the manufacturing team. This is a completely different group with completely different skills from the design team. If the design documents truly represent a complete design, the manufacturing team can proceed to build the product. In fact, they can proceed to build lots of the product, all without any further intervention of the designers. After reviewing the software development life cycle as I understood it, I concluded that the only software documentation that actually seems to satisfy the criteria of an engineering design is the source code listings.” - Jack Reeves
AI doesn't really help me code vs me doing it myself.
AI is better doing other things...
I agree. For me the other things are non-business logic, build details, duplicate/bootstrap code that isn't exciting.
This is something I've found LLMs almost useless at. Consider https://arxiv.org/abs/2506.11908 --- the paper explains its proposed methodology pretty well, so I figured this would be a good LLM use case. I tried to get a prototype to run with Gemini 2.5 Pro, but got nowhere even after a couple of hours, so I wrote it by hand. I do write a fair bit of code with LLMs, but it's primarily questions about best practices or simple errors, and I copy/paste from the web interface, which I guess is no longer in vogue. That being said, would Cursor excel here at a one-shot (or even a few hours of back-and-forth), elegant prototype?
If you’re using nest.js, which is great but also comically bloated with boilerplate, AI is fantastic. When my code is like 1 line of business logic per 6 lines of boilerplate, yes please AI do it all for me.
Projects with less cruft benefit less. I’m working on a form generator mini library, and I struggle to think of any piece I would actually let AI write for me.
Similar situation with tests. If your tests are mostly “mock x y and z, and make sure that this spied function is called with this mocked payload result”, AI is great. It’ll write all that garbage out in no time.
If your tests are doing larger chunks of biz logic like running against a database, or if you’re doing some kinda generative property based testing, LLMs are probably more trouble than they’re worth
Usually it's not all that much effort to glance over some other project's documentation to figure out how to integrate it, and as to creating working code from an idea or plan... isn't that a big part of what "programming" is all about? I'm confused by the idea that suddenly we need machines to do that for us: at a practical level, that is literally what we do. And at a conceptual level, the process of trying to reify an idea into an actual working program is usually very valuable for iterating on one's plans, and identifying problems with one's mental model of whatever you're trying to write a program about (c.f. Naur's notions about theory building).
As to why one should do this manually (as opposed to letting the magic surprise box take a stab at it for you), a few answers come to mind:
1. I'm professionally and personally accountable for the code I write and what it does, and so I want to make sure I actually understand what it's doing. I would hate to have to tell a colleague or customer "no, I don't know why it did $HORRIBLE_THING, and it's because I didn't actually write the program that I gave you, the AI did!"
2. At a practical level, #1 means that I need to be able to be confident that I know what's going on in my code and that I can fix it when it breaks. Fiddling with cmakes and npms is part of how I become confident that I understand what I'm building well enough to deal with the inevitable problems that will occur down the road.
3. Along similar lines, I need to be able to say that what I'm producing isn't violating somebody's IP, and to know where everything came from.
4. I'd rather spend my time making things work right the first time, than endlessly mess around trying to find the right incantation to explain to the magic box what I want it to do in sufficient detail. That seems like more work than just writing it myself.
Now, I will certainly agree that there is a role for LLMs in coding: fancier auto-complete and refactoring tools are great, and I have also found Zed's inline LLM assistant mode helpful for very limited things (basically as a souped-up find and replace feature, though I should note that I've also seen it introduce spectacular and complicated-to-fix errors). But those are all about making me more efficient at interacting with code I've already written, not doing the main body of the work for me.
So that's my $0.02!
I type:
`class Foo:`
or:
`pub(crate) struct Foo {}`
> integrate module A into module B
What do you mean by this? If you just mean moving things around, then code refactoring tools to move functions/classes/modules have existed in IDEs for millennia before LLMs came around.
> get someones github project up and running on your machine
docker
> convert an idea or plan.md or a paper into working code
I sit in front of a keyboard and start typing.
> Fix flakes, fix test<->code discrepancies or increase coverage etc
I sit in front of a keyboard, read, think, and then start typing.
> If you do all this manually, why?
Because I care about the quality of my code. If these activities don't interest you, why are you in this field?
I am in this field to deliver shareholder value.[0] Writing individual lines of code, unless absolutely required, is below me.
There was once a time when only passionate people became programmers, before y'all ruined it.
[0] Actually, I'd say it is "to make my immediate manager's job easier", but if you follow that up the org chart eventually it ends up with shareholders and their money.
How sad.
People on HN and other geeky forums keep saying this, but the fact of the matter is that you're a minority and not enough people would do it to actually sustain a product/company like Zed.
Also, this post is higher on HN than the post about raising capital from Sequoia where many of the comments are about how negatively they view the raising of capital from VC.
The fact of the matter is that people want this and the inability of companies to monetize on that desire says nothing about whether the desire is large enough to "actually sustain" a product/company like Zed.
I would pay for Zed.
The only path forward I see for a classic VC investment is the AI drive.
But I don't think the AI bit is valuable. A powerful plugin system would be sufficient to achieve LLM integration.
So I don't think this is a worthwhile investment unless the product gets a LOT worse and becomes actively awful for users who aren't paying beaucoup bucks for AI tooling; the ROI will have to center on the AI drive.
It's not a move that will generate a good outcome for the average user.
But he does say he wants to pay!
I remember the Redis fork and how it fragmented that ecosystem to a large extent.
I don't see a reason to be afraid of "fragmented ecosystems", rather, let's embrace a long tail of tools and the freedom from lock-in and groupthink they bring.
I think your thinking is common sense.
But giving users an escape hatch is something that people take for granted. I'd understand all this furor if there were no such thing.
Besides, I reckon Zed took a lot of resources to build and maintain. Help them recoup their investment.
Opt-out is not enough, especially in a program where opt-out happens via text-only config files.
I can never know if I've correctly opted out of all the things I don't want.
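For what it's worth, Zed's telemetry opt-out does boil down to a small documented block in settings.json (this is the shape as I understand the current docs; verify against your version):

```json
{
  "telemetry": {
    "diagnostics": false,
    "metrics": false
  }
}
```

Though that rather proves the point: you have to know every such key exists before you can be sure you've turned everything off.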
Upsides of Zed (for me, I think):
* Built-in AI vibecodery, which I think is going to be an unavoidable part of the job very soon.
* More IDE features while still being primarily an Editor.
* Extensions in Rust (if I'm gonna suffer, might as well learn some Rust).
* Open source.
Downsides vs Sublime:
* Missing some languages I use.
* Business model, arguably, because $42M in VC "is what it is."
All of that hard work, intended to build a business, and nobody is happy.
Now there's a hard fork.
This is shitty.
Sublime is not open source and it has a very devout paying client base.
To me the dirty thing is to make something “open source” because developers absolutely love that, to then take an arguably “not open source” path of $42 mil in VC funding.
There’s something dissonant there.
Open source allows it to gain adoption in the dev community. Devs are notoriously hard to convince to adopt a new tool. Open source is one way to do it.
The path is usually to have an open community edition and then a cloud/enterprise edition. Over time, there will be greater and greater separation between the open source one and the paid ones. Eventually, the company will forget that the open source part even exists and slowly phase it out.
I intend to make my products source-available but not open source.
I do open source libraries/frameworks that I produce as part of producing the product, but not the product itself.
With the exception of GPL derivatives, most popular licenses such as MIT already include provisions allowing you to relicense or create derivative works as desired. So even if you follow the supposed norm that without an explicit license agreement all open source contributions should be understood to be licensed by contributors under the same terms as the license of the project, this would still allow the project owners to “rug pull” (create a fork under another license) using those contributions.
But given that Zed appears to make their source available under the Apache 2.0 license, the GPL exception wouldn’t apply.
From my understanding, Zed is GPL-3.0-or-later. Most projects that involve a CLA and have rugpull potential are licensed as some GPL or AGPLv3, as those are the licenses that protect everyone's rights the strongest, and thanks to the CLA trap, the definition of "everyone" can be limited to just the company who created the project.
https://github.com/zed-industries/zed/blob/main/crates/zed/C...
I think the caveat to the claim that CLAs are only useful for rug pulls is still important, but this is a case where it is indeed a relevant thing to consider.
I don't like the term "rug-pull". It's misleading.
If you have an open source version of Zed today, you can keep it forever, even if future versions switch to closed source or some source-available only model.
You should show gratitude, not hostility.
(I see that I have received two downvotes for this in mere minutes, but no replies. I genuinely don't understand the basis for objecting to what I have to say here, and could not possibly understand it without a counterargument. What I'm saying seems straightforward and obvious to me; I wouldn't say it otherwise.)
The FSF requires assignment so they can re-license the code to whatever new license THEY deem best.
Not the contributors.
A CLA should always be a warning.
tl;dr: If someone violates the GPL, the FSF can't sue them on your behalf unless they are a copyright holder.
(personally I don't release anything under virus licenses like the GPL but I don't think there's a nefarious purpose behind their CLA)
This seems to be factually untrue; you can assign specific rights under copyright (such as your right to sue and receive compensation for violations by third parties) without assigning the underlying copyright. Transfer of the power to relicense is not necessary for transfer of the power to sue.
> Hey, you look to be doing business with someone who publicly advocates for harming others. Could you explain why and to what extend they are involved?
"doing business with someone whose views I dislike" is slightly downplaying the specific view here.
* That this man actually advocates for harming others, versus advocating for things that the github contributor considers tantamount to harming others
* That his personal opinions constitute a reason to not do business with a company he is involved with
* That Zed is morally at fault if they do not agree that this man's personal opinions constitute a reason to not do business with said company
I find this kind of guilt by association to be detestable. If Zed wishes to do business with someone whom I personally would not do business with for moral reasons, that does not confer some kind of moral stain on them. Forgiveness is a virtue, not a vice. Not only that, but this github contributor is going for the nuclear option by invoking a public shaming ritual upon Zed. It's extremely toxic behavior, in my opinion.
I don't think any of the evidence shown there demonstrates "advocacy for harming others". The narrative on the surely-unbiased-and-objective "genocide.vc" site used as a source there simply isn't supported by the Twitter screencaps it offers.
This also isn't at all politely asking "Could you explain why and to what extend they are involved?" It is explicitly stating that the evidenced level of involvement (i.e.: being a business partner of a company funding the project) is already (in the OP's opinion) beyond the pale. Furthermore, a rhetorical question is used to imply that this somehow deprives the Code of Conduct of meaning. Which is absurd, because the project Code of Conduct doesn't even apply to Sequoia Capital, never mind to Shaun Maguire.
Zed’s leadership does have to answer for why they invited people like that to become a part of Zed’s team.
Making a racist claim in a tweet is not advocacy for harming others.
In a perfect world, children don't get killed, but with that many levels of indirection, I don't think there is anything in this world that is not linked to some kind of genocide or other terrible things.
I am sure plenty of people here know these things, this is Y Combinator after all, but to me, the general idea in life is that getting money is hard, and stories that make it look easy are scams or extreme outliers.
Do you have an example of that? I can't find any contributors that are upset about this aspect of the funding
But I can re-paste the link here: https://github.com/zed-industries/zed/discussions/36604
But this is all an aside, I was talking about contributors in a more general sense.
> Mr. Maguire’s post was immediately condemned across social media as Islamophobic. More than 1,000 technologists signed an open letter calling for him to be disciplined. Investors, founders and technologists have sent messages to the firm’s partners about Mr. Maguire’s behavior. His critics have continued pressuring Sequoia to deal with what they see as hate speech and other invective, while his supporters have said Mr. Maguire has the right to free speech.
https://archive.is/6VoyD#selection-725.0-729.327
Shaun Maguire is a partner, not just a simple hire, and Sequoia Capital had a chance to distance themselves from him and his views, but opted not to.
This is very different from your average developer using GitHub, most of them have no choice in the matter and were using GitHub long before Microsoft’s involvement in the Gaza Genocide became apparent. Zed’s team should have been fully aware of what kind of people they are partnering with. Like I said, it should have been very easy for them not to do so.
EDIT: Here is a summary of the “disagreeable views” in question: https://genocide.vc/meet-shaun-maguire/
At the end there is a simple request for Sequoia, which Sequoia opted against:
> We call on Sequoia to condemn Shaun’s rhetoric and to immediately terminate his employment.
Emphasizing the nature of Mr. Maguire's opinion is not really doing anything to change the argument. Emphasizing what other people think about that opinion, even less so.
> Zed’s team should have been fully aware of what kind of people they are partnering with.
In my moral calculus, accepting money from someone who did something wrong, when that money was honestly obtained and has nothing to do with the act, does not make you culpable for anything. And as GP suggests, Microsoft's money appears to have a stronger tie to violence than Maguire's.
As an aside—despite the popularity of the trolley problem—people don’t have a rational moral calculus, and moral behavior does not follow a sequential order from best to worst. Whatever your moral calculus may be, it has no effect on whether or not the Zed team’s actions were a moral blunder... they were.
The site you linked to just seems to brazenly misrepresent each of Shaun's tweets - e.g. the tweet that "demonized Palestinians" never mentions Palestinians, but does explicitly refer to Hamas twice. Not sure how Shaun could have been any clearer that he was criticizing a specific terrorist group and not an entire racial/ethnic group.
> plenty of other genocide denial/justification
So he disagrees with you about this word being appropriate to describe what's actually going on. This is not a fringe viewpoint.
Nothing you have quoted evidences this.
> When he publicly shares the Pallywood conspiracy theory he is engaging in and spreading a hateful genocidal rhetoric.
Claiming that your political outgroup is engaging in political propaganda is not the same thing as calling for their deaths. Suggesting otherwise is simply not good faith argumentation.
Nothing you have done here constitutes a logical argument. It is only repeating the word "genocide" as many times as you can manage and hoping that people will sympathize.
> This is hatespeech and is illegal in many countries
This is not remotely a valid argument (consider for example that many countries also outlaw things that you would consider morally obligatory to allow), and is also irrelevant as Mr. Maguire doesn't live in one of those countries.
I don’t think you grasp the seriousness of hate speech. Even if you don’t explicitly call for their deaths, by partaking in hate speech (including by sharing conspiracy theories about the group) you are playing an integral part in the violence against the group. And during an ongoing genocide, this speech is genocidal, and is an integral part of the genocide. There is a reason hate speech is outlawed in almost every country (including the USA, although the USA is pretty lax about what it considers hate speech).
The Pallywood conspiracy theory is exactly the kind of genocidal hate speech I am talking about. This conspiracy theory has been thoroughly debunked, but it persists among racists like Shaun Maguire, and serves as an integral part to justify or deny the violence done against Palestinians in an ongoing genocide.
If you disagree, I invite you to do a thought experiment. Swap out Palestinians with Jews, and swap out the Pallywood conspiracy theory with e.g. Cultural Marxism, and see how Shaun Maguire’s speech holds up.
No; I think you are wrong about that seriousness.
> by partaking in hate speech (including by sharing conspiracy theories about the group) you are playing an integral part of the violence against the group.
No, I disagree very strongly with this, as a core principle.
> and serves as an integral part to justify or deny the violence done against Palestinians in an ongoing genocide.
And with this as well.
> If you disagree, I invite you to do a though experiment. Swap out Palestinians with Jews, and swap out the Pallywood conspiracy theory with e.g. Cultural Marxism, and see how Shaun Maguire’s speech holds up.
First off, the "cultural Marxism" theory is not about Jews, any more than actual Marxists blaming things on "greedy bankers" is about Jews. (A UK Labour party leader once got in trouble for this, as I recall, and I thought it was unjustified even though I disagreed with his position.)
Second, your comments here are the first I've heard of this conspiracy theory, which I don't see being described by name in Maguire's tweets.
Third, no. This thought experiment doesn't slow me down for a moment and doesn't lead me to your conclusions. If Maguire were saying hateful things about Jewish people (the term "anti-Semitic" for this is illogical and confusing), that would not be as bad as enacting violence against Jewish people, and it would not constitute "playing an integral part of the violence" enacted against them by, e.g., Hamas.
The only way to make statements that "serve as an integral part to justify or deny violence" is to actually make statements that either explicitly justify that violence or explicitly deny it. But even actually denying or justifying violence does not cause further violence, and is not morally on the same level as that violence.
> There is a reason hate speech is outlawed in almost every country (including the USA; although USA is pretty lax what it considers hate speech).
There is not such a reason, because the laws you imagine do not actually exist.
https://en.wikipedia.org/wiki/Hate_speech_in_the_United_Stat...
American law does not attempt to define "hate speech", nor does it outlaw such. What it does do is fail to extend constitutional protection to speech that would incite "imminent lawless action" — which in turn allows state-level law to be passed, but generally that law doesn't reference hatred either.
https://en.wikipedia.org/wiki/Brandenburg_v._Ohio
Even in Canada, the Criminal Code doesn't attempt to define "hatred", and such laws are subject to balancing tests.
https://en.wikipedia.org/wiki/Hate_speech_laws_in_Canada
> The Pallywood conspiracy theory is exactly the kind of genocidal hate speech I am talking about. This conspiracy theory has been thoroughly debunked
Even after looking this up, I don't see anything that looks like a single unified claim that could be objectively falsified. I agree that "conspiracy theory" is a fair term to describe the general sorts of claims made, but expecting the label "conspiracy theory" to function as an argument by itself is not logically valid — since actual conspiracies have been proven before.
I don’t follow. Cultural Marxism is an anti-Semitic conspiracy theory which has inspired terrorist attacks, see e.g. Anders Behring Breivik, or the Charlottesville riots. Greedy bankers is not a conspiracy theory, but a simple observation of accumulation of wealth under capitalism. Terrorists targeting minorities very frequently use Cultural Marxism to justify their atrocities. “Greedy bankers” are used during protests, or political violence against individuals or institutions at worst. There is a fundamental difference here; if you fail to spot the difference, I don’t know what to tell you, and honestly I fear you might be operating under some serious misinformation about the spread of anti-Semitism among the far-right.
As for Pallywood, it is a conspiracy theory which states that many of the atrocities done by the IDF in Gaza are staged by the Palestinian victims of the Gaza genocide. There have been numerous allegations about widespread staging operations, but so far there is zero proof of any of these allegations. It is safe to say that the people who believe in this conspiracy theory do so because of racist beliefs about Palestinians, not because they have been convinced by evidence. And just like Cultural Marxism, the Pallywood conspiracy theory has been used to justify serious attacks and the deaths of many people, but unlike Cultural Marxism, the perpetrators of these attacks are almost exclusively confined to the IDF.
By the way, Shaun Maguire has 5 tweets where he uses the term directly (all from 2023), but he uses the term indirectly a lot. And just like with Cultural Marxism, citing the conspiracy theory—even if you don’t name it directly—is still hate speech. E.g. when the White Nationalists at the Charlottesville riots were chanting “Jews will not replace us!” they were citing the White Replacement conspiracy theory (as well as Cultural Marxism) and engaging in hate speech, which directly led to the murder of Heather Heyer.
And to hammer the point home (and to bring the conversation back to the topic at hand), I seriously doubt the Zed team would have accepted VC funding from an investor affiliated with an open supporter of Anders Behring Breivik or the Charlottesville rioters.
No, it isn't. I've observed people to espouse it without any reference to Judaism whatsoever. (I don't care how Wikipedia tries to portray it, because I know from personal experience that this is not remotely a topic that Wikipedia can be trusted to cover impartially.)
> Greedy bankers is not a conspiracy theory
I didn't say it was. It is, however, commonly a dogwhistle, and even more commonly accused of being a dogwhistle. And people who claim that Jews are overrepresented in XYZ places of power very commonly do get called conspiracy theorists as a result, regardless of any other political positions they may hold.
> Terrorists targeting minorities very frequently use Cultural Marxism to justify their atrocities.
This is literally the first time in 10+ years of discussion of these sorts of "culture war" topics, and my awareness of the term "cultural Marxism", that I have seen this assertion. (But then, I suspect that I would also disagree with you in many ways about who merits the label of "terrorist", and about how that is determined.)
> honestly I fear you might be operating under some serious misinformation about the spread of anti-Semitism among the far-right.
There certainly exist far-rightists who say hateful things about Jews. But they're certainly not the same right-wingers who refuse to describe the actions of Israeli forces as "genocide". There is clearly and obviously not any such "spread"; right-wing sentiment on the conflict is more clearly on Israel's side than ever.
The rest of this is not worth engaging with. You are trying to sell me on an accounting of events that disagrees with my own observations and research, as well as a moral framework that I fundamentally reject.
I should elaborate there. It doesn't actually matter to me what you're trying to establish about the depth of these atrocities (even though I have many more disagreements with you on matters of fact). We have a situation where A accepts money from B, who has a business relationship with C, who demonstrably has said some things about X people that many would consider beyond the pale. Now let's make this hypothetical as bad as possible: let's suppose that every X person in existence has been brutally tortured and murdered under the direct oversight of D, following D's premeditated plans; let's further suppose that C has openly voiced support of D's actions. (Note here that in the actual case, D doesn't even exist.) In such a case, the value of X is completely irrelevant to how I feel about this. C is quite simply not responsible for D's actions, unless it can be established that D would not have acted but for C's encouragement. Meanwhile, A has done absolutely nothing wrong.
That’s the point of a dog whistle. Are people who use (((this))) idiom also not antisemites because they don’t explicitly mention Jews? Also look up Cultural Bolshevism and who used that term.
Sequoia Capital were made aware that one of their partners was a racist Islamophobe; they opted not to do anything about it and allowed him to continue as a partner, so one can only assume that Sequoia is an Islamophobic investor. I personally see people knowingly accepting money from racist Islamophobes as a problem, and I would rather nobody did that.
Yes, you are from exactly the circles that you appear to be from based on your other words here.
In my circles, that reasoning is bluntly rejected. The reductio ad absurdum is starkly apparent: your principle, applied transitively (as it logically must), identifies so many people as irredeemably evil (including within your circles!) that it cannot possibly be reconciled with the observed good in the real world.
And frankly, the way that the term "Nazi" gets thrown around nowadays seems rather offensive to the people who actually had to deal with the real thing.
That does not make such speech genocidal.
It also does not make such speech worse than physical violence.
It also does not make the speech of someone you associate with relevant to your own morality.
Furthermore, if accepting funding in this manner is considered a violation of their CoC, then surely the use of Github is even more of a violation. Why wasn't that brought up earlier instead of not at all?
And finally, ycombinator itself has members of its board who have publicly supported Israel. Why are you still using this site?
Turns out when you try to tar by association, everybody is guilty.
But a fork with a focus on privacy and local-first only needs the lack of those to justify itself. It will have to cut some features that Zed is really proud of, so it's hard to even say this is a rugpull.
What, they're proud of the telemetery?
The fork claims to make everything opt-in and to not default to any specific vendor, and only to remove things that cannot be self-hosted. What proprietary features have to be cut that Zed people are really proud of?
https://github.com/zedless-editor/zedless?tab=readme-ov-file...
As far as I know, the Zed people have open sourced their collab server components (as AGPLv3), at least well enough to self-host. For example, https://github.com/zed-industries/zed/blob/main/docs/src/dev... -- AFAIK it's just https://github.com/livekit/livekit
The AI stuff will happily talk to self-hosted models, or OpenAI API lookalikes.
If there’s one group of people painfully aware of telemetry and AI being pushed everywhere, it’s devs…
Zed for Windows: What's Taking So Long? - https://news.ycombinator.com/item?id=44964366
Sequoia backs Zed - https://news.ycombinator.com/item?id=44961172
Local-first is nice, but I do use the AI tools, so I’m unlikely to use this fork in the near term. I do like the idea behind this, especially no telemetry and no contributor agreements. I wish them the best of luck.
I did happily use Zed for about year before using any of its AI features, so who knows, maybe I’ll get fed up with AI and switch to this eventually.
> Since someone mentioned forking, I suppose I’ll use this opportunity to advertise my fork of Zed: https://github.com/zedless-editor/zed
> I’m gradually removing all the features I deem undesirable: telemetry, auto-updates, proprietary cloud-only AI integrations, reliance on node.js, auto-downloading of language servers, upsells, the sign-in button, etc. I’m also aiming to make some of the cloud-only features self-hostable where it makes sense, e.g. running Zeta edit predictions off of your own llama.cpp or vLLM instance. It’s currently good enough to be my main editor, though I tend to be a bit behind on updates since there is a lot of code churn and my way of modifying the codebase isn’t exactly ideal for avoiding merge conflicts. To that end I’m experimenting with using tree-sitter to automatically apply AST-level edits, which might end up becoming a tool that can build customizable “unshittified” versions of Zed.
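For anyone wondering what “AST-level edits” with tree-sitter might look like, here is a minimal sketch (my illustration, not the fork's actual tooling; assumes the `tree-sitter` and `tree-sitter-rust` crates, whose language-loading API varies by version):

```rust
// Parse a Rust source string and locate call expressions: the kind of node
// an AST-aware patch tool would match on before splicing in replacements.
// Assumed Cargo.toml deps: tree-sitter = "0.22", tree-sitter-rust = "0.21".
use tree_sitter::Parser;

fn main() {
    let mut parser = Parser::new();
    parser
        .set_language(&tree_sitter_rust::language())
        .expect("load Rust grammar");

    let source = r#"fn main() { telemetry::init(); do_work(); }"#;
    let tree = parser.parse(source, None).expect("parse");

    // Depth-first walk; a real tool would rewrite the byte ranges of
    // matched nodes instead of printing them.
    let mut cursor = tree.root_node().walk();
    let mut stack = vec![tree.root_node()];
    while let Some(node) = stack.pop() {
        if node.kind() == "call_expression" {
            println!("call at {:?}: {}", node.byte_range(), &source[node.byte_range()]);
        }
        for child in node.children(&mut cursor) {
            stack.push(child);
        }
    }
}
```

The appeal over maintaining textual patches is that AST matches survive formatting and surrounding-code churn, which is exactly the merge-conflict pain described above.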
When did people start hating node and what do they have against it?
For node.js in general? The language isn't even considered good in the browser, for which it was invented. It is absolutely insane to then try to turn it into a standalone programming language. There are so many better options available, use one of them! Reusing a crappy tool just because it's what you know is a mark of very poor craftsmanship.
You're kidding, right?
I assume that's where a lot of the hate comes from. Note that's not my opinion, just wondering if that might be why.
The fact that the tiny packages are so popular despite their triviality is, to me, solid evidence that simply documenting the warts does not in fact make everything fine.
And I say this as someone who is generally pro having more small-but-not-tiny packages (say, on the order of a few hundred to a few thousand lines) in the Python ecosystem.
Node and these NPM packages represent a large increase in attack surface for a relatively small benefit (namely, prettier is included in Zed so that Zed's settings.json is easier to read and edit) which makes me wonder whether Zed's devs care about security at all.
WinterTC has only recently been chartered in order to make strides towards specifying a unified standard library for the JS ecosystem.
That's all I have to say right now, but I feel it needs to be said. Thank you for doing this.
> Subject to the terms and conditions of this Agreement, You hereby grant to Company, and to recipients of software distributed by Company related hereto, a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute, Your Contributions and such derivative works (the “Contributor License Grant”).
They are allowed to use your contribution in a derivative work under another license and/or sublicense your contribution.
It's technically not copyright reassignment though.
Are you suggesting that devs should be able to burden the original contribution with conditions, like "they can't use my code without permission 5 years later if you relicense"? That's untenable, isn't it?
I don't know how else you would accept external contributions for software without the grant in the CLA. Perhaps I'm not creative enough!
Relevant part:
> 2. Grant of Copyright License. Subject to the terms and conditions of this Agreement, You hereby grant to Company, and to recipients of software distributed by Company related hereto, a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute, Your Contributions and such derivative works (the “Contributor License Grant”). Further, to the extent that You participate in any livestream or other collaborative feedback generating session offered by Company, you hereby consent to use of any content shared by you in connection therewith in accordance with the foregoing Contributor License Grant.
Chrome : Chromium :: Zed : ????
I don’t view Chrome and Chromium as different projects, but primarily as different builds of the same project. I feel like this will (eventually) go the same way.
I went ahead with VS Code; I had to spend 2 hours to make it look like Zed with configs, but I was able to roll out an extension in JavaScript much faster, and VS Code has a lot of APIs available for extensions to consume.
It's fair because those people contributed to the codebase you're seeing. Someone can't fork a repo, make a couple commits, and then have GitHub show them as the sole contributor.
https://github.com/zedless-editor/zed/pulls?q=is%3Apr+is%3Ac...
I even thought of calling it zim (zed-improved.. like vim). Glad to see the project!
“Yes! Now I can have shortcuts to run and debug tests. Ever since snippets were added, Zed has all of the features I could ask for in an editor.”
Where I think it gets really interesting is that they are adding features to compete with Slack. Imagine a tight integration between Slack huddles and VS Code's collaborative editing. Since it's from scratch, it's much nicer than both. I'm really excited about it.
It also didn't start out as a competitor to either.
More like a spiritual successor to Atom, at least per the people that started it who came from that project.
Running a text editor should not be this hard, it's pretty ridiculous. Sublime Text is plenty fast without this nonsense.
As soon as any dev tool gets VC backing there should be an open source alternative to alleviate the inevitable platform decay (or enshittification for lack of a better word)
This is a better outcome for everyone.
Some of us just want a good editor for free.
Sums up the problem neatly. Everyone wants everything for free. Someone has to pay the developers. Sometimes things align (there is indeed a discussion on LinkedIn about Apple hiring the OPA devs today), but mostly it doesn't.
Agreed. Although nobody ever mentions the 1,100+ developers that submitted PRs to Zed.
And yeah. I know what you mean. But this is the other side of the OSS coin. You accept free work from outside developers, and it will inevitably get forked because of an issue. But from my perspective, it's a great thing for the community. We're all standing on the shoulders of giants here.
(Opt-in telemetry is much more reasonable, if it's clear what they're doing with it.)
Options to disable crash reports and anonymous usage info are presented prominently when Zed is first opened, and can of course be configured in settings too.
I am deeply disappointed in how often I encounter social pressure, condescending comments, license terms, dark patterns, confidentiality assurances, anonymization claims, and linguistic gymnastics trying to either convince me otherwise or publicly discredit me for pointing it out. No amount of those things will change the fact that it is spyware, but they do make the world an even worse place than the spyware itself does, and they do make clear that the people behind them are hostile actors.
No, we will not stop calling it what it is.
The first line of the README
> Welcome to Zed, a high-performance, multiplayer code editor from the creators of Atom and Tree-sitter.
The second line of the README (with links to download & package manager instructions omitted)
> Installation
> On macOS and Linux you can download Zed directly or install Zed via your local package manager.
I do not dispute that HN is an echo chamber. But how did you come to your conclusions?