From the article: """ Agents typically have a number of shared characteristics when they start to scale (read: have actual users):
They are long-running — anywhere from seconds to minutes to hours.
Each execution is expensive — not just the LLM calls, but the nature of the agent is to replace something that would typically require a human operator. Development environments, browser infrastructure, large document processing — these all cost $$$.
They often involve input from a user (or another agent!) at some point in their execution cycle.
They spend a lot of time awaiting i/o or a human.
"""No. 1 doesn't really point to one language over another, and all the rest show that execution speed and server-side efficiency is not very relevant. People ask agents a question and do something else while the agent works. If the agent takes a couple seconds longer because you've written it in Python, I doubt that anyone would care (in the majority of cases at least).
I'd argue Python is a better fit for agents, mostly because of the mountain of AI-related libraries and support that it has.
> Contrast this with Python: library developers need to think about asyncio, multithreading, multiprocessing, eventlet, gevent, and some other patterns...
Agents aren't that hard to make work, and you can get most of the use (and paying users) without optimizing every last thing. And besides, the mountain of support you have for whatever workflow you're building means that someone has probably already tried building at least part of what you're working on, so you don't have to go in blind.
(I think you can effectively write an agent in any language and I think Javascript is probably the most popular choice. Now, generating code, regardless of whether it's an agent or a CLI tool or a server --- there, I think Go and LLM have a particularly nice chemistry.)
Take, for example, being able to easily swap models: in Python it's trivial with litellm. In niche languages you're lucky to even have an official, well-maintained SDK.
I agree that integration with the separate LLMs / agents can and does accelerate initial development. But once you write the integration tooling in your language of choice -- likely a few weeks' worth of work -- then it will all come down to competing on good orchestration.
Your parent poster is right: languages like Erlang / Elixir or Golang (or maybe Rust as well) are better-equipped.
You can safely swap out agents without redeploying the application, the concurrency is way below the scale BEAM was built for, and creating stateful or ephemeral agents is incredibly easy.
My plan is to set up a base agent in Python, Typescript, and Rust using MCP servers to allow users to write more complex agents in their preferred programming language too.
My original thought was to spin up SQLite databases as needed because they are super lightweight, well-tested, and supported by almost every programming language. If you want to set up an agent in another programming language via MCP, but you still want to be able to access the agent memory directly, you can use the same schema in a SQLite database.
I may end up using mnesia for more metadata or system-oriented data storage though. It's very well designed imo.
But one of the biggest reasons has just been the really nice integration with DuckDB. I can query all of the SQLite databases persisted in a directory and aggregate some metadata really easily.
In my experience the performance of the language runtime rarely matters.
If there ever was a language feature that matters for agent performance and scale, it's actually the performance of JSON serialization and deserialization.
"arguably".
TypeScript is just a thin wrapper over JavaScript, which doesn't have these types at all.
AGDTs, mapped types, conditional types, template literal types, partial higher-kinded types, and real inference on top of all that.
It had one of the most fully loaded type systems out there while the Go team was still asking for community examples of where generics might be useful, because they weren't sure it would be worth it.
2. let those llms operate asynchronously in parallel
3. when an llm wants to mutate the global state but the state has changed since its checkout, try to safely merge changes using an expensive diff algorithm (which is still cheaper than the llm); on failure, retry
I've found that TypeScript is an excellent glue language for all kinds of AI. Python, followed by TS enjoy broad library support from vendors. I personally prefer it over Python because the type system is much more expressive and mature. Python is rapidly improving though.
> It turns out, cancelling long-running work in Node.js and Python is incredibly difficult for multiple reasons:
Evidence is lacking for this claim. Almost all tools out there support cancellations, and they're mostly either Python or JS.
These platforms store an event history of the functions which have run as part of the same workflow, and automatically replay those when your function gets interrupted.
I imagine synchronizing memory contents at the language level would be much more overhead than synchronizing at the output level.
Adding a durable boundary (via a task queue) in between steps is typically the first step, because you at least get persistence and retries, and for a lot of apps that's enough. It's usually where we recommend people start with Hatchet, since it's just a matter of adding a simple wrapper or declaration on top of the existing code.
Durable execution is often the third evolution of your system (after the first pass with no durability, then adding a durable boundary).
The reason I started working on Hatchet was because I'm a huge advocate of durable execution, but didn't enjoy using Temporal. So we try to make the development experience as good as possible.
On the underlying durable execution layer, it's the exact same core feature set.
It must in my view at least, as that's how Oban (https://github.com/oban-bg/oban) in Elixir models this kind of problem. Full disclosure, I'm an author and maintainer of the project.
It's Elixir specific, but this article emphasizes the importance of async task persistence: https://oban.pro/articles/oban-starts-where-tasks-end
Through the use of both a map that holds a context tree and a database we can purge old sessions and then reconstruct them from the database when needed (for instance an async agent session with user input required).
We also don't have to hold individual objects for the agents/workflows/tools; we just make them stateless in a map and can reference the pointers through an id as needed. Then we have a stateful object that holds the previous actions/steps/"context".
To make sure the agents/workflows are consistent, we can hash the output agent/workflow (as these are serializable in my system).
I have only implemented basic Agent/tools though and the logging/reconstruction/cancellation logic has not actually been done yet.
Edit: Heh, I noticed after writing this that some sibling comments also mention Temporal.
The upside is that agent subtasks can be load balanced among servers, tasks won't be dropped if the process is killed, and better observability comes along with it.
The downside is definitely complexity. I'm having a hard time planning out an architecture that doesn't significantly increase the complexity of my agent code.
This is true regardless of the language. I always cap the work done in a goroutine at a reasonable amount (milliseconds, up to a few seconds at most). Anything more and your web service is not as stateless as it should be.
The death knell for variety in AI languages was when Google rug-pulled Swift for TensorFlow.
-Someone who has written a ton of JS over the past... almost 30 years now.
Yes, today’s ML engineer has practically no choice but to use Python, in a variety of settings, if they want to be able to work with others, access the labor market without it being an uphill battle, and most especially if they want to study AI / ML at a university.
But there were also the choices to initially build out that ecosystem in Python and to always teach AI / ML in Python. They made sense logistically, since universities largely only teach Python, so it was a lowest-common-denominator language that allowed the universities to give AI / ML research opportunities to everyone, with absolutely no gatekeeping and with a steadfast spirit of friendly inclusion (sorry, couldn’t resist the sarcastic tangent). I can’t blame them for working with what they had.
But now that the techniques have grown up and graduated to form multibillion-dollar companies, I’m hopeful that industry will take up the mantle to develop an ecosystem that’s better suited for production and for modern software engineering.
“Like the CPU package, the module is accelerated by the TensorFlow C binary. But the GPU package runs tensor operations on the GPU with CUDA.”
They note that these operations are synchronous, so using them will sacrifice some of JavaScript’s effectiveness at asynchronous event processing. This is not different from Python when you are training or serving a model. JavaScript’s strengths would shine brighter when coordinating agents / building systems that coordinate models.
JS is a terrible language to begin with, and bringing it to the backend was a mistake. TS doesn’t change the fact that the underlying language is still a pile of crap.
So, like many, I’ll write anything—Go, Rust, Python, Ruby, Elixir, F#—before touching JS or TS with a ten-foot pole.
Use whatever language works well for you and the task at hand, but many enjoy fullstack JS/TS.
It's 2025, Node.js has been around since 2009, yet these languages still use C-based interpreters by default, and their non-standard JIT alternatives are still much worse than V8.
Sure, the libs are mostly written in C/C++, but all of them have first-class support for Python and oftentimes Python only. Serving the model is a different story and you can use whatever language to do so.
As someone who has worked in the DS realm for an extended period of time, I can tell you Python has practically zero competition when it comes to data wrangling and training models. There are plenty of contenders when it comes to serving the models or building “agents.”
As for type checking, yeah, it sucks big time. TS is a much better type system than the bolted-on hints in Python. But it’s still JS at the end of the day. All the power of V8, a zillion other runtimes, and TS gets marred by a terribly designed language.
Use whatever you like.
Pouchdb. Hypercore (pear). It’s nice to be able to spin up JS versions of things and have them “just work” in the most widely deployed platform in the world.
TensorflowJS was awesome for years, with things like blazeface, readyplayer me avatars and hallway tile and other models working in realtime at the edge. Before chatgpt was even conceived. What’s your solution, transpile Go into wasm?
Agents can work in people’s browsers as well as node.js around the world. Being inside a browser gives a great sandbox, and it’s private on the person’s own machine too.
This was possible years ago: https://www.youtube.com/watch?v=CpSzT_c7_UI&t=10m30s
I do my best to run as little in the browser as possible. Everything is an order of magnitude simpler and faster to build if you do the bulk of things on a server in a language of your choice and render to the browser as necessary.
Beneath all the jargon, it’s good to remember that an “agent” is ultimately just a bunch of http requests and streams that need to be coordinated—some serially and some concurrently. And while that sounds pretty simple at a high level, there are many subtle details to pay attention to if you want to make this kind of system robust and scalable. Timeouts, retries, cancellation, error handling, thread pools, thread safety, and so on.
This stuff is Go’s bread and butter. It’s exactly what it was designed for. It’s not going to get you an MVP quite as fast as node or python, but as the codebase grows and edge cases accumulate, the advantages of Go become more and more noticeable.
But outside of that - ML in Go is basically impossible. Trying to integrate with the outside ecosystem of Go is really difficult - and my experience has been that Claude Code is far less effective with Go than it is with Python, or even Swift.
I ditched a project I was writing in Go and replaced it with Swift (this was mostly prompt-based anyway). It was remarkable how much better the first pass of the code generation was.
On the other hand, the programming languages used by LLM people seem to be python and javascript mainly.
So while I argue that they all should really move on to modern languages, I think go is still better than the I-can't-even-install-this mess of python and javascript imports without even a Dockerfile that seem to be so prevalent in LLM projects.
Honest question, I am genuinely interested in what cannot be done easily or at all due to limitations of the Go type system.
You can find many articles on the internet about it, but in my experience I would summarize it in:
It looks like it's made to have a simple compiler, not to simplify the programmer's life.
Initially its simplicity is wonderful. Then you start to notice how verbose things are. Channels are another looks-nice-but-maybe-don't feature. nil vs nil-interface. The lack of proper enums hurts so much I can't describe it. I personally hate automatic type conversions, and there are so many inconsistencies in the standard and most-used libraries that you really start to wonder why some things were even done: validators that validate nothing, half-done tagging systems for structs, tons of similar-but-not-quite interfaces and methods.
It's like the language has learning wheels that you can't shake off or work around. You end up wanting to leave for a better one.
People had to beg for years for basic generics and small features. If google is not interested in it, you'd better not be interested in it and it shows after a while.
Companies started to use it as an alternative to C and C++, while in reality it's an alternative to Python. Just like in Python, a lot of the work and warnings are tied into the linter as a clear workaround. Our linter config has something like 70+ linter classes enabled, and we are a very small team.
C can be described as a relatively simple language (with caveats), C++ has grown to a blob that does and has everything, and while they have lots of footguns I did not find the same level of frustration as with go. You always end up fighting a lot of corner cases everywhere.
Wanted to say even more, but I think I ranted enough.
Do you mean sum types? That is not a case of them not being "proper", though. They simply do not exist as a feature at all.
Go's enums function pretty much like enums in every single other language under the sun. If anything, Go enums are more advanced than most languages, allowing things like bit shifts. But at the heart of it all, it's all just the same. Here are enum implementations in both Go and Rust:
[Go] https://github.com/golang/go/blob/f18d046568496dd331657df4ba...
[Rust] https://github.com/rust-lang/rust/blob/40daf23eeb711dadf140b...
Go leans on the enum value produced by `range` to act as the language's enumerator, while Rust performs explicit incrementing to produce the enumerator, but the outcome is no different — effectively nothing more than [n=0, n++]. Which stands to reason, as that's literally, as echoed by the dictionary, what an enum is.
Yes, you can emulate this style of enums by using iota to start a self-incrementing list of integer constants. But that's not what any language (except for C) has ever meant by "enum".
Enums are generally assumed to be type-safe and namespaced. But in Go, they are neither:
type Color int
const (
Red Color = iota
Green
Blue
)
func show(color Color) {
fmt.Printf("State: %v", color)
}
func main() {
show(Red)
show(6)
}
There is no namespacing, no way to — well — enumerate all the members of the enum, no way to convert the enum value to or from a string (without code-generation tools like stringer), and the worst "feature" of all is that enums are just integers that can freely receive incorrect values.

If you want to admire a cool hack that you can show off to your friends, then yeah, iota is a pretty neat trick. But as a language feature it's just an ugly and awkward footgun. Being able to auto-increment powers of two is a very small consolation prize for all of that (and something you can easily achieve in Rust anyway with any[1] number[2] of crates[3]).
[1] https://crates.io/crates/enumflags2
Sure, but now you're getting into the topic of types. Enums produce values. Besides, Go isn't really even intended to be a statically-typed language in the first place. It was explicitly stated when it was released that they wanted it to be like a dynamically-typed language, but with statically-typed performance.
If you want to have an honest conversation, what other dynamically-typed languages support type-safe "enums"?
> But that's not what any language (except for C) has ever meant by "enum".
Except all the others. Why would an enum used when looping over an array have a completely different definition? It wouldn't, of course. Enums are called what they are in a language because they actually use enums in the implementation, as highlighted in both the Go and Rust codebases above.
Many languages couple enums with sum types to greater effect, but certainly not all. C is one, but even Typescript, arguably the most type-intensive language in common use, also went with "raw" enums like Go.
Even without sum types, there is a common pattern of defining a new type and const-defining the possible values that is a clear workaround on the lack of an 'enum' keyword.
Because the compiler can't be sure that those const values are all the possible values of the type, we can't have things like enforced exhaustive switches on this "enum"; that is left to the linter at best.
Default-zero initialization is always valid too, which can leave you with an "enum" value that is not present in the const definitions (not everything starts on iota, iota does not mean 0).
It's a hack, it became a pattern. It still is not a proper (or even basic) enum even without sum types.
It is to the extent that it helps explain what an enum is, and why we call the language feature what we do. Python makes this even more apparent as you explicitly have to call out that you want the enum instead of it always being there like in Go:
for i, v in enumerate(array):
# ...
In case I'm not being clear, an array enumerator like in the above code is not the same as a language enumerator, but an array enumerator (or something similar in concept) is how language enumerators are implemented. That is why language enumerators got the name they did.

> It still is not a proper (or even basic) enum even without sum types.
It most certainly is "proper". In fact, you could argue that most other languages are the ones that are lacking. Go's enums support things like bit shifts, which is unusual in other languages. Perhaps it is those other languages that aren't "proper"?
But, to be sure, it's not sum types. That is certain. If you want sum types you are going to have to look elsewhere. Go made it quite clear from the beginning that it wanted to be a "dynamically-typed language with statically-typed performance", accepting minimal static type capability in order to support the performance need.
There is definitely a place for languages with more advanced type systems, but there are already plenty of them! Many are considerably older than Go; Haskell has decades on it. Go was decidedly created to fill the niche of "Python, but faster", which wasn't well served at the time. Creating another Haskell would have been silly and pointless: just one more addition to the long list of obscure languages serving no purpose.
I thought the main "let's migrate our codebase to Go" crowd had always been from the Java folks, especially the enterprise ones. Any C/C++ code that is performant is about to get a hit, albeit small, from migrating to a GC-based runtime like Go, so I'd think that could be a put off for any critical realtime stuff - where Rust can be a much better target. And, true for both C++ and Java codebases, they also might have to undergo (sic) a major redux at the type/class level.
But yes, the Googlers behind Go were frustrated by C++ compile times, tooling warts, the 0x standard proposal and concurrency control issues - and that was primal for them, as they wanted to write network-server software that was tidy and fast [1]. Java was a secondary (but important) huge beast they wanted to tackle internally, IIRC. Java was then the primary language Googlers were using on the server... Today apparently most of their cloud stuff is written in Go.
There's a lot of software out there that either was written before good modern options existed, or uses very outdated patterns, or its language wasn't chosen with much thought.
If you are interested in the merits of golang, you should listen to someone who uses it.
There's a lot of discussions on the internet about the bad design decisions of Golang (for example around channels, enums, error handling, redeclarations, interfaces, zero values, nilability... at least generics aren't so much a subject anymore)
While you could try to argue that dynamically-typed languages in general are a poor fit for everything, the reality is that people there are typically using Python instead – and the best alternative suggestions beyond Go are Erlang and Elixir, which are also dynamically typed, so that idea doesn't work. Dynamic typing is what clearly fits the problem domain.
But despite all of that, the language has some really good qualities too: interfaces work far better than it feels like they should; the packaging fits together really well (I'm learning Rust right now and the file structure is far more complicated); and people are able to write a lot of linters/codegen tools BECAUSE the language is so simple.
All in all I worry the least about the long term maintenance cost of my Go code, especially compared to my Python or JS code.
There isn't much more you can do with them. Literally all an enum can produce is a number.
In increasingly common use in a number of languages, enums are being coupled with discriminated unions, using the enum value as the discriminant. This is probably what you're really thinking of, noticing Go's lack of unions.
But where you might use a discriminated union if it were available, you would currently use an interface, where the type name, rather than an enum, is what differentiates the different types. If Go were to gain something like a discriminated union, it is likely it would want to extend upon that idea. Most especially given that generics already introduced syntax that would lend itself fairly well to it:
type Foo interface {
Bar | Baz
}
Where enums are used to act as a discriminant in other languages, that is largely just an implementation detail that nobody really cares about. In fact, since you mentioned Rust, there are only 8,000 results on GitHub for `std::mem::discriminant` in Rust code, which is basically nothing. That is quite indicative that wanting to use an enum (in Rust, at least) is a rare edge case. Funny, given how concerned Rust users seem to be about enums. Talk is cheap, I suppose.

public enum Day {
SUNDAY, MONDAY, TUESDAY, WEDNESDAY,
THURSDAY, FRIDAY, SATURDAY
}
To use this enum, you typically declare a variable of type Day, which is a subclass of Enum, itself a subclass of Object, which cannot be cast to or from int. If a variable is typed as Day, then it can only take one of these variants (or null). Even though the Day class does have an ordinal() method, and you can look up the variants by ordinal, you cannot represent Day(7) or Day(-1) in any way, shape, or form. This sealed set of variants is guaranteed by the language and runtime (*). Each variant, like SUNDAY, is an instance of class Day, and not a mere integer. You can attach additional methods to the Day class, and those methods do not need to anticipate any other variants than the ones you define. Indeed, enums are sometimes used with a single variant, typically called INSTANCE, to make true singletons.

(*) There is a caveat here, which is that the sealed set of variants can differ between compile-time (what's in a .java file) and runtime (what's in a .class file), but this only happens when you mismatch your dependency versions. Rather importantly, the resolution of enum variants by the classloader is based on their name and not their ordinal, so even if the runtime class differs from the compile-time source, Day.MONDAY will never be turned into a differently named variant.
Then I am not sure how you think it is an enum? What defines an enum, literally by dictionary definition, is numbering.
It is hilarious to me that when enum is used in the context of looping over an array, everyone understands that it represents the index of the element. But when it comes to an enum in a language, all of a sudden some start to think it is magically something else? But the whole reason it is called an enum is because it is produced by the order index of an AST/other intermediate representation node. The very same thing!
While I haven't looked closely at how Java implements the feature of which you speak, I'd be surprised if it isn't more or less the same as how Rust does it under the hood. As in using a union with an enumerator producing the discriminant. In this case it would be a tag-only union, but that distinction is of little consequence for the purposes of this discussion. That there is an `ordinal` method pretty much confirms that suspicion (and defies your claim).
> you cannot represent Day(7) or Day(-1) in any way, shape, or form.
While that is true, that's a feature of the type system. This is a half-assed attempt at sum types. Enums, on the other hand, are values. An enum is conceptually the same as you manually typing 1, 2, 3, ... as constants, except the compiler generates the numbers for you automatically, which is what is happening in your example. The enum is returned by `ordinal`, like you said. Same as calling std::mem::discriminant in Rust like we already discussed in a sibling thread.
You seem to be trivializing the type system. This property is not imagined solely by the compiler, it is carried through the language and runtime and cannot be violated (outside of bugs or unsafe code). Go has nothing like this.
If you choose to call this "not an enum", that is certainly your idiosyncratic prerogative, but that doesn't make for very interesting discussion. Even though I agree that discriminated unions aren't enums and am somewhat annoyed by Rust's overloading of the term, this is not that.
It strongly suggests that the implementation is a discriminated union, just like Rust's. Again, it is tag-only in this case, where Rust also allows attaching a payload, but that's still a type of discriminated union. That it is a set of integers (contrary to the claim made earlier), combined with your explanation that the compiler type-checks against the union state, reveals that it couldn't be anything other than a discriminated union, effectively identical to what we find in Rust and many other languages these days.
> It can be (and is) simply a field on each Day object, not an index into anything
So...? An enum is not strict about exactly where the number comes from; it simply needs to number something. Indices are convenient, though, and I am not sure why you would use anything else. That doesn't necessarily mean the index will start where you think it should, of course.
For example,
enum Foo { A, B, C }
enum Bar { X, Y, Z }
In some languages, the indices might "reset" for each enum [A=0, B=1, C=2, X=0, Y=1, Z=2], while in other languages it might "count from the top" [A=0, B=1, C=2, X=3, Y=4, Z=5]. But, meaningless differences aside, where else is the number going to come from? Using a random number generator would be silly.

But, humour us, how does Java produce its enums, and why doesn't it use indices for that? Moreover, why did they choose to use the word `ordinal` for the method name when that literally expresses that it is the positional index?
public class Day extends Enum<Day> {
private int _ordinal;
private Day(int ordinal) { this._ordinal = ordinal; }
public int ordinal() { return this._ordinal; }
public static final Day SUNDAY = new Day(0);
// ...
public static final Day SATURDAY = new Day(6);
}
with the added constraint that the Day constructor cannot be invoked by reflection, and the static instances shown herein can be used in a switch statement (which may reduce them to their ordinals to simplify the jump table). Each instance is ultimately a pointer, so yes, it could be pulled from a sort of RNG (the allocator). As I said, they are probably in an array, so it's likely that the addresses of each variant start from some semi-random base but then increase by a fixed amount (the size of a Day object). A variable of type Day stores the pointer, not the ordinal.

Now, it really seems to be in the weeds of pedantry when you start talking about discriminated unions that have only discriminants and no payload. Taking from your examples, the key point is that a Foo is not a Bar and is also not an int. Regardless of whether the variants are distinct or overlapping in their ordinals, they are not interchangeable with each other or with machine-sized integers.
Yes, this echoes what I stated earlier: "An enum is conceptually the same as you manually typing 1, 2, 3, ... as constants, except the compiler generates the numbers for you automatically." Nice to see that your understanding is growing.
> Taking from your examples, the key point is that a Foo is not a Bar.
I'm not sure that's a useful point. Nobody thinks
class Foo {}
class Bar {}
...are treated as being the same in Java, or, well, any language that allows defining types of that nature. That is even the case in Go!

type Foo int
type Bar int
const f Foo = iota
const b Bar = f // compiler error on mismatched types
But what is significant to the discussion about enums is the value that drives the inner union of the class. As in, the numbers that rest beneath SUNDAY, MONDAY, TUESDAY, etc. That's the enum portion.

The values of Day are {SUNDAY, ..., SATURDAY}, not {0, ..., 6}. We can, of course, establish a 1:1 mapping between those two sets, and the API provides a convenient forward mapping through the ordinal method and a somewhat less convenient reverse mapping through the values static method. However, at runtime, instances of Day are pointers, not numbers, and ints outside the range [0, 6] will never be returned by the ordinal method and will cause IndexOutOfBoundsException if used like Day.values()[ordinal].
Tying back to purpose of this thread, Go cannot deliver the same guarantee. Even if we define
type Day int
const (
Sunday Day = iota
// ...
Saturday
)
then we can always construct Day(-1) or Day(7), and we must consider them in a switch statement. It is also trivial to cast to another "enum" type in Go, even if the variant doesn't exist on the other side. This sealed, nonconvertible nature of Java enums makes them "true" enums, which you can call tag-only discriminated unions or whatever you want, but no such thing exists in Go. In fact, it is not even possible to directly adapt the Java approach, since sealed types of any kind, including structs, are impossible thanks to new(T) being allowed for all types T.

It is no secret that Go has a limited type system. In fact, upon release it was explicitly stated that the goal was for it to be a "dynamically-typed language with statically-typed performance", meaning that what limited type system it does have is there only to support the performance goals. You'd have to be completely out to lunch while also living under a rock to think that Go has "advanced" types.
But, as before, enums are values. It is not clear why you want to keep going back to talking about type systems. That is an entirely different subject. It may be an interesting one, but it is off-topic as it pertains to this discussion specifically about enums, and it is especially not useful in the context of Go, which isn't really intended to be a statically-typed language in the first place.
enum Discriminant {
Disc0 = 0,
Disc1 = 1,
…
}
enum Discriminant1 {
Disc0,
Disc1,
}
#[repr(u8)]
enum Discriminant2 {
Disc0 = 10,
Disc1 = 20
}
fn main() {
let d1 = Discriminant1::Disc1;
let d2 = Discriminant2::Disc1;
println!("{:?}", std::mem::discriminant(&d1)); // Value by enumerator.
println!("{:?}", std::mem::discriminant(&d2)); // Value by constant.
}
Which makes the use of the enum keyword particularly bizarre given that there is no longer even an enumerator involved, but I suppose bizarre inconsistencies are par for the course in Rust.
> And because it has been used like that in C for decades, the dictionary definition takes a backseat to the now de-facto C-based definition (at least for popular systems languages, which Rust is trying to share as much syntax with).
Meaning the keyword? Sure, C has the same inconsistency if you disable the enumerator with manual constant values. C is not exactly the paragon of thoughtful design. But whataboutism is a dumb path to go down.
> the dictionary definition takes a backseat to the now de-facto C-based definition
That's clearly not the case, though, as the functionality offered by the Rust enum keyword is very different. It puts absolutely no effort into being anything like C. Instead, it uses enum as the keyword for defining sum types. The C enum keyword, on the other hand, does nothing but define constants, and is functionally identical to what Go has. There is an enum involved in both cases, as demonstrated earlier, so the terminology isn't strictly wrong (in the usual case) but the reason for it existing shares little commonality.
But maybe you've moved on to the concept of enums rather than the syntax, and I didn't notice? You are right that the dictionary definition is in line with the intent of the C keyword, which speaks to the implementation, and is how C, Rust, Go, and every other language out there use the terminology. In another comment I even linked to the implementation in both Go and Rust, and you can see that the implementation is conceptually the same in both cases: https://news.ycombinator.com/item?id=44236666
https://techblog.steelseries.com/golisp/index.html
https://github.com/SteelSeries/golisp
I wonder if they still use it.
Frankly, anything that has a compiler and supports doing asynchronous stuff decently probably does the job, which of course describes a wide range of languages. And since agents inherently involve a lot of (some would say mostly) prompt engineering, it helps if the language is good at things like multi-line strings, templated strings, and just generally manipulating strings.
As for the async stuff, it's nice if a language can do async things. But is that enough? Agentic systems essentially reach out to other systems over the network, and some of the tasks may be long-lived: minutes, hours, or even days. A lot can happen in that much time. IMHO, the model of one system keeping all that state in a long-running process is probably not ideal. We might want something more robust and less dependent on a stateful process running somewhere for days on end.
There is an argument to be made for externalizing related state from the language and maybe using some middleware optimized for this sort of thing. I've seen a few things that go in that direction but not a lot yet. It seems that people are still busy reinventing wheels and not fully realizing yet that a lot of those wheels don't need reinventing. There's a lot of middleware out there that is really great at async job scheduling, processing, fan out, and all the other stuff that people eventually will figure out is needed here.
1. If you make your agents/workflows serializable you can run/load them from a config file or add/remove them from a decoupled frontend. You can also hash them to make versioning easy to track/immutable.
2. If you decouple the stateful object from the agent/workflow object, you can store that state through sufficient logging; then you can rebuild any flow at any state, and have branching by allowing traces to build on one another. You can also restart/rerun a flow starting at any location.
3. You can allow for serializable tools by having a standard HttpRequestTool, then set up Cloudflare Workers or other external endpoints for the actual tool-call logic, removing primary-server load and making it possible to add/remove tools without rebuilding/restarting.
Given this system in Go, you can have a single server which supports tens of thousands of concurrent agent workflows.
The biggest problem is that there aren't that many people working on it. So even if you can make agents 100x more efficient by running in Go, it doesn't really matter if cost isn't the biggest factor for the final implementations.
The actual compute/server/running costs for big AI agent implementation contracts are <1%, so making them 100x more efficient doesn't really matter.
Along these lines, I'm building Dive:
https://github.com/diveagents/dive
When building a SaaS with a Go backend, it's nice to be able to have the option of the agents and workflows being in the same process. And being confident in the ability of that to scale well.
While it's true that Go lacks good ML libraries, for some this isn't too consequential: if your app primarily uses Anthropic or OpenAI, plus a database that offers semantic or hybrid search for RAG, the ML is done elsewhere. And you may be able to leverage MCP servers, at which point you're language agnostic.
Regarding the concurrency-model approach with Go and agents: I initially baked a message-based approach (a la the Actor model, with one goroutine per agent) into Dive Agents, but eventually found that this would be better implemented as another layer. So currently in Dive it's the user's choice how to implement concurrency and whether to use messaging. But I anticipate building that back in as an optional layer.
Some things are just more natural in Python, it being a dynamic language: e.g. decorators to quickly convert methods into tool calls, iterating over tool functions to build a list of tools, packages to quickly convert them into JSON Schema, etc.
Consuming many incoming triggers (e.g. user input, incoming emails from Gmail, or Slack messages that would trigger a new agent run) was a lot more natural in Go, with channels and a for/select loop, than in Python, where I had to create many queues and threads.
> Share memory by communicating
> Centralized cancellation mechanism with context.Context
> Expansive standard library
> Profiling
> Bonus: LLMs are good at writing Go code
I think profiling is probably the lowest value good here, but would be willing to hear out stories of AI middleware applications that found value in that.
Cancelling tasks is probably the highest value good here, but I think the contending runtimes (TS/Python) all prefer using 3P libraries to handle this kind of stuff, so probably not the biggest deal.
Being able to write good Go code is pretty cool though; I don't write enough to make a judgement there.
Good at writing bad code. But most of the code in the wild is written by mid-level devs, without guidance and on short timelines, i.e. bad code. But this is a problem with all languages, not just Go.
The language of agents doesn't matter much in the long run as it's just a thin shell of tool definitions and API calls to the backing LLM.
You need a DSL, either supported in the language or through configuration. These are features you get for free in Python and, secondly, JavaScript. You have to write most of this yourself in Go.
I think I'd condense this out to "this is not a really important deciding factor in what language you choose for your agent". If you know you need something you can only get in Python, you'll write the agent in Python.
This fits LLMs pretty well too it seems!
So every discussion about the "best" programming language is really you telling the world about your favorite language.
Use Go. Use Python. Use JavaScript. Use whatever the hell else you want. They are all good enough for the job. If you are held back it won't be because of the language itself.
But programming languages make tradeoffs on those very paths (particularly spawning child processes and communicating with them, how underlying memory is accessed and modified, garbage collection).
Agents often involve a specific architecture that's useful for a language with powerful concurrency features. These features differentiate the language as you hit scale.
Not every language is equally suited to every task.
The issue with Go, is as soon as you need to do actual machine learning it falls down.
The issue with Python is that you often want concurrency in agents, although this may be solved by Python's new free-threading work.
Why is Rust great? It interops very well with Python, so you can write any concurrent pieces in Rust and simply import them into Python, without needing to sacrifice any ML work.
I'll be honest: Go is a bit of an odd fit in the world of AI, and if that's the future, I'm not sure Go has a big part to play outside of some infra stuff.
LLM researchers care about neither since Rust comes with its own headache: learning curve, slow compilation, weak stdlib, and Go’s FFI story is just sad. It’s still Python or GTFO.
That said, Go is great to whip up “agents” since it’s a nicer language to write networking and glue code, which is what agents are. Other than a few niche groups, I’ve seen a lot more agents written in Go than in Rust.
Agents that don’t do machine learning rarely ever work, that’s the sad truth of the ecosystem.
Dive orchestrates multi agent workflows in Go. Take a look and let me know what you think.
by that logic Elixir is even better for agents.
also the link at the bottom of the page is pretty much why I ditched Go: https://go.dev/blog/error-syntax
The AI landscape moves so fast, and this conservative, backwards looking mindset of the new Go dev team doesn't match the forward looking LLM engineering mindset.
Elixir's lightweight processes and distribution story make it ideal for orchestration, and that includes orchestrating LLMs.
Shameless plug, but that's what many people have been using Oban Pro's Workflows for recently, and something we demonstrated in our "Cascading Workflows" article: https://oban.pro/articles/weaving-stories-with-cascading-wor...
Unlike hatchet, it actually runs locally, in your own application as well.
Erlang possibly even more so. The argument that pure code is generally safer to vibe code is compelling to me. (Elixir's purity is rather complicated to describe, Erlang's much more obvious and clear.) It's easier to analyze that this bit of code doesn't reach out and break something else along the way.
Though it would be nice to have a language popular enough for the LLMs to work well on, that was pure, but that was also fast. At the moment writing in pure code means taking a fairly substantial performance hit, and I'm not talking about the O(n log n) algorithm slowdowns, I mean just normal performance.
Funnily, it's also one of the reasons I stay with Go.
Error handling is the most controversial Go topic, with half the people saying it's terrible and needs new syntax, and half saying it's perfect and adding any more syntax would ruin it.
pretty please :P
we all yearn for a good static language, and most of us would kill for "something like Rust (good type system, syntax, tools) but without ownership / linear-typing - just a good GC, all-on-the-heap and a dash of nice immutable datastructs"...
https://github.com/arthurcolle/agents.erl
I consider myself an expert in this relatively niche domain and welcome follow up, critiques, and even your most challenging problems. I love this area and I think distributed systems are coming back in a big way in this new era!
Elixir is way more productive to write/deal with (Phoenix vs. Erlang templating), maybe, if you're a web dev, but at the end of the day you're dealing with the exact same underlying architecture. If you're a Prolog programmer, Erlang will feel nicer than if you're a Ruby programmer.
I have many packages published as Mix packages, and some published as rebar packages.
Overall, ergonomics definitely feel nicer with Elixir, but I feel like by having it be portrayed as "so different" from Erlang, people don't pull open the Erlang/OTP docs, and don't look at the dozens of behaviors that already exist that usually solve your problem.
Like, why is there a gen stage in Elixir but not in Erlang?
If you wanna use the BEAM, you can use it. If they were more in sync, and provided OOTB in the same distribution, I'd always lean towards Elixir.
Just feels weird that Elixir gets a bunch of street cred for what are fundamentally Erlang/OTP capabilities
gen_stage is just a library. One could write it in Erlang. It's like asking why Broadway is only for Elixir and not Erlang.
It was hard to approach the Erlang docs when I started in Elixir. However, they've moved to an ex_doc format (is it ex_docs?) as a standard and it's so much easier to grok.
I couldn't imagine trying to implement this DSPy library in Erlang, for example
I know what you mean, at the same time I'm thinking we should welcome any momentum from the Elixir community. The more people working with Elixir/Erlang the better. And if you try Elixir at some point you learn about the Elixir background.
But otherwise they are mostly the same: Elixir is just an Erlang reskin.
So pretty much wherever you can use one, you can use the other.
I would have liked it more if they had reskinned it to look more like Haskell. But that's just my preference.
1. Requiring a VM, making deployment more complex.
2. Not being natively compiled, and thus always having a performance ceiling for the inner loops.
After considering both Erlang/Elixir and Go a lot for my scientific workflow manager, I finally went with Go for these exact reasons.
It already does well coordinating IoT networks. It's probably one of the most underestimated systems.
The Elixir community has been working hard to be able to run models directly within BEAM, and recently, have added the capability for running Python directly.
What are they doing with Python on the BEAM these days? I'm OOTL