- No ad-hoc polymorphism (apart from function overloading, IIRC) means no standard way of defining how things work. There are not many conventions in place yet, so you won't know if your library supports e.g. JSON deserialization for its types
- Coupled with the lack of macros, this means you have to implement even the most basic functionality, like JSON (de)serialization, yourself - even for the stdlib's and the most popular libraries' structs (a sketch of what that looks like follows this list)
- When looking into how to access the file system, I learned the stdlib does not provide fs access, as the API couldn't be shared between the JS and Erlang targets. The most popular fs package for the Erlang target didn't look high quality at all. Something so basic and important.
- This made me realise that, in contrast to Elixir, which not only runs on the BEAM ("Erlang") but also has seamless Erlang interop, Gleam doesn't have access to most of the Erlang/Elixir ecosystem out of the box.
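Here is roughly what that per-type boilerplate looks like. A minimal sketch, assuming the gleam_json package and the stdlib's gleam/dynamic/decode API; the User type is made up:

import gleam/dynamic/decode
import gleam/json

pub type User {
  User(name: String, age: Int)
}

// Without ad-hoc polymorphism or macros, a decoder like this has to
// be written by hand for every type you want to deserialise.
fn user_decoder() -> decode.Decoder(User) {
  use name <- decode.field("name", decode.string)
  use age <- decode.field("age", decode.int)
  decode.success(User(name: name, age: age))
}

pub fn main() {
  // Prints Ok(User("Lucy", 3)) on success, otherwise a decode error.
  echo json.parse("{\"name\":\"Lucy\",\"age\":3}", user_decoder())
}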
There are many things I liked: the algebraic data types, the Result and Option types, pattern matching with destructuring. That made me realize what I really want is Rust. My path leads to Rust, I guess.
Gleam has access to the entire ecosystem out of the box, because all languages on the BEAM interoperate with one another. For example, here's a function inside the module for gleam_otp's static supervisor:
@external(erlang, "supervisor", "start_link")
fn erlang_start_link(
  module: Atom,
  args: #(ErlangStartFlags, List(ErlangChildSpec)),
) -> Result(Pid, Dynamic)
As another example, I chose a package[0] at random that implements bindings to the Elixir package blake2[1].

// Both externals bind the same Elixir function name; the BEAM picks
// the right one by arity (2 vs 3 arguments).
@external(erlang, "Elixir.Blake2", "hash2b")
pub fn hash2b(message m: BitArray, output_size output_size: Int) -> BitArray

@external(erlang, "Elixir.Blake2", "hash2b")
pub fn hash2b_secret(
  message m: BitArray,
  output_size output_size: Int,
  secret_key secret_key: BitArray,
) -> BitArray
It's ok if you don't vibe with Gleam – no ad-hoc poly and no macros are usually dealbreakers for certain types of developer – but it's wrong to say you can't lean on the wider BEAM ecosystem!

[0]: https://github.com/sisou/nimiq_gleam/blob/main/gblake2/src/g...
Hayleigh, when I asked on the discord about how to solve my JSON problem in order to get structured logging working, you replied that I’m the first one to ask about this.
Now reading this:

> It's ok if you don't vibe with Gleam – no ad-hoc poly and no macros are usually dealbreakers for certain types of developer

certainly makes it feel even more like gatekeeping.
As for the @external annotations, I think you're both right to a degree. Perhaps we can all agree to say: Gleam can use most libraries from Erlang/Elixir, but requires some minimal type-annotated FFI bindings to do so (otherwise it couldn't claim to be a type-safe language).
Why do you feel like a gatekeeper? Your opinion is valid, it's just that the interop statement was wrong.
It would be different if I didn't have to write bindings and Gleam integrated automatically with foreign APIs. For Erlang that's probably not possible, but for the JavaScript ecosystem it could maybe make use of TypeScript signatures (it would be very hard, though).
In Gleam, you first have to declare the function type and THEN you can call the function directly.
This is probably the lightest way you can bridge between statically and dynamically typed languages, but it's not the same as Elixir.
The runtime behaviour and cost of calling an Erlang function is the same in Elixir and Gleam, however the syntax is more verbose in Gleam as it asks for type information, while in Elixir this is optional.
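For illustration, the whole ceremony is one annotation plus a signature. A minimal sketch, binding Erlang's lists:reverse/1:

// Declare the type of the Erlang function once...
@external(erlang, "lists", "reverse")
fn reverse(list: List(a)) -> List(a)

// ...then call it like any other Gleam function.
pub fn main() {
  echo reverse([1, 2, 3])
}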
https://cs-syd.eu/posts/2023-08-25-ad-hoc-polymorphism-erode...
But at least ad hoc polymorphism lets you search for all instances of that business logic easily.
For JSON deserialisation, you would declare a module-type called "JSON-deserialiser", and you would define a bunch of modules of that module-type.
The unusual thing is that a JSON-deserialiser would no longer be tied to a type (read: type, not module-type). Types in ML-like languages don't have any structure at all. I suppose you can now define many different JSON-deserialisers for the same type?
That’s true for Elixir as practiced, but it’s the wrong conclusion for Gleam.
Elixir doesn’t care about ad-hoc polymorphism because in Elixir it’s a runtime convention, not a compile-time guarantee. Protocols don’t give you universal quantification, exhaustiveness, coherence, or refactoring safety. Missing cases become production crashes, not compiler errors. So teams sensibly avoid building architecture on top of them.
In a statically typed language, ad-hoc polymorphism is a different beast entirely. It’s one of the primary ways you encode abstraction safely. The compiler enforces that implementations exist, pushes back on missing cases, and lets you refactor without widening everything into explicit pattern matches.
That’s exactly why people who like static types do care about it.
Pointing to Elixir community norms and concluding “nobody cares” is mixing up ecosystem habits with language design. Elixir doesn’t reward those abstractions, so people don’t use them. Gleam is explicitly targeting people who want the compiler to carry more of the burden.
If Gleam is “Elixir with types,” fine, lack of ad-hoc polymorphism is consistent. If it’s “a serious statically typed language on the BEAM,” then the absence is a real limitation, not bikeshedding.
Static types aren’t about catching typos. They’re about moving failure from runtime to compile time. Ad-hoc polymorphism is one of the main tools for doing that without collapsing everything into concrete types.
That’s why the criticism exists, regardless of how Elixir codebases look today.
Both overcome it by admitting they don't know and need to learn.
But most of all I think the overall simplicity of the language is really what’s standing out to me. So far I think the lack of ad-hoc poly and macros are a plus - it really reduces the impulse to write “magical” code, or code with lots of indirections. In the past I’ve definitely been guilty of over-abstracting things, and I’m really trying to keep things as simple as possible now. Though I’ve yet to try Gleam with a large project - maybe I’ll miss the abstractions as project complexity increases.
I suspect Gleam will be a great language for small to medium sized projects written with LLM assistance (NOT vibecoded) - the small language, strong typing and immutability give good guardrails for LLM-generated code, and encourage a simple, direct style of programming where a human programmer can keep the whole structure in their head. Letting an LLM run free and not understanding what it's written is, I think, where projects run into big problems.
But take a look at nested callback code, the pyramid of doom, and you see why it's pragmatically necessary. It's a brilliant design that incorporates just enough metaprogramming magic to make it ergonomic. The LSP even lets you convert back and forth between nested callback style and `use`, so you can strip away the magic in one code action if you need to unravel it.
But use is nicer to avoid callback hell and all the indentations/scoping.
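For anyone who hasn't seen it, here's a sketch of the two styles side by side (the three Result-returning helpers are made-up stubs):

import gleam/io
import gleam/result

fn get_username() { Ok("lucy") }
fn get_password() { Ok("hunter2") }
fn log_in(username, _password) { Ok("Hello, " <> username) }

// The pyramid of doom: each step nests another callback.
pub fn nested() {
  result.try(get_username(), fn(username) {
    result.try(get_password(), fn(password) {
      result.map(log_in(username, password), fn(greeting) {
        io.println(greeting)
      })
    })
  })
}

// The same logic with `use`: each callback becomes one flat line.
pub fn flat() {
  use username <- result.try(get_username())
  use password <- result.try(get_password())
  use greeting <- result.map(log_in(username, password))
  io.println(greeting)
}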
Nothing interesting being created in F#
As much as I had high hopes for F#, I think it's safe at this point not to pursue it any further.
.NET is C#.
If you want an OCaml-like language that is not OCaml, your best bet is ReScript. That being said, ReScript is probably more of a competitor to Gleam, since Gleam also has JavaScript as a target.
It's confusing too. Is Gleam suitable for distributed computing like Elixir/Erlang on BEAM? Would that answer change if I compile it to JS?
I find this kind of explicit separation very powerful. It also removes some of the anxiety about whether something will end up in a client bundle when it's supposed to be server-only.
My main friction point is that the Int type maps to different concepts in Erlang and JS.
In Erlang it's an arbitrary-precision integer.
In JS it's the JS number type, which is a 64-bit float, IIRC.
Also, recursion can hit limits way sooner in JS.
For me, my code rarely ran in both JS and Erlang. But that could be a skill issue.
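A quick sketch that shows the divergence:

pub fn main() {
  // 2^53 + 1: exact on the Erlang target, where Int is an
  // arbitrary-precision integer; on the JavaScript target Int is a
  // 64-bit float, so this rounds to 9007199254740992.
  echo 9_007_199_254_740_993
}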
Pick the target that makes sense for your project and stick with it :)
If you compile it to JS, then the guarantees change to JS's guarantees.
Personally I've felt that the JS target is a big plus and hasn't detracted from Gleam. Writing a full stack app with both sides in Gleam and sharing common code is something I've enjoyed a lot. The most visible impact is that there are no target-specific functions in the stdlib or the language itself, so Erlang-related things live in gleam_erlang and gleam_otp, and e.g. filesystem access is a package instead of being in the stdlib. If you're just into Erlang, you don't need to interact with the JS target at all.
Of course I can't say if anyone ever made any decisions based on the other target that would have repercussions for me only using the BEAM.
Unfortunately, there are many tests for the server, and none for the client.
It's definitely something they should figure out.
Honestly, and I realize that this might get me a bit of flack here and that’s obviously fine, but I find type systems start losing utility with distributed applications. Ultimately everything being sent over the wire is just bits. The wire doesn’t care about monads or integers or characters or strings or functors, just 1’s and 0’s, and ultimately I feel like imposing a type system can often get in the way more than it helps. There’s so much weirdness and uncertainty associated with stuff going over the wire, and pretty types often don’t really capture that.
I haven’t tried Gleam yet, and I will give it a go, and it’s entirely possible it will change my opinion on this, so I am willing to have my mind changed.
So it’s hard to see how types get in the way instead of being the ultimate toolset for shaping distributed communication protocols.
Types naively used can fall apart pretty easily. Suppose you have some data being sent in three chunks. Suppose you get chunk 1 and chunk 3 but chunk 2 arrives corrupted for whatever reason. What do you do? Do you reject the entire object since it doesn’t conform to the type spec? Maybe you do, maybe you don’t, or maybe you structure the type around it to handle that.
But let's dissect that last suggestion; suppose I do modify the type to encode that. Suddenly pretty much every field more or less just becomes Maybe/Optional. Once everything is Optional, you don't really have a "type" anymore, you have a runtime check of the type everywhere. This isn't radically different from regular dynamic typing.
There are more elaborate type systems that do encode these things better like session types, and I should clarify that I don’t think that those get in the way. I just think that stuff like the C type system or HM type systems stop being useful, because these type systems don’t have the best way to encode the non-determinism of distributed stuff.
You can of course ameliorate this somewhat with higher level protocols like HTTP, and once you get to that level types do map pretty well and you should use them. I just have mixed feelings for low-level network stuff.
Of course it’s different. You have a type that accurately reflects your domain/data model. Doing that helps to ensure you know to implement the necessary runtime checks, correctly. It can also help you avoid implementing a lot of superfluous runtime checks for conditions you don’t expect to handle (and to treat those conditions as invariant violations instead).
For things that are invariants, that’s also trivial to check against with `if(!isValid(obj)) throw Error`.
The type is a way to communicate (to the compiler, to other devs, to future you) that those are the expected invariants.
The check for invariants is trivial as you say. The value of types is in expressing what those invariants are in the first place.
I like HM type systems a lot. I've given talks on type systems, and I was working on trying to extend type systems to deal with these particular problems in grad school. This isn't meant to be a statement on types entirely. I am arguing that most type systems don't encode a lot of the uncertainty that you find when going over the network.
For ensuring bits don't get lost, you use protocols like TCP. For ensuring they don't silently flip on you, you use ECC.
Complaining that static types don't guard you against lost packets and bit flips is missing the point.
Again, it’s not that I have an issue with static types “not protecting you”, I am saying that you have to encode for this uncertainty regardless of the language you use. The way you typically encode for that uncertainty is to use an algebraic data type like Maybe or Optional. Checking against a Maybe for every field ends up being the same checks you would be doing with a dynamic language.
I don’t really feel the need to list out my full resume, but I do think it is very likely that I understand type systems better than you do.
Sure thing. Unless dev forgets to do (some of) these checks, or some code downstream changes and upstream checks become gibberish or insufficient.
Gleam is still new to me, but my experience writing parsers in Haskell and handling error cases succinctly through functors was such a pleasant departure from my experiences in languages that lack typeclasses, higher-kinded types, and the abstractions they allow.
The program flowed happily through my Eithers until it encountered an error, at which point that was raised with a nice summary.
Part of that was GHC extensions, though they could easily be translated into boilerplate, and that only had to be done once per class.
Gleam will likely never live up to that level of programmer joy; what excites me is that it's trying to bring some of it to the BEAM.
It’s more than likely your knowledge of type systems far exceeds mine—I’m frankly not the theory type. My love for them comes from having written code both ways, in C, Python, Lisp, and Haskell. Haskell’s types were such a boon, and it’s not the HM inference at all.
IMO the right approach is just to parse everything into a known type at the point of ingress, and from there you can just deal with your language's native data structures.
Once you add distribution you have to encode for the fact that the network is terrible.
You absolutely can parse at ingress, but then there are issues with that. If the data you got is 3/4 good but one field is corrupted, do you reject everything? Sometimes, but often probably not; network calls are too expensive, so you encode that into the type with a Maybe. But of course any field could be corrupt, so you have to encode lots of fields as Maybes. Suddenly you have reinvented dynamic typing, but it's LARPing as a static type system.
I get what you're saying, but can't you have the same issue if instead you have 3 local threads that you need to get the objects from? One can throw an exception and you only receive 2: same problem.
When you have to deal with large amounts of uncertainty, static types often reduce to a bunch of optionals, forcing you to null check every field. This is what you end up having to do with dynamic typing as well.
I don’t think types buy you much in cases with extreme uncertainty, and I think they create noise as a result.
It’s a potentially similar issue with threads as well, especially if you’re not sharing data between them, which has similar issues as a distributed app.
A difference is that it’s much cheaper to do retries within a single process compared to doing it over a network, so if something gets borked locally then a retry is (comparatively) free.
On one end, you write / generate / assume a deserialiser that checks whether incoming data satisfies all required invariants, e.g. all fields are present. On the other end, you specify a type that has all the required fields in the required format.
If deserialisation fails to satisfy the type's requirements, it produces an error, which you can handle by e.g. falling back to a different type, rejecting the operation, or re-requesting the data.
If deserialisation doesn't fail – hooray, now you don't have to worry about uncertainty.
The important thing here is that uncertainty is contained in a very specific place. It's an uncertainty barrier, if you wish: before it there's raw data, after it it's either an error or valid data.
If you don't have a strict barrier like that – every place in the program has to deal with uncertainty.
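In Gleam terms, the barrier is a single case expression at the edge. A fragment sketch: json.parse comes from the gleam_json package, while raw_body, order_decoder, handle and respond_bad_request are hypothetical:

case json.parse(raw_body, order_decoder()) {
  // Past this point the data is known-good; no Maybes sprinkled
  // through the rest of the program.
  Ok(order) -> handle(order)
  // The uncertainty is contained here, at the boundary.
  Error(_) -> respond_bad_request()
}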
So it's not necessarily about dynamic / static. It's about being able to set barriers that narrow down uncertainty and contain the growing number of assumptions. The good thing about an ergonomic type system is that it allows you to offload these assumptions from your mind by encoding them in the types and letting the compiler worry about them.
It's basically automation of assumption book-keeping.
What the hell is really the alternative here? Do you just pretend your process can accept any kind of data, and just never do anything with it??
If you need an integer and you get a string, you just don't work. This has nothing to do with types. There's no solution here, it's just no thank you, error, panic, 500.
Actually Gleam somewhat shares this view, it doesn't pretend that you can do typesafe distributed message passing (and it doesn't fall into the decades-running trap of trying to solve this). Distributed computing in Gleam would involve handling dynamic messages the same way handling any other response from outside the system is done.
This is a bit more boilerplate-y, but IMO it's preferable to the other two options of pretending it's type-safe or not existing.
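A sketch of what that looks like, assuming the stdlib's gleam/dynamic/decode API (the expected message shape is made up):

import gleam/dynamic.{type Dynamic}
import gleam/dynamic/decode

// A message from another node arrives as Dynamic: decode it
// explicitly, like any other input from outside the system.
pub fn handle_message(message: Dynamic) -> String {
  case decode.run(message, decode.string) {
    Ok(text) -> text
    Error(_) -> "unexpected message"
  }
}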
I might give it a look this weekend.
The "it's only bits" thing makes no sense in the world of types. In the end it's machine code, which humans never (in practice) write or read.
Within a single computer that’s easy because a single computer is generally well behaved and you’re not going to lose data and so yeah your type assumptions hold.
When you add distribution you cannot make as many assumptions, and as such you encode that into the type with a bunch of optionals. Once you have gotten everything into optionals, you’re effectively doing the same checks you’d be doing with a dynamic language everywhere anyway.
I feel like at that point, the types stop buying you very much, and your code doesn’t end up looking or acting significantly different than the equivalent dynamic code, and at that point I feel like the types are just noise.
I like HM type systems, I have written many applications in Haskell and Rust and F#, so it’s not like I think type systems are bad in general or anything. I just don’t think HM type systems encode this kind of uncertainty nicely.
Gleam has no `interpreted` story, right? Something like Clojure, Common Lisp, etc. I think this matters because debugging on the BEAM is not THAT great; there are tools in Erlang/Elixir to facilitate debugging, like inspect() or dbg().
If anyone has experience in this language, what is the mindset with gleam? How you guys debug?
There is the echo keyword now, which is comparable to Elixir's dbg(); I use it a lot.
Lacking a REPL, what I normally do is make a dev module, like 'dev/playground.gleam' where I'm testing things out (this is something that the gleam compiler supports, /dev is similar to /test) and then run it with 'gleam run -m playground'.
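For example, a minimal playground module might look like this (a sketch; the contents are whatever you're poking at):

// dev/playground.gleam, run with `gleam run -m playground`
pub fn main() {
  let fish = ["Lucy", "Nemo"]
  // `echo` prints the value (with its source location) and returns
  // it, much like Elixir's dbg().
  echo fish
}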
Sometimes I also use the Erlang shell. You can get an Erlang shell with all the gleam modules from your project loaded in with the 'gleam shell' command. You just need to know the Erlang syntax, and how Gleam modules are named when compiled to Erlang (they use an '@' separator, so gleam/json becomes 'gleam@json').
Unfortunately there is not yet a plugin for the BEAM debuggers for them to use Gleam syntax.
Despite the suspicion, Gleam provides a better, more elegant syntax for those who are not familiar with Erlang or functional programming languages, which is what I loved most.
Can I do networking? Can I do system calls to my OS? Display graphics and sound? Can I import a C library that will do all that and call its functions? And if so, how? I just can’t see it from any documentation. Yes, I can call functions from other BEAM-based languages, but then I’m going in circles.
The Erlang docs on this are here: https://www.erlang.org/doc/system/overview.html
It always ends the same way, always.
A set of "rights" comes from current law.
And Code of Law is an invention like everything else.
"You are welcome to our community only if your hair is long and you drive a yellow car, if you're not then you're not welcome." is pretty insane IMO.
We'll see if this will be a solid community I guess.
It is a totally insane way to run a project, and it's quite obvious the Gleam community is run by persons who are unable to handle people with opinions that differ from their own.
It does sound like a great way to build an echo chamber.
Regardless, that's irrelevant to the discussion at hand. You don't have to change your beliefs to be part of the Gleam community, you just have to not be an asshole about them. If you're the kind of person that starts an argument any time BLM is mentioned, is it understandable why they wouldn't want you in their community?
There we go again. I think that you people would gain if you'd read what does it mean if something is objective and also the meaning of "subjective". Either you live in social bubbles or you're intentionally ignoring anything that's not in line with your ideology. I'm actually not surprised this is the case, since you're not allowing people outside of your bubbles in your community spaces. You cherish diversity, yet in reality you're the most ideologically closed social group that I know. Letting people in only if it's easy for you is very far from "welcoming".
You're accusing the parent of being an asshole, yet you dismiss his arguments based on false "objectivity". And because of your "objectivity", which is clearly subjective, you reinforce the argument that parent is not welcome in your community. How is this welcoming?
I mean I know the answer.
Regarding your accusation of subjectivity, I was addressing the misnomer that BLMGN is equivalent to the BLM movement - it's not. BLMGN might be a grift for all I know, but that cannot be used to call the entire BLM movement a grift. By definition a grifter is aware that they're grifting, do you believe that every BLM protest was organised by someone looking to make a profit? If not, BLM is objectively not a grift.
On the topic of being 'welcoming', clearly you don't understand the paradox of tolerance. Is it intolerant to exclude Nazis from a community? Obviously not, despite what the Nazis would claim, because Nazis make the communities they're involved in intolerable to anyone that's not a Nazi.
Thus, if you want to create an inclusive community, you have no choice but to exclude certain groups of people.
It's actually pretty simple to figure out which groups should be excluded:
- Transphobes are constantly imposing their beliefs on trans people; trans people want equal rights.
- White supremacists are constantly imposing their beliefs on black people; black people want equal rights.
- Homophobes are constantly imposing their beliefs on gay people; gay people want equal rights.
Do I need to continue?
To be clear, I barely interact with these "safe" communities - pretty much only when I need some help with my code. It's very easy to hide your beliefs if you want to participate, I could be a raging homophobe for all they know because I've never talked about gay people in there.
You say that they don't tolerate anyone outside their bubbles, but anyone is free to join and start getting support; there's no purity test. So do you mean they don't tolerate people questioning trans rights in a support channel? Because obviously they don't. If you want to start an argument there are plenty of appropriate places to do so, places that don't make people feel unsafe.
Yet you say that objectively it's not a grift because there is at least 1 instance of a non-grifting event in Ω. Even Kenneth Copeland _sometimes_ is right about _something_, can we say that objectively he's not a liar because _there was at least 1 instance of him telling the truth_? I think not.
Also, you people use this word, "nazi", but do you actually know what Nazis are? German National Socialists. Even the name "Nazi" is taken from the German language. So if you ask me "what is wrong with disliking nazis", yet you use some artificial and historically wrong definition of a Nazi, then I'm telling you that the problem is you people using "Nazis" for others who disagree with you. I probably am a Nazi in someone's eyes, because I'm opposed to trans women participating in women's sports. In reality, my grandfather fought the actual Nazis, who existed in the real world, not in your imagination.
> Thus, if you want to create an inclusive community, you have no choice but to exclude certain groups of people.
Yeah, this is how I understand it as well. People want easy inclusivity, a mono-themed style of thinking, and diversity only within their own strictly defined boundaries. I interpret this as a contradiction and a lie: diversity among selected groups is not true diversity, and inclusivity limited to chosen pools is not genuine inclusivity. For me there is absolutely no difference between this and a situation where whites stick to whites, blacks stick to blacks, etc.
If something is marked as "LGBTQ+ friendly" then I'm all fine -- it's very understandable and I know what I'm dealing with. But if something is "inclusive" then I automatically know I'm not in the target audience, because the sole definition of "include" is already loaded. The language already contains words with different meanings. For me this means "we're so closed, we even use our own definition of 'inclusion' to not think about the outside world".
If the section was phrased as “We are LGBTQ+ friendly and do not tolerate transphobia or racism” that would feel more welcoming to you?
At the end of the day, it’s a programming language community. If you join and ask a question about how to call functions from Erlang, you’ll definitely get an answer. If you join and bring up your feelings about trans women in sports you’ll most likely be asked to stop or removed, as it’s just not a space where that kind of discussion is welcome.
But why should it be?
I agree, that would be very silly. I don't think you can compare not tolerating racists to mandating a particular car colour.
It depends on your definition of racism I guess.
I wouldn't be so sure. In leftist projects there are countless examples of someone not being welcome based on their personal beliefs, and because of this I'm cautiously suspicious about Gleam.
If I'm wrong then the Internet is a better place than I think it is, which would be a good thing. If I'm right, then at least I'll dodge another bullet. Either way I win I guess.
For Elixir, I saw a simple distributed job scheduler: it was dead simple in code and got ripped out because it hadn't required maintenance for ~8 years, just working without issue, and the people who knew anything about it had left the company or moved to another part of it and acted as if they'd forgotten everything.
The other example is a medium-sized (in terms of features and code) web app: maintained by <30 people now, delivering more than 800 people do at the other company, with no stress, no issues, and great DX because of the BEAM (the other company is drowning in JVM-based nano-services).
- Have to deploy product XYZ (because we don't write everything from scratch)
- Need to extend said product
- Use one of the official SDKs, because we aren't yak shaving for new platforms
Thus that is how we end up using the languages we kind of complain about.
To be fair, languages like Elixir and Gleam exist because too many people complain about Erlang, which I, with my Prolog background, see no issues with.
When I joined an Erlang project I also had some aha moments with the syntax and how stuff is structured, and I found Elixir much nicer to work with (without any real Ruby experience). I don't want to say Erlang is not modern enough, but some things felt like they were around half the work (and more enjoyable) with some Elixir libraries (vastly bigger ecosystem than pure Erlang), for example handling XML.
It might be a bit simplistic, but I don't think you really lose anything meaningful when using Gleam or Elixir over pure Erlang. Just like you don't lose anything when using Clojure or Kotlin over pure Java.
Others use Erlang and Elixir quite effectively and successfully in several billion dollar businesses apart from nerd aspects. It will be interesting to me personally if Gleam also has its day in the sun.
The facts about Gleam:
1. It runs on the BEAM - exceptionally slow compared to Go, but infinitely scalable by default in a way that Go is not - in practice, this very rarely matters.
2. They will argue the slowness doesn't matter -> if ~97% of time is spent waiting on I/O -> you can be 10x slower and that means you're only ~30% slower overall for typical applications -> it's easier to scale out to more machines on the BEAM than it is to scale up a single machine -> this is true, but largely irrelevant in Go's core market -> it's almost as if Go was built by smart people.
3. The reality is that predictability is much harder to guarantee once you start moving components to different machines. Correct, predictable distributed computing makes correct, predictable concurrent programming look easy.
4. The BEAM does not allow shared memory, Go does (unsafely). There are many cases where the performance impact of this is night and day (why Go ultimately allowed unsafety).
I assume Gleam claims to make this just work. But as someone working in this space, this seems like trying to abstract away the difference between taking a boat or a plane to Europe.
Gleam may be nice if you're building something for the BEAM (massively scalable single app that just makes sense with the actor model, typically chat / telecom).
Though I question why you would use it over Elixir.
Go's syntax kind of blows, but it is so INCREDIBLY good at what it does, that you are not going to beat Go by just having better syntax and being "infinitely scalable" by default.
In practice, Go is easily scalable enough for almost anyone. If it isn't, congrats: you're a $10B+ company. You can afford to rearchitect and optimize your hot paths.
Here's some webserver benchmarks that cover a handful of popular languages: https://stressgrid.com/blog/webserver_benchmark/
I believe BEAM got a JIT compiler built into the runtime not too long ago (after that post, iirc), so it might perform a bit better now.
I'm paying keen attention to Gleam to see if it can provide a robust development experience in this way, in the longer term.
I am more excited about making things than fetishizing some language paradigms, so I acknowledge that Gleam just isn't for me. It did give me the insight that, for me, it might be best to stick with the common denominator languages for the foreseeable future.
to effectively critique a language you must understand the design trade-offs made
although the playground is a much gentler introduction than installing gleam+erlang+rebar3
import gleam/io

pub fn main() {
  io.println("hello, friend!")
}

becomes this Raku:

say "hello, friend!"

Well, maybe you really want to have a main() so you can pass in a name from the command line:

#!/usr/bin/env raku

sub MAIN($name) {
  say "hello, $name!"
}

I'd always thought it would be a Go-like thing where they put the mascot away for everything except the minor hero section or buried in the footer.
RIP Perl.
You know the take-aways from the comparison are quite instructive:
- do I need to import the io lib? (shouldn't this just be included)
- do I need a main() in every script? (does this rule out one liners like `> raku -e "say 'hi'"`)
- is `io.println` quite an awkward way to spell `print`?
I am not making the case that these are right or wrong language design decisions, but I do think that they are instructive of the goals of the designers. In the case of raku its "batteries included" and a push for "baby raku" to be as gentle on new coders as eg. Python.
In comparison with Gleam, I would be more interested to see how good Raku is at helping the programmer prevent errors through static analysis, how easy it is to build with concurrency, how much value the language puts into being easy to understand and reason about, and whether it can run on the server as well as compile to JS.
I have no negative predisposition, I don't really care about the history of Perl or whatever. I have looked at Raku before, but I find the syntax very foreign, and it seems to (maybe optionally?) incorporate glyphs that I can't easily type with a keyboard.
I love the butterfly though, so I'd love to get to know the language more.
Yes that would be me! If you like making these comparisons, can you write the following pattern matching in Raku?
import gleam/io

pub type Fish {
  Starfish(name: String, favourite_colour: String)
  Jellyfish(name: String, jiggly: Bool)
}

pub fn main() {
  handle_fish(Starfish("Lucy", "Pink"))
}

fn handle_fish(fish: Fish) {
  case fish {
    Starfish(_, favourite_colour) -> io.println(favourite_colour)
    Jellyfish(name, ..) -> io.println(name)
  }
}

role Fish { has Str $.name }

class Starfish does Fish { has Str $.favourite-colour; }
class Jellyfish does Fish { has Bool $.jiggly }

sub handle-fish(Fish $fish) {
  given $fish {
    when Starfish { say .favourite-colour }
    when Jellyfish { say .name }
  }
}

handle-fish Starfish.new: :name("Lucy"), :favourite-colour("Pink");
I would probably reach for multi-dispatch...

role Fish { has Str $.name }

class Starfish does Fish { has Str $.favourite-colour; }
class Jellyfish does Fish { has Bool $.jiggly }

multi sub handle-fish(Starfish $fish) { say $fish.favourite-colour }
multi sub handle-fish(Jellyfish $fish) { say $fish.name }

handle-fish Starfish.new: :name("Lucy"), :favourite-colour("Pink");

my @promises;

sub MAIN() {
  # Run loads of green threads, no problem
  for ^200_000 {
    spawn-greeter($++);
  }
  await Promise.allof(@promises);
}

sub spawn-greeter($i) {
  @promises.push: start {
    say "Hello from $i";
  }
}