On the other hand, the fact that this is even possible is wilder still. Instead of replacing JS with a proper statically-typed language, we're spending all this effort turning a preprocessor's type system into a Turing-complete metalanguage. Pretty soon we'll be able to compile TypeScript entirely using types.
And luckily, the most complex types are usually limited to and contained within library type definitions. They add a lot of value for library users, who usually don't have to deal with that level of complexity.
We still have a long way to go in figuring out how to make our type systems easy enough to use that this stuff doesn't surprise people anymore (because it shouldn't! identifier manipulation should be table stakes, and yet...)
[0]: modulo soundness of course! Though I don't think that's intrinsic to the expressiveness
Imagine if WASM were supported natively instead, with browsers exposing the same DOM interfaces that they do to JS. You could link a wasm binary in a <script> and do everything you can with JS/TS, but with any language of your choosing. No doubt a compiled form of TS would appear immediately. We'd no longer need separate runtime type checking.
Just feels like priorities are in the wrong place.
TypeScript wasn't created separately from JavaScript and then chose JavaScript as a backend. TypeScript exists only to perform build-time type checking of JavaScript. There wouldn't be a TypeScript that compiled to something else, because other languages already have their own type systems.
Runtime type-checking isn't part of TypeScript because 1) it isn't part of JavaScript, and TypeScript doesn't add runtime features anymore; 2) it'd be very expensive even for simple types; and 3) complex types would be prohibitively expensive, as you'd have to both reify the types and perform deep structural checking.
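To make point 3 concrete, here's a hand-written sketch of what deep structural checking entails even for a small type (the `Item` shape is invented purely for illustration):

```
// Hypothetical shape used only for illustration.
interface Item {
  id: string;
  tags: string[];
}

// A reified runtime check can't just inspect a type tag: it has to walk
// the entire value, recursing into every field and array element.
function isItem(value: unknown): value is Item {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.id === "string" &&
    Array.isArray(v.tags) &&
    v.tags.every((tag) => typeof tag === "string")
  );
}
```

Every extra field multiplies the work, and generic or recursive types make it worse still.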
WASM is also natively supported, and with newer extensions like reference types and GC, we're getting closer to the point where a DOM API could be defined. It'll still be a long while, but that's the long-term direction it's heading in. But even then, you would only see a TypeScript-to-WASM compiler[1] because there's already so much TypeScript out there, not because TypeScript is a particularly good language for that environment. A more static language would be a lot better for a WASM target.
[1]: Porffor is already such a compiler for JS and TS, but it does not do runtime type-checking: https://porffor.dev/
```
public static void Main() {
    Document.Body.Append(new Div("hello world"));
}
```
and be able to use it in a page like <script src="hello.wasm"></script>
and have that just work without any JS "glue code". Maybe someday. I know they're working on the DOM APIs, but as you said, it's been slow going. Feels like priorities are elsewhere. Even CSS is moving forward with new features faster than WASM is (nesting and view transitions are awesome, though).

(Btw, when I said "separate runtime type checking" I didn't mean language-level; I was referring to the validation libraries and `typeof` checks that are required today, since TS types obviously no longer exist after build. If it were a real static language, then of course you couldn't store a bool in a string in the first place.)
[0]: https://www.assemblyscript.org/ (Porffor looks neat too. Wonder if it could be useful in plugin architectures? E.g. plugins could be written in JS, and the program would only need a WASM interpreter. I'll bookmark it. Thanks.)
As someone who's written a lot of Typescript in fairly large projects: in practice this isn't really an issue if you
1. ban casting and 'any' via eslint,
2. use something like io-ts at HTTP API/storage boundaries to validate data coming into and out of your system, without the risk of a validator/type mismatch (see the sketch below).
But you have to have total buy in from everyone, and be willing to sit down with new devs and explain why casting is bad, and how they can avoid needing that eslint suppression they just added to the codebase. It certainly would be easier if it just wasn't possible to bypass the type system like this.
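As a minimal sketch of point 2, assuming io-ts and fp-ts are installed (the `User` shape and `parseUser` helper are invented for illustration):

```
import * as t from "io-ts";
import { isLeft } from "fp-ts/Either";

// The runtime validator and the static type are derived from the same
// definition, so they can never drift apart.
const User = t.type({
  id: t.string,
  age: t.number,
});
type User = t.TypeOf<typeof User>;

// At an HTTP boundary: decode instead of casting.
function parseUser(rawBody: string): User {
  const result = User.decode(JSON.parse(rawBody));
  if (isLeft(result)) {
    throw new Error("Invalid payload at the boundary");
  }
  return result.right; // fully typed from here on, no `as` needed
}
```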
Even though it works 99% of the time, just like in TS you can occasionally run into a bug because some misbehaving library handed you a null that it said can't be a null...
Even inside the Typescript rules, `as` is a ridiculously dangerous timebomb.
Typescript is 100% about "convenience" and write-lots-of-code-now style of productivity, ~0% about safety or long-term maintainability.
Grepping a real-world codebase for things that would require `unsafe` in Rust:
event as CustomEvent<T>
const errorEvent = event as ErrorEvent;
const element = getByRole("textbox");
expect(element).toBeInstanceOf(HTMLInputElement);
const input = element as HTMLInputElement;
const element = parent.firstElementChild as HTMLElement;
type ItemMap = Map<Item["id"], Item>;
...
new Map() as ItemMap
const clusterSource = this.map.getSource(sourceName) as GeoJSONSource;
[K in keyof T as T[K] extends Fn ? K : never]: T[K];
target[type] as unknown as Fn<...
export const Foo = [1,2,3] as const;
and on it goes. Typescript normalizes unsafe behavior.

It's on you to ensure that you don't misuse `as`. If I could choose between current TS and a "safer" one that's less expressive in complex cases, I'd choose the current one any day of the week.
Sane languages have a downcast mechanism that doesn't pretend it succeeds every time.
Also weird that Typescript has exactly the mechanism you're talking about. Why are you acting like it doesn't?
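For reference, that mechanism is narrowing via runtime checks; a minimal sketch:

```
// `instanceof` narrows the type only in the branch where the check
// actually succeeded; unlike `as`, it cannot silently lie about the value.
function handle(event: Event) {
  if (event instanceof ErrorEvent) {
    console.error(event.message); // narrowed to ErrorEvent here
  } else {
    console.log(event.type); // still just Event
  }
}
```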
What are some examples of the timeless mistakes in those programs? I think X was a pretty good effort; it's just that it essentially ossified and has been left behind compared to some more modern systems (although I'm using it right now). Sendmail's approach to dynamic configuration was sub-optimal. But these aren't examples of mistakes that I see recapitulated often.
Not every configuration system is as bad as the m4 nightmare that sendmail used, and I understand nothing really better was feasible in the prelapsarian or Stone Age days of its implementation. But I worked - fought - with sendmail for years and, as in the book, I also remain mildly surprised that Allman continues to perambulate.

Most such things in my later professional experience differ by degree, not kind. YAML is not as bad as what was typically perpetrated in Perl days, but it does too much and too little, and all its fiddly rules give me headaches. JSON is awful and what we're basically stuck with, because even though it's so simple it's almost useless, at least it's simple. XML is much better than it gets credit for, but nobody likes it because most programmers seem to regard the need to use a keyboard as an imposition, and I assume also have frequent nightmares featuring lots of pointy angle brackets.

(I use Emacs because I don't hate myself, and I wish more people had the sense to keep things as simple as Emacs Lisp typically is.)
I don't want to talk about X. Wayland has been about 60% mistakes by volume, and I like too many of the people who made it too well to be anything other than sad about that.
Re sendmail, when were you working with that? My reaction was just to look at it, say "nope", and use Exim instead. Perhaps the most instructive lesson here is the importance of good choices when it comes to selecting systems to depend on.
Other than that, I'm not sure what the lesson is in "people collectively decided to depend on one of the worst alternatives available." We still see that today with programming languages.
There's nothing really wrong with YAML, except perhaps the way some people use it. I classify that as "skill issue". I work with Kubernetes regularly, and its YAML usage is fine.
Something similar applies to JSON. If it's so terrible, what's better? With JSON Schema and OpenAPI, it's feature-comparable to XML now.
The problem with XML is its completely unnecessary verbosity outweighs its usefulness. I can only assume it was designed by ex-mainframe people who, unlike me, actually yearned for a return to the overengineered environments they were used to. It's no surprise that JSON and YAML edged out XML.
Emacs Lisp is an abomination. Sure, Lisp has its place historically - I had a spirited discussion with John McCarthy about that at a Lisp conference in the 2000s. I'll just mention two words: dynamic scoping. They took decades to even figure out a solution to the funarg problem, and that still didn't really fix the language. Luckily Guy Steele came along and noticed that Church had solved that problem before computers were even invented.
Oh, only until about 2005. Other options on the table included Postfix and qmail; when I reached a point where burgeoning trust in my engineering judgment coincided with time to replace the oldest production boxes, we commenced to switch to Postfix, primarily because administering that yielded me the lowest Excedrin bill.
Anything with genuine numerical precision would be better than JSON, is what. I appreciate this is an open-ended suggestion with no implementation offered, but if I have to spend one more mortal minute bikeshedding bignum representations in strings, I won't be held responsible for my actions. Indeed just the thought reminds me part of my purpose in this time apart from labor is to decide whether indeed I will train as a lawyer, where I understand time similarly spent is billable in six-minute increments.
I wish I'd been a fly on the wall for your discussion with McCarthy, as perhaps I also wish you could have been for a very spirited chat I had with Stallman around 2016 on the merits and externalities of his and FSF's philosophical approach. I appreciate you taking the time of such a detailed and thoughtful reply, which I confide I'll later revisit and find benefit beyond that already apparent. Enjoy your day!
A fun read / Video...
```
const schema = proto`
  syntax = "proto3";
  message Person { ... }
`;

type Person = typeof schema['Person'];
```
And you could get built-in schema validation with a sophisticated enough type definition for `proto`, plus nice syntax highlighting in many tools via a nested grammar.

We would love to see this feature in TypeScript, to be able to have type-safe templates in lit-html without an external tool.
The issue hasn't seen much activity lately, but it would be good to highlight this library as another use case.
Ultimately I decided ts-simple-type was too difficult to maintain, so now I just use the TypeScript compiler API directly to introspect types and emit stuff, but most of that code is private to Notion Labs Inc.
"No compile/no codegen" sounds nice until you get slow compile times because a type system is slow VM, the error messages are so confusing it's hard to tell what's going on, and there's no debugging tools.
My hat's off to the author - I attempted something like this for a toy regex engine and got nowhere fast. This is a much better illustration of what I thought _should_ be possible, but I couldn't quite wrap my head around the use of the ternary operator to resolve types.
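For anyone else stuck on the same point: the "ternary" is TypeScript's conditional-type syntax, which together with `infer` and recursion is what makes type-level parsers possible. A minimal sketch (the `TrimLeft` name is just illustrative):

```
// A conditional type reads like a ternary, but the compiler resolves it:
// "if S matches the pattern, recurse on the rest; otherwise return S".
type TrimLeft<S extends string> =
  S extends ` ${infer Rest}` ? TrimLeft<Rest> : S;

type T = TrimLeft<"   hello">; // resolves to "hello"
```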
Probably better to just stick with codegen
That said, depending on how your codegen works and how you're using protos at runtime, this approach might actually be faster at runtime. Types are stripped at compile-time and there’s no generated class or constructor logic — in the compiled output, you're left with plain JS objects which potentially avoids the serialization or class overhead that some proto codegen tools introduce.
(FWIW, type inference in VSCode seemed reasonably fast with the toy examples I was playing with)
If your codegen is introducing runtime overhead you should use a different codegen.
> type inference in VSCode seemed reasonably fast with the toy examples I was playing with
It usually is. It can become a problem in a real project that has a lot of stuff going on, though.
Also, been building something in a different space (LeetCode prep tool), but the idea of removing build steps for dev speed really resonates. Would love to see how this could plug into a lightweight frontend setup.
```
declare module '*?raw' {
  const rawFileContent: string
  export default rawFileContent
}
```
Then, when I add that file to the `types` array of my tsconfig's `compilerOptions`, I can import anything I want into a TypeScript file as a string, so long as I add "?raw" to the end of it. I use it to inject HTML and CSS into templates. No reason it couldn't be used to inject a .proto file's contents into the inline template.
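For example, assuming a bundler such as Vite that implements the `?raw` suffix (the file name here is invented):

```
// The ambient `*?raw` declaration types this import as `string`;
// the bundler is what actually inlines the file's contents at build time.
import personProto from './person.proto?raw';

console.log(personProto.length);
```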
Again, you're technically correct! But an "import non-JS content" feature is a pretty solvable problem in TS. Maybe not at the language level, but at the implementation level, at least.
It would be awfully silly to do this at runtime because typescript doesn't exist at runtime, which is sort of the whole point of the library.
The reply with the typescript definition for ?raw is unrelated to this project and would neither solve the issue presently nor address it in the future.

But if you implemented it in your bundler, it absolutely solves this problem exactly as described, because the imported file can have whatever boilerplate you want around it (like `as const`). This is something that exists and is usable today.
2. Typescript runs before your bundler and has no idea what the bundler is doing, so any transformations you do there are invisible to it.
You keep telling me this is possible today, at this point I'd ask you to please prove it, because if it's true then that's very exciting and I have a bunch of use cases for it.
Might as well do code generation at that point, it'd even be debuggable.
Also, I hope you expected me to read that output in the same cadence as the Hooli focus groups, because that's exactly what I did.
I wonder if the author has a use case in mind for this that I don't see. If you're only using TS, what's the point of protobuf? And if you're exchanging data with programs written in other languages, why avoid the protobuf tooling that you need anyway?
Maybe this is just a fun toy project to write a parser in TS?