I'm not so sure. The difference between a self-hosted compiler and a circular interpreter is that the compiler has a binary artifact that you can store.
With an interpreter, you still need some binary to run your interpreter, which will probably be CPython, making the new interpreter redundant. And if you add a language feature to the custom interpreter, and you want to use that feature in the interpreter itself, you need to run the whole chain at runtime: CPython -> Old Interpreter That Understands New Feature -> New Interpreter That Uses New Feature -> Target Program. And the chain only gets longer, with each iteration exponentially slower.
Meanwhile with a self-hosted compiler, each iteration is "cached" in the form of a compiled binary. The chain exists only in the history of the binary, not as part of the runtime.
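To make the "exponentially slower" point concrete, here's a back-of-the-envelope sketch. The 10x-per-layer factor is a made-up illustrative number, not a measurement:

```python
# Hypothetical model: each nested interpreter layer multiplies runtime
# by a constant slowdown factor, so an n-deep interpreter tower costs
# factor**n. A self-hosted compiler collapses the tower, because each
# generation is cached as a binary and the program runs at depth 0.
SLOWDOWN_PER_LAYER = 10  # made-up factor for illustration

def runtime_cost(layers, base_cost=1.0):
    """Relative cost of running a program under `layers` nested interpreters."""
    return base_cost * SLOWDOWN_PER_LAYER ** layers

# Compiled binary: the chain lives in the binary's history, not at runtime.
print(runtime_cost(0))  # 1.0

# CPython -> old interpreter -> new interpreter -> target program:
print(runtime_cost(3))  # 1000.0
```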
---
Edit since this is now a top comment: I'm not complaining about the project! Interpreters are cool, and this is genuinely useful for learning and experimentation. It's also nice to demystify our tools.
It's about more than just syntax or individual language features. For example, RPython provides classes, but only very limited multiple inheritance; all the MRO stuff is implemented in RPython for PyPy itself.
I.e. PyPy DOESN'T have an interpreter written in an interpreted language.
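To make the point concrete, this is the kind of MRO behavior that has to be implemented somewhere below the interpreted language: full C3 linearization with diamond-shaped multiple inheritance, shown here as observed from ordinary Python:

```python
# Diamond inheritance: D inherits from B and C, which both inherit from A.
# Python resolves method lookup order via C3 linearization -- machinery
# that, in PyPy's case, is implemented in RPython rather than in Python.
class A: pass
class B(A): pass
class C(A): pass
class D(B, C): pass

print([cls.__name__ for cls in D.__mro__])  # ['D', 'B', 'C', 'A', 'object']
```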
> That's a partial evaluator, not an interpreter, and it converts an interpreter into compiler, which are different things.
https://old.reddit.com/r/Compilers/comments/1sm90x5/retrofit...
Reading the comments, my understanding is that, transitively, weval turns interpreters into compilers, effectively allowing interpreters to generate machine code.
What's your goal here, to let everyone know that interpreters, definitionally, don't generate code? This isn't debate club.
I dropped a cool link that shows we have a machine that turns interpreters into compilers. I am talking about the machine. You are talking about the definition. We aren't talking about the same thing.
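For anyone curious what "a machine that turns interpreters into compilers" means: this is the first Futamura projection, specializing an interpreter to a fixed program so that what remains behaves like a compiled version of that program. A toy sketch (closure capture stands in for genuine specialization here; a real partial evaluator like weval unrolls the interpreter loop and emits residual code):

```python
# Tiny interpreter for a made-up mini-language: a list of
# ('add', n) / ('mul', n) operations applied to an input x.
def interpret(program, x):
    for op, n in program:
        if op == 'add':
            x += n
        elif op == 'mul':
            x *= n
    return x

# "Partial evaluation" by hand: fix the program argument, leaving a
# residual function of the input only -- the first Futamura projection.
def specialize(program):
    def compiled(x):
        return interpret(program, x)
    return compiled

prog = [('add', 2), ('mul', 3)]
compiled = specialize(prog)
print(compiled(5))  # (5 + 2) * 3 -> 21
```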
I think because Python is a stack-based interpreter, this is a really great way to get some exposure to how it works if you're not too familiar with C. A nice project!
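You can see the stack-based design directly with the standard-library `dis` module, which prints the bytecode the CPython VM executes (exact opcode names vary by Python version):

```python
# CPython's VM is stack-based: operands are pushed onto an evaluation
# stack and opcodes consume them. `dis` shows this for a simple function.
import dis

def add(a, b):
    return a + b

dis.dis(add)
# Typical output (version-dependent): LOAD_FAST pushes each argument,
# BINARY_ADD (BINARY_OP on 3.11+) pops two values and pushes the sum,
# RETURN_VALUE pops and returns the result.
```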
Perl is transformed into an AST, which is then decorated into an opcode tree. The thing runs code nearly as fast as C in many instances, once startup has completed and the code is actually running.
cf: https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...
eval(str)
```python
from openai import OpenAI
import sys

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": f"generate valid python byte code this program compiles to: {sys.argv[1]}",
    }],
)
print(response.choices[0].message.content)
```
Actually, probably not better.
It makes me sad that I have to write C to make any meaningful changes to Python. Same goes for Ruby. Rubinius was such a nice project.
Hacking on schemes and lisps made me realize how much more fun it is when the language is implemented in the language itself. It also makes sure you have the right abstractions for solving a bunch of real problems.
Shedskin is very nearly Python-compatible; one could say it is an implementation of Python.
What do you mean by that? I'm not familiar with PyPy
It lags behind CPython in features and currently only supports Python versions up to 3.11. There was a big discussion a month ago: https://news.ycombinator.com/item?id=47293415
But you can help! https://pypy.org/howtohelp.html
So it can just run under CPython? If so, then that isn't too misleading.
The text is based on Python 3.5, which was released in 2015.
other discussions:
https://news.ycombinator.com/item?id=16795049