I’m building ÆTHRA — a programming language designed specifically for composing music and emotional soundscapes.
Rather than aiming to be general-purpose, ÆTHRA is a pure DSL where code directly represents musical intent: tempo, mood, chords, progressions, dynamics, and instruments.
The goal is to make music composition feel closer to writing a story or emotion, rather than manipulating low-level audio APIs.
Key ideas:
- Text-based music composition
- Chords and progressions as first-class concepts
- Time, tempo, and structure handled by the language
- Designed for ambient, cinematic, emotional, and minimal music
- Interpreter written in C# (.NET)
Example ÆTHRA code (simplified):
    tempo 60
    instrument guitar

    chord Am for 4
    chord F for 4
    chord C for 4
    chord G for 4
This generates a slow, melancholic progression suitable for ambient or cinematic scenes.
ÆTHRA currently:
- Generates WAV audio
- Supports notes, chords, tempo, duration, velocity
- Uses a simple interpreter (no external DAWs or MIDI tools)
- Is intentionally minimal and readable
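To make "generates WAV audio with a simple interpreter, no external DAWs or MIDI tools" a bit more concrete, here is a rough sketch of the idea in Python. The real interpreter is written in C# and structured differently; the chord table, function names, and numbers below are purely illustrative, not ÆTHRA's actual code. Each chord becomes a sum of sine waves held for the given number of beats, and the samples are written straight to a 16-bit WAV file:

    import math
    import struct
    import wave

    SAMPLE_RATE = 44100

    # Chord tones as raw frequencies in Hz (equal temperament),
    # picked by hand here purely for illustration.
    CHORDS = {
        "Am": [220.00, 261.63, 329.63],   # A3, C4, E4
        "F":  [174.61, 220.00, 261.63],   # F3, A3, C4
        "C":  [130.81, 164.81, 196.00],   # C3, E3, G3
        "G":  [196.00, 246.94, 293.66],   # G3, B3, D4
    }

    def render(progression, tempo_bpm, path):
        """Render a list of (chord_name, beats) pairs to a 16-bit mono WAV file."""
        seconds_per_beat = 60.0 / tempo_bpm
        samples = []
        for chord_name, beats in progression:
            freqs = CHORDS[chord_name]
            n_samples = int(beats * seconds_per_beat * SAMPLE_RATE)
            for i in range(n_samples):
                t = i / SAMPLE_RATE
                # One sine per chord tone, averaged and scaled down
                # so the mix stays well below full scale (no clipping).
                value = sum(math.sin(2 * math.pi * f * t) for f in freqs) / len(freqs)
                samples.append(int(value * 0.5 * 32767))
        with wave.open(path, "wb") as wav:
            wav.setnchannels(1)            # mono
            wav.setsampwidth(2)            # 16-bit PCM
            wav.setframerate(SAMPLE_RATE)
            wav.writeframes(struct.pack("<%dh" % len(samples), *samples))

    # Rough equivalent of the example above: tempo 60, each chord held 4 beats.
    render([("Am", 4), ("F", 4), ("C", 4), ("G", 4)], tempo_bpm=60, path="progression.wav")

The point is just the shape of the pipeline: parsed statements turn directly into samples, with no DAW or MIDI step in between.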
What it is NOT:
- Not a DAW replacement
- Not MIDI-focused
Why I made it: I wanted a language where music is the primary output — not an afterthought. Something between code, emotion, and sound design.
The project is open-source and early-stage (v0.8). I’m mainly looking for:
- Feedback on the language design
- Ideas for musical features worth adding
- Thoughts from people into PL design, audio, or generative art
Repo: <https://github.com/TanmayCzax/AETHRA>
Thanks for reading — happy to answer questions or discuss ideas.
From your README’s philosophy section: “You describe what you want to feel — ÆTHRA handles how it sounds.” But the rest of the documentation doesn’t yet feel aligned to that vision. The closest you get is when you describe your example chord progression as melancholic, and even there you, as the composer, already knew that this particular progression evokes the feeling you have in mind.
I love the idea of a high-level way to programmatically or idiomatically describe how music should feel, especially how the composition should evolve over time (perhaps even in surprising ways that go beyond current tools). I hope that as you progress you’re able to find innovative ways to build toward that vision.
The current feature set feels like it would be considerably more convenient in a GUI environment. Again, I hope that as you continue to build, it becomes more obvious why this is a language and not a visual synthesis/composition tool.
A little audio output demo would go a very long way toward getting me interested in playing around with this.
Good luck!
[0] https://strudel.cc/#Ci8vICJQeXJhbWlkIFNvbmcgKFJhdyBBYnN0cmFj...
E.g. Csound