The OCI approach should mean that resources aren't ringfenced and held separate from the host while the VM is running, which would benefit applications running on the host compared to the winapps approach.
Is there a noticeable performance benefit to using winpodx compared to winapps? How does the idle resource usage compare too?
This looks great though. +1 for choosing Qt instead of Electron, -1 for Python. Otherwise, your approach ticks most of my boxes.
One feature I'd like to see though is reverse file associations: associate Linux filetypes inside the Windows VM so that any file you open in a Windows app gets opened by Linux instead, assuming Linux has a file association for it. Say I've installed Directory Opus in the VM and want to use it as my primary file manager in Linux. If I double-click a .xml file there, I'd like it to open in the Linux app associated with that filetype (Kate, in my case).
On Python: fair pushback. Picked it for stdlib coverage (zero runtime deps on 3.11+, one tomli fallback for 3.9/3.10) and iteration speed. Heavy lifting is in the container and FreeRDP so perf hasn't been the bottleneck, but yeah the language choice is a tradeoff.
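For anyone curious, the "one tomli fallback" pattern for 3.9/3.10 support is roughly this (a generic sketch, not the project's actual code):

```python
try:
    # Python 3.11+ ships a TOML parser in the stdlib
    import tomllib
except ModuleNotFoundError:
    # On 3.9/3.10, fall back to the third-party tomli package,
    # which exposes the same API
    import tomli as tomllib

def load_config(path):
    """Parse a TOML config file into a plain dict."""
    with open(path, "rb") as f:  # tomllib requires binary mode
        return tomllib.load(f)
```

On 3.11+ the `except` branch never runs, so there are zero third-party dependencies there.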
Reverse file association is interesting, hadn't thought about that direction. The v0.3.0 agent could probably handle it but I'd want to look at the security model first. Marking it TBD. If you open an issue with the use case that'd help me scope it.
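To make the idea concrete, the host side could look something like this: the guest agent reports the Windows path the user opened, the host maps it back onto the shared-folder mount and hands it to the desktop's default handler. All names here are illustrative assumptions, not winpodx's actual API:

```python
import shutil
import subprocess
from pathlib import PurePosixPath, PureWindowsPath

# Hypothetical host-side handler for "reverse file associations".
# Assumes the VM sees the shared folder as drive Z: and the host
# has the same folder mounted at /home/user/vm (both made up here).
GUEST_SHARE = PureWindowsPath("Z:/")
HOST_MOUNT = PurePosixPath("/home/user/vm")

def guest_to_host(win_path: str) -> str:
    """Map a path inside the guest share to its host-side equivalent."""
    rel = PureWindowsPath(win_path).relative_to(GUEST_SHARE)
    return str(HOST_MOUNT.joinpath(*rel.parts))

def open_on_host(win_path: str) -> None:
    """Open a guest-reported file with the host's associated app."""
    if shutil.which("xdg-open") is None:
        raise RuntimeError("xdg-open not found on host")
    subprocess.run(["xdg-open", guest_to_host(win_path)], check=True)
```

The hard parts (which is why the security model needs a look first) are validating that the reported path really lives inside the share and deciding which guest events are allowed to trigger opens on the host.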
Can we take this to mean that GPU passthrough is planned? This would be huge especially for running Adobe/Canva software.
Is Grammarly considered AI or not? I use Grammarly heavily because I use speech recognition in a stream of consciousness mode. It catches misrecognitions and language where I thought the right word but said a different one.
The paragraph below is the one above after a pass through Grammarly, plus a couple of substitutions I wrote myself.
Is Grammarly considered AI or not? I use Grammarly heavily because I use Aqua speech recognition in a stream-of-consciousness mode. It catches misrecognitions and language where I thought the right word but said the wrong one.
If the writing ends up being the same as what you would produce if you carefully edited it yourself, it will be well received.
If it shows any signs of being machine-generated rather than human-authored, the audience will sense it and react negatively.
We advise against copy+pasting any generated text into HN. If you think there’s some fuzziness around the definition of “generated”, well, see what happens.
Just tell your model to speak more casually, more conversationally, and more like it’s human written. Tell it to avoid any “AI tells,” most of which are BS anyway.
If people accuse you of using AI, ignore them.