You also have to trust that SGX isn't compromised.
But even without that, you can log what goes into SGX and what comes out of SGX. That seems pretty important, given that the packets flowing in and out need to be internet-routable and necessarily have IP headers. Their ISP could log the traffic, even if they don't.
> Packet Buffering and Timing Protection: A 10ms flush interval batches packets together for temporal obfuscation
That's something, I guess. I don't think 10ms worth of timing obfuscation gets you very much though.
> This temporal obfuscation prevents timing correlation attacks
This is a false statement. It makes correlation harder but correlation is a statistical relationship. The correlations are still there.
(latter quotes are from their github readme https://github.com/vpdotnet/vpnetd-sgx )
All that said, it is better to use SGX than to not use SGX, and it is better to use timing obfuscation than to not. Just don't let the marketing hype get ahead of the security properties!
func (om *ObfuscationManager) ProcessOutgoingPacket(
...
// TODO where is the obfuscation here?
https://github.com/vpdotnet/vpnetd-sgx/blob/bc63e3b8efe41120...

While I do see the impl of the 10ms flush interval, I don't see any randomisation within batches. So, if I understand correctly, packets are still flushed in their original order.
In an older version, packets were sent back in sequence to their original connection to the host, as that was faster.
We have since implemented a system where nproc (16+) buffers receive packets and run at differing intervals, meaning that while packets are processed "in order", the fact that this runs in multiple threads means that packets, even from the same client, will be put in queues that are flushed at different times.
We performed many tests, and implementing a more straightforward randomized queue (allocating memory, handling an array of pointers to buffers, shuffling these, and sending them shuffled) did not make much of a difference in terms of randomization, but resulted in a huge loss in performance due to the limitations of the SGX environment.
As we implement other trusted execution environments (TEEs), we will implement other strategies and obfuscation methods.
Personally I like the direction Mullvad went instead. I get that it means we really can't verify Mullvad's claims, but even in the event they're lying, at least we got some cool Coreboot ports out of it.
If you're really paranoid, neither this service nor Mullvad offers that much assurance. I like the idea of verifiability, but I believe the type of people who want it are looking to satisfy deeper paranoia than can be answered with just trusting Intel... Still, more VPN options that try to take privacy claims seriously is nothing to complain about.
A lot of people have been attempting to attack SGX, and while there have been some successful attacks, these have been addressed by Intel and resolved. Intel will not attest any insecure configuration, and neither will other TEE vendors (AMD SEV, ARM TrustZone, etc.).
This allows generating a self-signed TLS certificate that includes the attestation (under OID 1.3.6.1.4.1.311.105.1). A connecting client then verifies the TLS certificate not via the standard chain of trust, but by reading the attestation, verifying that the attestation itself is valid (properly signed, matching measured values, etc.), and verifying that the containing TLS certificate is indeed signed with the attested key.
Intel includes a number of details inside the attestation, the most important being Intel's own signature on the attestation and a chain of trust up to their CA.
That's a pretty big trust already. Intel has much to lose and would have no problem covering up SGX bugs for a government, or certifying government malware.
And Intel has had a LOT of successful attacks, and even with their CPUs they are known to prefer speed over security.
As far as I'm aware, no. Any network protocol can be spoofed, with varying degrees of difficulty.
I would love to be wrong.
The final signature comes in the form of an x509 certificate signed with ECDSA.
What's more important to me is that SGX still has a lot of security researchers attempting (and currently failing) to break it further.
Again, I would love to know if I'm wrong.
The fact that no publicly disclosed threat actor has been identified says nothing.
Are you suggesting a solution for this situation?
Because of the cryptographic verifications, the communication cannot be spoofed.
The trust chain ends with you trusting Intel to only make CPUs that do what they say they do, so that if the code doesn't say to clone a private key, it won't.
(You also have to trust the owners to not correlate your traffic from outside the enclave, which is the same as every VPN, so this adds nothing)
The second part, about correlations, is untrue, since we include a number of techniques to frustrate timing attacks, among other things.
Additionally, if you’re still talking about trust it means you don’t understand the technical implications of this.
Yeah, I took one look at that and laughed. The CEO of Mt. Gox teaming up with the guy who sold his last VPN to an Israeli spyware company sounds like the start of a joke.
I left the company on principle by relinquishing my shares at a mere fraction (about 1/3) the value. I walked away from millions of dollars, and I am happy with my decision.
Given what happened, we built VP so that trust is no longer required.
Also, the README is full of AI slop buzzwords, which isn’t confidence-inspiring.
Any real security researcher recognizes this.
If you think 'trusting random strangers' is a better security architecture, then you should not work in security.
More importantly, trusting random strangers is much better than trusting a known hostile actor. During the Freenode fiasco, you repeatedly demonstrated yourself to be untrustworthy and vengeful. Everyone saw your petty revenge against people who dared voice the slightest criticism. Why on earth should anyone trust that you'll uphold your customers' privacy no matter what?
Trusting random internet people is actually the biggest “troll” of the internet.
Any VPN that asks you to trust their guarantees and not the guarantees of code is selling you snake oil and should not be trusted.
Trust is not a feature in security. Thus, we removed it and replaced it with code-based guarantees.
The post says build the repo and get the fingerprint, which is fine. Then it says compare it to the fingerprint that vp.net reports.
My question is: how do I verify that the server is reporting the fingerprint of the actual running code, and not just returning the (publicly available) fingerprint that we get as a result of building the code in the first place?
In order for a malicious instance to use the same public key as an attested one, they’d have to share the private key (for decryption to work). If you can verify that the SGX code never leaks the private key that was generated inside the enclave, then you can be reasonably sure that the private key can’t be shared to other servers or WG instances.
Since this was answered already, I'll just say that I think the bigger problem is that we can't know if the machine that replied with the fingerprint from this code is even related to the one currently serving your requests.
I wanted to give your product a try, but the gap between the 1-month and 2-year plans is so big that a single month feels like a rip-off, while I’m not ready to commit to 2 years either.
On payments: for a privacy-focused product, Monero isn’t just a luxury, it’s a must (at least for me). A VPN that doesn’t accept Monero forces users into surveillance finance, since card and bank payments are legally preserved forever by processors. That means even if the VPN “keeps no logs,” the payment trail still ties your real identity to the service.
Until crypto is legally treated like cash (e.g. I don't have to report that I bought a beer with a $20 bill from an ATM), I don't think it's a very satisfying solution to have to either 1. Report to the IRS that I bought a VPN with monero or 2. Commit a tax crime and be paranoid about the IRS using automated tools to find you out for years after each transaction.
Even ignoring that elephant in the room, how do you regularly get the crypto (to pay the subscription) without leaving a paper trail or dealing with sketchy people?
I like virtual cards like privacy.com. If a state actor is after you, they will find you. So the typical threat model to me is companies trying to track you, like your ISP/Google/Facebook.
It would be nice if there was some way to be tax compliant and get the privacy benefits of monero though. Am I missing some crypto tax compliance tooling here or are all of these crypto payment users just poking the IRS bear?
That's not what your link says, and as far as I'm aware it's not true. Buying crypto and then using some of it to buy goods and services has no tax reporting requirement, those only start when you're either selling crypto or receiving it as payment. Which is the same situation as the tax reporting for any other currency or valuable item you could deal in.
Reads pretty explicit to me. You have to report every event.
Also, I couldn't see where it is based. Anywhere in Five Eyes countries, or places like the USA with national security letters (or just their fascist government), is probably not going to fit most people's threat models.
> we operate proudly in the united states. protected by the constitution — not offshore shell games.
> no backdoors. no stored data. even if they ask, we've got nothing.
> we don't dodge the law — we built tech that doesn't need to.
Surely, they couldn't say that with a straight face under the current regime.
Freenode was sold to me by Christel, the previous owner. I did not even offer to purchase it and simply assumed I was doing what I had been doing for a decade for freenode and many other FOSS projects - keeping them alive. It was my funds that did so the whole time for freenode (and a number of other projects which I stopped funding thereafter given the death threats I was receiving which led to the end of many of them unfortunately).
The Libera staff [1] attempted to steal the domain because they wanted control. None of the staff were developers at the time and complained they couldn’t even write their own irc client. Think of Mozilla. The people who run it aren’t the coders. Same thing.
Here are the receipts for every statement I just made:
http://techrights.org/wp-content/uploads/2021/05/lee-side.pd...
PS: Freenode seems more active than Libera, where everyone is just idle (bots?), but that is another point. See for yourself with the client I wrote: IRC.com.
[1] By Libera staff I mean the former freenode staffers who left to form Libera. These are the same people I spent a lot of money helping to protect legally from the allegations made by “OldCoder”
Ultimately, the people who invested their labor into the network felt like they had little control over the future of the project and felt like they had been rug-pulled by Christel, so they left. They did not believe that it was Christel's to sell.

As soon as the original operators left, the new management did not exactly leave a great impression. For a while, freenode.net would just redirect to a subreddit? And then later it was a reddit clone of sorts? (https://web.archive.org/web/20220505184527/https://freenode....). Channels were taken over at will. There were somewhat dubious partnerships made, crypto products endorsed. The first blog posts made by the new management straight after the changeover were markedly different from the previous messaging (https://web.archive.org/web/20210730233709/https://freenode....).

The original freenode was doing its best to be a place where like-minded people could collaborate and communicate without adding too much of a political sway or coloring anything; the freenode after the takeover did not aspire to any such thing. If in May of 2021 one could've argued that the old staff were a tad too eager to leave, then the newcomers did everything in their power to prove them right in less than a month.
Please do fund FOSS stuff, it really helps. Just don't expect to buy yourself out of being cringeworthy.
However, there are some discrepancies:
1. The ex-staffers were already preparing their takeover event long before my name came into the picture. Domain registration dates and meeting minute notes prove this. I was likely an easier target than Christel - or maybe that's why she asked me to buy it from her.
2. The ex-staffers had already begun emailing false narratives to open source projects before any of these actions began.
The channel topic changes did occur as a result of #2, but timing and reasoning are important. I do, however, think that these actions were a mistake.
Today, it’s pointless to fund FOSS projects since many of these funds end up going to non developers who are good at socializing but not meaningful development. Instead it’s better to support individual developers.
Honestly, I feel more comfortable using Mullvad. This team has some folks with questionable backgrounds, and I wouldn't trust Intel. Also, VPN providers are usually in non-US countries due to things like the Lavabit and Yahoo incidents and the Snowden revelations.
Relying on "trust" in a security/privacy architecture isn't the right way to do things - which is what this solves. It removes the need to trust a person or persons (in most VPN companies' cases, many employees) and moves it to trusting code.
> Also VPN providers are usually in non-us countries due to things like the Lavabit, Yahoo incidents and the Snowden revelations.
The system is designed so that any change server-side will be immediately noticed by clients/users. As a result, these issues are sufficiently mitigated, and instead people can take advantage of strong consumer and personal protection laws in the US.
Let me correct that for you - the guy who brought you the first Bitcoin exchange and arguably helped pave the way for cryptocurrencies today.
> guy who destroyed Freenode
This was already debunked [1]. I tried to save freenode - I was the only one funding it up until the point where freenode's ownership essentially "gave" it to me, which resulted in the non-developer staff attempting a hostile takeover of the network [2].
The end result was that they gave control of the domain back to me (and as a result, freenode).
> Personally, I'd rather trust in Mullvad.
Trusting random teams of people on the internet isn't exactly a form of security or privacy.
Developers and cypherpunks trust code, not words.
If you're a developer, I'd highly suggest you read the code.
> This VPN requires you to trust in Intel
You really can't use the internet or any internet-distributed software without trusting Intel. Maybe you're better off logging out if that is your policy. ¯\_(ツ)_/¯
[1] http://techrights.org/wp-content/uploads/2021/05/lee-side.pd...
[2] Funny how non-developers keep ruining Open Source (Mozilla, and many others - see Lunduke Journal for more).
Not sure how you can "debunk" that Freenode was destroyed - it clearly was - and the fact that an identical network minus that person is now running just fine, proves that person was the problem. All evidence points to the fact that Freenode (under a different name) seems to have been saved by kicking out the guy who was trying to blackmail it by having ownership of the name Freenode.
You're right, Intel CPUs aren't trustworthy either since they tend to stop working after just a year or so. I have a greater confidence that my CPU doesn't contain an intentional remotely exploitable backdoor, because that takes serious effort (also because it's AMD), than that Intel hasn't sent a couple of short bitstrings to the US government.
As for the freenode issue, look at the facts before parroting false narratives. I posted receipts - they are clear.
Oh wow, that exchange must be doing very well and be super successful and not have any controversies, right? It's still alive at least, right? /s. Acting like losing over 700,000 bitcoins is a sign of credibility is just wild.
This isn't the flex that you think it is. You guys really should have gone to prison.
And worse, it is harder for the American government to eavesdrop on US soil than it is outside America.
Of course, if a national spying apparatus is after you, regardless of the nation, pretty good chance jurisdiction doesn’t matter.
I don't have any particular insight here, but isn't that why Five Eyes is used - as a workaround for what would otherwise be illegal activities? Not that the current USA regime cares about the law, of course.
The GP mentioned Snowden and yet you say this. What material and significant changes have happened since 2013 to make this claim?
In today's internet you just cannot have an exit IP which is not tied to your identity, payment information, or physical location. And don't even mention Tor, pls.
There are cryptocurrencies like ZCash, Monero, Zano, Freedom Dollar, etc. that are sufficiently private.
There's also an option to just mail them cash, but some countries may seize all mailed cash if discovered.
this isn't reddit. if you are trying to act like a know it all at least do basic research.
crypto rubes are hilarious.
What's your issue with tor?
Old copy? Might need an update.
The US government also likely learned a lesson from early attempts at backdoors (RSA, etc.): these kinds of things do not stay hidden and do not reflect well on them.
We've thought about this long and hard and are planning to mitigate it as much as possible. Meanwhile, we still offer something that is a huge step forward compared to what is standard in the industry.
This guarantees that your traffic isn't being linked to you, and is mixed up with others' in a way that makes it difficult for someone to attribute it to you, as long as you also protect yourself on the application side (clear cookies, no tracking browser extensions, etc.).
What would prevent you (or someone who has gained access to your infrastructure) from routing each connection to a unique instance of the server software and tracking what traffic goes in/out of each instance?
I have not inspected whether the procedure suggested for verifying the enclave contents is correct. It's irrelevant anyway unless you can prove that the decrypted traffic, while still being associated with your identity, goes ONLY into the enclave and is not sent to, let's say, the KGB via a separate channel.
A packet goes into your server and a packet goes out of your server: the code managing the enclave can just track this (and someone not even on the same server can figure it out almost perfectly just by timing analysis). What are you, then, actually mixing up in the middle?
You can add some kind of probably-small (as otherwise TCP will start to collapse) delay, but that doesn't really help as people are sending a lot of packets from their one source to the same destination, so the delay you add is going to be over some distribution that I can statistics out.
You can add a ton of cover traffic to the server, but each interesting output packet is still going to be able to be correlated with one input packet, and the extra input packets aren't really going to change that. I'd want to see lots of statistics showing you actually obfuscated something real.
The only thing you can trivially do is prove that you don't know which valid paying user is sending you the packets (which is also something that one could think might be of value even if you did have a separate copy of the server running for every user that connected, as it hides something from you)...
...but, SGX is, frankly, a dumb way to do that, as we have ways to do that that are actually cryptographically secure -- aka, blinded tokens (the mechanism used in Privacy Pass for IP reputation and Brave for its ad rewards) -- instead of relying on SGX (which not only is, at best, something we have to trust Intel on, but something which is routinely broken).
I imagine those websites block IP ranges of popular VPN providers.
Am I right in thinking that hosting my own VPN would resolve this issue?
I pay approximately 50¢/month for such a setup, and you can probably do it for free forever if you decide to be slightly abusive about it. However, be aware that you don’t really gain any real privacy since you’re effectively just changing your IP address; a real VPN provides privacy by mixing your traffic with that of a bunch of other clients.
Some services will also block cloud ranges to prevent e.g. geoblock evasion, although you’ll see a lot less blocking compared to a real VPN service or Tor.
Also, unless you are running on a fully trusted stack (TPM or other attestation), you don't in fact know that only you have access. This is hard; "quickly" isn't a thing.
This whole process is already widely used in financial and automotive sectors to ensure servers are indeed running what they claim to be running, and well documented.
The idea itself is sound: if there are no SGX bypasses (hardware keys dumped, enclaves violated, CPU bugs exploited, etc.), and the SGX code is sound (doesn't leak the private keys by writing them to any non-confidential storage, isn't vulnerable to timing-based attacks, etc.), and you get a valid, up-to-date attestation containing the public key that you're encrypting your traffic with plus a hash of a trustworthy version of the SGX code, then you can trust that your traffic is indeed being decrypted inside an SGX enclave which has exclusive access to the private key.
Obviously, that's a lot of conditions. Happily, you can largely verify those conditions given what's provided here; you can check that the attestation points to a CPU and configuration new enough to not have any (known) SGX breaks; you can check that the SGX code is sound and builds to the provided hash (exercise left to the reader); and you can check the attestation itself as it is signed with hardware keys that chain up to an Intel root-of-trust.
However! An SGX enclave cannot interface with the system beyond simple shared memory input/output. In particular, an SGX enclave is not (and cannot be) responsible for socket communication; that must be handled by an OS that lies outside the SGX TCB (Trusted Computing Base). For typical SGX use-cases, this is OK; the data is what is secret, and the socket destinations are not.
For a VPN, this is not true! The OS can happily log anything it wants. There's nothing stopping it from logging all the data going into and out of the SGX enclave and performing traffic correlation. Even with traffic mixing, there's nothing stopping the operators from sticking a single user onto their own, dedicated SGX enclave which is closely monitored; traffic mixing means nothing if it's just a single user's traffic being mixed.
So, while the use of SGX here is a nice nod to privacy, at the end of the day, you still have to decide whether to trust the operators, and you still cannot verify in an end-to-end way whether the service is truly private.
That said, the freenode issue was debunked and you can see receipts here: http://techrights.org/wp-content/uploads/2021/05/lee-side.pd...
I had funded freenode since 2011, so any narrative that makes it seem I just appeared out of nowhere is factually untrue. Also, I was handed it because Christel felt I was a good custodian. Instead, former staff, whom I had protected for years from allegations made by OldCoder, went on to form Libera, tried to steal the domain for a developers' IRC network when they themselves shockingly couldn't even code a simple IRC client, and then made up a false narrative.
The state of open source generally isn’t what you think and you would do well for yourself to read Lunduke’s Journal among other things. The developers don’t actually run most of the projects these days. Look at Mozilla.
Notably: the only part of your system which can be verified is the SGX box, which can only handle encryption. How can we be certain that you are not able to correlate traffic? It is not enough to simply say that you implemented traffic mixing, as that can be defeated by placing each user on their own SGX instance.
Okay, maybe I'm being thick, but... when I get a response from your server, how do I know it's actually running inside the enclave, and not an ordinary process sending a hardcoded expected fingerprint?
When the connection is established we verify the whole certificate chain up to Intel, and we verify the TLS connection itself is part of the attestation (public key is attested).
It's signed by Intel and thus, guaranteed to come from the enclave!
What if, for example, a three-letter agency seized keys from Intel, served them with a gag order to prohibit disclosure of the seizure, put themselves in the middle of the network path between you and the user, and modified the server software to send falsified signatures derived from those seized keys?
Huh, I thought Mark Karpelès is working at Private Internet Access.
From the about page:
> currently head of karpeles labs, a multi faceted research and development firm specializing in highly complex technology systems
I guess he quit to run a competing vpn company?
Servers that don't log and can't: no hard drives, and ports physically glued shut.
Answer: no you can’t, you still have to trust them. At the end of the day, you always just have to trust the provider, somewhere.
You still have to trust them, you're not wrong, but at some point I'll fall back to the common question security people (not me) ask paranoid doubters: what's your threat model?
If you're running a global child-abusing ring through Mullvad or OVPN(offers static IPv4 for inbound traffic) I don't know what they'd do but they've proved themselves over and over to be organisations you can trust.
OVPN turns over about $1.2M with $0.8M profit (0); Mullvad turns over significantly more money but with a smaller profit margin (1) (probably funneling profits to a tax haven). So the risk of someone buying out OVPN is there, but "you" are probably not worth it if the ones targeting TPB didn't figure out how to get through.
You can still run Tor over their VPNs as another layer if you're uncertain their reputation is trustworthy enough for your use case but don't want Tor traffic originating from your IP.
https://claude.ai/share/a47c19f7-8782-4a9f-ae26-2d2adb52eaed
0: https://www.allabolag.se/foretag/ovpn-integritet-ab/-/konsul... 1: https://www.allabolag.se/foretag/mullvad-vpn-ab/g%C3%B6tebor...
You can look up any Swedish company through sites like allabolag or merinfo if you're curious... until they grow into tax-evading evil megacorps :)
lol
SGX has been broken time and again
SGX has 0-day exploits live in the wild as we speak
so... valiant attempt in terms of your product... but utterly unsuitable foundation
What this tells me, however, is that there are a lot of people still trying to attack SGX today, and Intel has improved their response a lot.
The main issue with SGX was that its initially designed use for client-side DRM was undermined by the fact that you can't expect normal people to update their BIOS (downloading the update, putting it on a storage device, rebooting, entering the BIOS, updating, etc.) each time an update is pushed (and adoption wasn't good enough for that). It is, however, seeing a lot of use server-side in finance, the auto industry, and others.
We are also planning to support other TEEs in the future; SGX is the most well-known and battle-tested today, with a lot of support from software like openenclave, making it a good initial target.
If you do know of any 0-day exploit currently live against SGX, please give me more details, and if it's something not yet published, please contact us directly at security@vp.net
Once they are leaked, there is no going back for that secret seed - i.e. that physical CPU. And this attack is entirely offline, so Intel doesn't know which CPUs have had their seeds leaked.
In other words, every time there is a vulnerability like this, no CPU affected can ever be trusted again for attestation purposes. That is rather impractical - so I'd consider even if you trust Intel (unlikely if you consider a government that can coerce Intel to be part of your threat model), SGX provides rather a weak guarantee against well-resourced adversaries (such as the US government).
They could run one secure enclave running the legit version of the code, and one insecure machine running insecure software.
Then they put a load balancer in front of both.
When people ask for the attestation the LB sends traffic to the secure enclave, so you get the attestation back and all seems good.
When people send vpn traffic the loadbalancer sends them to the insecure hardware with insecure software.
So SGX proves nothing.
They are proving that they are the ones hosting the VPN server - not some server that stole their software and is running a honeypot - and that the hosting company has not tampered with it.
So in the end you still have to trust the company that they are not sharing the certificates with 3rd parties.
The way this works is that the enclave on launch generates an ECDSA key (which only exists inside the enclave and is never stored or transmitted outside). It then passes it to SGX for attestation. SGX generates a payload (the attestation) which itself contains the enclave's measured hash (MRENCLAVE) and other details about the CPU (microcode, BIOS, etc.). The whole thing carries a signature and a certificate issued by Intel to the CPU (the CPU and Intel have an exchange at system startup, and from time to time, where Intel verifies the CPU's security, ensures everything is up to date, and gives the CPU a certificate).
Upon connection we extract the attestation from the TLS certificate and verify it (does MRENCLAVE match, is it signed by Intel, is the certificate expired, etc.) and also, of course, verify that the TLS certificate itself matches the attested public key.
Unless TLS itself is broken, or someone manages to exfiltrate the private key from the enclave (which should be impossible unless SGX is broken, but then Intel is supposed to stop certifying the CPU), the connection is guaranteed to be with a host running the software in question.
a host... not necessarily the one actually serving your request at the moment, and doesn't prove that it's the only machine touching that data. And afaik this only proves the data in the enclave matches a key, and has nothing to do with "connections".
Served by an enclave, but there's no guarantee it's the one actually handling your VPN requests at that moment, right?
And even if it was, my understanding is this still wouldn't prevent other network-level devices from monitoring/logging traffic before/after it hits the VPN server.
Saying "we don't log" doesn't mean someone else isn't logging at the network level.
I think SGX also wouldn't protect against kernel-level request logging such as via eBPF or nftables.
In all seriousness, I don’t even trust intel to start with.
Things may have changed since mid-2023 but here were my takeaways:
-----
Re: Vendor lock-in
Vendor lock-in is (was?) a huge problem (at least for process-based TEEs)
Process-based TEEs (mainly SGX) operated by providing essentially a whole new system abstraction. At the base level, there was no `libc`-like or `POSIX`-like interfaces, only Intel specific ones. This is why there are projects like [Gramine](https://github.com/gramineproject/gramine) and [Fortanix](https://github.com/fortanix/rust-sgx) that aimed to provide a more `libc`/`POSIX`-like interface to developers, even though this was the leakiest of abstractions (you can’t even create a UDP socket).
This is not only a problem in terms of developer experience, though; it is also a huge problem for vendor lock-in. Porting things was nigh impossible, and you're stuck with the Intel platform. I think Intel is making a genuine attempt at making a good TEE right now, but what if Intel decides to axe their budget for TEEs or drop support altogether?
*Possible solution*: Use VM-based TEE abstractions like SEV or TDX, which can run a full VM, which is at least a more portable solution and has a full Linux environment (with some caveats).
-----
Re: Trust model
A convincing trust model for TEE VPNs was possible, but a big engineering challenge.
1. TEEs without remote attestation and reproducible builds of the backend are near-meaningless: If a VPN operator hands you a proof co-signed by Intel that they’re running in SGX… so what? They could simply be running a data-harvesting pipeline in SGX.
*Possible solution*: A **remote attestation** of *what they’re running*, which requires that they have a reproducible build of *what they’re running*, and for the remote attestation to verifiably attest to that reproducible build.
2. For VPNs: it is always possible to hand the user a *remote attestation* to one server, then just swap out that server for another when the user is connecting. *Possible solution*: A way to link the **remote attestation** to what you’re connecting to.
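The two checks above can be sketched as one verification routine. This is a minimal illustration, not Intel's actual attestation API: the quote field names (`mrenclave`, `report_data`) and the helper are hypothetical, though real SGX quotes do carry an enclave measurement and a caller-controlled report-data field that can bind a key.

```python
import hashlib

def verify_connection(quote, expected_mrenclave, tls_pubkey):
    """Accept a server only if both attestation requirements hold."""
    # 1. The attestation must cover *what* is running: the enclave
    #    measurement must match the hash of a reproducible build.
    if quote["mrenclave"] != expected_mrenclave:
        return False  # genuine SGX, but unknown code (e.g. a harvester)
    # 2. The attestation must be linked to *this* connection: the enclave
    #    embeds a hash of its TLS public key in the quote's report data,
    #    so the operator cannot swap in a different, unattested server.
    return quote["report_data"] == hashlib.sha256(tls_pubkey).digest()

build_measurement = hashlib.sha256(b"reproducible-build-artifact").hexdigest()
server_key = b"enclave-tls-public-key"
quote = {
    "mrenclave": build_measurement,
    "report_data": hashlib.sha256(server_key).digest(),
}
print(verify_connection(quote, build_measurement, server_key))         # True
print(verify_connection(quote, build_measurement, b"swapped-server"))  # False
```

Without check 2, a valid quote from one attested machine says nothing about the machine you actually connected to.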
-----
Re: Vulnerabilities
Back in mid-2023, it seemed like vulnerabilities in different TEEs were still popping up. However, I don’t want to overstate them here or engage in FUD: over time, the newly revealed vulnerabilities were becoming more theoretical and harder to carry out in the real world.
I think this is something that improves over time as the different TEE platforms mature, but *relying solely on TEEs* to make claims about privacy and security seemed a bit shaky to me.
This was the final straw for me for why not to start with TEEs back in 2023: Given that we wanted to avoid vendor lock-in as much as possible, we only had AMD SEV as a choice at the time. I came across this vulnerability ([GitHub](https://github.com/PSPReverse/amd-sp-glitch), [arXiv](https://arxiv.org/abs/2108.04575)) and (from my reading) it was very practical and almost unforgivable; see [this image](https://forum-uploads.privacyguidesusercontent.com/original/...). Funnily enough, you can even see my post in 2023 asking whether the AMD VLEK addition mitigated the vulnerability ([GitHub comment](https://github.com/PSPReverse/amd-sp-glitch/issues/3)).
*Possible solution*: MPR but each hop is a different TEE implementation. That way an attacker would have to have an exploit for all the TEE implementations to break the security model.
-----
Anyway, these were my thoughts back in 2023. Things like hardware vulnerabilities may have changed since then, and certainly the availability of Intel TDX (another VM-based TEE) makes the vendor lock-in situation much better, but the “Trust Model” challenges still remain. That is a big engineering challenge though, not a fundamental problem with TEEs, so I’m very cautiously optimistic!
I think the comments here about SGX trust are misguided. This isn't protecting you from deep state chip level intentional bypasses. We can at least have reasonable enough assurance in SGX per se. The average law enforcement isn't going to get to your data because of some undisclosed SGX issue.
But unlike AWS Nitro, which AIUI has a network stack that bypasses the guest OS (I believe the hypervisor can see everything, which I would trust about the same as SGX), SGX requires host/guest support to pass network packets. So in Nitro you can operate the TCB entirely without the (unverified, unattested) guest OS seeing anything, but in SGX the guest has to pass traffic back and forth to the enclave. The difference here is who operates the untrusted bit: for SGX, it's the application author themselves.
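A sketch of what that host/guest relay boundary implies, assuming a generic SGX packet-forwarding design (this is illustrative, not vpnetd-sgx's actual code): the untrusted side cannot read payloads encrypted for the TEE, but it handles every packet and therefore sees the routable headers, sizes, and timing.

```python
import time

def untrusted_relay(packets, ecall_into_enclave, metadata_log):
    """Host-side loop that shuttles packets into the enclave."""
    for pkt in packets:
        # The payload is ciphertext for the TEE and unreadable here...
        # ...but the outer IP header and arrival time are in the clear,
        # so the host (or its ISP) can keep a complete metadata record.
        metadata_log.append(
            (time.time(), pkt["src"], pkt["dst"], len(pkt["payload"]))
        )
        ecall_into_enclave(pkt["payload"])

log = []
untrusted_relay(
    [{"src": "203.0.113.5", "dst": "198.51.100.9", "payload": b"\x00" * 64}],
    lambda ciphertext: None,  # stand-in for the ECALL boundary
    log,
)
print(f"host metadata records: {len(log)}")
```

No SGX property constrains what this loop logs; that is exactly the "who operates the untrusted bit" question.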
That is why you need the 10ms batching: to stop the host/guest from matching src/dst pairs and inspecting the outbound traffic (inbound is presumably encrypted for the TEE). However, batching is laughable and won't stop correlation (unless you inject significant fake traffic, which the host/guest must not be able to distinguish from real traffic).
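A toy simulation illustrates why a 10ms batch barely dents statistical correlation (the traffic model and bin sizes are assumptions, not measurements of the real system): an observer who bins packet counts per 100ms still matches the batched output to the real sender far better than to an unrelated client.

```python
import random

random.seed(0)
FLUSH_MS = 10  # the flush interval quoted from the README

def batch(timestamps):
    """Delay each packet (in ms) to the end of its 10 ms flush window."""
    return [((t // FLUSH_MS) + 1) * FLUSH_MS for t in timestamps]

def bin_counts(timestamps, width=100, horizon=10_000):
    """Packets-per-bin volume profile, as a passive observer would see it."""
    counts = [0] * (horizon // width)
    for t in timestamps:
        counts[min(t // width, len(counts) - 1)] += 1
    return counts

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Two clients with independent traffic; only client_a's packets go out.
client_a = sorted(random.randrange(10_000) for _ in range(500))
client_b = sorted(random.randrange(10_000) for _ in range(500))
out = batch(client_a)  # what the observer sees leaving the enclave

r_a = pearson(bin_counts(client_a), bin_counts(out))
r_b = pearson(bin_counts(client_b), bin_counts(out))
print(f"batched output vs. real sender:  r = {r_a:.2f}")  # remains high
print(f"batched output vs. other client: r = {r_b:.2f}")  # near zero
```

Each packet moves at most 10ms, so the volume pattern is essentially intact at any observation window wider than the flush interval; the statistical relationship survives.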
So like every other VPN this is marketing of snake oil.
Compare that to express or whoever it is that offers static IP within Nitro. That is way more useful than this pretend security. (Use of Nitro allows them to not know what static IP is assigned to you, so they can't be compelled to give that info up.)
MASQUE (Apple Private Relay) or other double-blind VPNs are better and don't require SGX.
Besides the technical inadequacies, you have the double whammy of PIA and MtGox heritage. oh my.
I always found confidential compute to be good only for isolating the hosting company from risk, not the customer!