It's also why nothing in my AWS account is "canonical storage". If I need, say, a database in AWS, it is live-mirrored to somewhere within my control, on hardware I own, even if that thing never sees any production traffic beyond the mirror itself. Plus backups.
That way, if this ever happens, I can recover fairly easily. The backups protect me from my own mistakes, and the local canonical copies and backups protect me from theirs.
Granted, it gets harder and more expensive with increasing scale, but it's a necessary expense if you care at all about business continuity issues. On a personal level, it's much cheaper though, especially these days.
He said, "no, it's on github".
I said no more.
If your boss is that daft, it's probably a sign to bail out. Remember to do your own due diligence; it isn't something you do only for someone else's benefit. Do your own personal risk assessment: if the code were lost, who would be held accountable? You or them?
EDIT: PS - I am a CTO ...
Unless github closes the account, or a hacker gets access, or a rogue employee gets mad and deletes all, or some development accident results in repo deletion, or etc etc.
And no one else in the company knew what GitHub was.
The strategy is about as silly as having ten babies and expecting that one of them will make it. It is what you would expect out of the worst poverty-ridden parts of Africa.
An alternative is to select and nurture your investments really well, so the rate of success is much higher. I'd like to see the script flipped, so that 90% of investments go on to become profitable, with their stable cash income preferred over big exits.
Oh boy.
Tons of apps in maintenance mode run critical infrastructure and see few commits in a year.
To step outside just utility programs, the reason why Command & Conquer didn't have a remaster was:
> I'm not going to get into this conversation, but I feel this needs to be answered. During this project of getting the games on Steam, no source code from any legacy games showed up in the archives.
The people wouldn't, but in the environments I'm thinking of, security policies might.
What you're leaning into is a high-risk backup strategy that would rely mostly on luck to get something remotely close to the current version back online. It's pretty reckless.
In environments that go so far (deleting local checkouts of code out of security concerns), I bet they do have a mirror/copy of the version controlled code.
Software still runs, and if you don't have the source then you'll only have the binary or other build artifact.
In fact, in VSCode, one can use a project without cloning and checking it out at all.
In addition, if any of those developers have a backup strategy for their local computers, those backups also count as copies of the source code.
CTO: "I know our entire github repo is deleted and all our source code is gone and we never took backups, but I'm hoping the developers might have it all on their machines."
CEO: "Hoping developers had it locally was your strategy for protecting all our source code?"
CTO: "It's a sound approach and ticks all the boxes."
CEO: "You're fired."
Board Directors to CEO: "You're fired."
If it's not the MBAs, the problem may also stem from the gradual atrophy of, and disrespect shown towards, the sysadmin profession.
Options to consider for various circumstances include:
- Different object storage clouds with different accounts (different names, emails, and payment methods), potentially geographically different too
- Tarsnap (AWS under the hood, but on someone else's account(s))
- MEGA
- Onsite warm and/or cold media
- Geographically separate colo DR site, despite the overly-proud trend of "we're 100% (on someone else's SPoF) cloud now"
- Offsite cold media (personal home and/or IronMountain)
I often have my git configured to push to multiple upstreams, which means that basically all of your mirrors can be primaries.
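A minimal sketch of that setup, with made-up remote names and URLs:

    # Add extra push URLs to the existing "origin" remote (all URLs here are illustrative).
    git remote set-url --add --push origin git@github.com:example/project.git
    git remote set-url --add --push origin git@gitlab.com:example/project.git
    git remote set-url --add --push origin ssh://git@backup-box.example/srv/git/project.git

    # A single "git push" now updates all three, so any of them can serve as a primary.
    git push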
This is a really good part about Git: every copy is effectively a mirror, and it's cryptographically verified, so you don't have to worry about a mirror going rogue without anyone noticing.
In a collaborative scenario, doing it that way keeps everything properly synchronized, and one individual's missing config can't break things.
I don't go as far as "live mirror", but I've been advocating _for years_ on here and in meatspace that this is the most important thing you can be doing.
You can rebuild your infrastructure. You cannot rebuild your users' data.
An extended outage is bad but in many cases not existential. In many cases customers will stick around. (I work with one client that was down over a month without a single cancellation because their line-of-business application was that valuable to their customers.)
Once you've lost your users' data, they have little incentive to stick around. There's no longer any stickiness as far as "I would have to migrate my data out" and... you've completely lost their trust as far as leaving any new data in your hands. You've completely destroyed all the effort they've invested in your product, and they're going to be hesitant to invest it again. (And that's assuming you're not dealing with something like people's money where losing track of who owns what may result in some existence-threatening lawsuits all on its own.)
The barrier to keeping a copy of your data "off site" is often fairly small. What would it take you right now to set up a scheduled job to dump a database and sync it into B2 or something?
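For illustration, a minimal sketch of such a job, assuming Postgres, a B2 bucket reachable through its S3-compatible endpoint, and credentials in the environment; every name below is made up:

    # backup_to_b2.py -- dump a Postgres database and upload it offsite (run from cron).
    import datetime
    import os
    import subprocess

    import boto3  # B2 exposes an S3-compatible API, so boto3 works against it

    DB_NAME = "app_production"                           # hypothetical database name
    BUCKET = "example-offsite-backups"                   # hypothetical bucket
    ENDPOINT = "https://s3.us-west-004.backblazeb2.com"  # region-specific B2 endpoint

    dump_path = f"/tmp/{DB_NAME}-{datetime.date.today()}.sql.gz"

    # pg_dump piped through gzip into a local temp file
    with open(dump_path, "wb") as out:
        dump = subprocess.Popen(["pg_dump", DB_NAME], stdout=subprocess.PIPE)
        subprocess.run(["gzip", "-c"], stdin=dump.stdout, stdout=out, check=True)
        if dump.wait() != 0:
            raise RuntimeError("pg_dump failed")

    # Upload; the keys come from a B2 application key kept in the environment.
    s3 = boto3.client(
        "s3",
        endpoint_url=ENDPOINT,
        aws_access_key_id=os.environ["B2_KEY_ID"],
        aws_secret_access_key=os.environ["B2_APP_KEY"],
    )
    s3.upload_file(dump_path, BUCKET, os.path.basename(dump_path))
    os.remove(dump_path)

Put it on a nightly cron and test a restore occasionally; an untested backup is just a hope.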
Even if that's too logistically difficult (convincing auditors about the encryption used or anything else), what would it take to set up a separate AWS account under a different legal entity with a different payment method that just synced your snapshots and backups to it?
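A hedged boto3 sketch of one way to do the snapshot part: share an EBS snapshot with that second account, which then makes its own copy (account ID, snapshot ID, region, and profile name are all hypothetical; snapshots encrypted with a KMS key additionally need that key shared):

    import boto3

    SNAPSHOT_ID = "snap-0123456789abcdef0"   # hypothetical snapshot in the primary account
    BACKUP_ACCOUNT = "210987654321"          # the separate legal entity's account id
    REGION = "eu-west-1"

    # From the primary account: allow the backup account to read the snapshot.
    ec2 = boto3.client("ec2", region_name=REGION)
    ec2.modify_snapshot_attribute(
        SnapshotId=SNAPSHOT_ID,
        Attribute="createVolumePermission",
        OperationType="add",
        UserIds=[BACKUP_ACCOUNT],
    )

    # From the backup account (its own credentials): copy it, so the copy is owned outright
    # and survives whatever happens to the primary account.
    backup = boto3.Session(profile_name="backup-account").client("ec2", region_name=REGION)
    backup.copy_snapshot(
        SourceRegion=REGION,
        SourceSnapshotId=SNAPSHOT_ID,
        Description="independent copy of primary-account snapshot",
    )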
Unless you're working on software where people will die when it's offline, you should prioritize durability over availability. Backups of backups are more important than your N-tier Web-3 enterprise scalable architecture that allows deployment over 18*π AZs with zero-downtime failover.
See, as a case study, Unisuper's incident on GCP: https://www.unisuper.com.au/about-us/media-centre/2024/a-joi...
They're going to dazzle you with all of their hardened bunker this, and multiple escape route that, not realizing all of their complex machinery is metaphorically running off of a machine with no battery backup. One power outage and POOF!
> Before anyone says “you put all your eggs in one basket,” let me be clear: I didn’t. I put them in one provider, with what should have been bulletproof redundancy:
That's one basket. A single point of failure. "But it should have been impossible to fail!" Backups are to handle the "impossible" failure (in reality nothing is 100% reliable).
My point is that the basket the eggs are put in is not always clear, even in hindsight. I wasn't even aware that Mastercard and Visa shared fraud alerts or that they were automatically linked.
The author’s article is not about backups, it’s about accountability.
They gaslit me for 20 days while violating their own stated policies.
> I’d done everything right. Vault encryption keys stored separately from my main infrastructure. Defense in depth. Zero trust architecture. The works.
Did you? Is putting all your eggs in one basket "defense in depth"? Is total trust in AWS "zero trust architecture"?
I'm not defending AWS here; they fully deserve all the fallout they can get from this, and I do feel for the dev who lost all their stuff through AWS's fuckup. Lots of people do the same.
My current employer does the same. It's a major bank, and all of their stuff is Microsoft: Azure, SharePoint, Office, Teams, the works. I think it's foolish to trust a single foreign company with all your vital data and infrastructure, operating in a country where the government demands access to everything, but this is what everybody does now.
We trust "the cloud" way too much, and expose ourselves to these sort of fuckups.
The architecture was built assuming infrastructure within AWS might fail. What I didn’t plan for was the provider itself turning hostile, skipping their own retention policy, and treating verification as a deletion trigger.
And if you have a similar horror story with a monthly AWS invoice in the tens or hundreds of thousands of dollars (or more), please speak up; I'm very curious to learn what happened.
From what I gather, it was not. Or did you have a strategy for a zero-warning, complete AWS service closure? Just imagine AWS closing its doors from one day to the next due to economic losses, or due to judicial inquiries into illegal practices: were you really prepared for that failure?
The cloud was never data living in tiny rain droplets and swimming across the earth to our clients. The cloud was always somebody else's computer(s) that they control, and we don't. I'm sorry you learnt that lesson the hard way.
It's one of the reasons I don't use my Google account for everything (another is that I don't want them to know everything about me), and I strongly dislike Google's and Microsoft's attempts to force their accounts on me for everything.
Yeah...
If I'm working tickets at AWS that kind of dickishness is going to ensure that I don't do more than the least amount of effort for you.
Maybe I could burn my entire weekend trying to see if I can rescue your data... or maybe I'm going to do nothing more than strictly follow procedure and let my boss know that I tried...
I'll concede that I'm hugely empathetic for people that suffer data loss. The pithy aphorism about there being two types of people -- those who haven't lost data, and those who do backups -- is doubly droll because only the second group really appreciates the phrase.
But it's surprising to find people with more than a decade in IT who don't appreciate the risks here.
The timeline reveals there were 13 days from when the first signs of trouble surfaced, to when the account was deleted. So a fortnight of very unsubtle reminders to do something AND a fortnight in which to act.
(I recently learned the phrase BATNA[0] and in modern <sic> IT where it's Turtles as a Service, all the way down, it's amazing how often this concept is applicable.)
Author seems very keen to blame his part-time sysadmin rather than his systems architect. I can understand the appeal of that blame distribution algorithm, but it's nonetheless misguided.
The phrasing:
> But here’s the dilemma they’ve created: What if you have petabytes of data? How do you backup a backup?
inverts the horse & cart. If you have a petabyte of data that's important, that you can't recreate from other sources, your concern is how to keep your data safe.
If you're paying someone to keep a copy, pay (at least one other) person to keep another copy. Even that isn't something I'd call safe though.
[0] https://en.wikipedia.org/wiki/Best_alternative_to_a_negotiat...
The only failure I didn’t plan for? AWS becoming the failure.
The provider nuking everything in violation of their own retention policies. That’s not a backup problem, that is a provider trust problem.
The reason I did not keep a local copy was that I formatted my computer after a hardware failure, after a nurse dropped the laptop in the hospital I was in. Since I had an AWS backup, I just started with a fresh OS while waiting to be discharged, planning to return home and redownload everything.
When I returned six days later, the backup was gone.
But you need to be aware that you never had backups in the way most sysadmins mean. If I need a friend to take care of a loved one while I'm away, and my backup plan is having the same person take care of them but in a different house or with a different haircut, that's no backup plan: that's bus factor = 1.
Backups mean having a second (or third, etc) copy of your data stored with a 3rd party. Backup assumes you have an original copy of the entirety of the data to begin with.
From that standpoint, and I'm sorry it bit you like this, you never followed good sysadmin practice for backups and disaster recovery. I have no idea what AWS best practices say, but trusting a single actor (whether a hardware manufacturer or a services provider) with all your data has always been against the 3-2-1 golden rule of backups, and what happened to you was inevitable.
Blame AWS all you want, but Google does exactly the same thing all the time, deleting 15-year-old accounts with all associated data and no recourse. Some of us thought the cloud was safe and had all their "cross-region" backups burn in flames in OVH Strasbourg.
We could never trust cloud companies, and some of us never did. I never trusted AWS with my data, and I'm sorry you made that mistake, but you may also take the opportunity to learn how to handle backups properly in the future and never trust a single egg basket, or whatever metaphor is more appropriate.
Good luck to you in the future!
That person wasn’t around to respond.
But the overall post and the double buried ledes make me question the degree to which we’re getting the whole story.
Right or wrong, those messages look like very standard AWS-speak for "this is your mistake, not ours".
I have no idea if that's a reasonable stance or not, but I _will_ say that AWS's internal culture contributes to this sort of bad press. If they would respond to their customers with even a trace of empathy and ownership, this post would likely never have been written.
Maybe the mistake was inventing non-immutable data storage?
Besides, what would be the potential benefit of such a hypothetical script? The author mentions "a bill under $200," so that's the upper limit on how much it costs AWS to keep the author's whole data. If I was working there and a coworker said "Hey I created a script that can save the company $200 by finding a defunct (but paying) customer and wiping out their data!", I'd have replied "What the fuck is wrong with you."
But here is the thing: no one from AWS has given me an official explanation. Not during the 20-day support hell, not after termination, not even when I asked directly: “Does my data still exist?” Just a slow drip of templated replies, evasions, and contradictions.
An AWS insider did reach out claiming it was an internal test gone wrong, triggered by a misused --dry flag and targeting low-activity accounts. At a company that size, you'd expect guardrails, audits, and approvals.
According to them, the team ran it without proper approval. Maybe it's true. Maybe they were trying to warn me. Maybe it's a trap to get me to throw baseless accusations and discredit myself.
I'm not presenting that theory as fact. I don’t know what happened behind the wall.
What I do know is:
- My account was terminated without following AWS’s own 90-day retention policy
- I had a valid payment method on file
- Support stonewalled every direct question for 20 days
- No answers were provided, even post-mortem
There is no “wipe everything immediately” button.
> According to them, AWS MENA was running some kind of proof of concept on “dormant” and “low-activity” accounts. Multiple accounts were affected, not just mine.
If AWS has a 90-day closure policy, why was this account deleted so quickly?
Me paying your bill doesn't give me ownership of your stuff. As far as AWS is concerned, your bill is paid, and that's the extent of their involvement; everything else is between you and me.
If what he writes is true, he remained the account holder and even had a backup billing method in place - something he probably wouldn't have if he wasn't the account holder.
I don't know if he's completely honest about the story, but "somebody else paid, so we decided they are now the owner" isn't how that works.
What happened is that the person paying for the account had to settle an invoice of many thousands of dollars. They offered me AWS gift cards, to send me electronics, and to pay it off in parts.
They had lost a lot of money in the crypto collapse, so I accepted their solution of paying for my OSS usage for a few months.
It's like if I were going to pay your rent for a year: you don't pay, but I don't have to pay 3-4 years of your rent at once.
What happened is that AWS dropped a nuclear bomb on your house in the middle of the month, then told you later that it was about payment.
If they had told me in the first email that it was about the payer, I would have unlinked them and backed up.
They should have to hold it for at least 90 days. In my opinion, it should be more like six months to a year.
In my mind, it's exactly equivalent to a storage space destroying your car five days after you miss a payment. They effectively stole and destroyed the data when they should have been required to return it to the actual owner.
Of course, that's my opinion of how it should be. AFAIK, there is no real legal framework, and how it actually is is entirely up to your provider, which is one reason I never trust them.
Similarly, a physical storage company can totally make a mistake and accidentally destroy your stuff if they mix up their bookkeeping, and your remedy is generally either to reach an amicable settlement with them or sue them for your damages.
> When AWS demanded this vanished payer validate himself, I pointed out that I already had my own Wise card on file—the same card I’d used to pay before the payer arrangement, kept active specifically in case the payer disconnected while I was traveling or offline
https://aws.amazon.com/compliance/shared-responsibility-mode...
You, the customer, are responsible for your data. AWS is only responsible for the infrastructure that it resides on.
AWS is not responsible for what you do with your data. AWS, on its own, is not going to make backups for you so you can recover from a failed DB migration you did yourself. AWS cannot be held responsible for your admin fat-fingering and deleting the entire prod environment.
However, AWS is responsible for the underlying infrastructure. The website you linked clearly shows "storage" as falling under AWS's responsibility: your virtual hard drives aren't supposed to just magically disappear!
If they can just nuke your entire setup with an "Oops, sorry!", what's all the talk about redundancy and reliability supposed to be worth? At that point, how are they any different from Joe's Discount Basement Hosting?
Some accident occurs. You don't pay your bill, address changes, etc. You have at least two entire years to contact the holder and claim your property. After that point, it is passed to the state as unclaimed property. You still have an opportunity to claim it.
Digital data? Screw that! One mistake, everything deleted.
You have potentially stronger civil remedies for recouping on those damages, but not always.
>When I asked why, a colleague warned me: “AWS MENA operates differently. They can terminate you randomly.”
Huh.
(I did have to look it up.)
They said a dry-run flag was passed in the --gnu-style form, but the internal tool expected -dry, and since Java has no native CLI parser, it just ignored it and ran for real.
Supposedly the dev team was used to Python-style CLIs, and this got through without proper testing.
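Whether or not that story is true, the failure mode itself is real: a hand-rolled argument check that expects one dash style silently swallows the other. A purely illustrative sketch (in Python for brevity; the tool in question was reportedly Java):

    # Illustrative only: a parser that expects "-dry" silently ignores "--dry".
    def parse(argv):
        dry_run = False
        targets = []
        for arg in argv:
            if arg == "-dry":        # only the single-dash spelling is recognised
                dry_run = True
            else:
                targets.append(arg)  # "--dry" falls through and is treated as just another target
        return dry_run, targets

    print(parse(["--dry", "acct-0042"]))
    # -> (False, ['--dry', 'acct-0042'])  ... and the "dry run" quietly became a real run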
Just use EC2 and basic primitives which are easy to migrate (ie S3, SES)
I was hospitalized in another city, with all my computers at home, locked behind 2FA.
They sent me the notice on Thursday; by Monday evening, all access was revoked.
For weeks I asked for read-only access to my data; they could then have taken all the time they wanted to verify. They refused.
And the more I asked about my data, the more they avoided talking about it.
Think about it: you could be sick, on a trip, jet-lagged, at a festival, getting married... by the time you're back online, the window has already passed.
A lot of people in this industry have near-zero operations knowledge that doesn't involve AWS, and it's frightening.
Everyone saying "you should've had offsite backups" certainly has a point but 99% of the blame lies with AWS here. This entire process must've crossed so many highly paid "experts" and no one considered freezing an account before nuking it for some compliance thing.
It's just baffling.
Hope these cases will lead to more people leaving the clouds and going back to on-prem stuff.
In the last "mandatory education program" I participated, the AWS instructor laughed at the possibility of data loss.
https://www.infosecurity-magazine.com/news/code-spaces-demis...
If that's your whole infra you really shouldn't be on AWS in the first place.
I've been using a VPS powered by Virtuozzo since like 2002 or 2003, how is EC2 all that different? Just the API?
Per https://en.wikipedia.org/wiki/Virtuozzo_(company), SWsoft was founded in 1997, and publicly released Virtuozzo in 2000.
Per https://en.wikipedia.org/wiki/FreeBSD_jail, FreeBSD jail was committed into FreeBSD in 1999 "after some period of production use by a hosting provider", and released with FreeBSD 4.0 in 2000.
Per https://en.wikipedia.org/wiki/Qmail, qmail was released in 1998.
Today, lots of AWS services are basically just re-packaged OSS packages.
This is smelling like the classic "Dropbox isn't anything new" HN comment.
TODAY, Amazon's services are just re-packaged OSS packages, yes. That wasn't the case before.
Per Wikipedia, live migration was added to OpenVZ in April 2006, one year after OpenVZ was open-sourced in 2005 and five years after Virtuozzo was first released as a commercial product in 2000 (itself three years after the company was started in 1997). I would guess that prior to April 2006, live migration had already been available in commercial Virtuozzo for quite a while.
Not to mention Xen. Isn't it common knowledge that EC2 was powered first by Xen, then by Linux-KVM, switching in 2017? What exactly is their secret sauce, except for stealing OSS and not giving back?
https://en.wikipedia.org/wiki/Virtuozzo_(company)
https://en.wikipedia.org/wiki/OpenVZ#Checkpointing_and_live_...
https://en.wikipedia.org/wiki/Xen
https://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine
https://www.theregister.co.uk/2017/11/07/aws_writes_new_kvm_...
---
Dropbox is a little different, but, even then, doesn't everyone simply use Google Drive today? What's so special about Dropbox in 2025?
Just look at Virtuozzo. Why did SWsoft/Parallels open-source it into OpenVZ? The decision makes no sense, wasn't Virtuozzo their whole product they were selling to the hosting providers?
The answer lies in the complexity of the kernel code. By open-sourcing the underlying technology, they ensured immediate compatibility with upcoming kernel changes. Had they kept it in-house, it would have been a nightmare to keep integrating it with each new Linux release, riddling the technology with preventable bugs and eventually losing to competitors, including Xen and Linux-KVM at the time. OpenVZ was extremely popular in the VPS community until just a few years ago, before we had more RAM than we knew what to do with, and before Linux-KVM replaced both Xen and OpenVZ in the majority of hosting environments over the last 10 years as of 2025.
I do agree with the other commenter that you're just rewriting history here. All of these EC2 clouds are simply repackaged OSS (Xen and then Linux-KVM in the case of AWS EC2). If Amazon had actually developed some truly unique kernel stuff, we'd see it open-sourced by now, because of the difficulty of maintaining those kinds of kernel forks locally. But we don't. Because they haven't.
A lot of people in this industry have near-zero operations knowledge that doesn't involve AWS, and it's frightening.
They also have near-zero knowledge on the history of the field.
You are focused on the details and missing entirely the point of my post.
https://web.archive.org/web/20011204195446/http://www.sw-sof...
https://web.archive.org/web/20011224040032/http://www.sw-sof...
That's 2001, and Virtuozzo was already being advertised as supporting migrations.
OpenVZ remained super popular for at least like 10 years after it was open-sourced as OpenVZ in 2005, well into 2015, at least. Primarily because it allowed over-provisioning of RAM and all the other resources, which wasn't possible in other environments without the memory balloon drivers and other issues.
I imagine the major reason OpenVZ has declined in popularity is because nowadays memory is cheap enough that over-provisioning of RAM isn't that much of a selling point, and not being able to run your own kernel with processor-guaranteed virtualisation, is deemed too old-fashioned and less secure for true multitenancy than Linux-KVM, which has basically taken over the entire market, from both Xen and OpenVZ, and VMware, and everybody else. Even Amazon EC2 is based on Linux-KVM now, whereas previously it was based on Xen.
Yeah, well. The dog ate my homework.
You can request a data takeout via the help site under CCPA or GDPR. It took about two weeks, but I got all my data.
I started losing my shit at people posting anti-trans hysteria. Part of me wants to send spez a bill for 14 years of deleting dick pics and local KKK posting. And then there was COVID...
I’m not much of a conspiracy theorist, but I could imagine a blog post almost identical to this one being generated in response to a prompt like “write a first-person narrative about: a cloud provider abruptly deleting a decade-old account and all associated data without warning. Include a plot twist”.
I literally cannot tell if this story is something that really happened or not. It scares me a little, because if this was a real problem and I was in the author’s shoes, I would want people to believe me.
If it sounds like an LLM, maybe it is because people like me had to learn how to write clearly from LLMs because English is not our first language.
I could've written it in my native tongue, but then someone else would have complained that that's not how English is structured.
Also, the story is real. Just because it is well-structured doesn't mean it's fiction. Yes, I used AI to reorganize it, but I can assure you that no AI would generate the Piers Morgan reference.
If anything, an AI tool would have written a shorter, less rambling post.
For context, here’s a handful of the ChatGPT cues I see.
- “wasn’t just my backup—it was my clean room for open‑source development”
- “wasn’t standard AWS incompetence; this was something else entirely”
- “you’re not being targeted; you’re being algorithmically categorized”
- “isn’t a system failure; the architecture and promises are sound”
- “This isn’t just about my account. It’s about what happens when […]”
- “This wasn’t my production infrastructure […] it was my launch pad for updating other infrastructure”
- “The cloud isn’t your friend. It’s a business”
I counted about THIRTY em-dashes, which any frequent generative-AI user would recognize as a major tell. The average sentence length is around 11 words (try writing with only 11 words per sentence and you'll see why that's silly), and much of the article consists of brief, punchy sentences separated by periods or question marks, which is the classic ChatGPT prose style. For crying out loud, it even ends with a table of quippy one-word cells, like what ChatGPT generates 9 times out of 10 when asked to compare two things.
It’s just disappointing. The author is undermining his own credibility for what would otherwise be a very real problem, and again, his real writing style when you read his actual written work is great.
> I was alone. Nobody understood the weight of losing a decade of work. But I had ChatGPT, Claude, and Grok to talk to
> To everyone who worked on these AIs, who contributed to their training data—thank you. Without you, this post might have been a very different kind of message.
It sounded like, had the author written the post entirely themselves, it might have conveyed a message they didn't think was constructive.
IMHO that's a good thing, something that should be encouraged. Counting the em dashes is just an exercise in missing the forest for the trees. Accusing someone of posting AI slop without evidence should be treated no differently here than accusing them of shilling or sock-puppetry. In other words, it should be prohibited by the site guidelines.
I use both em-dashes and semicolons extensively in my own writing.
Even if the article _were_ written by or in coordination with ChatGPT... why does it matter? The content is either honest or dishonest.
Dude, plenty of people write with em-dashes and semicolons; I personally use them constantly (and I don't use LLMs at all). Em-dashes are trivial to type on macOS (Alt+Shift+Dash), and even on Windows I used to have the alt code (Alt+0151) in muscle memory, though now I just replicate the Mac shortcut with an AutoHotkey script. I get being wary of LLM spam now that it's pretty much everywhere, but this is not the "tell" you think it is.
To be clear, you're free to dislike this writing style, but I'm 100% confident that it has been common since long before LLMs were in widespread usage.
> his real writing style when you read his actual written work is great.
You're doubling down on this not being "his real writing style" despite acknowledging you were wrong about this being written by ChatGPT?
BTW, I actually use the em-dash symbols very frequently myself — on a Mac and on Android, it's very easy through the standard keyboard, with Option-Dash being the shortcut on the Mac.
But maybe it doesn't matter any more? Most people can't.
1) Keep my backups with a different infrastructure provider
2) Never allow third parties to pay for critical services
3) The moment I am asked for "verification," the emergency plan kicks in
Tell me you have off-cloud backups? If not, then I know it's brutal, but AWS is responsible for their part in the disaster, and you are responsible for yours, which is not being able to recover at all.
Store your data on your own disks; then at least you can blame yourself, not... Java command-line parsers?
Ah, but that's still one basket.
Source: I worked there.
https://docs.aws.amazon.com/wellarchitected/latest/reducing-...
I dismissed that claim alone and talked about nothing else. The comment I replied to added nothing to the conversation except an unsubstantiated and plainly false remark.
> Wild that people don't realize that these "separate" systems in AWS all share things like the same control plane.
This comment does not say anything about everything being in one basket. It only claims that all services run on the same control plane, which is so far from the truth that it must have been made by somebody utterly unaware of how big cloud teams operate.
Seems that this community has lost the ability to reason and converse and would rather be outraged.
I mean, sure, every Ford Pinto is strictly its own vehicle, but each will predictably burst into flames when you hit its fuel tank, and I don't wanna travel with a company operating an all-Pinto fleet.
If you're not sure why someone is being downvoted, there's a good chance there is a misunderstanding somewhere, and a good way for you to understand is to wait for people to comment, and then read those comments.
Alternatively (and in a more general sense), if you want to take a more active role in your learning, a good way to learn is to ask questions, and then read the answers.
If you were just yolo’ing it on your own identification without a contract, well, that’s that. You should have converted over to an enterprise agreement so they couldn’t fuck you over. And they will fuck you over.