Least required access: set up a dev environment with CI/CD, and if they need to do any local dev (frontend contacting your API, etc.) then implement auth that you are in control of.
If your team needs to make actual API calls to external services for dev, change that; your codebase is likely not testable. Integration tests should run on CI.
Much of this is likely a few days' work if your codebase isn't massive, and if your codebase is massive then put that in the next sprint.
IAM handles access control and the shell script makes it easy for folks to show, diff, download and upload secrets for a project.
When a secret gets changed it also sends a Slack notification so folks know to download the latest copy.
It costs practically nothing, is super easy to maintain, and it means your app only needs to deal with environment variables.
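A minimal sketch of what a helper script like that could look like, assuming AWS SSM Parameter Store as the backing store and a Slack incoming webhook (the comment doesn't name the actual AWS service, and every path and name here is illustrative):

```bash
#!/usr/bin/env bash
# secrets.sh -- hypothetical helper: IAM policies scoped to the
# parameter path do the per-project access control.
set -euo pipefail

ENV_NAME="${2:-dev}"
PREFIX="/myapp/$ENV_NAME"      # e.g. /myapp/dev, /myapp/prod
ENV_FILE=".env.$ENV_NAME"

fetch() { # print the remote secrets as KEY=value lines
  aws ssm get-parameters-by-path --path "$PREFIX" --with-decryption \
    --query 'Parameters[].[Name,Value]' --output text \
    | awk -F'\t' -v p="$PREFIX/" '{ sub(p, "", $1); print $1 "=" $2 }'
}

case "${1:-}" in
  show)     fetch ;;
  download) fetch > "$ENV_FILE" ;;
  diff)     diff "$ENV_FILE" <(fetch) ;;
  upload)
    while IFS='=' read -r key value; do
      case "$key" in ''|'#'*) continue ;; esac   # skip blanks/comments
      aws ssm put-parameter --name "$PREFIX/$key" --value "$value" \
        --type SecureString --overwrite > /dev/null
    done < "$ENV_FILE"
    curl -fsS -H 'Content-type: application/json' \
      --data "{\"text\":\"Secrets under $PREFIX changed; run download to sync.\"}" \
      "${SLACK_WEBHOOK:?SLACK_WEBHOOK must be set}" ;;
  *) echo "usage: $0 {show|diff|download|upload} [env]" >&2; exit 1 ;;
esac
```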
I do have a blog at https://nickjanetakis.com/blog where I post about a bunch of tech topics. If this gets written up, it'll get posted there.
1password is the company password manager. It has shared 'vaults' where a team can share secrets with one another. These can thus be used for authorization: who can access which secrets.
direnv-1password is a plugin for direnv that loads secrets from 1password into env vars. With this, upon entering a project you'll be asked to unlock 1password (using a YubiKey or fingerprint scan) and it'll fetch the secrets the project needs.
This way secrets are not easily readable from your disk, as they would be with .env files.
Other password managers likely have similar tooling for direnv, though I don't know whether it'll be this convenient.
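For flavor, a minimal `.envrc` sketch that does roughly this with the 1Password `op` CLI directly (direnv-1password wraps a similar flow; the vault/item/field names here are made up):

```bash
# .envrc -- resolve secrets from 1Password on directory entry.
# `op read` takes op://vault/item/field references and will prompt
# to unlock (e.g. via biometrics) if the session is locked.
export DATABASE_URL="$(op read 'op://my-project/database/url')"
export STRIPE_API_KEY="$(op read 'op://my-project/stripe/api-key')"
```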
and use `op run` to inject the secrets into a subprocess instead of storing them in a file: `op run --env-file="./.env.development" -- cargo run`
`export TOKEN=$(op item get 'My Service' --fields label=token --vault workwork)`
So rot is named the way it is because cryptographic keys have a tendency to "rot" rather quickly: frequent use of keys inevitably leads to leakage and/or compromise, and the underlying encryption algorithms may not be secure in the future. Cryptographic material doesn't age well in general.
Put legitimate production keys into k8s secrets and expose these to containers (either natively via serviceaccounts, via secret files, or via environment variables), or into Docker containers, or if you're running bare metal, somewhere in /etc (or an NFS mount if you want to avoid dealing with secrets in Ansible). Deployment pipelines shouldn't ever touch any kind of credentials, they should all be provisioned out of band.
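A rough sketch of the Kubernetes route, provisioned out of band by hand or by IaC rather than by the deploy pipeline (the secret and deployment names are invented):

```bash
# Create the secret out of band -- the deployment pipeline never sees it:
kubectl create secret generic myapp-secrets \
  --from-literal=DATABASE_URL='postgres://...' \
  --from-literal=API_TOKEN='...'

# Expose every key of that secret to the containers as env variables:
kubectl set env deployment/myapp --from=secret/myapp-secrets
```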
I joined a company where we used Slack to bother teammates, as we went around setting up dozens and dozens and dozens and dozens of microservices. Pretty poor consistency across variable names. People unsure what changes they'd made locally. It was a mess.
I was setting everything up for myself via Ansible. So I'd try to extract each variable into a vars/main.yaml, deduplicating with anything else having that value (unless there was a good reason). And then write the .env file for each project.
After a couple years of this, we ended up moving more and more into a .env-defaults file. Since I'd kind of paved the way of checking in some localdev creds, we encoded a lot of the copy-pasted secret values into git in each project.
Our services run on AWS's SSM Parameter Store. We've talked about trying to mine that for localdev, on and off, but it hasn't happened.
We use the same pattern in production and CI/CD.
The rest we share over a secure, company-approved channel and save into local KeePass databases (KeePassXC).
[1] https://docs.ansible.com/ansible/latest/vault_guide/index.ht...
Also, anytime I put an Ansible Vault secret into Bitbucket, I get a yelly email back from BB about “detected secrets in repo!”
So, general question: is this within security standards, or is it very bad and we should be using off-the-repo secret infra like HashiCorp Vault etc.? The downside there is that you have to manually update the secrets on the Vault side (e.g. they are not in code/the repo), and you still have to have some visibility of the Vault key in your CI/CD/code/Ansible in any case, right?
No no, this is one of those secrets we share among the team and save to KeePass or whatever.
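In other words, the vault password is the one secret shared out of band, and everything it encrypts can live in the repo. A minimal sketch (the file paths and values are invented):

```bash
# Keep the shared vault password in a local, gitignored file:
echo 'the-password-from-keepass' > ~/.ansible-vault-pw

# Encrypt a value for use in a committed vars file:
ansible-vault encrypt_string --vault-password-file ~/.ansible-vault-pw \
  's3cr3t-api-key' --name 'api_key'
```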
Your production deployment is what needs access to the third party services, not "the code". Developers shouldn't be accessing these services from dev instances. Developers should get their own credentials or use some kind of staging service which can be shared, but isn't secret.
Generally speaking they aren't willing to make accounts for each dev on your team and manage that on their end. You might have a test env, but the creds for it are still secret for many reasons.
Google, Facebook, etc. do with 100 people what normal companies do with 2.
Generally, Doppler has solved both dev and prod env issues for us.
You can revoke that person's access by changing your .env file credentials and deleting your co-worker's key.
Not sure if it's still fine; I see their last release is two years old at this point.
This of course does not share the secrets themselves, but helps document what secrets an application requires.
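For example, a committed template can document the required names without the values (a hypothetical illustration; the actual tool may work differently):

```bash
# .env.example (committed) -- documents what the app needs, not the values.
# Copy to .env and fill in; .env itself stays gitignored.
DATABASE_URL=        # postgres connection string
STRIPE_API_KEY=      # from the team's shared vault
SMTP_PASSWORD=       # ask ops
```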
All SSM parameters will be managed through Pulumi, and pulled via a script run under aws-vault. Should be fine once the work is done.
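A rough sketch of what that pull could look like (the aws-vault profile name and parameter path are invented):

```bash
# Pull decrypted SSM parameters into a local .env, with AWS credentials
# supplied by aws-vault rather than sitting in the environment:
aws-vault exec dev-profile -- aws ssm get-parameters-by-path \
  --path /myapp/dev --recursive --with-decryption \
  --query 'Parameters[].[Name,Value]' --output text \
  | awk -F'\t' '{ sub(".*/", "", $1); print $1 "=" $2 }' > .env
```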
It utilizes GPG to store the secrets and Go templates to generate the files.
I would like to suggest that anyone who cares about this first gets onto using direnv(1) — and gets their collaborators on projects to do the same — rather than relying on the `.env`-file support included in Python / Docker Compose / etc.
Why? Because of a specific function that direnv(1) injects into its script execution environment, called source_env_if_exists().
You can use this function to separate your env-var config into two "layers": a project-global env, and a user-specific local-worktree env. The global layer can just call source_env_if_exists at the end, pointing at the local layer.
(In my own projects, I usually name this local overlay env-file `.envrc.local`)
With this approach, rather than creating an `.envrc.template` and telling each dev that they need to copy it into place, you can instead create and commit the `.envrc` file itself, holding any public/static base configs; tell your devs to override and add env in `.envrc.local`; and add `.envrc.local` to your project's `.gitignore`, so that nobody will accidentally commit their env overrides.
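Concretely, the two layers might look something like this (a minimal sketch; the variable names are invented):

```bash
# .envrc (committed) -- public/static base config shared by everyone.
export APP_ENV=development
export API_BASE_URL=http://localhost:4000

# Per-developer overrides load last, so they win:
source_env_if_exists .envrc.local
```

```bash
# .envrc.local (gitignored) -- one developer's private overrides.
export API_BASE_URL=http://localhost:5000
export DATABASE_URL=postgres://me@localhost/myapp_dev
```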
This also has an interesting impact: direnv(1) normally de-authorizes the sourcing of the `.envrc` file whenever its content-hash changes — which is annoying when you're constantly tweaking local env-vars. But (perhaps surprisingly), direnv(1) does not recursively hash the script files depended on through source_env_if_exists() when calculating the overall authorization hash of the toplevel script — so changing `.envrc.local` in this setup doesn't result in having to type `direnv allow`.
As another bonus, with the `.envrc.template` approach, as soon as you copy `.envrc.template` to `.envrc`, this base config is now no longer "managed" by the project, and so can age/break. Whereas, a repo base `.envrc` can be updated by the project maintainers, and the resulting derived env will change along with it! (But, of course, making any change to the project base `.envrc` will result in each developer needing to re-authorize the `.envrc` file with `direnv allow` — which is the correct security model, as you don't want maintainers of public projects being able to attack users who've `direnv allow`ed the `.envrc` of their repo, by silently changing the content of that `.envrc`.)
And as an interesting extension to this, you can add code after source_env_if_exists() (see the sketch after this list). So you can:
• validate the env-var values the developer set in `.envrc.local`
• require that certain env-vars be set in `.envrc.local` — where a value not being set results in the `.envrc` script printing an error message and exiting. The dev will see this as soon as they `direnv allow` in the dir, and the message can guide them to configuring the env-vars. (You can even do this conditionally, where e.g. a connstring env-var must be set for a provider if-and-only-if that provider is enabled.)
• allow env-vars to be specified in various styles (together as URLs, or separately as host/username/password/etc.), and then canonicalize / synthesize the env-vars your project actually wants, filling in the parts of a computed var that the dev didn't specify with default values
• require the `.envrc.local` file to export an ENVRC_ABI_VERSION var, compare that to a constant burned into the `.envrc` file, and error on mismatch. Then, if the project's base `.envrc` gets updated in a way where it now looks for different env-vars than before, you can just bump the constant in `.envrc`. When devs re-authorize the new `.envrc` version, rather than just being confused about why everything is now broken, the `direnv allow` command will error out, pointing the dev to a URL on the project wiki where there's `.envrc.local` version-migration guidance (which tells them whatever semantic changes they need to make, and then ends by telling them the new value to update ENVRC_ABI_VERSION to)
...and lots of other things that I can't even think of right now.
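A sketch of what that post-source checking could look like, covering a few of the bullets above (the variable names, version constant, and wiki URL are all invented for illustration):

```bash
# .envrc (committed) -- base config, then local overrides, then checks.
export APP_ENV=development

source_env_if_exists .envrc.local

# Hypothetical "ABI version" handshake with .envrc.local:
if [ "${ENVRC_ABI_VERSION:-}" != "3" ]; then
  echo "ERROR: .envrc.local is missing or outdated (need ENVRC_ABI_VERSION=3)." >&2
  echo "Migration guidance: https://wiki.example.com/envrc-migrations" >&2
  exit 1   # direnv reports the failure and loads nothing
fi

# Conditionally-required var: only demanded if its provider is enabled:
if [ "${ENABLE_POSTGRES:-0}" = "1" ] && [ -z "${DATABASE_URL:-}" ]; then
  echo "ERROR: ENABLE_POSTGRES=1 but DATABASE_URL is not set in .envrc.local." >&2
  exit 1
fi

# Canonicalize: synthesize the URL form from parts, with defaults:
if [ -z "${DATABASE_URL:-}" ] && [ -n "${DB_HOST:-}" ]; then
  export DATABASE_URL="postgres://${DB_USER:-postgres}@${DB_HOST}:${DB_PORT:-5432}/${DB_NAME:-app_dev}"
fi
```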
(Of course, a lot of these benefits could also come from using a fancy configuration library in your actual application. But if your app is intended to be configured in a very specific way in production — for example, if it's a k8s CRD controller, and expects its config as a YAML ConfigMap resource — then it wouldn't really make sense to have all this extra plumbing in your app just for development. And so it can make a lot of sense to instead handle env-config glue-logic like this purely at the dev-env level, outside of the software.)