Tricks of the trade: working with environments

April 8, 2024
Sergio Rua

Introduction

Digitalis is a managed services provider that caters to a diverse range of customers, including our sister company, AxonOps. We manage numerous on-premises and cloud-based environments for our clients. One of the key challenges we faced early on was storing credentials securely while still being able to switch efficiently between different environments, whether for the same project and company or for a completely different one.

In a typical workday, I may be focused on a development environment, only to be called upon by a colleague or customer to address a separate matter. In such cases, the ability to quickly transition to the new environment is crucial for maintaining productivity and responsiveness.

My two goals are:

  • Seamless Environment Switching: I want to be able to quickly open a new terminal and switch all the environment configs to the new project/environment.
  • Secure Credential Management: There is a lot of sensitive information we have access to and we need to ensure sensitive login information is protected from unauthorized access, keeping both the company’s and its clients’ data safe.

Secure Credentials Management

It’s pretty common to see people storing credentials locally. One of the worst offenders is AWS credentials in ~/.aws/credentials. Some companies use saml2aws or commercial alternatives like Okta, but I’d venture that’s a minority.

Another common one is the KUBECONFIG file, which is often kept locally and represents a security risk.

I do not keep anything locally; in most cases it’s all stored in HashiCorp Vault or a similar secrets store.

Environment Switching

I use a simple method: one environment file per project and deployment environment I need access to. For my sanity, I use a simple naming convention of

$CUSTOMER-$PROJECT-$ENV.sh

This config file contains all I need to be able to access the project. More importantly, it ensures my credentials are not stored on my work laptop. Let’s have a look at an example:

CUSTOMER=medium
PROJECT=env-files
ENV=dev

# note: tilde does not expand inside double quotes, so use $HOME
export ENV_TMPDIR="$HOME/Temp/$CUSTOMER/$PROJECT/$ENV"
mkdir -p "$ENV_TMPDIR"

# see section on `trap` below
env_cleanup() {
  rm -rf "$ENV_TMPDIR"
}

export VAULT_ADDR=https://vault.example.com
export VAULT_TOKEN=$(vault login -field=token -method=ldap username=${USER})

export AWS_ACCESS_KEY_ID=$(vault kv get -field=AWS_ACCESS_KEY_ID secret/${CUSTOMER}/${PROJECT}/${ENV}/environment)
export AWS_SECRET_ACCESS_KEY=$(vault kv get -field=AWS_SECRET_ACCESS_KEY secret/${CUSTOMER}/${PROJECT}/${ENV}/environment)

export KUBECONFIG="$ENV_TMPDIR/kube.yaml"
vault kv get -field=KUBECONFIG secret/${CUSTOMER}/${PROJECT}/${ENV}/environment > "$KUBECONFIG"

As you can see, to set up the environment I first log in to Vault (using LDAP login in this example), and right after I’m able to download the configurations I require, such as the AWS keys and the Kubernetes config.

This config is loaded by running

source ~/my-envs/medium-env-files-dev.sh
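To make switching even quicker, the sourcing step can be wrapped in a small helper in ~/.bashrc or ~/.zshrc. This is just a sketch; the `setenv` name and the ~/my-envs directory are illustrative choices, not part of the original workflow:

```shell
# Hypothetical helper: source an environment file by its short name.
# Assumes env files live in ~/my-envs and follow $CUSTOMER-$PROJECT-$ENV.sh.
setenv() {
  local envfile="$HOME/my-envs/$1.sh"
  if [ ! -f "$envfile" ]; then
    echo "No such environment file: $envfile" >&2
    return 1
  fi
  # shellcheck disable=SC1090
  source "$envfile"
}
```

With that in place, `setenv medium-env-files-dev` opens the matching environment in the current shell, and tab completion on the filename does the rest.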

The cool thing is that once I’m done working in that environment, I just close the terminal window and it all goes away, nothing is stored… or is it? There is just one gotcha. You have probably noticed the KUBECONFIG is downloaded to a ~/Temp path. I must not forget to delete it on exit.

The easiest approach is to add the removal to your logout configuration. If you use Bash, this is ~/.bash_logout; if you use Zsh, it’s ~/.zlogout:

if [ -n "$ENV_TMPDIR" ] && [ -d "$ENV_TMPDIR" ]; then
  rm -rf "$ENV_TMPDIR"
fi

The other way to do this is using trap, but unfortunately it’s not that easy to set up when the script is sourced rather than executed. The trap needs to be configured in the parent shell, not in the script:

# In the parent shell, ie, ~/.zshrc or ~/.bashrc
trap env_cleanup EXIT

# Source the script
source ~/my-envs/medium-env-files-dev.sh
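The two steps above can also be combined so the trap is never forgotten. This is a hypothetical wrapper of my own naming (`load_env` is not from the workflow above); it assumes the env file defines an `env_cleanup` function as in the earlier example:

```shell
# Hypothetical wrapper for ~/.bashrc or ~/.zshrc: register the cleanup
# trap first, then source the environment file.
load_env() {
  # Only call env_cleanup on exit if the sourced file actually defined it.
  trap 'type env_cleanup >/dev/null 2>&1 && env_cleanup' EXIT
  # shellcheck disable=SC1090
  source "$1"
}
```

Then `load_env ~/my-envs/medium-env-files-dev.sh` both loads the environment and guarantees the temporary directory is removed when the shell exits.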

Conclusion

Most people will not have to juggle multiple environments, often spread across cloud providers, on-premises data centres, and more. We DevOps engineers hold a lot of power and responsibility. This way of working works for me, and many of my AxonOps and Digitalis colleagues follow a similar pattern.

I hope you enjoyed reading this blog post. As usual, get in touch if there is anything we can help with. 👋
