Where do you keep credentials for your Lambda functions?

Davide de Paolis - Apr 29 '19 - - Dev Community

If your Lambda function has to access a database (or any other service that requires credentials), where and how do you store that configuration?

Recently we have been iterating over our MVP; as the requirements and the size of our app grew, we started discussing how to safely handle the database configuration for different environments/stages and the corresponding users/passwords.

There are quite a few possibilities; let's look at some of them:

Just keep the host, user and password hardcoded in your files.

nope
Please don't. Should I really tell you why?

Use a .env file - which is committed to the repo

neither
Even though this solution allows a bit more flexibility, it is still very bad: everyone who can access your repo can immediately see your credentials.

Use a .secrets file (basically the .env file above, but encrypted via the serverless-secrets-plugin)

mmmh, maybe
This was our very first quick approach, but it didn't work out well because:

  • the credentials are clearly visible in the AWS Console once the Lambda function is deployed (env variables are baked in at deploy time)
  • the risk of someone committing the decrypted file by mistake was high
  • we had to duplicate those files in many repos sharing similar credentials
  • most of all, the question arose: where do we store the password to decrypt those secrets?
plugins:
  - serverless-secrets-plugin
custom:
  secrets: ${file(secrets.${self:provider.stage}.yml)}

Use an SSM encrypted env variable in your serverless.yml

better, but mmmh
This is a step further than the secrets-plugin: AWS Systems Manager Parameter Store allows you to get rid of the file and have a single configuration shared by many lambdas/repos that can be quickly updated via the AWS Console or the AWS CLI. But it has the same drawbacks:

  • the configuration values are stored in plain text as Lambda environment variables: you can see them in clear text in the AWS Lambda console, and if the function is compromised by an attacker (who would then have access to process.env), they'll easily find the decrypted values as well (this video explains how)
  • since you are deploying your code together with the env variables, whenever you need to change the configuration you have to redeploy every single lambda to propagate the change.
custom:
  supersecret: ${ssm:/aws/reference/secretsmanager/secret_ID_in_Secrets_Manager~true}
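To make the first drawback concrete: suppose you map that value onto an env variable of your function, say SUPERSECRET: ${self:custom.supersecret} (the variable name is a placeholder). Any code running inside a compromised function can then simply read it:

// a minimal sketch: the value resolved from SSM at deploy time
// sits in the environment in plain text
exports.handler = async () => {
  console.log(process.env.SUPERSECRET); // decrypted value, also visible in the Lambda console
  return { statusCode: 200 };
};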

Access SSM or SecretsManager at runtime (and use caching)

much better

Store your credentials safely encrypted in Systems Manager Parameter Store or in Secrets Manager (which also allows automatic rotation) and access them at runtime.
Then configure your serverless.yml to grant your lambda access via IAM role statements:

iamRoleStatements:
  - Effect: Allow
    Action:
      - ssm:GetParameter
    Resource: "arn:aws:ssm:YOUR_REGION:YOUR_ACCOUNT_ID:parameter/YOUR_PARAMETER"

You can set this permission with growing levels of granularity:

"arn:aws:ssm:*:*:parameter/*"
"arn:aws:ssm:YOUR_REGION:YOUR_ACCOUNT_ID:parameter/*"
"arn:aws:ssm:YOUR_REGION:YOUR_ACCOUNT_ID:parameter/YOUR_PARAMETER-*"
"arn:aws:ssm:YOUR_REGION:YOUR_ACCOUNT_ID:parameter/YOUR_PARAMETER-SOME_MORE_SPECIFIC"

The code above specifies your ARN / region / account directly. If you want to be more flexible, you can set up the permission to grab those values automagically:

iamRoleStatements:
  - Effect: Allow
    Action:
      - ssm:GetParameter
    Resource:
      - Fn::Join:
          - ':'
          - - arn:aws:ssm
            - Ref: AWS::Region
            - Ref: AWS::AccountId
            - parameter/YOUR_PARAMETER-*

Since Secrets Manager is integrated with Parameter Store, you can access your secrets via SSM by just prepending your key with /aws/reference/secretsmanager/.
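
For example (the secret name below is a hypothetical one), a GetParameter call on such a prefixed name transparently returns the value stored in Secrets Manager, provided the function's role is allowed to read that secret:

const AWS = require('aws-sdk');
const ssm = new AWS.SSM();

// a hypothetical Secrets Manager secret, read through the Parameter Store API
ssm.getParameter(
  {
    Name: '/aws/reference/secretsmanager/my-app/dev/db-credentials',
    WithDecryption: true // decryption must be enabled to get the secret value back
  },
  (err, data) => {
    if (err) console.log(err, err.stack);
    else console.log(data.Parameter.Value); // the value stored in Secrets Manager
  }
);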

One caveat if you start playing around with these permissions: changes to the policy (especially if you edit it in the UI console instead of redeploying the lambda) may take some time to propagate. Normally it's a matter of seconds, but it can occasionally take 2-5 minutes.

Once you have granted your lambda access to your secrets, you can use an environment variable to simply tell your lambda which credentials to load at runtime based on the environment/stage:

custom:
  credentialsKey:
    production: YOUR-PRODUCTION-CREDENTIALS-KEY
    development: YOUR-DEV-CREDENTIALS-KEY
    other: YOUR-OTHER-CREDENTIALS-KEY

functions:
  yourFunction:
    environment:
      SECRETS_KEY: ${self:custom.credentialsKey.${self:provider.stage}, self:custom.credentialsKey.other}

This is a nifty little trick to apply a kind of conditional to the serverless deployment. Basically, you are telling serverless that you have three secrets keys: one for production, one for development and one for all other stages.
In the environment node of the lambda function you then set the key based on the current stage being deployed. If the current stage matches one of the variable names in the list, it will be picked; otherwise, it will fall back to the 'other' one.

Inside your lambda then, you just have to load the credentials from SSM or SecretsManager and connect to your DB.

const AWS = require('aws-sdk');

const ssm = new AWS.SSM();
const params = {
  Name: process.env.SECRETS_KEY,
  WithDecryption: true
};
ssm.getParameter(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data.Parameter.Value); // here you have your values!
});
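If you prefer to store the credentials in Secrets Manager and call it directly (instead of going through the Parameter Store reference), a sketch of the equivalent call, assuming the secret is stored as a JSON key/value document, would be the following (remember the IAM action to allow is then secretsmanager:GetSecretValue):

const AWS = require('aws-sdk');
const secretsManager = new AWS.SecretsManager();

secretsManager.getSecretValue({ SecretId: process.env.SECRETS_KEY }, (err, data) => {
  if (err) console.log(err, err.stack); // an error occurred
  else {
    // SecretString holds the JSON document you stored as key/value pairs
    const { host, username, password } = JSON.parse(data.SecretString);
    console.log(host, username); // do not log the password, obviously
  }
});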

Remember to implement some sort of caching so that when the lambda container is reused you avoid loading the keys from AWS again (and incurring additional costs).
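
A minimal sketch of such caching (assuming the parameter value is a JSON string with the connection settings, and reusing the getParameter call from above) could look like this:

const AWS = require('aws-sdk');
const ssm = new AWS.SSM();

// module scope: survives as long as the Lambda container is reused
let cachedCredentials;

const loadCredentials = async () => {
  if (!cachedCredentials) {
    const { Parameter } = await ssm
      .getParameter({ Name: process.env.SECRETS_KEY, WithDecryption: true })
      .promise();
    cachedCredentials = JSON.parse(Parameter.Value); // assuming a JSON string was stored
  }
  return cachedCredentials;
};

exports.handler = async (event) => {
  const credentials = await loadCredentials(); // hits SSM only on cold start
  // ... connect to your DB with `credentials` here
};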

Something I'd like to point out is that the SSM client requires the AWS region to be defined at instantiation. As you can see, I am not passing that value, though: the AWS SDK reads it automatically from process.env.AWS_REGION, and that env var is set by serverless-offline when running locally (and by the Lambda runtime itself once deployed).

You will not need to do anything about this until you have some integration tests that try to load the secrets. We added some tests to make sure that, after every deployment, the secret for that env/stage was available in SecretsManager; in that case you must pass the region (and the secrets key) to the integration tests manually.
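
Such a check could look roughly like this (a sketch assuming AVA and a file under tests-integration/, with AWS_REGION and SECRETS_KEY coming from the npm script below):

// tests-integration/secrets.test.js - verifies the secret for this stage exists
const test = require('ava');
const AWS = require('aws-sdk');

test('the secret for the current stage can be loaded and decrypted', async (t) => {
  const ssm = new AWS.SSM(); // region is picked up from the AWS_REGION env variable
  const { Parameter } = await ssm
    .getParameter({ Name: process.env.SECRETS_KEY, WithDecryption: true })
    .promise();
  t.truthy(Parameter.Value);
});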

This is our npm script (we are using AVA for tests and Istanbul/nyc for code coverage):

"test:integration": "AWS_REGION=eu-west-1 SECRETS_KEY=MY_KEY_DEVSTAGE nyc ava tests-integration/**/*.*"

Do you have any other approaches to deal with this common - I'd say basic/fundamental - feature?


More resources on the topic:
https://docs.aws.amazon.com/en_us/systems-manager/latest/userguide/integration-ps-secretsmanager.html
https://serverless.com/framework/docs/providers/aws/guide/variables/#reference-variables-using-aws-secrets-manager
