Moving Lambda function from Serverless to Terraform

Introduction

In this article, I will describe how to move an AWS Lambda function deployed as a Serverless application into a Terraform module. The article may be useful for:

  • Those who are not very familiar with the Lambda function development and deployment process and want a quick overview of it, along with a comparison of how it can be done with Serverless and with Terraform.
  • Those who already use Serverless Framework to deploy a Lambda function in AWS, but want to use Terraform to manage it. This might be the case when everything else is already managed by Terraform, except for the Lambda function, which was historically developed as a Serverless application, so you want to consolidate your infrastructure and use Terraform for everything.
  • Those who are going to create a new Lambda function and don’t want to use Serverless Framework for the same reasons as above.

The implementation also covers a special case: running npm install from Terraform during deployment.

Serverless Framework implementation

So let’s assume we’re given a Serverless application deployed in AWS; it could be as simple as an HTTP(S) endpoint that redirects requests somewhere else. Let’s also assume that the function has a layer with Node.js runtime dependencies.

The project structure could look like this:

test-lambda
├── layers
|   └── nodejs
|       ├── package.json
|       └── package-lock.json
├── index.js
└── serverless.yml

So we have:

  • serverless.yml where our Lambda function is described
  • index.js where we have an HTTP request handler
  • layers/nodejs directory where our layer with Node.js dependencies is placed (note that the nodejs directory name isn’t arbitrary; it’s the directory structure AWS Lambda requires for Node.js layers, see the AWS Lambda layers documentation)

Our serverless.yml could look as follows:

service: test-lambda

provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: eu-west-1
layers:
  commonLibs:
    path: layers
    compatibleRuntimes:
      - nodejs12.x
functions:
  handle:
    handler: index.handle
    layers:
      - {Ref: CommonLibsLambdaLayer}
    events:
      - http:
          path: event/push
          method: post

And we deploy it with the following commands:

cd layers/nodejs
npm install

cd ../..
serverless deploy

For those who are not familiar with how Serverless Framework performs deployments, I will briefly describe the process, so that you can get an idea of how it works in general and which AWS resources are involved:

  1. An AWS CloudFormation template is created from our serverless.yml.
  2. An S3 bucket is created (if this is the first deployment), this is where zip files of the Lambda function code will be stored.
  3. The code of the function is packaged into zip files (in our case this would be 2 files - one for the function itself and another one for the Node.js layer).
  4. The hashes of the files are compared against the previous deployment (if any) and the deployment process is terminated if all file hashes are the same (this would mean we’re trying to deploy the same code).
  5. Zip files are uploaded to the S3 bucket.
  6. Any IAM Roles, Functions, Events and Resources are added to the CloudFormation template.
  7. The CloudFormation stack is updated with the new CloudFormation template.
  8. A new version is published for our Lambda function.

This process is described in more detail in the Serverless Framework documentation.

This would create the following AWS resources:

  • CloudFormation template used to provision the stack
  • S3 bucket where zip files of the function are stored
  • Lambda serverless application
  • Lambda function, belonging to the application
  • Lambda layer with Node.js dependencies, belonging to the function
  • CloudWatch log group with log streams for each instance of the function
  • Role attached to the function with the following policies:
    • An “assume role” policy that allows the AWS Lambda service to assume this role
    • A policy allowing the function to create log streams in the CloudWatch log group and put log events to them
  • REST API Gateway with:
    • the /event/push POST endpoint integrated with the function
    • a permission to invoke the function

After we implement the Lambda function with Terraform, we will have pretty much the same set of resources, except for the:

  • CloudFormation template
  • S3 bucket
  • Lambda serverless application

So you can remove them after you’ve moved your function.

Creating Terraform module

So, let’s start creating a Terraform module for deploying our Lambda function. The module we’ll implement works for Terraform v0.11.13 (it may work for v0.12 too, but I didn’t test that) and AWS provider plugin v2.56.0 (some of the AWS resources are not supported in earlier plugin versions). In this article, I will put everything into a single module, but you might want to split the code into separate reusable modules; for instance, you could create modules for the CloudWatch log group and the API Gateway related resources and include them in the main module.
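
Since the module assumes specific Terraform and AWS provider versions, the root configuration that calls it could pin the provider version and pass the stage variable. A minimal sketch (the module path and the stage value are just placeholders):

provider "aws" {
  region  = "eu-west-1"
  version = "~> 2.56"
}

module "test_lambda" {
  # Hypothetical path to the module described in this article
  source                     = "./test-lambda"
  test_lambda_function_stage = "dev"
}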

The main Terraform module will be organized as follows:

test-lambda
├── files
|   ├── layers
|   |   └── commonLibs
|   |       └── nodejs
|   |           ├── package.json
|   |           └── package-lock.json
|   └── index.js
├── main.tf
└── variables.tf

IAM role and logging

Let’s first create resources that the Lambda function is going to depend on - CloudWatch log group and IAM role with policies.

Note: In Terraform code, you can place resources in any order, because Terraform determines the order of creation from the references between resources in the module and from the depends_on directives. But I personally like it when resources appear in the code roughly in the order they are created, which is why we add these resources first.

data "aws_iam_policy_document" "test_lambda_assume_role_policy" {
  statement {
    effect = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      type = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "test_lambda_role" {
  name = "test-lambda-${var.test_lambda_function_stage}-eu-west-1-lambdaRole"
  assume_role_policy = "${data.aws_iam_policy_document.test_lambda_assume_role_policy.json}"

  tags = {
    STAGE = "${var.test_lambda_function_stage}"
  }
}

locals {
  lambda_function_name = "test-lambda-${var.test_lambda_function_stage}"
}

resource "aws_cloudwatch_log_group" "test_lambda_logging" {
  name = "/aws/lambda/${local.lambda_function_name}"
}

data "aws_iam_policy_document" "cloudwatch_role_policy_document" {
  statement {
    effect = "Allow"

    actions = [
      "logs:CreateLogStream",
      "logs:CreateLogGroup",
    ]

    resources = ["${aws_cloudwatch_log_group.test_lambda_logging.arn}"]
  }

  statement {
    effect    = "Allow"
    actions   = ["logs:PutLogEvents"]
    resources = ["${aws_cloudwatch_log_group.test_lambda_logging.arn}:*"]
  }
}

resource "aws_iam_role_policy" "test_lambda_cloudwatch_policy" {
  name = "test-lambda-${var.test_lambda_function_stage}-cloudwatch-policy"
  policy = "${data.aws_iam_policy_document.cloudwatch_role_policy_document.json}"
  role = "${aws_iam_role.test_lambda_role.id}"
}

Nothing special here; just note that we will pass test_lambda_function_stage as a variable when deploying, which allows us to deploy the function to different environments, e.g. dev/prod. Various resource names in the code below include the stage as well. Also, I have declared a local variable for the Lambda function name, as it will be reused later.
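
For completeness, variables.tf could declare the module inputs like this (a minimal sketch; domain_name and certificate_arn are only needed for the optional custom domain part later in the article):

variable "test_lambda_function_stage" {
  description = "Deployment stage of the Lambda function, e.g. dev or prod"
}

variable "domain_name" {
  description = "Custom domain name for the REST API Gateway (optional)"
}

variable "certificate_arn" {
  description = "ARN of the certificate to use for the custom domain (optional)"
}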

Lambda function

Our Lambda function requires two Terraform resources - aws_lambda_layer_version (the Node.js layer) and aws_lambda_function (the function itself with the layer attached to it). Both resources require the filename argument with a path to a deployment file.

Note: An alternative to this argument is a group of s3_* arguments that would specify a location of the deployment file in an S3 bucket, but we will not use this for our function.

We will create deployment packages using the archive_file data source. For the Lambda function itself, we can simply zip index.js, but for the Node.js layer, we need to run npm install first, and we will use null_resource with local-exec provisioner for this. The resource takes no action and the provisioner invokes a local executable after the resource is “created”.

locals {
  build_directory_path = "${path.module}/build"
  lambda_common_libs_layer_path = "${path.module}/files/layers/commonLibs"
  lambda_common_libs_layer_zip_name = "${local.build_directory_path}/commonLibs.zip"
  # Used by the function package below; the exact zip file name is arbitrary
  lambda_function_zip_name = "${local.build_directory_path}/testLambda.zip"
}

resource "null_resource" "test_lambda_nodejs_layer" {
  provisioner "local-exec" {
    working_dir = "${local.lambda_common_libs_layer_path}/nodejs"
    command = "npm install"
  }

  triggers = {
    rerun_every_time = "${uuid()}"
  }
}

data "archive_file" "test_lambda_common_libs_layer_package" {
  type = "zip"
  source_dir = "${local.lambda_common_libs_layer_path}"
  output_path = "${local.lambda_common_libs_layer_zip_name}"

  depends_on = ["null_resource.test_lambda_nodejs_layer"]
}

resource "aws_lambda_layer_version" "test_lambda_nodejs_layer" {
  layer_name = "commonLibs"
  filename = "${local.lambda_common_libs_layer_zip_name}"
  source_code_hash = "${data.archive_file.test_lambda_common_libs_layer_package.output_base64sha256}"
  compatible_runtimes = ["nodejs12.x"]
}

Note that the null resource has a trigger that generates a different string every time, so that the resource is “replaced” (in fact, just the local command is invoked) on every run. Also, the archive_file data source depends on the null resource, so that we create the deployment file only after npm install. You may want to add rm -rf node_modules before npm install if you are going to run Terraform in an environment which does not clean up its workspace.
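
In that case, the local-exec command from the resource above could be adjusted as follows (a sketch of the same null resource with the clean-up step added):

resource "null_resource" "test_lambda_nodejs_layer" {
  provisioner "local-exec" {
    working_dir = "${local.lambda_common_libs_layer_path}/nodejs"
    # Remove previously installed dependencies first, for workspaces that aren't wiped between runs
    command = "rm -rf node_modules && npm install"
  }

  triggers = {
    rerun_every_time = "${uuid()}"
  }
}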

Now, add the Lambda function:

data "archive_file" "test_lambda_package" {
  type = "zip"
  source_file = "${path.module}/files/index.js"
  output_path = "${local.lambda_function_zip_name}"
}

resource "aws_lambda_function" "test_lambda" {
  function_name = "${local.lambda_function_name}"
  filename = "${local.lambda_function_zip_name}"
  source_code_hash = "${data.archive_file.test_lambda_package.output_base64sha256}"
  handler = "index.handle"
  runtime = "nodejs12.x"
  publish = "true"
  layers = ["${aws_lambda_layer_version.test_lambda_nodejs_layer.arn}"]
  role = "${aws_iam_role.test_lambda_role.arn}"

  depends_on = ["module.test_lambda_cloudwatch_log_group"]

  tags = {
    STAGE = "${var.test_lambda_function_stage}"
  }
}

Here, we have attached the role to the function and added an explicit dependency on the log group. You may also want to add the memory_size and timeout arguments, or an aws_lambda_function_event_invoke_config resource, if you need a non-default configuration for the Lambda function.
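
For example, the following arguments could be added inside the aws_lambda_function resource above (the values here are placeholders, not recommendations):

memory_size = 256 # in MB, the Lambda default is 128
timeout     = 30  # in seconds, the Lambda default is 3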

API Gateway

For the REST API Gateway, we will need to create the following Terraform resources:

  • aws_api_gateway_rest_api - the API itself
  • aws_lambda_permission - permission for the API Gateway service to invoke the Lambda function
  • aws_api_gateway_resource - represents a single path part; for the /event/push path, you need an aws_api_gateway_resource for both event and push, with the root API resource as the parent of event and event as the parent of push
  • aws_api_gateway_method - POST method for the push resource
  • aws_api_gateway_integration - to integrate the Lambda function as the target backend
  • aws_api_gateway_deployment - adds a deployment for the API (you can switch between deployments in AWS management console on the Stages view for your API stage)

resource "aws_api_gateway_rest_api" "test_lambda_api" {
  name = "${var.test_lambda_function_stage}-test-lambda"

  tags = {
    STAGE = "${var.test_lambda_function_stage}"
  }
}

resource "aws_lambda_permission" "test_lambda_api_gateway_permission" {
  function_name = "${local.lambda_function_name}"
  principal = "apigateway.amazonaws.com"
  action = "lambda:InvokeFunction"
  source_arn = "${aws_api_gateway_rest_api.test_lambda_api.execution_arn}/*/*"

  depends_on = ["aws_lambda_function.test_lambda"]
}

resource "aws_api_gateway_resource" "test_api_event_resource" {
  rest_api_id = "${aws_api_gateway_rest_api.test_lambda_api.id}"
  parent_id = "${aws_api_gateway_rest_api.test_lambda_api.root_resource_id}"
  path_part = "event"
}

resource "aws_api_gateway_resource" "test_api_event_push_resource" {
  rest_api_id = "${aws_api_gateway_rest_api.test_lambda_api.id}"
  parent_id = "${aws_api_gateway_resource.test_api_event_resource.id}"
  path_part = "push"
}

resource "aws_api_gateway_method" "test_api_event_push_method" {
  rest_api_id = "${aws_api_gateway_rest_api.test_lambda_api.id}"
  resource_id = "${aws_api_gateway_resource.test_api_event_push_resource.id}"
  http_method = "POST"
  authorization = "NONE"
}

resource "aws_api_gateway_integration" "test_api_lambda_integration" {
  rest_api_id = "${aws_api_gateway_rest_api.test_lambda_api.id}"
  resource_id = "${aws_api_gateway_resource.test_api_event_push_resource.id}"
  http_method = "${aws_api_gateway_method.test_api_event_push_method.http_method}"
  integration_http_method = "POST"
  type = "AWS_PROXY"
  uri = "${aws_lambda_function.test_lambda.invoke_arn}"
}

resource "aws_api_gateway_deployment" "test_api_deployment" {
  rest_api_id = "${aws_api_gateway_rest_api.test_lambda_api.id}"
  stage_name = "${var.test_lambda_function_stage}"

  depends_on = ["aws_api_gateway_integration.test_api_lambda_integration"]
}

A few notes:

  • for Lambda integration, you have to specify the AWS_PROXY type, not AWS
  • the aws_api_gateway_deployment explicitly depends on the aws_api_gateway_integration to avoid race conditions
  • for the same reasons, aws_lambda_permission depends on the aws_lambda_function
  • the API stage itself is created by the aws_api_gateway_deployment resource through its stage_name argument, but you can add more stages with aws_api_gateway_stage (see the sketch after this list)
  • the default Empty and Error models are created automatically, but you can add more models with aws_api_gateway_model
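
A minimal sketch of adding an extra stage with aws_api_gateway_stage could look like this (the "staging" stage name is just a hypothetical example):

resource "aws_api_gateway_stage" "test_api_staging_stage" {
  rest_api_id   = "${aws_api_gateway_rest_api.test_lambda_api.id}"
  deployment_id = "${aws_api_gateway_deployment.test_api_deployment.id}"
  stage_name    = "staging"
}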

Additionally, you may want to add a custom domain name for the API Gateway. Creating or importing the domain name and certificate with Terraform is out of the scope of this article; I will only show how to register an existing domain for use with our REST API Gateway.

resource "aws_api_gateway_domain_name" "test_lambda_api_domain" {
  domain_name = "${var.domain_name}"
  certificate_arn = "${var.certificate_arn}"
  security_policy = "TLS_1_2"

  endpoint_configuration {
    types = ["EDGE"]
  }
}

resource "aws_api_gateway_base_path_mapping" "test_lambda_api_domain_connection" {
  domain_name = "${var.domain_name}"
  api_id      = "${aws_api_gateway_rest_api.test_lambda_api.id}"
  stage_name  = "${var.test_lambda_function_stage}"

  depends_on = ["aws_api_gateway_domain_name.test_lambda_api_domain"]
}

We have added an explicit dependency for the aws_api_gateway_base_path_mapping, because it doesn’t reference the aws_api_gateway_domain_name resource directly (only the domain name variable).

Importing resources to the Terraform state

Since all the AWS resources have been created by Serverless Framework, they are not in the Terraform state file, so we need to import them. The comments below show which identifier (name, ID, ARN, etc.) each resource is imported by:

terraform import aws_iam_role.test_lambda_role test-lambda-dev-eu-west-1-lambdaRole # IAM role name

terraform import aws_cloudwatch_log_group.test_lambda_logging /aws/lambda/test-lambda-dev # log group name

terraform import aws_iam_role_policy.test_lambda_cloudwatch_policy test-lambda-dev-eu-west-1-lambdaRole:test-lambda-dev-cloudwatch-policy # role_name/policy_name

terraform import aws_lambda_layer_version.test_lambda_nodejs_layer arn:aws:lambda:eu-west-1:012345678910:layer:commonLibs:1 # layer version ARN

terraform import aws_lambda_function.test_lambda test-lambda-dev # Lambda function name

terraform import aws_api_gateway_rest_api.test_lambda_api a12bc34de5 # REST API ID

terraform import aws_lambda_permission.test_lambda_api_gateway_permission test-lambda-dev/test-lambda-dev-TestLambdaPermissionApiGateway-1ABCDEFG2HIJ3 # function_name/statement_id

terraform import aws_api_gateway_resource.test_api_event_resource a12bc34de5/fg6hij # rest_api_id/resource_id

terraform import aws_api_gateway_resource.test_api_event_push_resource a12bc34de5/kl7mn8

terraform import aws_api_gateway_method.test_api_event_push_method a12bc34de5/kl7mn8/POST # rest_api_id/resource_id/http_method

terraform import aws_api_gateway_integration.test_api_lambda_integration a12bc34de5/kl7mn8/POST

terraform import aws_api_gateway_domain_name.test_lambda_api_domain test-lambda.example.com # domain name

terraform import aws_api_gateway_base_path_mapping.test_lambda_api_domain_connection test-lambda.example.com/ # domain name with base path

You don’t need to import the null resource (it’s just a helper) or the aws_api_gateway_deployment: it doesn’t make sense to import previous deployments, since we want a new deployment to be created when we run Terraform. Terraform doesn’t even provide a way to import API Gateway deployments.

If you run terraform plan after you’ve imported everything, you should see that Terraform wants to do the following:

  1. Update some of the resources with some minor changes:
    • aws_iam_role_policy
    • aws_lambda_layer_version
    • aws_lambda_function
    • aws_lambda_permission
  2. Create a new API deployment (aws_api_gateway_deployment), which will be basically the same as the previous one
  3. Create a new aws_lambda_layer_version and update aws_lambda_function with it - because we run npm install and “replace” the related null resource every time, and the Lambda layer resource depends on it (see the note after this list for a possible way to avoid this).
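
If rebuilding the layer on every run bothers you, one possible alternative (a sketch, not what we did above) is to trigger npm install only when the dependency manifests change, for example by hashing package.json and package-lock.json instead of using uuid():

resource "null_resource" "test_lambda_nodejs_layer" {
  provisioner "local-exec" {
    working_dir = "${local.lambda_common_libs_layer_path}/nodejs"
    command = "npm install"
  }

  triggers = {
    # Re-run npm install only when the dependency manifests change
    package_json = "${base64sha256(file("${local.lambda_common_libs_layer_path}/nodejs/package.json"))}"
    package_lock = "${base64sha256(file("${local.lambda_common_libs_layer_path}/nodejs/package-lock.json"))}"
  }
}

Note that this variant assumes the workspace (including node_modules) is preserved between runs; in an environment that is wiped every time, the uuid() trigger shown earlier is the simpler choice.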

You can run terraform apply now, so that the state file is fully synchronized with the existing infrastructure. If you don’t want it to create a new Lambda layer version and API Gateway deployment, use the -target=<resource_address> parameter to limit the operation to a subset of resources.

Conclusion

Our Lambda function is now completely managed by Terraform, and the state file is updated with all the related AWS resources. We can now update the function with Terraform whenever we need to and use it together with the other Terraform resources in our infrastructure. Thank you for reading, and feel free to let me know in the comments below if you’ve found a mistake or have a question.
