Use Terraform to create an AWS Lambda function that runs your TypeScript code

Leejjon
11 min read · Jun 27, 2023

In this guide, I will show you how to quickly set up the infrastructure needed to deploy TypeScript code to an AWS Lambda function.

There are times when you just want to run some code in the cloud. The fastest way is usually to “click” your way around your cloud provider’s console until you have something that works. Unfortunately, if you create your Lambda function in the AWS Management Console, it creates a lot of resources that you won’t know about. This makes it difficult to troubleshoot what you just configured and even harder to clean up the resources after you are done, because most likely you were just clicking around to see if something works.

Rather than doing “ClickOps”, you should define your infrastructure as code. There are a few possible ways to do this on AWS:

  • AWS CloudFormation — the Infra as Code solution that AWS offers. It uses JSON or YAML files to define AWS resources.
  • AWS CDK — a library to write Infra as Code in your favourite languages such as TypeScript, Python, Java or Kotlin. In the end, it just generates one or more CloudFormation stack(s).
  • AWS SAM — The Serverless Application Model is an extension of CloudFormation.
  • Serverless Framework — a third-party framework for deploying serverless applications such as AWS Lambda functions.
  • Terraform — The vendor agnostic Infra as Code solution by HashiCorp. It uses .tf files with the HCL (HashiCorp Configuration Language) syntax to define resources.

Why am I picking Terraform?

As an AWS Certified Developer — Associate, I have worked a lot with CloudFormation and more recently the CDK. While they work, these skills are only ever useful on AWS. Fortunately I’m not married to AWS. As a freelancer I will work with whatever cloud my customer wants me to use. It could be AWS, Azure, Google Cloud Platform or more specialized platforms like Vercel. By learning how to use Terraform, I hope it becomes easier for me to write infrastructure as code on whatever cloud I need to use.

Besides being cloud agnostic, Terraform seems to create/update/delete resources faster than CloudFormation does and has a more concise syntax.

What are we building?

We are building a Terraform script that deploys a Lambda function that returns a “Hello world!” message in JSON. The function will be written in TypeScript. We won’t set up full-blown CI/CD, but if you know how to deploy all the infrastructure from your terminal, it’s very easy to do the same in a CI/CD pipeline.

What do you need?

  • Install Terraform (I’m using Ubuntu Linux 22.04.2 LTS).
  • Install the AWS CLI and configure it to point to an active AWS account. You need an IAM user with programmatic access (a quick terminal check follows this list).
  • Recommended: use any JetBrains IDE with the Terraform and HCL plugin to get autocompletion in your Terraform files. There is also a Terraform extension for Visual Studio Code.
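If you want to verify this setup from the terminal first, a quick sanity check could look like this (it assumes both tools are on your PATH and your AWS credentials are configured):

# Print the installed Terraform version
terraform -version
# Show which AWS account and IAM identity the CLI is using
aws sts get-caller-identity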

Let’s start

Create a project folder with two subfolders:

  • terraform (create a subfolder src in here, you’ll need it later)
  • lambda

In the terraform folder, create a main.tf file and copy this content into it:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.47.0"
    }
  }
  # backend "s3" {
  #   bucket = "<your unique bucket name>"
  #   key    = "my_lambda/terraform.tfstate"
  #   region = "eu-central-1"
  # }
}

provider "aws" {
  region = "eu-central-1"
}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "<globally unique bucket name>"

  # Prevent accidental deletion of this S3 bucket
  lifecycle {
    prevent_destroy = true
  }
}

Replace <globally unique bucket name> with a unique name for the S3 bucket that will store your Terraform state file.

Run:

terraform init
terraform apply

After the apply command, your bucket should be created in AWS. Check if it is there; mine is called bucket-for-terraform-state-ts-lambda.
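If you prefer the terminal over the console, you can also confirm the bucket exists with the AWS CLI:

# List your buckets and filter for the one you just created
aws s3 ls | grep "<globally unique bucket name>"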

Now you can uncomment the backend block in main.tf (by removing the #’s) and fill in the name of your bucket, so that Terraform starts storing its state there:

  backend "s3" {
    bucket = "<your unique bucket name>"
    key    = "my_lambda/terraform.tfstate"
    region = "eu-central-1"
  }

Run terraform init again and enter yes at the prompt to copy your existing local state to the new S3 backend.

After this step you should see a my_lambda folder that contains a terraform.tfstate file in your S3 bucket:

Now you can run terraform plan and terraform apply. (Technically nothing has changed in AWS yet, but from now on a team member can work on your project and use this same state file.)

Adding the Lambda function resource

Before we can create a Lambda function, we first need to define an IAM role in main.tf:

resource "aws_iam_role" "ts_lambda_role" {
name = "ts_lambda-role"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "lambda.amazonaws.com"
}
}
]
})
}

Next, add the Lambda function:

resource "aws_lambda_function" "ts_lambda" {
filename = "src/lambda_function_${var.lambdasVersion}.zip"
function_name = "ts_lambda"
role = aws_iam_role.ts_lambda_role.arn
handler = "index.handler"
runtime = "nodejs18.x"
memory_size = 1024
timeout = 300
}
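The timestamped filename (coming from the deploy script later in this article) is what tells Terraform that a new deployment is needed. As an optional, hedged variant you could also add source_code_hash, so Terraform would redeploy even if a zip with the same name were overwritten; note that filebase64sha256 requires the zip to exist when terraform plan runs:

resource "aws_lambda_function" "ts_lambda" {
  filename         = "src/lambda_function_${var.lambdasVersion}.zip"
  function_name    = "ts_lambda"
  role             = aws_iam_role.ts_lambda_role.arn
  handler          = "index.handler"
  runtime          = "nodejs18.x"
  memory_size      = 1024
  timeout          = 300
  # Optional: also redeploy when the zip contents change, not only when the filename does
  source_code_hash = filebase64sha256("src/lambda_function_${var.lambdasVersion}.zip")
}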

You’ll notice that the ${var.lambdasVersion} variable gives an error in your IDE, because it hasn’t been defined yet.

In order to resolve this variable, I created a second Terraform file called input.tf in the same folder with the following content:

variable "lambdasVersion" {
type = string
description = "version of the lambdas zip on S3"
}
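The deploy script further down passes this variable with -var. If you ever run Terraform by hand, you can also supply it through an environment variable, which Terraform picks up automatically (the value here is just a placeholder):

# Terraform reads any input variable from a matching TF_VAR_-prefixed environment variable
export TF_VAR_lambdasVersion=manual-test
terraform plan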

The code that the Lambda function needs to run

Before we can deploy a Lambda, we need to write some TypeScript. Open a terminal, cd into the lambda folder of your project and execute the following commands:


cd yourproject/lambda
# Create a package.json, you can use all defaults
npm init
# Install TypeScript as a dev dependency
npm i typescript -D
# Generate a tsconfig.json file
npx tsc --init
# Create a src folder
mkdir src
# Create an index.ts file inside it
touch src/index.ts

In the src/index.ts file, paste this code (don’t hate me for the any type, we’ll fix that later):

export const handler = async (
  event: any
): Promise<any> => {
  const message = "Hello World!";
  console.log(`Returning ${message}`);
  return {
    statusCode: 200,
    body: JSON.stringify(message)
  };
};

In the package.json you generated, add this build command in the scripts:

{
  "name": "lambda",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "build": "tsc"
  },
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "typescript": "^5.1.3"
  }
}
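Before deploying anything to AWS, you can already smoke-test the handler locally. This is an optional step of my own, not part of the article’s deployment flow; it compiles the TypeScript and calls the exported handler with an empty event:

cd yourproject/lambda
# Transpile src/index.ts to src/index.js
npm run build
# Call the handler with an empty event and print the result
node -e 'require("./src/index").handler({}).then(result => console.log(result))'

It should print an object with statusCode 200 and the JSON-encoded “Hello World!” body.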

Writing a deploy script

The ugly part of AWS Lambda (and other AWS services such as Beanstalk) is that it’s not able to just grab an npm project and run npm start. That’s why in the above package.json I don’t specify a start script.

It works with zip files, as if we’re still living in the year 2000. We can create a deploy.sh file in our terraform folder to automate the creation of a zip file:

#!/bin/bash

function deploy {
  # Generate a version number based on a date timestamp so that it's unique
  TIMESTAMP=$(date +%Y%m%d%H%M%S)

  # Transpile the TypeScript to JavaScript and copy only the .js files plus the
  # production node_modules into a dist folder; AWS Lambda has no use for a
  # package.json or TypeScript sources at runtime. Then zip the dist folder into
  # the terraform/src folder and hand the version number to Terraform.
  cd ../lambda/ && \
    npm i && \
    npm run build && \
    npm prune --production && \
    mkdir dist && \
    cp -r ./src/*.js dist/ && \
    cp -r ./node_modules dist/ && \
    cd dist && \
    find . -name "*.zip" -type f -delete && \
    zip -r ../../terraform/src/lambda_function_"$TIMESTAMP".zip . && \
    cd .. && rm -rf dist && \
    cd ../terraform && \
    terraform plan -input=false -var lambdasVersion="$TIMESTAMP" -out=./tfplan && \
    terraform apply -input=false ./tfplan
}

deploy

To run this script:

# Give the script permission to run (you only need to do this once)
chmod +x deploy.sh
# To run the script
./deploy.sh

Once the script has run, check the AWS Management Console to see whether the Lambda function was actually created and contains the correct code in the generated index.js.

Running the Lambda

To quickly test your Lambda, you can create a test event by clicking the blue Test button, which opens a window to define one. According to StackOverflow you cannot define such a test event in Terraform, so just create it via the AWS Management Console. Then run the Lambda with your new event by pressing the Test button:
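If you prefer the terminal over the console, the AWS CLI can invoke the function directly as well (this is my own addition; the --cli-binary-format flag assumes AWS CLI v2):

# Invoke the deployed function with a dummy JSON payload and write the response to a file
aws lambda invoke \
  --function-name ts_lambda \
  --payload '{"key1":"value1","key2":"value2","key3":"value3"}' \
  --cli-binary-format raw-in-base64-out \
  response.json
# Inspect the response
cat response.json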

The code of this first Lambda can be found on GitHub.

Pushing the logs to CloudWatch

You can view the logs of your Lambda function in CloudWatch. The View CloudWatch logs button brings you right there.

But wait, if you click that button it says it cannot find a log group!?

This is because we haven’t created one. To make a log group we need to add the following resources to your main.tf file:


resource "aws_cloudwatch_log_group" "ts_lambda_loggroup" {
name = "/aws/lambda/${aws_lambda_function.ts_lambda.function_name}"
retention_in_days = 3
}

data "aws_iam_policy_document" "ts_lambda_policy" {
statement {
actions = [
"logs:CreateLogStream",
"logs:PutLogEvents"
]
resources = [
aws_cloudwatch_log_group.ts_lambda_loggroup.arn,
"${aws_cloudwatch_log_group.ts_lambda_loggroup.arn}:*"
]
}
}

resource "aws_iam_role_policy" "ts_lambda_role_policy" {
policy = data.aws_iam_policy_document.ts_lambda_policy.json
role = aws_iam_role.ts_lambda_role.id
name = "my-lambda-policy"
}

The first block is the log group itself. The second block is a policy document which contains the permissions that the Lambda function needs to be able to store logs in CloudWatch. The third block assigns the policy document to the role.

Run ./deploy.sh again and you’ll see that the log group now exists when you click on the View CloudWatch logs button.

If you haven’t run the Lambda since, there will be no log streams yet. So go to the Lambda function and click the blue Test button to run it. A log stream will appear that shows the console.log message from your index.ts file, along with some information on how many milliseconds your Lambda took to finish.
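If you’d rather not keep the console open, you can also follow the log group from the terminal (assuming AWS CLI v2):

# Stream log events from the Lambda's log group as they arrive
aws logs tail /aws/lambda/ts_lambda --follow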

The code up to this point can be found here on GitHub.

Triggering the Lambda via a public URL

While it’s easy to do quick tests with the blue test button, you most likely want to trigger your Lambda function by calling an HTTP endpoint.

If you’re going to build an entire API, you might want to define an API gateway. For a single HTTP endpoint, there is something simpler called a Lambda function URL. Let’s define one for our function in the main.tf:

resource "aws_lambda_function_url" "ts_lambda_funtion_url" {
function_name = aws_lambda_function.ts_lambda.id
authorization_type = "NONE"
}

After you run the ./deploy.sh script again you should see the Function URL appear in the AWS Management Console. Click on it to execute it.
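A small optional addition to main.tf (my own, not required for this setup) is an output block, so Terraform prints the URL instead of you having to look it up in the console:

output "function_url" {
  description = "The public URL of the Lambda function"
  value       = aws_lambda_function_url.ts_lambda_funtion_url.function_url
}

After the next apply, terraform output -raw function_url prints the URL, which you can open in a browser or curl directly.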

Here you can find the source code up to this point.

While you can visit the Function URL in your browser, the Lambda doesn’t allow itself to be fetched from JavaScript on another origin by default. You can see this if you create an index.html file with the following content:

<html>
  <head><title>Test to fetch data from Lambda</title></head>
  <body>
    <script>
      fetch("https://dentg3vcgg27xxqzzu5wifohga0jmvfd.lambda-url.eu-central-1.on.aws/")
        .then(response => console.log(response.status))
    </script>
  </body>
</html>

If you open this in a browser you’ll see the CORS (Cross-Origin Resource Sharing) policy error in the console:

We can allow requests from all origins in our main.tf:

resource "aws_lambda_function_url" "ts_lambda_funtion_url" {
function_name = aws_lambda_function.ts_lambda.id
authorization_type = "NONE"
cors {
allow_origins = ["*"]
}
}

Once you run the ./deploy.sh script again and refresh the index.html file in your browser, you’ll see that it starts working:
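By the way, the cors block accepts more arguments than allow_origins if you want to be stricter than a wildcard. A hedged sketch of a tighter configuration, with a placeholder origin standing in for your own front-end:

  cors {
    allow_origins = ["https://example.com"]
    allow_methods = ["GET"]
    allow_headers = ["content-type"]
    max_age       = 3600
  }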

Fixing the types

Now why would we want to use TypeScript if our index.ts contains the any type on both the incoming request event and the response object?

export const handler = async (
  event: any
): Promise<any> => {
  const message = "Hello World!";
  console.log(`Returning ${message}`);
  return {
    statusCode: 200,
    body: JSON.stringify(message)
  };
};

You can read in the official AWS Lambda documentation that AWS hasn’t created type definitions for Lambda events. There is an open-source library that contains types for most of the events. You can install it with:

npm install -D @types/aws-lambda

Even with this library installed, the structure of the event parameter depends on how the Lambda is triggered. We just triggered our Lambda via the Function URL. According to StackOverflow, a Function URL gives an event with the APIGatewayProxyEventV2 type. The response object will be an APIGatewayProxyResultV2<T>. Let’s apply the types:

import {APIGatewayProxyEventV2, APIGatewayProxyResultV2} from "aws-lambda";

export const handler = async (
  event: APIGatewayProxyEventV2
): Promise<APIGatewayProxyResultV2<string>> => {
  const message = "Hello World!";
  console.log(`The event: ${JSON.stringify(event)}`);
  console.log(`Returning ${message}`);
  return {
    statusCode: 200,
    body: JSON.stringify(message),
  } as APIGatewayProxyResultV2<string>;
};

Thanks to the types, we now have code completion of what is expected in the event:

I added a log line to see the full event. When I called the URL from the browser I saw the following event in the CloudWatch logs:

INFO The event:
{
  "version": "2.0",
  "routeKey": "$default",
  "rawPath": "/",
  "rawQueryString": "",
  "headers": {
    "sec-fetch-mode": "navigate",
    "x-amzn-tls-version": "TLSv1.2",
    "sec-fetch-site": "none",
    "accept-language": "en-US,en;q=0.5",
    "x-forwarded-proto": "https",
    "invitation": "felix",
    "x-forwarded-port": "443",
    "x-forwarded-for": "2a02:a45f:5e20:1:5ba6:8963:aecb:99d5",
    "sec-fetch-user": "?1",
    "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8",
    "x-amzn-tls-cipher-suite": "ECDHE-RSA-AES128-GCM-SHA256",
    "x-amzn-trace-id": "Root=1-649b68d4-7289361a13bc41416a14e044",
    "host": "dentg3vcgg27xxqzzu5wifohga0jmvfd.lambda-url.eu-central-1.on.aws",
    "upgrade-insecure-requests": "1",
    "accept-encoding": "gzip, deflate, br",
    "sec-fetch-dest": "document",
    "user-agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/113.0"
  },
  "requestContext": {
    "accountId": "anonymous",
    "apiId": "dentg3vcgg27xxqzzu5wifohga0jmvfd",
    "domainName": "dentg3vcgg27xxqzzu5wifohga0jmvfd.lambda-url.eu-central-1.on.aws",
    "domainPrefix": "dentg3vcgg27xxqzzu5wifohga0jmvfd",
    "http": {
      "method": "GET",
      "path": "/",
      "protocol": "HTTP/1.1",
      "sourceIp": "2a02:a45f:5e20:1:5ba6:8963:aecb:99d5",
      "userAgent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/113.0"
    },
    "requestId": "cc36778c-7c91-422e-81a0-691fc5f66a4d",
    "routeKey": "$default",
    "stage": "$default",
    "time": "27/Jun/2023:22:55:16 +0000",
    "timeEpoch": 1687906516726
  },
  "isBase64Encoded": false
}

If you use the APIGatewayProxyEventV2 type, you have to make sure you only invoke your Lambda via the Function URL. If you, for example, try again from the blue Test button, the event will be:

INFO The event: {"key1":"value1","key2":"value2","key3":"value3"} 

Of course, you can update the test event so that it matches the type our Lambda now expects.
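For example, a trimmed-down test event based on the logged event above (only a subset of the fields, which is enough here because the handler doesn’t read any of them) could look like this:

{
  "version": "2.0",
  "routeKey": "$default",
  "rawPath": "/",
  "rawQueryString": "",
  "headers": {},
  "requestContext": {
    "http": {
      "method": "GET",
      "path": "/"
    }
  },
  "isBase64Encoded": false
}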

My conclusions

  • Once you have set up Terraform with a state file in S3, it’s very fast and easy to modify the infrastructure.
  • Lambda is good for quickly hosting a simple function as a stateless back-end.
  • It’s unfortunate that you have to zip the files in your application, so you need a little bash scripting to create a .zip file and hand it to Terraform.
  • Even though it’s not maintained by AWS, they do recommend the @types/aws-lambda library to give types to your requests and responses.

You can find the final code on my GitHub.

  • Thank you for reading!
  • Leave a comment if you have questions.
  • Follow me on Medium/Twitter/LinkedIn if you want to read more original programming content.
