There are tons of resources out there on how to create a Lambda Layer, and this is likely just another one. However, I struggled to find a straightforward example of how to create a Python layer that can be shared by any Lambda application. So, hoping this might be helpful for someone out there, I wanted to share my experience and how I now create Lambda Layers.
TL;DR
If you’re here looking for an example script to create a layer deployment package and actually create the layer in AWS then scroll to the bottom.
Why Layers?
As the AWS documentation indicates, a Layer is basically a way to reduce the size of your Lambda deployment by moving packages and shared code to a common spot that AWS makes available when a Lambda function is executed.
You can put your big, frequently used packages (numpy, etc.) into a layer so you don’t have to keep packaging them up for each Lambda function you create. This is a great use case for layers, but I also like to use them for shared code within a large application. Sharing common code between multiple Lambda functions through a layer keeps each function’s footprint as small as possible and encourages code reuse.
Structure of a Python Layer
For a layer to be successfully created (in Python) you need to make sure it’s set up using the required folder structure. I use a root folder called “build” where I create the folders:
/build/python/lib/python3.8/site-packages
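You can create that structure in one command; “build” is just the root folder name I use:
$ mkdir -p build/python/lib/python3.8/site-packages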
Installing Packages
Once you have the structure in place, you just need to install packages into it. To do that, use pip as normal but make sure you provide a destination directory:
$ pip install {package-name} -t {destination-folder}
Where {package-name} is the name of the package you wish to install and {destination-folder} is, well, the destination folder for the package. For example, the following will install requests into my sample layer package:
$ pip install requests -t build/python/lib/python3.8/site-packages
That’s it. Simply install whatever packages you want to have in the layer.
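If you keep your dependencies in a requirements.txt file (not something I do for this example, but handy for bigger layers), the same -t flag works there too:
$ pip install -r requirements.txt -t build/python/lib/python3.8/site-packages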
Creating the Layer
To create the layer you need a zip file, compressed from the “python” folder inside the build directory:
$ cd build
$ zip -r python-layer.zip python
Now that you have the zip package, you can upload it directly during the creation of the layer or upload it to an S3 bucket and point the layer at it. There are restrictions on how large the zip file can be before it forces you to upload to an S3 bucket, so I tend to just use S3 for consistency’s sake (I recommend creating a dedicated bucket for this purpose).
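Uploading the zip is a single CLI call; the bucket name and path here are just placeholders:
$ aws s3 cp build/python-layer.zip s3://sample-bucket-name/lambda-layers/python-layer.zip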
I like to use a script to create the layer because it’s rare I need to do something once and only once. Code changes, packages need to be updated, and so on, so I like to script it out. For bigger infrastructure I use CloudFormation templates, or SAM for serverless applications, but that’s a topic for another day.
To create a layer using the AWS CLI it looks like this:
aws lambda publish-layer-version \
--layer-name $layer_name \
--description "Common Python functions for Lambda" \
--license-info "MIT" \
--content S3Bucket={s3_bucket},S3Key={s3_file} \
--compatible-runtimes python3.8
Where {s3_bucket} is the bucket name and {s3_file} is the key of the package zip file you uploaded. If your S3 structure includes sub-folders, be sure to include them in the S3Key.
So if you have “s3://some-bucket-name/a-directory/an-object.zip” it should be split like this:
- S3Bucket=some-bucket-name
- S3Key=a-directory/an-object.zip
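Plugging those values into the command above gives you something like this (the layer name is just a placeholder):
aws lambda publish-layer-version \
--layer-name some-layer-name \
--content S3Bucket=some-bucket-name,S3Key=a-directory/an-object.zip \
--compatible-runtimes python3.8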
Putting it all Together
I wanted a quick and reusable way to create layers so the script I came up with looks like this:
#!/bin/bash
# leave this alone!
folder_structure=python/lib/python3.8/site-packages
# variables:
deployment_directory=build
s3_bucket=sample-bucket-name
s3_path=lambda-layers
zip_file=common_python_lambda_functions.zip
layer_name=common_python_lambda_functions
# create the relevant folder structure
if [ -d "$deployment_directory" ]; then
echo "deleting deployment folder"
rm -Rf $deployment_directory
fi
mkdir -p $deployment_directory/$folder_structure
# add all packages here:
pip install requests -t $deployment_directory/$folder_structure
# zip the python folder (in the build directory)
cd $deployment_directory
zip -r $zip_file python
# remove the s3 object (if it already exists) and upload it
aws s3 rm s3://$s3_bucket/$s3_path/$zip_file
aws s3 cp $zip_file s3://$s3_bucket/$s3_path/$zip_file
# finally, create the layer
aws lambda publish-layer-version \
--layer-name $layer_name \
--description "Common Python functions for Lambda" \
--license-info "MIT" \
--content S3Bucket=$s3_bucket,S3Key=$s3_path/$zip_file \
--compatible-runtimes python3.8
When using the script be sure to update the following variables (at the top of the script):
- deployment_directory
- s3_bucket
- s3_path
- zip_file
- layer_name
You can also adapt it to take those variables as inputs to the script. I just wanted something that I could run quickly, knowing the structure wouldn’t change.
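Once the layer is published, the command returns a LayerVersionArn that you can attach to any function. The function name, region, account ID and version number below are placeholders; substitute the ARN from your own publish-layer-version output:
aws lambda update-function-configuration \
--function-name my-function \
--layers arn:aws:lambda:us-east-1:123456789012:layer:common_python_lambda_functions:1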
Hope you find this useful.
