
AWS Lambda guide part III – Adding S3 trigger in Lambda function

This is the third part of my AWS Lambda tutorial. In the previous chapters I presented the small Python app I created for signing certificate requests and imported it into the AWS Lambda service (check AWS Lambda guide part I – Import your Python application to Lambda). Then I modified the code so that instead of referencing static local files it reads from and writes to an S3 bucket (check AWS Lambda guide part II – Access to S3 service from Lambda function). Now let’s move forward and add an S3 trigger to the Lambda function.

We can always execute a Lambda function manually, either from the web panel or using the CLI. We can also execute it from another application if required. But microservices are often triggered by events. In this article I will show you how to automatically sign a certificate using my Lambda function whenever a request file is uploaded to an S3 bucket. Let me show you how to set up an S3 trigger in Lambda.

Triggering function execution with an event

When we think about a trigger, we actually think about an event that results in the execution of our code – something that can be detected (programmatically, of course) and can lead to predefined actions. It may or may not include passing variables specific to this event to the action we call.

In AWS Lambda we generally have two groups of triggers:

  • Built-in AWS events
  • User-defined triggers

Built-in AWS events are events in other AWS services. It can be a call to API Gateway, a specific log received by CloudWatch, an SNS action, or an operation on an S3 bucket. This list is not final, as new services are added all the time.

User-defined triggers are specific to the code we develop. The sky is the limit here, but it’s our responsibility to include the Lambda invocation code in our application. It gives us a lot of flexibility, as we can write the app in Java but execute a function written in Python.

In this example I want to execute my Lambda function every time a new certificate request file is uploaded to my S3 bucket. I will use the S3 bucket I created in the previous chapter – its name is certsigninglambdabucket. I also assume that certificate request files have a .crt.csr extension and the signed certificate will have the same name but just a .crt extension.

AWS events in Python code

Let’s look at the code from the previous chapter. We call the function main() in our Lambda code. This function gets two parameters:

def main(event, context):

The first parameter is event and I will focus only on this one. AWS Lambda uses this parameter to pass event data to the handler. This parameter is usually of the Python dict type, but it can also be of list, str, int, float, or NoneType type. The second parameter is context. It’s used to provide runtime information to your handler and is of the LambdaContext type.

The parameters in event are JSON structures for all AWS services that we can use as a trigger. Please refer to the official AWS documentation for a reference of the actual structure of the JSON object used for S3 events. The parameter that really interests us here is the key (file) name.
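
For orientation, here is a trimmed sketch of what such an event looks like inside the handler. Only the fields relevant to this tutorial are shown and the values are just examples:

# Trimmed example of an S3 PUT event as seen inside the handler.
# Only the fields used in this tutorial are shown; values are illustrative.
event = {
    'Records': [
        {
            'eventSource': 'aws:s3',
            'eventName': 'ObjectCreated:Put',
            's3': {
                'bucket': {'name': 'certsigninglambdabucket'},
                'object': {'key': 'server01.crt.csr', 'size': 1024}
            }
        }
    ]
}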


Create an event-triggered Lambda function

First let’s create a new Lambda function. I will name it CertSigningS3Trigger. In the second step we define the trigger parameters.

Define S3 trigger parameters for the Lambda function

The parameters we need to specify are:

  • Bucket name – we provide the name of the bucket used for this project
  • Event type – from the list we can select either a specific event or a group of events; I will just select the group of all events that create a new object in the bucket
  • Prefix – if there is a prefix (subfolder) we want to watch, we define it here
  • Suffix – define the keys (files) you want to look for, or leave it empty for all keys
  • Enable trigger – don’t enable it yet. If you do, the function will be invoked automatically; for now leave it disabled.

In the next step use CertSigningS3Trigger.main as the handler and reuse CertSigningLambdaRole from the last chapter. Also, if you have any .crt files in your bucket from the previous chapter, please delete them.
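
As a side note, the same trigger can be configured programmatically. Below is a minimal sketch using boto3 – the Lambda function ARN is an assumption (substitute your own), the suffix filter just mirrors the .crt.csr naming convention used in this series, and when configured this way S3 additionally needs permission to invoke the function, which the web panel sets up for us automatically:

import boto3

# Sketch: configure the same S3 trigger with boto3 instead of the web panel.
# The Lambda ARN below is a placeholder -- use the ARN of your own function.
s3_client = boto3.client('s3')
s3_client.put_bucket_notification_configuration(
    Bucket='certsigninglambdabucket',
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [
            {
                'LambdaFunctionArn': 'arn:aws:lambda:eu-west-1:123456789012:function:CertSigningS3Trigger',
                'Events': ['s3:ObjectCreated:*'],
                'Filter': {
                    'Key': {
                        'FilterRules': [
                            {'Name': 'suffix', 'Value': '.crt.csr'}
                        ]
                    }
                }
            }
        ]
    }
)

Filtering on the .crt.csr suffix also keeps the function from being triggered again when it later writes the signed .crt file back to the same bucket.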

We can add multiple triggers to a Lambda function, which is really flexible in more complicated scenarios.


Create a test event

Before we focus on updating the code, let’s do one more thing. While working on the code you really want to test your function. But imagine what may happen if there is a mistake in the code. I don’t think you really want to enable the trigger unless you are sure the code is working fine.

To test the function you can use a test event. It’s simply a JSON object that you can customize to test all possible scenarios, and it is passed in the event variable when you invoke the function manually. If you don’t define it yourself, AWS will ask you to do so anyway before the first run of the function. This is the step we pretty much skipped in the first chapter.

We will simulate uploading a file to our bucket via the PUT method. To create a test event click the Actions button and select Configure test event from the menu. From the Sample event template list select S3 PUT and you will see a JSON structure based on the template from the documentation. Because we will use only the Key value, which represents the name of the uploaded file, we modify only this value, replacing the default string with the proper filename uploaded to the bucket.

The modified JSON for the test event trigger is available on my GitHub.
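
If you prefer invoking the function from code rather than from the web panel, here is a rough sketch using boto3 – the payload file name is hypothetical, just save your modified JSON under any name:

import boto3

# Sketch: invoke the function with the modified S3 PUT test event from code.
# 's3-put-test-event.json' is a hypothetical file name for the saved JSON.
lambda_client = boto3.client('lambda')

with open('s3-put-test-event.json', 'rb') as f:
    payload = f.read()

response = lambda_client.invoke(
    FunctionName='CertSigningS3Trigger',
    InvocationType='RequestResponse',   # synchronous call
    Payload=payload,
)
print(response['StatusCode'])           # 200 means the invocation succeeded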


S3 trigger event variables in Python code

We need one piece of information that is provided in event – the name of the key (file) with the certificate request. Because event is a JSON structure, we can easily access every value in it. To fetch the key we refer to the appropriate key in the JSON structure assigned to the event variable:

event['Records'][0]['s3']['object']['key']

As you can see, we move down the tree of the JSON object using its key names. The [0] index refers to the first key described by the object; it can contain information about multiple keys if we upload multiple files at the same time. In our case there will be just one, but we still need to use the index.

For safety reasons we will additionally use the urllib.unquote_plus() function from the urllib library. This function analyzes the string, replaces any URL-encoded %xx escapes with their single-character equivalents, and additionally replaces plus signs with spaces.
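
To illustrate what this does (Python 2 syntax, as used throughout this series; in Python 3 the same function lives in urllib.parse):

import urllib

# unquote_plus() decodes URL-style %xx escapes and turns '+' back into spaces.
# S3 delivers object keys in this URL-encoded form.
key = 'my+test%2Bfile.crt.csr'
print(urllib.unquote_plus(key))   # prints: my test+file.crt.csr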

Now that we know how to access the key (file) name value, we just need to use it in our code instead of the hardcoded value:

crt_req_s3obj = s3.Object('certsigninglambdabucket', urllib.unquote_plus(event['Records'][0]['s3']['object']['key']))

The last thing we need to change is the key (file) name we use to store the signed certificate in the S3 bucket. As you remember, this will be the same file name but without the .csr extension. You can do this in many ways (check StackOverflow for one that suits you best). I will just strip the last 4 characters from the key taken from the JSON. So my updated code responsible for writing the file to the bucket is:

s3.Object('certsigninglambdabucket',urllib.unquote_plus(event['Records'][0]['s3']['object']['key'])[:-4]).put(Body=crypto.dump_certificate(crypto.FILETYPE_PEM, new_crt))

I refer to the JSON key again from the event argument, but the [:-4] slice removes the last four characters.
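
If you prefer something more explicit than counting characters, a small alternative sketch (not the code used in this series) is to let os.path.splitext() drop the last extension:

import os

# os.path.splitext() splits off only the last extension,
# so 'server01.crt.csr' becomes 'server01.crt'.
key = 'server01.crt.csr'            # example key name
crt_key = os.path.splitext(key)[0]
print(crt_key)                      # prints: server01.crt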


Delete .csr after signing

We don’t really need to store the .csr file with the certificate request after we successfully sign it. So I will extend my code a little: if the .crt file is successfully written to the bucket, the corresponding .crt.csr file is removed from it.

try:
    # Write the signed certificate under the same key name minus the '.csr' extension
    s3.Object('certsigninglambdabucket', urllib.unquote_plus(event['Records'][0]['s3']['object']['key'])[:-4]).put(Body=crypto.dump_certificate(crypto.FILETYPE_PEM, new_crt))
except ClientError as e:
    print(e.message)
else:
    # Runs only if the put() above raised no exception
    try:
        crt_req_s3obj = s3.Object('certsigninglambdabucket', urllib.unquote_plus(event['Records'][0]['s3']['object']['key']))
        crt_req_s3obj.delete()
    except ClientError as e:
        print(e.message)

return

As you can see, I used a nested try structure. The else block is executed only if there is no exception while executing the code in the try section. To delete the key from the bucket I simply used the delete() method that the boto3 library provides for S3 objects.

Now we are ready to test the function using the test event, then enable the trigger and use it for every file uploaded to our bucket.


Summary

In this chapter we added a trigger that executes the Lambda function. This trigger is the event of uploading a file to an S3 bucket. Then we updated the code so we can use the information provided by the event trigger in our function – in this case just the name of the uploaded file.

What we need to remember:

  • We can use AWS-defined triggers that come from other AWS services. We can set parameters for each trigger
  • Information about the trigger is provided to our function as a JSON structure, so we can easily access information about the event. Refer to the official documentation for information about the JSON structure template and the meaning of its keys.

As you can see, using an S3 trigger in Lambda is not that hard. In general, processing the information associated with a trigger comes down to reading values from the JSON structure passed in the event parameter.
