Using Azure DevOps for Continuous Integration and Continuous Delivery of AWS Lambda Functions

Madhuvanthi Sridhar
14 min read · Aug 25, 2021


When we write code, we often want it to run repeatedly and on demand: to create multiple resources of the same type, to run a process a set number of times, and so on. Initially, this required manual involvement, i.e. a human had to trigger the code every single time we needed it to perform a particular activity. Over the last several decades, technology has evolved dramatically, and we've taken a growing interest in automating everything, from the code we run on our laptops to our daily modes of transportation. Now, although my background is in mechanical engineering and I have a lot to say about the latter, for the purposes of this blog, let's stick to IT and coding to see how deployment techniques have changed over the years.

For those who are new to the concept (feel free to skip ahead if you aren't), continuous integration and continuous delivery/deployment (CI/CD) are phases in the software development process that allow multiple developers to work on different parts of the code while integrating it all together for one final deployment. Continuous integration specifically deals with regularly merging code changes into one shared repository, whether on different branches or a single master branch. Continuous delivery/deployment automates the building, testing, and release of the code using a pipeline that can trigger on a schedule, on code changes, or manually.

In the previous blog, we explored how to set up an AWS Lambda function, its function role, and policies using the in-demand IaC tool Terraform. We then had the function connect to Amazon DynamoDB, SQS, and SNS to deliver and read messages based on viewership data obtained for a TV show. Check out my previous blog, Deploying an AWS Lambda Function that Processes Historical Data…, if you haven't had a chance to view it yet or need a refresher on these concepts. In this blog, we'll look at how to automate builds and deployments of the previously created Lambda function to AWS whenever code changes are made. Figure 1 shows an architecture diagram of the process flow we'll be going through: migrating our code base to Azure DevOps, setting up the build pipelines to generate artifacts, and finally creating releases that deploy our changes to AWS.

Figure 1. Process Flow Schematic for Automating Builds and Deployments of Code via Azure DevOps for AWS Lambda Functions

In the next few sections, we’ll look into how each part of this process fits together, as outlined below.

· If you’re new to setting up CI/CD projects on Azure DevOps…

· Setting up the build and configuring the YAML to zip the Lambda Function code and publish build artifacts to a container

· Creating a release using the artifacts from the triggered build to deploy the code changes to the Lambda Function on AWS

If you’re new to setting up CI/CD projects on Azure DevOps…

In order to set up builds and releases for our code and automate deployments, we must first create a project on Azure DevOps and upload our code base to it. Generally, developers upload to different working branches based on the version of code they're updating, but to keep matters simple and straightforward here, we'll deal only with a master branch to which all our code will be uploaded. Figures 2 and 3 show how to perform this setup.

Figure 2. Setting up a project on Azure DevOps (Part 1)
Figure 3. Setting up a project on Azure DevOps (Part 2)

Here I've created a new project called testLambdaDeployment, which is configured to use Git version control and has private visibility. Depending on your organization and the level of privacy it wants to maintain, you can change this to public visibility, meaning the whole world can see your work. That's something I definitely wouldn't recommend for higher-profile projects (for security purposes), but for small demos like this one, it should be fine.

Moving on, as shown in Figure 4, we can see that an empty project has now been created, and Azure DevOps recommends that we either clone this project to our local system or push code to it via the HTTPS/SSH protocol. We're going to go with the latter option and upload our code base to the project's repository over HTTPS, as represented in Figures 5 and 6. Here, I've added the repository's HTTPS URL as a remote origin to my local project's git config and staged, committed, and pushed the files to the remote repository on Azure DevOps using the CLI. As mentioned earlier, I've pushed to the master branch to avoid any confusion from multiple branches and versions of code.
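For reference, the commands look roughly like the sketch below. The URL is a placeholder; copy the real HTTPS URL that Azure DevOps shows on your project page.

```bash
# Point the local repo at the Azure DevOps remote
# (placeholder URL; use the one from your project page)
git remote add origin https://dev.azure.com/<your-org>/testLambdaDeployment/_git/testLambdaDeployment

# Stage, commit, and push everything to the master branch
git add .
git commit -m "Initial commit of Lambda function code"
git push -u origin master
```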

Figure 4. Setting up the remote origin to upload files from local system to repository (Part 1)
Figure 5. Setting up the remote origin to upload files from local system to repository (Part 2)
Figure 6. Setting up the remote origin to upload files from local system to repository (Part 3)

If all is successful and the code is uploaded to the remote repository, as indicated by the success message on the CLI, we should see the files under Repos when refreshing the page, shown in Figure 7. It can be seen here that Azure DevOps has created a repository with the same name as the project, as mentioned in Figure 4, and has listed all the files and folders we uploaded from our local system via the CLI under this repository.

Figure 7. Updated repository with files from local system

Now that our code base has been pushed to Azure DevOps, the next step is to set up the build for our project, discussed in the next section.

Setting up the build and configuring the YAML to zip the Lambda Function code and publish build artifacts to a container

To set up the build, we start by clicking the set up build option in our repo, represented in Figure 7. This takes us to a page that provides options on how to configure the build pipeline, shown in Figure 8.

Figure 8. Configuring the build pipeline yaml file (Part 1)

We can choose an existing pipeline yaml file if we have one in our repository, or generate some starter yaml code that we can use to build our own custom tasks. I will be choosing the latter option. As can be seen in Figure 9, the starter pipeline yaml is very basic, merely showing how to set up tasks and print statements to the console.

Figure 9. Configuring the build pipeline yaml file (Part 2)

In our case, we need to modify this file to read the function code (index.js), zip it up, name the zip file, and send it to the build artifacts staging directory. Let's take this one step at a time. First, we remove all the starter code except for the trigger and the pool. We keep both of these because the trigger specifies which branch the build reads from (in our case, the master branch), and the pool specifies which OS image runs our commands. We should always use the latest OS image to ensure our builds don't break due to deprecated commands and syntax. After cleaning up the starter code, we can write a step with a script that first installs the zip tool on the system and then zips up the index.js file into the build artifacts staging directory. Figure 10 shows all the yaml code up to this point. We need to list out every installation we depend on whenever working with pipelines, since each run starts from a clean OS image with none of our tools preinstalled.
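As a reference, the yaml at this stage might look roughly like this minimal sketch. The zip name deployLambda matches what we use later; the exact script lines in Figure 10 may differ slightly, and $(Build.ArtifactStagingDirectory) is the predefined pipeline variable for the staging directory.

```yaml
trigger:
- master                      # build whenever master changes

pool:
  vmImage: 'ubuntu-latest'    # always use the latest OS image

steps:
- script: |
    sudo apt-get update
    sudo apt-get install -y zip
    zip -r deployLambda.zip index.js
    cp deployLambda.zip $(Build.ArtifactStagingDirectory)
  displayName: 'Install zip and package the Lambda function code'
```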

Figure 10. Configuring the build pipeline yaml file (Part 3)

Once we've installed everything we need, we can execute the commands that kick off the necessary processes. After this first step, the build artifacts staging directory will contain just the deployLambda zip file. For this file to be published from staging to the artifacts, we need to add one more step, which can be done by clicking on publish build artifacts, shown in Figure 10. This opens a window asking for certain parameters: the path to publish (the build artifacts staging directory that contains our deployLambda zip file), the artifact name we want to use for storage, and where we should publish it, represented in Figure 11.

Figure 11. Configuring the build pipeline yaml file (Part 4)

Since this is an Azure Pipelines build artifact, naturally we should publish it to the Azure Pipelines artifact publish location. Clicking the add button inserts a task named PublishBuildArtifacts@1 with the input parameters we just specified, shown in Figure 12.
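The generated task should look roughly like the following, where testLambda is the artifact name we'll reference later in the release:

```yaml
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'   # contains deployLambda.zip
    ArtifactName: 'testLambda'                           # name referenced by the release
    publishLocation: 'Container'                         # the Azure Pipelines publish location
```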

Figure 12. Configuring the build pipeline yaml file (Part 5)

At this point, we are ready to save the yaml file and run the build. Triggering the build will bring you to a page similar to the one displayed in Figure 13.

Figure 13. Triggering the build (job in queue state)

Here, you'll initially see the status of the job as queued, since it must wait to be picked up by a CI/CD runner that can execute the steps we defined in the yaml file. Once the job has completed successfully, which should be the case if you've followed this demo step by step, you'll see a screen similar to Figure 14, showing the status as success with a green check mark next to the job. Under the summary section, you should see that you have 1 published artifact, indicating that the job has successfully zipped the file, sent it to the build artifacts staging directory, and published the directory's contents to the pipeline artifacts section.

Figure 14. Build succeeded and artifact is published

We can click on the 1 published artifact link to route us to the published artifacts page, where we'll see the artifact we pushed via our code, named testLambda as mentioned earlier, containing the deployLambda zip file published from the build artifacts staging directory. This confirms that we're ready to move on to the next step: creating a release to deploy the changes to AWS Lambda.

Creating a release using the artifacts from the triggered build to deploy the code changes to the Lambda Function on AWS

Just as with builds, releases are also created via a pipeline, triggered with the build artifacts or the repository files. If this is the first release being created, as shown in Figure 15, we need to set up the template, artifacts, and stages of the release pipeline.

Figure 15. Configuring the release pipeline (Part 1)

The templates offer a wide variety of commonly performed tasks, like deploying to Azure App Service, Kubernetes, etc. If you're executing any of these tasks, by all means use the templates. Here, I'm going to show you how to set up the release pipeline from scratch to deploy to AWS Lambda, so we're going to use the empty job template, shown in Figure 16.

Figure 16. Configuring the release pipeline (Part 2)

We can set the stage name to anything descriptive of the type of release being performed. Generally, developers like to name releases by environment, so a QA environment release would be named QA, a PROD release would be named PROD, etc. Here, since we haven't delved into branching or different environments for simplicity's sake, we'll call our release test, as displayed in Figure 17.

Figure 17. Configuring the release pipeline (Part 3)

The next step is to add an artifact, for which we must choose the source type as build. As can be seen in Figure 18, there are several options for the source type, like Azure Repos (which would pull files from our repository), GitHub, and TFVC (another type of version control). In our case, however, we need the release to use the build artifact we generated above, so we must use the build source type. This supplies our build pipeline artifacts as inputs to our release pipeline for change and deployment. Clicking on the build source type option should generally fill out the remaining fields, but in the event that it doesn't, we just need to supply the name of our project (which we set up in the first section), the build pipeline name (found via the dropdown next to that field), and the default version (which should always be latest). The source alias is automatically provided by Azure DevOps based on the build pipeline name.

Figure 18. Configuring the release pipeline (Part 4)

Once this step is complete, we can set up the artifact trigger for continuous deployment, so that the release doesn't have to be manually triggered every time a change is made and a build is executed. This is the lightning symbol under the Artifacts section. To provision this trigger, we toggle the button to enable it and add a branch filter that includes the branch we want to trigger our release from (in our case, master), as represented in Figure 19.

Figure 19. Configuring the release pipeline (Part 5)

We've officially completed half of the release pipeline setup. To finish the remaining steps, add a task on the Tasks tab by choosing the plus sign on the Agent job. Here, we should search for Lambda to find the AWS Lambda Deploy Function task, which allows our changes to the function body to be deployed directly to the AWS Lambda function, shown in Figure 20.

Figure 20. Configuring the release pipeline (Part 6)

The Deploy Lambda Function task has a number of parameters that need to be filled in, the most important of which are the AWS credentials. These are an access key and secret access key combination that need to be added as a service connection to the project. Figures 21–23 show how to go into AWS IAM on the console to retrieve security credentials for a particular user. Here, although I already had an active access key in place, I generated a new one, as shown in Figure 23, so that I could read the secret access key value, which is only displayed when an access key is first created.
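If you prefer the CLI over the console, the same key pair can be generated with the AWS CLI; a hedged alternative, where the user name below is a placeholder for your own IAM user:

```bash
# Create a new access key for the IAM user; the SecretAccessKey in the
# response is shown only once, so store it somewhere safe immediately
aws iam create-access-key --user-name <your-iam-user>
```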

Figure 21. Setting up the AWS IAM User Access Key (Part 1)
Figure 22. Setting up the AWS IAM User Access Key (Part 2)
Figure 23. Setting up the AWS IAM User Access Key (Part 3)

Then, I headed over to the project settings and chose the service connections tab to create a new AWS service connection with the access key ID and secret access key value that I just obtained from IAM, as displayed in Figures 24–26.

Figure 24. Creating a service connection (Part 1)
Figure 25. Creating a service connection (Part 2)
Figure 26. Creating a service connection (Part 3)

We must remember to name our service connection appropriately and describe what we're using it for, as represented in Figure 27, so that we don't get confused when we have 10 other service connections for different pipelines in the same project. Granting access permission to all pipelines is optional and not recommended unless you're working on a personal project or a demo like this one, where using one service connection for multiple tasks isn't a security issue.

Figure 27. Creating a service connection (Part 4)

Now, going back to our Deploy Lambda Function task, we can alter the name of the task (here I included the name of the original Lambda function so it's clear what we're deploying to), supply the credentials we just created, specify the region where our resource lives (us-east-1) and the function's name, and finally the location of the build artifact zip file (deployLambda), as shown in Figure 28.
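Classic release pipelines are configured through the UI, but for reference, the equivalent task definition from the AWS Toolkit for Azure DevOps would look roughly like the sketch below. Treat the input names and values as assumptions to verify against your version of the task; the service connection name, function name, and artifact path are placeholders based on this demo.

```yaml
- task: LambdaDeployFunction@1
  displayName: 'Deploy changes to the testLambda function'
  inputs:
    awsCredentials: 'aws-lambda-connection'   # service connection name (placeholder)
    regionName: 'us-east-1'
    functionName: 'testLambda'                # assumed Lambda function name
    deploymentMode: 'codeonly'                # update the code only, not configuration
    codeLocation: 'localfile'
    localZipFile: '$(System.DefaultWorkingDirectory)/_testLambdaDeployment/testLambda/deployLambda.zip'  # placeholder artifact path
```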

Figure 28. Configuring the release pipeline (Part 7)

We can now save these changes, click on create release, and wait for the deployment to AWS Lambda to complete. While the release is executing, you should see a screen similar to the one displayed in Figure 29, showing a release named Release-1, triggered from the master branch, running the stage test.

Figure 29. Triggering the release

We can click on the stage to view the logs for the release and see whether each step is passing or failing. If you've followed everything up to now, you should see four steps with green checks next to them, indicating that the entire deployment process to AWS Lambda has succeeded: job initialization, artifact retrieval, deployment of the artifact changes to Lambda, and finalization of the job, represented in Figure 30.

Figure 30. Release succeeded (all steps passed)

Now let's head over to the AWS console to check whether this really did work. Under Lambda functions, we should find our function and check when it was last updated to see if the release changes have been applied. Here, shown in Figures 31 and 32, I can see that my function was updated 1 minute ago and that the code matches the index.js file I committed to the Azure DevOps repository, indicating that my release worked successfully.

Figure 31. Function updated on AWS after release succeeded
Figure 32. Function body reflects changes made to code and deployed by release

To double-check this, though, I'm going to make a change to the console.log statement in my function, as shown in Figure 33, and commit the change to the Azure DevOps repository.

Figure 33. Testing whether deployment and release process works when code is changed (Part 1)

This will automatically trigger the build and, in turn, the release, reflecting the changes on the AWS Lambda function body within a few minutes, as seen in Figure 34.

Figure 34. Testing whether deployment and release process works when code is changed (Part 2)

This confirms that the entire build and release deployment processes have been configured correctly on Azure DevOps.

Wrapping it up…

The role of continuous integration/continuous deployment in the IT industry over the past few decades has been extremely influential in promoting automated builds and deliveries of production code. We're in a world where technology sprouts up by the minute, and we're long past having humans do all the manual work to run systems or trigger code releases. Azure DevOps is one of several tools, alongside GitLab, Bitbucket, Jenkins, etc., that let users build and release pipelines through a web UI or CLI, triggering automatically when changes are made to the code base. Once the build and release have been set up the first time, human intervention is no longer needed to supervise these processes, as error messages and failures, as well as success responses, are automatically sent to a user's email or surfaced in the pipeline logs. In this blog, my goal was to share a relatively straightforward use of Azure DevOps CI/CD to deploy code changes to the AWS Lambda function I created in the previous blog, primarily to demonstrate how all these services integrate with one another from a larger perspective.

Written by Madhuvanthi Sridhar

BSMS Mechanical Engineering grad from Georgia Tech, now working as a DevOps developer at WarnerMedia. I have a passion for both cars and coding alike.
