How To Deploy An Amazon Web Services (AWS) EC2 Instance Using Terraform

Terraform enables you to create, change, and improve infrastructure reliably and predictably. It is open source and lets you write declarative configuration files that can be treated as code (Infrastructure as Code). In this article, we are going to step through the process of creating an EC2 instance using Terraform.

The first step is to install Terraform. This is a very easy process, and you can follow the guide at https://www.terraform.io/intro/getting-started/install.html.

Next, we create an IAM account in AWS. Terraform will need these credentials, but putting them directly in the code would be reckless! Instead, we can create a local profile that lets Terraform read the credentials without including them in the actual code, so the code can be stored and shared safely.

Have a look at this video by Bryce McDonald: How To Set Up Profiles To Manage Amazon Web Services (AWS) From The Command Line Using AWS CLI And PowerShell to complete this configuration.

We now need to look at the configuration file that will create your EC2 instance. This is simply called a Terraform configuration file, and it has the extension .tf.

These files are made up of providers and resources. We populate the provider section with the configuration information used to define our AWS environment (our provider).
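
A minimal provider block might look like the following sketch; the profile name and region here are assumptions, so substitute the local profile you created earlier and the region you want to deploy into:

```hcl
# Provider configuration for AWS. The profile and region values are
# placeholders; Terraform reads the credentials from the named local
# AWS profile, so they never appear in the code itself.
provider "aws" {
  profile = "terraform"
  region  = "us-east-1"
}
```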

Next, we are required to define our resources. We define the Amazon Machine Image (AMI) that we will use. Please check the ID for your region, as AMI IDs differ from region to region; if you follow along with this code in the same region, there will be no need to update it. We have selected a Windows Server 2016 image to use in this case.
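
A sketch of a matching resource block is below; the AMI ID, instance type, and tag value are placeholders, so look up the current Windows Server 2016 AMI ID for your region before using it:

```hcl
# EC2 instance resource. The AMI ID below is a placeholder for a
# Windows Server 2016 image; check the current ID for your region.
resource "aws_instance" "windows" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "terraform-demo" # we can search the console by this tag later
  }
}
```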

At this stage we are ready to apply the configuration; however, Terraform will need the AWS plugin and will also need to initialize the Terraform environment. We do both with the command terraform init:
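
```shell
terraform init
```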

Once this completes, you can see from the output that we have the AWS plugin, along with some more information regarding the environment.

So now we are ready to execute the configuration and create our instance. Terraform uses the terraform apply command to execute this, and you are shown exactly what configuration will be applied before you confirm; at this point, you have not actually run anything. (In earlier versions you would have used terraform plan to view the configuration that was to be implemented.)
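
```shell
terraform apply
```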

By typing yes at the prompt, this configuration will now be sent to AWS, and you can see the instance is now ‘creating’.

If we switch over to the Amazon console, we can see the instance. These few lines of code demonstrate how powerfully and easily infrastructure can be created using Terraform.

You can find the instance by searching for the tag we set in the Terraform configuration file.

Use terraform show to view the configuration changes. This is a very rich output that gives you detail on all aspects of the resources you have created.
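
```shell
terraform show
```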

It is also just as easy to remove your configuration using the terraform destroy command. You must be careful with this command, as it will treat every resource defined by the Terraform configuration in the current directory as a candidate for removal.

Let’s run terraform destroy.
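
```shell
terraform destroy
```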

We now type ‘yes’ to confirm.

Back in the AWS console, we can see that the instance has been terminated.

I hope this article has given you some insight into how powerful Terraform is and how easy it is to get a basic configuration up and running!

How to “Rename” Amazon S3 “Folder” Objects with Python

Renaming a folder on a traditional file system is a piece of cake, but what if that file system isn’t really a file system at all? In that case, it gets a little trickier! Amazon’s S3 service consists of objects with key values. There are no folders or files to speak of, but we still need to perform typical filesystem-like actions like renaming folders.

Renaming S3 “folders” isn’t possible, not even in the S3 management console, but we can perform a workaround. We can create a new “folder” in S3 and then copy all of the files from the old “folder” to the new one. Once all of the files are copied, we can remove the source “folder”.

To do this, I’ll be using Python and the boto3 module. If you’re working with S3 and Python and not using the boto3 module, you’re missing out. It makes things much easier to work with.

Prerequisites

For the demonstration in this article to work, you’ll need to meet a few prereqs ahead of time:

  • macOS/Linux
  • Python 3+
  • The boto3 module (pip install boto3 to get it)
  • An Amazon S3 Bucket
  • An AWS IAM user access key and secret access key with access to S3
  • An existing “folder” with “files” inside in your S3 bucket

Renaming an Amazon S3 Key

To rename our S3 folder, we’ll need to import the boto3 module, and I’ve chosen to assign some of the values I’ll be working with as variables.
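
A sketch of that setup is below; the bucket name and the two key prefixes are assumptions for this example, so substitute your own values:

```python
import boto3

# Placeholder values for this example; use your own bucket name and
# the source/destination "folder" key prefixes you want to work with.
bucket_name = 'my-example-bucket'
old_prefix = 'oldfolder/'
new_prefix = 'newfolder/'
```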

Once I’ve done that, I’ll need to authenticate to S3 by providing my access key ID and secret key for the IAM user I’ll be using. In this case, I’ve chosen to use a boto3 session. I’ll be using a boto3 resource to work with S3.
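
Here is one way that could look; the key values are placeholders, and in practice you would load them from a local profile or environment variables rather than hard-coding them:

```python
# Build a boto3 session from the IAM user's keys (placeholders here),
# then get an S3 resource and a handle to the bucket defined above.
session = boto3.Session(
    aws_access_key_id='ACCESS_KEY_ID',
    aws_secret_access_key='SECRET_ACCESS_KEY',
)
s3 = session.resource('s3')
bucket = s3.Bucket(bucket_name)
```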

Once I’ve done that, I then need to find all of the files matching my key prefix. You can see below that I’m using a Python for loop to read the objects in my S3 bucket, applying the optional filter action to narrow them down to only the keys under the prefix of the folder I want to rename.
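
For instance, a minimal version of that loop, using the variables assumed above, just lists the matching keys:

```python
# Iterate over every object whose key starts with the source prefix.
for obj in bucket.objects.filter(Prefix=old_prefix):
    print(obj.key)
```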

Once I’ve started the for loop iterating over the “folder” key and all of the “file” keys inside of it, I’ll then need to exclude the “folder” key itself, since I won’t be copying that; I just need the file keys. I exclude it with an if statement that matches all key values that don’t end with a forward slash.

Once I’m in the block that will only contain file key values, I assign the file name and destination key name to variables to make them easier to reference.

Once I have all of that set up, I finally do the actual copy using the copy_from action. You can see below that I’m creating an S3 object using the bucket name and destination file key, and then passing the source key to the copy_from action.

Finally, I use the delete action to clean up all of the source keys, including the “folder” key itself. Because the delete is not inside of the if condition, it runs for every key in the loop, so once the loop has finished, all of the files have been copied to the new key and everything under the source “folder” is gone.
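
Putting those steps together, here is a sketch of the whole loop under the assumptions above (placeholder bucket and prefix names from earlier):

```python
for obj in bucket.objects.filter(Prefix=old_prefix):
    # Skip the "folder" key itself; only file keys get copied.
    if not obj.key.endswith('/'):
        file_name = obj.key.split('/')[-1]
        dest_key = new_prefix + file_name
        # Copy the object to its new key under the destination prefix.
        s3.Object(bucket_name, dest_key).copy_from(
            CopySource={'Bucket': bucket_name, 'Key': obj.key}
        )
    # The delete sits outside the if block, so every source key,
    # including the "folder" key itself, is removed.
    obj.delete()
```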

At this point, we’re done! You should now see all of the files that were previously in the source key under the destination key with no sign of the source key!

Adam Bertram is a 20-year veteran of IT and experienced online business professional. He’s an entrepreneur, IT influencer, Microsoft MVP, blogger, trainer and content marketing writer for multiple technology companies. Adam is also the founder of the popular IT career development platform TechSnips.