How To Deploy An Amazon Web Services (AWS) EC2 Instance Using Terraform

Terraform enables you to create, change, and improve infrastructure reliably and predictably. It is open source, and it lets you write declarative configuration files that can be treated as code (Infrastructure as Code). In this article, we are going to step through the process of creating an EC2 instance using Terraform.

The first step is to install Terraform. This is a very easy process and can be followed at https://www.terraform.io/intro/getting-started/install.html.

Next, we create an IAM account in AWS. Terraform needs credentials to talk to AWS, but hard-coding them into the configuration would be reckless. Instead, we can create a local profile that Terraform reads at run time, so the credentials never appear in the code itself and the configuration can be stored and shared safely.

Have a look at this video by Bryce McDonald, “How To Set Up Profiles To Manage Amazon Web Services (AWS) From The Command Line Using AWS CLI And PowerShell”, to complete this configuration.

We now need to look at the configuration file that will create your EC2 instance. This is simply called a Terraform configuration file, and it has a .tf extension.

These files are made up of providers and resources. We populate the provider section with the configuration information used to define our AWS environment (our provider).
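A minimal provider block might look like this (the region and profile name below are assumptions for illustration; substitute your own):

```hcl
# The AWS provider. The region and the "terraform" credentials profile
# are placeholders -- use your own region and the profile you created earlier.
provider "aws" {
  region  = "us-east-1"
  profile = "terraform"
}
```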

Next, we are required to define our resources. We define the Amazon Machine Image (AMI) that we will use. Please check the AMI ID for your region, as these IDs differ from region to region. We have selected a Windows Server 2016 image to use in this case.
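A sketch of what the resource block might look like. The AMI ID, instance type, and tag below are placeholders, not values from the original article; look up the current Windows Server 2016 AMI ID for your region:

```hcl
# A hypothetical EC2 instance resource.
resource "aws_instance" "windows2016" {
  ami           = "ami-0123456789abcdef0" # replace with a Windows Server 2016 AMI for your region
  instance_type = "t2.medium"

  tags = {
    Name = "TerraformDemo"
  }
}
```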

At this stage we are ready to apply the configuration. However, Terraform will need to download the AWS plugin and initialize its working environment first. We do this with the command terraform init.
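From the directory containing your .tf file:

```shell
terraform init
```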

Now, as you can see from the screenshot, we have the AWS plugin and some more information regarding the environment.

So now we are ready to execute the configuration and create our instance. Terraform uses the apply command for this, and before anything happens you are shown exactly what configuration will be executed. At this point, nothing has actually been provisioned. (You can also use terraform plan to preview the changes without applying them.)
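To execute the configuration:

```shell
terraform apply
```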

By typing yes, the configuration is sent to AWS, and you can see that the instance is now ‘creating’.

If we switch over to the Amazon console, we can see the instance. These few lines of code demonstrate how powerfully and easily infrastructure can be created using Terraform.

Search by the tag we set in the Terraform configuration file.

Use terraform show to view the state of the resources Terraform has created. This is a very rich output that gives you detail on all aspects of those resources.
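For example:

```shell
terraform show
```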

It is also just as easy to remove your configuration using the terraform destroy command. Be careful with this command: it treats every resource defined by the Terraform files in the current directory as a candidate for removal.

Let’s run terraform destroy.
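From the same directory:

```shell
terraform destroy
```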

We confirm by typing ‘yes’.

Back in the AWS console, we can see that the instance has been terminated.

I hope this article has given you some insight into how powerful Terraform is and how easy it is to get a basic configuration up and running!

 

 

How To Enumerate File Shares On A Remote Windows Computer With PowerShell

It can be challenging to keep track of just what file shares have been set up in your environment. This becomes even more difficult if you have to track this information across multiple servers. Adding to the tedium is remotely connecting to each server to find the list of shares. Thankfully, using PowerShell makes this task a snap, whether you need to enumerate shares on just one server, or many.

Enumerate Shares on a Single File Server

Let’s start by connecting to a remote file server to gather this information from a single server. We will accomplish this by entering into a remote PowerShell session with our file server “FILE01”.
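Entering the remote session looks something like this (FILE01 is the server name from the article; adjust for your environment):

```powershell
Enter-PSSession -ComputerName FILE01
```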

Once connected, it takes a single cmdlet to get file share information:
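That cmdlet is Get-SmbShare:

```powershell
Get-SmbShare
```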

As you can see, this gives us a list of all of the shares on this server. This also includes the administrative shares, whose names end with $.

This does accomplish the task of getting a list of shares, but it is a little cluttered. We can clean up this list by using the -Special parameter and setting it to $false to specify that we do not wish to see the administrative shares:
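For example:

```powershell
Get-SmbShare -Special $false
```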

There, that gives us a much clearer view of the share information we are looking for.

Now that we have our share on this server identified, it might be useful to list all of the properties for this share, especially if we are looking for specific details about our share:
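Assuming the share is named Data (a hypothetical name for this sketch), piping to Format-List shows every property:

```powershell
Get-SmbShare -Name 'Data' | Format-List *
```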

This allows us to view quite a bit of information about our share, including things like the type of share, folder enumeration mode, caching mode, and of course, our share name and path, to name a few.

It is also possible to view the share permissions for this share by switching to the Get-SmbShareAccess cmdlet:
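Again assuming a share named Data:

```powershell
Get-SmbShareAccess -Name 'Data'
```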

This gives us a list of the users and groups, and their current level of access to the share.

We might also have a time where we need to enumerate the share permissions to find out who has full access to a share:
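One way to sketch this is to pipe the non-administrative shares into Get-SmbShareAccess and filter on the access right:

```powershell
Get-SmbShare -Special $false | Get-SmbShareAccess |
    Where-Object { $_.AccessRight -eq 'Full' }
```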

With this information, it is easy to tell who has full access to the share and then take steps to remove that access if it isn’t appropriate for an individual or group.

Now that we are done enumerating shares on a single server, we need to make sure we close our remote PowerShell session:
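```powershell
Exit-PSSession
```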

Enumerate Shares on Multiple File Servers

It is also possible to retrieve this same information from multiple file servers, which is an area where PowerShell really shines. Using Invoke-Command to run Get-SmbShare, we can list the shares on both the FILE01 and FILE02 servers. If we also pipe the output through Format-Table, we can also get a nice organized list:
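Something along these lines (the Format-Table properties shown are one reasonable choice, not the only one):

```powershell
Invoke-Command -ComputerName FILE01, FILE02 -ScriptBlock { Get-SmbShare -Special $false } |
    Format-Table -Property PSComputerName, Name, Path
```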

While entering the file server names manually is fine if there are only two or three servers, it becomes tedious if there are many dozens of servers to check. To get around this, we can assign the output of Get-ADComputer to the variable $FileServAD and get a list of all servers in the “File Servers” Organizational Unit (OU). From there, it’s easy to get the information:
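A sketch, assuming the OU lives in a corp.ad domain (adjust the distinguished name to match your environment):

```powershell
# Gather every computer account in the "File Servers" OU, then query each one.
$FileServAD = Get-ADComputer -Filter * -SearchBase 'OU=File Servers,DC=corp,DC=ad'
Invoke-Command -ComputerName $FileServAD.Name -ScriptBlock { Get-SmbShare -Special $false } |
    Format-Table -Property PSComputerName, Name, Path
```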

There we have it! A nice tidy list of all of the file shares on all of our file servers.

Additional Resources

Companion Video: “How To Enumerate File Shares On A Remote Windows Computer With PowerShell”

How to Manage Docker Volumes on Windows

This blog post was created from a snip created by Matt McElreath. You can check out the video Managing Docker Volumes on Windows if you’re more into video format.

Docker volumes are the preferred way for handling persistent data created by and used by Docker containers. Let’s take a look at how this works.

If you want to store persistent data for containers, there are a couple of options. First, I’ll show you how to use a bind mount. I’m currently in a folder called data on my C-drive. If I list the contents of this folder, you can see that I have five text files.

If I want to make this folder available to a container, I can mount it when starting the container. Let’s go ahead and run a container using docker run. I’m going to run this container in interactive mode, then specify -v. Here, I’m going to put the path to my data folder, followed by a colon, then I will specify the path inside the container where I would like this folder to be mounted.

For this, I’m going to specify the shareddata folder on the C-drive. Then I’ll specify the Windows server core image and finally, I’ll specify that I want to run PowerShell once I’m inside the container.
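Put together, the command looks something like this (the Windows Server Core image name reflects the era of this post; use whichever image and tag match your container host):

```shell
docker run -it -v c:\data:c:\shareddata microsoft/windowsservercore powershell
```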

Now that I’m inside the new container, if I list the contents of the C-drive, you can see that I have a shareddata folder.

Let’s go into that folder and list the contents. Here are my five test files that are located on my container host.

I can also create files in this folder, which will be available to other containers or my container host. Let’s go ahead and run New-Item to create a file called containertest.
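For example (the .txt extension is my assumption):

```powershell
New-Item -Path C:\shareddata\containertest.txt -ItemType File
```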

We can see above that the new file has been created from within the container. Now I’ll run exit to leave this container, which also shuts it down.

If I run docker ps, you can see that there are currently no running containers.

Now let’s list the contents of the data folder again from my container host.

We can see the new file that was created from inside the container called containertest. Bind mounts have some limited functionality, however, so volumes are the preferred way to accomplish what we are trying to do. To get started with volumes, we can run the same command to start up a container, but this time with a couple of small differences. Where we specified the volume, instead of using a path on the container host’s file system, I’m going to use the word hostdata as the name of a volume I want to create and use.
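The command is otherwise the same as the bind-mount example; only the volume argument changes (image name assumed as before):

```shell
docker run -it -v hostdata:c:\shareddata microsoft/windowsservercore powershell
```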

From inside the new container, if I list the contents of the C-drive, you can see again that I have a folder called shareddata.

If I list the contents of that folder, it is currently empty because we created a blank volume. Now let’s run Ctrl-P-Q which will take us out of the running container, but keep it running in the background.

From the container host, let’s run docker volume ls. This will list the current volumes on this container host. I have a volume called hostdata, which was created when I specified it in the docker run command.
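```shell
docker volume ls
```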

If I run docker ps we can see our running container.

Let’s stop that container using docker stop. Now we have no running containers.

Let’s remove the stopped containers by running docker rm. If I list the volumes again, you can see that the hostdata volume is still available and can be mounted to new containers.
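With a single container, the commands look like this (substitute your container’s ID or name):

```shell
docker stop <container-id>
docker rm <container-id>
```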

Another way to create a volume is to use the docker volume create command. If you don’t specify a name, docker will give it a name which is a long list of random characters. Otherwise, you can specify a name here. I’m going to call this volume logdata. Now we can see it is in the list when we list the volumes again.
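For example:

```shell
docker volume create logdata
```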

Now let’s go ahead and mount that to a new container. I’m going to use docker run again and for the volume I’m going to specify the volume that I just created and mount it to c:\logdata.
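Again, only the volume argument changes (image name assumed as before):

```shell
docker run -it -v logdata:c:\logdata microsoft/windowsservercore powershell
```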

From inside the container, I’m going to go into the logdata folder and create a couple of files. Right now, there are no files in this directory, so let’s go ahead and create some.

Now I have two log files in this directory.

Let’s run Ctrl-P-Q again to exit this container while it is still running. While that container’s running, let’s start up a new container with the same volume mounted.

If we run a listing on the logdata folder in the new container we can see the two log files being shared.

Now let’s exit this container. I currently still have one running container and two exited containers.

I’m going to go ahead and stop all running containers, then run docker rm to remove all exited containers.
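From a PowerShell console, one way to do this in bulk (the parentheses are PowerShell syntax that expands to the list of container IDs):

```powershell
docker stop (docker ps -q)
docker rm (docker ps -aq --filter status=exited)
```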

Let’s go ahead and list the volumes again. The logdata volume is still available to be mounted to future containers.

If I just run docker volume, I’ll get some usage help for the command.

We already looked at create, so let’s move on to inspect. If I run docker volume inspect against the logdata volume, it will return the properties for that volume, including the mount point which is the physical path to the volume on the container host.
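```shell
docker volume inspect logdata
```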

Let’s open that folder using Invoke-Item and have a look. Under the logdata folder, there’s a folder called _data. If we open that, we can see the files that were created from the container earlier.

To delete a volume, we can run docker volume rm, followed by the name of the volume you want to delete.
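In this case:

```shell
docker volume rm logdata
```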

Now if I list the volumes, logdata is no longer there.

Finally, we can use prune to remove all unused local volumes. This will delete all volumes that are not mounted to a running or stopped container.
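```shell
docker volume prune
```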

You want to be careful with this command, so there’s a warning and a prompt to make sure you really want to do this. If I type Y and hit enter, it will show me which volumes were deleted.

And if I list my volumes again you can see that they have all been deleted.

How to Set the DNS Server Search Order on Windows with PowerShell

To follow along, you can find a copy of the code used in the SnipSnips GitHub repo.

Setting your DNS server search order with PowerShell is actually really easy. We’ll start with the Get-DNSClientServerAddress to get a look at our existing settings as you can see below.
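For example, limiting the output to IPv4 addresses:

```powershell
Get-DnsClientServerAddress -AddressFamily IPv4
```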

So there you can see our existing settings on ethernet interface index 7: the primary DNS server is 192.168.2.52 and the secondary is .51.

So we’ll do a quick nslookup to file01.corp.ad, to verify that our primary is in fact responding.
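```powershell
nslookup file01.corp.ad
```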

So there we go. You can see above that the responding DNS server is our primary at .52, and that it successfully returned .55, which is our file server.

Now, let’s change the order of our DNS servers. To do that, we’ll use the Set-DnsClientServerAddress cmdlet. We’ll point it at interface index 7 as listed above and change the order, so 192.168.2.51 is our primary and .52 is now our secondary.
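The server addresses are applied in the order they are listed:

```powershell
Set-DnsClientServerAddress -InterfaceIndex 7 -ServerAddresses '192.168.2.51', '192.168.2.52'
```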

We’ll do a quick verification. I’ll check interface index seven.

There, now you can see above, .51 is now our primary as it’s listed first, and .52 is our secondary.

Do another quick nslookup, and you can see that that now returns from .51, which is our primary DNS server.

How to Use Tags in Pester for Targeted Testing

“There’s no sense in being precise when you don’t even know what you’re talking about.” – John von Neumann

 

http://developers-club.com/posts/264697/

I thought this was a good quote as the theme for this post. Re-read the quote, take it in, and then continue reading.

 

During the construction of a set of Pester tests, it can be increasingly difficult to follow the flow of each of the tests and the subjects against which these tests will be performed.

Since Pester tests PowerShell code, you can use PowerShell regions (#region/#endregion comment blocks) to separate sections of code. These regions allow you to collapse large sections of similar code to create an overall more pleasant script-reading experience. While that may assist to some extent, it does not help you find and run specific tests.

Here’s an example of what using a region would look like:

Example 1 - Regions
Region example
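A sketch of what that might look like (the test names and assertions are hypothetical):

```powershell
#region DNS tests
Describe 'DNS' {
    It 'resolves the domain controller' {
        # Resolve-DnsName throws if the name cannot be resolved
        { Resolve-DnsName 'dc01.corp.ad' -ErrorAction Stop } | Should -Not -Throw
    }
}
#endregion

#region Replication tests
Describe 'Replication' {
    It 'replicates without errors' {
        # (real replication checks would go here)
        $true | Should -Be $true
    }
}
#endregion
```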

The problem is that this technique is only useful when reading code, and not nearly as helpful when executing code. A region doesn’t allow you to pick out precisely which test you want to run. That’s where using Tags comes into play.

Tagging is a Pester parameter that allows you to filter Describe blocks using a string or keyword value. When using Invoke-Pester with the -Tag and -PassThru parameters, only the Describe blocks that carry the specified -Tag value will execute.
Here is a simple example:

Example 2
Tag Example
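A sketch of the same tests with tags, and a run that targets only one of them (file name and test content are hypothetical):

```powershell
Describe 'DNS' -Tag 'DNS' {
    It 'resolves the domain controller' {
        { Resolve-DnsName 'dc01.corp.ad' -ErrorAction Stop } | Should -Not -Throw
    }
}

Describe 'Replication' -Tag 'Replication' {
    It 'replicates without errors' {
        $true | Should -Be $true
    }
}
```

Only the Describe block tagged DNS runs when you invoke:

```powershell
Invoke-Pester -Script .\Infrastructure.Tests.ps1 -Tag 'DNS' -PassThru
```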

This is useful because you do not have to maintain multiple test files with only one Describe block in each; instead, you can create a single master file with multiple Describe blocks, each with its own tag. This is very useful when you have an application or infrastructure stack you want to test and want the ability to add any new regression tests you may need in the future.

You can use multiple tags on a Describe block, but I could not get multiple tags working when running Invoke-Pester. This took me a little time to figure out, but after thinking about it, it does make sense: you want to run the single test, or suite of tests, that shares one tag, and running everything is what Invoke-Pester already does by itself!

Since I first started learning about Pester, I have been building a few infrastructure tests for use in the environments I work in often. One particular task involves working with domain controllers and occasionally doing some investigation into replication issues.

Taking what I know and turning it into some tests that run quickly and uniformly has improved my response time to domain controller issues. Tags have been extremely helpful during a couple of troubleshooting tasks now that I could target the specific failed component(s) in my test set based on results of the initial test.

There is no reason to keep running the full test suite if, for example, DNS is not working. I can target DNS services and infrastructure without chasing other possibilities and wasting time. Another benefit was that I did not have to go hunting for additional scripts because I already had the -Tag parameter set on the Describe block in question.

How to Undeniably KNOW You’re an Entrepreneur

Have you ever been in a job you hate? Have you ever been in a position you love? Probably. Lots of people get a job, make decent money and are happy with their lives.

We’re content to separate work and personal life and talk about “work/life balance.” But what if work and life felt like one to you?

You’d be done with the “work/life” balance problem, but depending on where you live and who is in your social circles, you’d feel out of place. You’d hear comments like “What are you thinking?”, “Are you crazy?”, or “You’re going to give up that security?” when you tell others about venturing out on your own and pursuing your dream.

Entrepreneurs want to break free…constantly

What others around you may not realize is that you possess this deep-rooted desire to break free of the full-time employment chains. Golden handcuffs or not, your whole being is telling you to rip them off and be free but cultural pressures and your sanity are screaming you’re crazy.

Any “normal” person with a six-figure job that they enjoyed doing with ultimate flexibility would have a screw loose for wanting to throw it away, right?

The traditional way of thinking of a good job is security, comfort and merely that thing you do for 40 hours/week that sometimes gets in the way of the fun stuff.

“Good” jobs are toxic to an entrepreneur

An entrepreneur with a “good” job thinks that’s 40 hours/week of wasted potential she could use to pursue her dreams and build a business. These opposing forces, when coming together, produce a volatile mix of fear, angst, sadness and excitement.

An entrepreneur trapped in a “good” job is like an intelligent frog sitting in water that is slowly coming to a boil. It sure is warm and comfortable in that water, but the smart frog knows that, at some point, it’s going to be boiled alive. But that water feels so nice!

Entrepreneurs are different

An entrepreneur is different than most. As that iconic Apple commercial puts it, entrepreneurs are “the crazy ones, the misfits, the rebels, the troublemakers, the round pegs in the square holes, the ones who see things differently.”

An entrepreneur isn’t like anyone else. He’ll never be happy working for the best boss in the world, working on the absolute coolest projects and making millions of dollars a year if the goal is for someone else. He’ll always yearn to follow his own path.

An entrepreneur is independent. He has a vision in his head and uses every ounce of his being to fulfill that vision.

A “good” job is just a distraction from reaching the ultimate goal. “Good” jobs are toxic to an entrepreneur. They tempt her into entertaining the idea that she may be happy working for someone else, but at some point, her true calling will surface again and begin gnawing at her to pursue her true passion again – entrepreneurship.

A “good” job temporarily masks an entrepreneur’s spirit. It’s a pill that makes an entrepreneur feel warm and cozy until the effect wears off and he’s left staring in the mirror, questioning himself over and over again.

Entrepreneurs are relentless

An entrepreneur is impatient. He has a vision and will make that vision a reality at whatever cost. He has no time for others who hinder the steps to fulfill the vision. He has one goal and one goal only: to build a successful business, no matter the cost.

An entrepreneur will fail, but the fire an entrepreneur has in his belly is unrelenting. It’s a fire that can’t be extinguished. Entrepreneurship is an addiction to success. It’s an endless pursuit so ingrained in his DNA that it leaves no room for escape, no matter how many bonuses, raises and stock options are thrown at him. It’s a force like no other.

Some questions to ask yourself if you’re an entrepreneur are:

  • Do you have that drive, that unrelenting gnawing inside you that can be pushed down temporarily but always seems to come back up?
  • Do you feel delighted and fulfilled when the work that you do directly contributes to your own success?
  • Do you feel like a job is merely 40 hrs/week that’s holding you back from your real potential?
  • Are you willing to pursue a passion that may end up failing miserably?
  • Do you think of failure as inevitable and use it as a learning experience in your next venture?
  • Will you do anything to work for yourself rather than going back to another job?

If you’ve answered yes to most of these questions, you are an entrepreneur.

If you’re still in a job, quit. Quit now. You’re only delaying the inevitable. You’re fooling yourself into thinking you’re happy.

You’re not.

You’re merely being bamboozled by the money, power and prestige a “good” job gives you. You will be much happier making a quarter of the money you’re making now working on your business, not someone else’s.

The fulfillment you’ll get out of life will increase exponentially and regardless if you fail or succeed, you’ll get back up and do it again until you’re so much better off than what that 3% raise every year gave you.

Entrepreneur vs. Engineer: A Founder’s Dilemma

 

We IT folk are a unique breed, or so I’m told. We’re logical, black-and-white thinkers who require strict rules and are prone to outbursts about tabs vs. spaces at any moment.

We love either Windows or Linux but not both, have a major case of imposter syndrome and are socially inept. At least that’s the stereotype. It turns out a lot of this is true of myself.

A lot of engineers are completely happy going to work, solving interesting problems, working with good people and toiling on interesting open-source side projects in their free time. This used to be me and life was a lot simpler.

The Hidden Entrepreneur Inside

You see, I’m just like your typical IT engineer but with one key difference. For whatever reason, I have this insatiable desire to blaze my own path and to build something on my own that benefits other people like me. It’s a blessing and a curse.

I have a deep-rooted entrepreneur inside of my brain that refuses to give into the simpler ways of living as just an engineer. Lucky for people that enjoy free, IT screencasts, the entrepreneur is what brought you this fine TechSnips platform.

Those who either silence their entrepreneurial brain or are completely happy working as engineers may not completely understand this phenomenon. Let me explain.

Imagine constantly thinking you should be doing something else while working on a task you enjoy more at the time. Consider when you leave the volume at 19 on your TV, see a red M&M in a bowl of green ones or get interrupted when you’re just about finished with an awesome TV show. It’s kinda like that.

Entrepreneur vs. Engineer

I, as an engineer, love solving real, tactical problems. I enjoy spending time seeing the fruits of my labor immediately in the form of a passing test, working code or automation workflow doing its thing. The reward is immediate, same-day and doesn’t require strategizing about content, researching future clients or trying to get the word out about TechSnips.

I, as an entrepreneur, am not supposed to focus on the tactical logistics of building the product. I shouldn’t be spending time neck-deep in code in favor of focusing on the TechSnips marketing strategy, SEO or ensuring that next big opportunity doesn’t pass us by!

My inner entrepreneur and engineer constantly battle. On one hand, I love working for myself, building a business, helping others and fulfilling this vision I’ve had for a long time with TechSnips but on the other hand, I sometimes just want to push the world aside and work on a cool PowerShell script!

Learning Balance

Since TechSnips is only a few months old, I’ve still got some time to find my groove. I’m still allowed to have those weeks where I don’t do the “right” thing focusing on strategic vision, marketing and managerial work in favor of the fun stuff like coding.

I’ve still got time but it’s time to start figuring out what I’m good at, not good at and finding more people to help me in this endeavor.

From what I can tell, running a successful business as a founder is about choosing your battles. You need customers to survive, a motivated team behind you, a great product and a vision for the future. I’m getting there. You also need to know when, and when not, to spend time on frivolous activities in favor of the greater good.

TechSnips continues to grow month after month and I’m excited about the future. I’m looking forward to growing as an entrepreneur and business owner and learning how to adapt my engineer tendencies to business development!

 

 

How Did I Arrive at TechSnips?

Photo by Danka & Peter on Unsplash

Where I Came From

It’s been about 4 years since I decided that I was no longer content to simply use the Internet as a source of information. I knew at that time that I wanted to give back to the online IT community that had helped my career along for so many years. It seemed that the easiest way to begin was to start a blog. Since I already had one created, one that had been all but abandoned, I thought this was a good place to start.

So, in May 2014 I started blogging. I started off by writing once a week about what I had learned during my MCSE studies, as it gave me a good, constant source of ideas. A few months later, I decided to launch a second blog that would focus primarily on Windows Server and PowerShell tips & tricks, guides, lab setups, and walk-throughs. This is where I ran into a bit of a wall.

I was struggling to find ideas that I thought would be interesting to others. The “Imposter Syndrome” was in full force. I just didn’t think I had anything worth sharing, certainly nothing that hadn’t already been done before. That is where I finally stalled out and all but quit writing.

Over the next few years, my writings continued but were quite sporadic as I was still struggling to come up with ideas. It wasn’t until I read an article by Don Jones titled Become the Master, or Go Away that I realized just how much this imposter syndrome was holding me back.

It didn’t immediately break me out of my shell, but it did help me renew my interest in writing. I even thought about creating videos to go along with the blog posts. I still had the same problem though, no idea about what to write about.

How I Got Here

I have always found it interesting how timing plays such an important role in life, and this is one of those times. I had just finished reading “Be the Master” by Don Jones, and found myself resolved to start doing something right away. A few days later I read a guest post written by Adam Bertram on the blog hosted by Mike F. Robbins. The article, TechSnips is Looking for Content and Recruiting Contributors was a good read, and I felt that it was something I should seriously look at.

By the time I got to this part of the article: “You will learn presentation skills through feedback from myself and your peers…” I had already decided that this was something I was going to do. No doubt about it. A chance to record how-to videos sounded like a great idea. Then I read the words “and you will get paid”. This was the icing on the cake!

So, I clicked on the sign-up link, provided the required information, and submitted an audition video. Waiting to see if I would be accepted as a contributor was the longest 14 hours ever. I finally received the e-mail I had hoped for! I was accepted as a contributor to TechSnips, and I was even provided with feedback on this video so that I could improve the next video I recorded.

My Experience So Far

I didn’t know it at the time, but one of the first things I would learn about TechSnips is that everything moves quickly. It can be quite a refreshing change of pace if you’re used to things moving at a slower cadence. I have found that pace to be very motivating and quite exciting, and I love the fact that changes to TechSnips are made quickly and frequently as the business evolves. Keeping up with the changes was a challenge at first, but I quickly adjusted.

One of those changes made during my first few weeks was the introduction of contributor blog posts. The thing I enjoyed most about that change was that Adam went from ‘No, I don’t think we are going to do blog posts’ to ‘Yes we are, and here’s how we are going to do it’ inside of a single sentence. So, as you can see, changes are made rapidly.

The second lesson I learned was that there is always feedback being provided, and at every stage of the production process. For me, this advice is invaluable, as I am quite new to producing videos. The great thing about the advice is that it doesn’t just come from Adam but from everyone. If you have a question, whether it be about submitting a video, or setting up a recording environment on a budget, a quick post to the Slack channel will usually elicit a rapid response with helpful and valuable advice.

Having access to this group of professionals has been a wonderful learning experience, as everyone brings their own skills and unique point of view to the team.

TechSnips also successfully addressed the issue I was having with generating ideas. There is a constant supply of ideas from the other contributors, from subscribers, and sometimes from Adam himself. Once I took a look through those lists of ideas, I realized just how much I had to offer the community. Imposter syndrome… deleted! Well, not entirely, but it isn’t as ever-present as it used to be.

The production quality at TechSnips impressed me right from day one. Every time I submit a video or a blog post, I think to myself “Yeah, that looks pretty good.” Then the editors get a hold of it and give it this incredibly polished look. I will confess to being happily surprised at how good that first video looked after the editing was complete. Now, I find myself anxiously awaiting the final product every time I submit a new video. I just can’t wait to see how good they look.

I am having an absolutely fantastic time with TechSnips! I haven’t had this much fun or felt this excited to work on a new project in a very long time. The sense of teamwork, constant advice, support, and being able to see my content published alongside that of so many other professionals has been quite rewarding. I cannot think of anywhere better to spread my wings and learn some new skills. I have found everything I need here: training, guidance, teamwork, ideas, and enough work and excitement to keep me coming back for more.

I would highly recommend to anyone who is thinking about publishing content to give TechSnips a try. There is nothing to lose from the attempt, and so very much to gain.

Making all Objects in an AWS S3 Bucket Always Public

At TechSnips, we use Amazon S3 to store all of the stuff required to operate. One ability we need is to provide a publicly accessible repository of files. Luckily, S3 has this ability to set objects to public read-access.

To set an object to public read-access, you can click on Make Public via right-clicking on the object inside of the S3 Management Console.


This is all well and good, but if you’ve got tons of files constantly being uploaded to S3, manually making every object public like this isn’t practical!

After a bit of digging, I was able to figure out how to make all objects public the moment they are added to a particular bucket. Doing this requires creating a bucket policy that applies to all GetObject actions. You can see the bucket policy I used below for our techsnips-public S3 bucket.
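For reference, a bucket policy of this shape grants anonymous read access to every object in the techsnips-public bucket; the Sid value is arbitrary, and the trailing /* in the Resource ARN is what makes the policy cover all objects rather than the bucket itself:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::techsnips-public/*"
        }
    ]
}
```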

This bucket policy can be assigned to the bucket via the Management Console.

Once you have the bucket policy set, you’ll then need to also assign Public Access to the Everyone group as well via the Access Control List.

How to Manage DNS Records with PowerShell

Most of the time, DNS records are managed dynamically by your DNS server. However, at times you may find that you need to manually create, edit, or remove various types of DNS records. It is at times like this that PowerShell is quite useful for managing these records.

Viewing DNS Records

You can view all of the resource records for a given DNS zone by simply using the Get-DnsServerResourceRecord cmdlet and specifying the ZoneName parameter:
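The basic query looks something like this, assuming a zone named “corp.ad” as used later in the article:

```powershell
# List every resource record in the corp.ad zone
Get-DnsServerResourceRecord -ZoneName "corp.ad"
```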

As you can see, this generates quite a lengthy list of records. This nicely highlights one of the advantages of this particular cmdlet over the graphical DNS console. This view gives you all of the records for this zone, regardless of which folder they are in. In the graphical console, it would take quite some time to piece this information together.

Now, let’s thin out this list a bit. Using the same cmdlet, but adding the RRType parameter to search for A records (IPv4 hosts) and filtering for records where the Time To Live (TTL) is greater than 15 minutes gives us a bit more of a manageable list:
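A filter along these lines does the trick; the TimeToLive property on each record is a TimeSpan, so it can be compared directly:

```powershell
# A records in corp.ad with a TTL longer than 15 minutes
Get-DnsServerResourceRecord -ZoneName "corp.ad" -RRType "A" |
    Where-Object { $_.TimeToLive -gt (New-TimeSpan -Minutes 15) }
```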

Taking this one step further, we can also search for records in a different DNS zone, on a different DNS server. In this example, we will search for A records in the “canada.corp.ad” zone on DNS server DC03:
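Pointing the same cmdlet at another server is just a matter of adding the ComputerName parameter:

```powershell
# Query A records in the canada.corp.ad zone on DNS server DC03
Get-DnsServerResourceRecord -ComputerName "DC03" -ZoneName "canada.corp.ad" -RRType "A"
```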

Adding and Removing Host Records (A and AAAA)

To add a host record, we will need to use the Add-DnsServerResourceRecordA cmdlet. In this example, we need to add a host record for a new printer that we are adding to the network. It will be added to the corp.ad zone with the name “reddeerprint01”, and its IP address is 192.168.2.56.
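Using the values above, the command comes out as:

```powershell
# Create an A record for the new printer in the corp.ad zone
Add-DnsServerResourceRecordA -ZoneName "corp.ad" -Name "reddeerprint01" -IPv4Address "192.168.2.56"
```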

If it turns out that we need to remove a record, for example, if the printer has been decommissioned, we can use the following code to remove the host record that we just created:
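Removal uses the more general Remove-DnsServerResourceRecord cmdlet, telling it the record type and name; -Force suppresses the confirmation prompt:

```powershell
# Remove the decommissioned printer's A record
Remove-DnsServerResourceRecord -ZoneName "corp.ad" -RRType "A" -Name "reddeerprint01" -Force
```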

It is also just as easy to add an IPv6 host record. Of course, these records differ slightly, as they are listed as AAAA records. You may notice that we are now using the Add-DnsServerResourceRecordAAAA cmdlet. It’s a subtle change, but an important one. Let’s add a record to the “corp.ad” zone for the new IT Intranet server at “fc00:0128” and then quickly verify that it has been created:
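A sketch of those two steps, assuming a host name of “intranet” (the article doesn’t name the record) and taking the address to be fc00::128 written in full:

```powershell
# Create the AAAA record for the IT Intranet server
Add-DnsServerResourceRecordAAAA -ZoneName "corp.ad" -Name "intranet" -IPv6Address "fc00::128"

# Verify that the record now exists
Get-DnsServerResourceRecord -ZoneName "corp.ad" -Name "intranet"
```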

Adding Reverse Lookup Records (PTR)

A reverse lookup record allows the client to query a DNS server to request the hostname for a supplied IP address. Creating a PTR record is a relatively easy process, but there is one important bit of information you will need to know before you start adding PTR records. Reverse lookup zones are not created by default. You will need to set up your reverse lookup zone prior to adding records.

Fortunately, it is relatively easy to do. You just need to use the Add-DnsServerPrimaryZone cmdlet and provide it with the Network ID. In this example, I have also chosen to set the replication scope to the entire AD forest, and I have specifically targeted “DC03” as the preferred DNS server:
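Putting those choices together:

```powershell
# Create the reverse lookup zone for 192.168.2.0/24,
# replicated across the AD forest, targeting DC03
Add-DnsServerPrimaryZone -NetworkId "192.168.2.0/24" -ReplicationScope "Forest" -ComputerName "DC03"
```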

Now that our reverse lookup zone is in place, we can add our PTR record for a new printer called “CYQF-Printer-01.canada.corp.ad” that has an IP address of 192.168.2.56. As this record is for the “canada.corp.ad” zone, we will be targeting the DNS server “DC03”.

When using the Add-DnsServerResourceRecordPtr cmdlet, it is important to note a couple of things. First, that you need to specify the zone name using the network ID in reverse order, then add “.in-addr.arpa”. So for our “192.168.2.0/24” network ID, the zone name is “2.168.192.in-addr.arpa”. Second, the “Name” parameter is simply the host portion of the IP address. For our printer at 192.168.2.56, the “Name” is simply “56”.

Once you have those pieces of information, the code required to create the PTR record is relatively simple, if a bit long:
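With the reversed zone name and the host portion worked out as described above, the command is:

```powershell
# "56" in zone 2.168.192.in-addr.arpa maps 192.168.2.56 back to the printer's name
Add-DnsServerResourceRecordPtr -ComputerName "DC03" -ZoneName "2.168.192.in-addr.arpa" `
    -Name "56" -PtrDomainName "CYQF-Printer-01.canada.corp.ad"
```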

Adding Alias Records (CNAME)

To finish off, we will create a host alias record, or CNAME record, using the Add-DnsServerResourceRecordCName cmdlet. These records allow you to specify an alias for an existing host record in the zone. This becomes especially useful, for example, if you want to provide your finance users with an address for their web-enabled finance app. You could create an alias called “finance”, and point it to the web server “webapp25.corp.ad”. Then when you need to migrate the app to a new web server with a new hostname, you simply change the CNAME record to point “finance” to the new host. This way, the users don’t have to update their bookmarks. They can continue to access their application using the address “finance.corp.ad”.
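Using the names from the example above:

```powershell
# Point the alias finance.corp.ad at the web server's existing host record
Add-DnsServerResourceRecordCName -ZoneName "corp.ad" -Name "finance" -HostNameAlias "webapp25.corp.ad"
```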

Additional Resources

Companion video: “How To Manage DNS Records With PowerShell”