How to Quickly Export and Import an OVF Into vCenter With PowerCLI

Background

In my current day job, I’m often asked about using PowerCLI to perform a number of tasks in a vCenter cluster. This is a story about a recent request for assistance from a colleague who needed to export a custom monitoring appliance template to a new vCenter cluster that was being built. My colleague was under a time constraint and did not have the necessary access to the template.

Getting Started

Never wanting to miss a chance to use PowerShell or PowerCLI, I jumped in head first to help. I gathered the necessary information from my colleague, and began connecting to the cluster:
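That connection looked something like this (the vCenter name is a stand-in):

    # Connect to the source vCenter; prompts for credentials
    Connect-VIServer -Server 'vcenter01.corp.local' -Credential (Get-Credential)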

This takes just a moment to complete. Next, I know what verbs I’m going to need, so I look up what commands are available. Notice that I truncate my verb using the correct quoting syntax, as explained in a previous post of mine:

Discovering PowerCLI commands using Get-Command
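In text form, the lookup was something like this; note the truncated verbs inside single quotes:

    # Truncated verbs in quotes; the noun narrows it to the vApp cmdlets
    Get-Command -Verb 'Ex*','Im*' -Noun 'VApp'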

There are two cmdlets that stand out: Export-VApp and Import-VApp.

Both of these cmdlets appear to be exactly what I need. But first, I'll educate myself a little more on the proper use of each. I start with Export-VApp. By default, this cmdlet exports a powered-off VM as an OVF to my session's current directory if I do not specify a path. I have a path in mind, so I'm going to go with the following code:
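The export command was along these lines (the VM name and destination path are stand-ins):

    # Export the appliance as an OVF to a chosen path
    Get-VM -Name 'MonitoringAppliance' |
        Export-VApp -Destination 'D:\Exports\' -Format Ovf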

But there's an issue:

Powered On VM Error

I should have thought about that a bit more before running the command. You cannot export a running VM to an OVF! No worries, this is a quick fix. I’ll modify my code a little more:
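Roughly the following, powering the VM off first (same stand-in VM name):

    # Shut the template VM down so it can be exported
    Get-VM -Name 'MonitoringAppliance' | Stop-VM -Confirm:$false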

That was easy. With the template appliance now offline, I could re-run the Export-VApp command from earlier. The process took about 10 minutes, and it wasn't a very large appliance to begin with. Now I have a 3.5 GB appliance ready to be deployed into another vCenter environment. Or do I?

Trouble Ahead

Feeling like I’m driving the train now, I enter and run the following code:
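It was roughly this (the destination server name and source path are stand-ins):

    # Connect to what I assumed was another vCenter environment...
    Connect-VIServer -Server 'esxi02.corp.local' -Credential (Get-Credential)

    # ...and import the exported OVF onto a host
    Import-VApp -Source 'D:\Exports\MonitoringAppliance\MonitoringAppliance.ovf' `
        -VMHost (Get-VMHost) -Name 'MonitoringAppliance'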

It had just crossed my mind that I had gotten ahead of myself and failed to find out whether I was actually connecting to another vCenter cluster.

Something happened when I began to import the previously exported VM appliance. A sea of red error messages.

Trouble Behind

I read the error message, and sure enough, the host is not part of a vCenter cluster and therefore does not have the licensing needed to complete the import using PowerCLI. This is a limitation that VMware enforces. No worries; I could still connect to the host's web interface and import manually using the HTML5 interface. The wizard walks you through each step: give the imported appliance a name, choose the OVF, pick a datastore and deployment type (thick or thin provisioned), and verify the configuration. After that, select Finish and the import begins. While the previous import attempt would have worked fine against a vCenter cluster, it was simply not going to work in this situation. This took a little longer than expected but was straightforward. You can read more about the process here.

In the end, the import was a success, and my colleague met their deadline.

Final Thoughts

Until this exercise, I was not aware that not all PowerCLI cmdlets are available in all situations. However, both of us learned a new skill and, despite some unforeseen adversity, we still accomplished the task at hand. Too often we rush through IT projects looking for the 'quick' fix. Watch your speed, take another minute or two to ask questions, and step back to understand the problem you are trying to solve. You may find you'll learn something new.

The Difference Between Single and Double Quotes in PowerShell

Photo by Luca Bravo on Unsplash

“Quote me as saying I was mis-quoted.” -Groucho Marx

There are two types of quotes that can be used in PowerShell: single and double quotation marks. Some critical differences between the two can make or break a script. Knowing these differences will make you a more effective PowerShell scriptwriter and help you avoid a rather simple mistake.

In this post, I’ll quickly explain these differences and provide examples of each scenario.

‘Single Quotation’

Single quotation strings are what you will most often use and encounter when creating or troubleshooting PowerShell scripts.

Consider the following example:
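Here's a minimal version of that example (the variable name and value are just illustrative):

    $MyVar1 = 'PowerShell is fun!'
    Write-Host 'The value of my variable is: $MyVar1'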

Now examine the output:
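Because the string is single-quoted, the variable name prints exactly as typed:

    The value of my variable is: $MyVar1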

In the above case, PowerShell ignores $MyVar1 and treats the variable name literally, printing exactly what was typed. There is no substitution here.

But how do you get PowerShell to recognize the variable value within a quoted string value? That’s where double quotation comes in.

“Double Quotation”

Double quotation gives you a dynamic element in string values. You will encounter this type of quotation when a string contains dynamic data, such as values from variables stored in memory or generated at runtime.

Consider the following example:
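The same example, this time using a double-quoted string:

    $MyVar2 = 'PowerShell is fun!'
    Write-Host "The value of my variable is: $MyVar2"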

Now examine the output:
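This time the variable expands to its value:

    The value of my variable is: PowerShell is fun!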

In the above case, PowerShell processes $MyVar2 because it is enclosed in a double-quoted string. Double quotes make PowerShell parse the string for text preceded by a dollar sign (the variable) and substitute the corresponding value for the variable name.

Real World Scenario

Now, apply this knowledge to a real scenario. Let’s say that you need to create a small function that will give an operator on your team some real basic information:

  • Date / Time
  • Disk % used
  • Disk % free

You need to return this information visually to an operator. Simple.

First, some pseudo code. We need to display today's date and time. Think about how this string value will work. We can use Get-Date and the -UFormat parameter to produce the required date/time by using the correct patterns:
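A sketch of that line; these particular -UFormat patterns produce a date plus 24-hour time:

    Write-Host "Today's date is $(Get-Date -UFormat '%m/%d/%Y %R')"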

Testing the code in a PowerShell terminal confirms this works:
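The output will reflect whenever you run it; for example:

    Today's date is 10/30/2018 14:05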

That takes care of the first part of the script. Now I need to gather some disk information to output to the terminal as well. The key metric I'm looking for is the percentage of free space remaining. I'll display this information using Write-Host again, but this time I'll need to insert additional code inside the double-quoted string. Remember, this information will be dynamic. For the purposes of this example, I'm going to create a variable, then use a property available on the returned object to get the value I'm looking for:
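Here's a sketch using Get-PSDrive and its Free and Used properties (the drive letter is illustrative):

    $disk = Get-PSDrive -Name 'C'
    Write-Host "C: is $([math]::Round($disk.Free / ($disk.Free + $disk.Used) * 100, 2))% free"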

Testing the code in a PowerShell terminal confirms this works:
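Your numbers will vary, of course; something like:

    C: is 34.78% free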

Perfect. We now have two variables that we can place in the strings that the operator will see when running this function. So let’s assemble the bits into the final script that will become our function:
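Assembled, it might look like this (the function name is my own invention for this sketch):

    function Show-OperatorInfo {
        # Date / time for the operator
        Write-Host "Today's date is $(Get-Date -UFormat '%m/%d/%Y %R')"

        # Disk free percentage, computed inside a $( ) subexpression
        $disk = Get-PSDrive -Name 'C'
        Write-Host "C: is $([math]::Round($disk.Free / ($disk.Free + $disk.Used) * 100, 2))% free"
    }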

Testing again in a PowerShell terminal, here is what the operator would see:
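With example values:

    Today's date is 10/30/2018 14:05
    C: is 34.78% free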

Notice what I did inside the last Write-Host line with the $disk variable. PowerShell evaluates the $( ) construct as an entire subexpression, then replaces it with the result. Doing it this way also saves you from having to create extra variables just to hold intermediate values.

The function still needs some work. So let’s finish it off by adding some math to show a full calculation to the operator:
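One possible finished version, showing the used and free percentages side by side:

    function Show-OperatorInfo {
        Write-Host "Today's date is $(Get-Date -UFormat '%m/%d/%Y %R')"

        $disk  = Get-PSDrive -Name 'C'
        $total = $disk.Free + $disk.Used

        # Full calculation, rounded to two decimal places
        Write-Host "C: disk % used: $([math]::Round($disk.Used / $total * 100, 2))%"
        Write-Host "C: disk % free: $([math]::Round($disk.Free / $total * 100, 2))%"
    }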

Results:
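Again with example values:

    Today's date is 10/30/2018 14:05
    C: disk % used: 65.22%
    C: disk % free: 34.78%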

The operator can now make some faster decisions while supporting a remote system by using this function.

Final Thoughts

There's not much to quotes in PowerShell. The one key concept to remember is knowing when to be literal ( ' ' ) and when to be dynamic ( " " ). By default, use single quotes unless there is a requirement for dynamic data in the string construct. I hope you found this information useful!

Additional Resources

To learn more about quotation rules, visit the about_Quoting_Rules PowerShell documentation from Microsoft or this excellent MSDN article.
For even more examples of single/double quote usage, read Kevin Marquette's "Everything you wanted to know about variable substitution in strings".

How to Build a Basic Report of Recently Installed Windows Updates

Photo by rawpixel on Unsplash

“Distrust and caution are the parents of security.” -Benjamin Franklin

If you've ever deployed Windows Updates to clients on your network, you have probably been asked by your manager(s) what KB's were deployed, and when, whenever an issue comes up on a workstation or server. Unfortunately, the built-in WSUS reporting tool can sometimes leave you frustrated, and it doesn't have great functionality for generating reports outside of the WSUS management GUI. A problem I regularly encounter is a crashing MMC, which then crashes the WSUS services, forcing me to reset the node and start over. It's very annoying.

Distrust & Caution

I was recently asked for some assistance by a group of managers who were working on validating a security vulnerability scan. This vulnerability scan claimed that a set of systems was missing particular Microsoft KB's, KB's that had recently been approved, deadlined, and were showing as installed in the WSUS management console. I sent some screenshots of the console status along with my sysadmin reply. I didn't give it much thought at the time because I was busy with other projects and this was a routine request.

A day or so went by, and another vulnerability scan was run, producing the same results. Management was not convinced that the updates were installed. Having had issues with WSUS from time to time, I started to distrust the built-in reports and the management console. To be cautious, and a little more diligent, I decided to bypass the WSUS management console and go straight to the workstations and servers that were showing up in the security vulnerability scan.

Some Explicit Remoting Here, A Couple of Cmdlets There…

Luckily, the security vulnerability scan only flagged about four workstations and 12 servers with these supposedly missing KB's. So I created a simple list in a text file using the Fully Qualified Domain Name (FQDN) of each host. I also knew for a fact that the missing KB's would have been installed in the past 30 days, as I had just completed a maintenance cycle.

With this knowledge in hand, I jotted down some pseudo code to help me begin. Here’s what I outlined:

  • Store my text file that contains the list of hosts.
  • For each of the hosts in that file, run a command.
    • The command must gather KB's installed in the last 30 days.
    • The output only needs to contain the hostname, KB/HotFix ID, and the install date.
    • The output needs to be readable, and just needs to be a simple file.
  • No fancy coding needed, just comparing visually to what WSUS reporting was displaying.

Based on my notes, I had a good idea of what I was looking for and what cmdlets I might need. The primary focus was on the Get-HotFix cmdlet. What this cmdlet does is query all the hotfixes (more commonly referred to as security updates) that have been applied to a Windows host. You can read more about this cmdlet and how to use it here.

Get-HotFix does not support implicit remoting, so I needed a method to run this cmdlet against the systems I had to report on. Invoke-Command does, and you can pass multiple values to its -ComputerName parameter. I have already saved a list of the hosts I am targeting, so I'll save myself some typing and store those hosts in a variable. Get-Content will read the text file line by line, creating an array of sorts. Let's call this array $Hosts. Now I have a command and some data to feed the next set of commands, but I still need to make the resulting data readable and concise.
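Storing the hosts is a one-liner (the path here is hypothetical):

    $Hosts = Get-Content -Path 'C:\Reports\hosts.txt'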

I want to take a moment here to emphasize "Filter First, Format Last." Remembering this will help you when working with these types of scripts. Running the Get-HotFix cmdlet by itself will typically produce a long list of updates that have been applied to a host. Filtering gathers just the information you need; without filtered data, formatting is useless. Think of filtering as your data requirements, and formatting as how you want that data displayed. For my purposes, I already had the requirements thought out: I needed the updates installed in the past 30 days.

To filter, I will use the Where-Object cmdlet and then pass along some member properties and comparison operators with a dash of math. In pseudo code: take every object returned ($_) from Get-HotFix where the InstalledOn property is greater than (-gt) today's date (or whenever I run the script) minus 30 days. That gets the initial data I'm looking for, but I want to trim the returned objects and their properties a little further. This is where Select-Object helps, allowing me to reduce the amount of data displayed to just a couple of crucial properties.
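Run against a single host, the filtering stage would look roughly like this:

    Get-HotFix |
        Where-Object { $_.InstalledOn -gt (Get-Date).AddDays(-30) } |
        Select-Object -Property HotFixID, InstalledOn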

Now that I have the data properly filtered, I can move on to formatting the results into a usable form. To do so, I'll pipe ( | ) the results from my filtering to Format-Table -AutoSize and write the output to a file type of my choosing. I'll need the -Append and -ErrorAction SilentlyContinue parameters to ensure that each result is written to the next line of the output file, and that an error on one host won't prevent the remaining hosts from being contacted.

I chose to go with a text file because I didn't require anything fancy. You can change the output to meet your needs. My output looked something like this:

Example Output text file

Here's the final script I came up with and used:
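This is a sketch reconstructed from the steps above; the host list and output paths are placeholders, and PSComputerName is a property Invoke-Command adds to the objects it returns:

    # Read the list of FQDNs to check
    $Hosts = Get-Content -Path 'C:\Reports\hosts.txt'

    foreach ($Computer in $Hosts) {
        # Gather KB's installed in the last 30 days; skip unreachable hosts quietly
        Invoke-Command -ComputerName $Computer -ErrorAction SilentlyContinue -ScriptBlock {
            Get-HotFix | Where-Object { $_.InstalledOn -gt (Get-Date).AddDays(-30) }
        } |
            Select-Object -Property PSComputerName, HotFixID, InstalledOn |
            Format-Table -AutoSize |
            Out-File -FilePath 'C:\Reports\RecentKBs.txt' -Append
    }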

For me, this was simple and concise, and it offered proof that the KB's were indeed installed. The report was well received by the management team, and it came in an easily read format.

Creating a PowerShell Script from Written Processes & Procedures

Photo by Fabian Grohs on Unsplash

As a mentor, I'm often asked, "How do you get inspiration for a PowerShell script?", followed by something sounding similar to, "I just don't know what I can script or where to start." When I'm told that, the person saying it sounds defeated and about to give up. I had that same question and feeling early in my PowerShell journey too.

“So, what’s the answer, Bill?” you might ask. Well…the answer you seek, young grasshopper is…

Documentation.

How I Started

Let's talk about how I started to approach scenarios and challenges by using existing documentation as my base of reference, or pseudo code. Many years ago, I struggled to make scripts. No matter the language, it was an awful feeling of imposter syndrome. I could read some code and stumble around clumsily figuring out some bits here and there, but it was a constant struggle. It wasn't until I started documenting my IT processes that I began to correlate the written word to small bits of pseudo code that I could then translate into PowerShell one-liners. Once I started doing that, things got a lot easier.

Sample Scenario

I have some maintenance tasks that I have to perform at least twice a month across User Acceptance Testing (UAT), Quality Assurance (QA), and production environments. Lucky for me, these tasks are already written down and stored in a team KB article. With half the battle already won, I carefully read through the documented steps for taking systems and applications down gracefully for maintenance. The tasks progress something like this:

  1. Place monitoring agents in maintenance mode (nothing like getting email alerts for known issues)
  2. Stop IIS application pools 1,2 & 3 on X server
  3. Stop IIS application pool 4 on Y server
  4. Stop services on A,B,C & D servers
  5. Log into WSUS, approve & deadline OS updates to specific groups
  6. Allow reboots to occur.

Looks pretty straightforward, right? My predecessors were manually performing these steps for years. Well, I'm not my predecessors. There's enough information here to begin making a script. Let's begin.

Task 1 could be automated, but for the purposes of this post I’m skipping it because not all monitoring platforms are the same. Moving on.

Task 2. Now we have something to work with. Using keywords, I begin by discovering what commands I have available that might stop an IIS application pool:

Get-Command -Module 'WebAdministration' -Verb 'Stop'

Awesome. Stop-WebAppPool appears to be exactly what I need to complete this task. Spend a minute or two reading the help if it's the first time you've seen this cmdlet: Get-Help Stop-WebAppPool -Online

Now I know how to tackle Task 2 and Task 3. My code now looks like this:
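A sketch, with the server and application pool names invented for illustration:

    # Task 2: stop application pools 1, 2 and 3 on server X
    Invoke-Command -ComputerName 'WEBX' -ScriptBlock {
        Import-Module WebAdministration
        'AppPool1', 'AppPool2', 'AppPool3' | ForEach-Object { Stop-WebAppPool -Name $_ }
    }

    # Task 3: stop application pool 4 on server Y
    Invoke-Command -ComputerName 'WEBY' -ScriptBlock {
        Import-Module WebAdministration
        Stop-WebAppPool -Name 'AppPool4'
    }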

On to Task 4. Now this one should be simple for anyone who is new to PowerShell, as it’s a common task that is demonstrated in a lot of training material. This task will make use of the Stop-Service cmdlet. There are a few ways this can be done, but I’ll keep it simple for now so we don’t get into the weeds and detract from the overall goal.

On each host, there are two services that work in concert with each other as part of an application hosted on the IIS servers from Tasks 2 & 3. Get-Service lets us enter multiple values for the -ComputerName parameter and pipe the results straight to Stop-Service, and since the naming scheme I'm using is short, it's not a big deal to enter them all here. I'll also be using the -Name parameter (alias -ServiceName), which likewise accepts multiple string values. When finished assembling the code, it looks like this:
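Something like this, again with hypothetical host and service names:

    # Task 4: stop the two application services on servers A through D
    Get-Service -ComputerName 'SRVA','SRVB','SRVC','SRVD' -Name 'AppSvc1','AppSvc2' |
        Stop-Service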

Great! I've just saved a few minutes by not having to RDP into each of these systems, use Server Manager, or type this all out in a PowerShell terminal.

Let’s Add More Stuff!

The whole reason for shutting all these services down gracefully is to be able to apply Windows security patches to the server OS without screwing up the applications if they were still being used during a scheduled maintenance window (humor me for a moment and save the snark about Windows Updates).

How can I work with WSUS? There has to be a module I can use…

Enter PoshWSUS. This handy PowerShell module contains exactly what I need for the final component of my scripted task. There are a lot of cmdlets available in this module, and I'm not going to explain all of them right now.

In order to complete the last step, I need to:

  1. Connect to my WSUS server.
    Connect-PSWSUSServer -WsusServer localhost -Port 8530 -Verbose
  2. Store the KB’s to be deployed as a variable.
    $Updates = Get-Content 'C:\PScripts\Maintenance\updates.txt'
  3. Store a deadline of 1 hour ahead of the time the script executes as a variable.
    $Deadline = (Get-Date).AddHours(1)
  4. Get the updates, approve them, then set the install flag along with the deadline flag for the assigned groups (see the sketch after this list).
  5. Close the connection.
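Steps 4 and 5 might look like the following. This is a sketch: the PoshWSUS cmdlet and parameter names for the approval step are assumed here, so verify them with Get-Help before relying on it:

    # Step 4: approve each stored KB for install with the deadline (syntax assumed)
    $Group = Get-PSWSUSGroup -Name 'Production Servers'   # hypothetical target group
    foreach ($KB in $Updates) {
        Get-PSWSUSUpdate -Update $KB |
            Approve-PSWSUSUpdate -Action Install -Group $Group -Deadline $Deadline
    }

    # Step 5: close the connection
    Disconnect-PSWSUSServer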

As you can see above, I've thought out the logical steps and created some pseudo code to get started. It's the same process you'll follow when trying to create your own scripts. It's almost as if there's a theme developing here!

Now on to what you’ve been waiting for. Let’s assemble all the bits into the final script:
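Here's a sketch of the assembled script. All host, pool, service, and group names are placeholders, and the PoshWSUS approval syntax is assumed as noted above:

    # My-MaintenanceTask.ps1 -- graceful shutdown plus patch approval (sketch)
    # Task 1 (monitoring maintenance mode) intentionally omitted

    # Tasks 2 & 3: stop the IIS application pools
    Invoke-Command -ComputerName 'WEBX' -ScriptBlock {
        Import-Module WebAdministration
        'AppPool1', 'AppPool2', 'AppPool3' | ForEach-Object { Stop-WebAppPool -Name $_ }
    }
    Invoke-Command -ComputerName 'WEBY' -ScriptBlock {
        Import-Module WebAdministration
        Stop-WebAppPool -Name 'AppPool4'
    }

    # Task 4: stop the application services
    Get-Service -ComputerName 'SRVA','SRVB','SRVC','SRVD' -Name 'AppSvc1','AppSvc2' |
        Stop-Service

    # Task 5: approve & deadline the OS updates in WSUS (PoshWSUS syntax assumed)
    Connect-PSWSUSServer -WsusServer localhost -Port 8530 -Verbose
    $Updates  = Get-Content 'C:\PScripts\Maintenance\updates.txt'
    $Deadline = (Get-Date).AddHours(1)
    $Group    = Get-PSWSUSGroup -Name 'Production Servers'
    foreach ($KB in $Updates) {
        Get-PSWSUSUpdate -Update $KB |
            Approve-PSWSUSUpdate -Action Install -Group $Group -Deadline $Deadline
    }
    Disconnect-PSWSUSServer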

The key thing to remember here is: if you can write it down, you can probably script it. So go back and look at some of your documented processes and procedures, and you'll soon discover that you have enough inspiration to keep you busy for a while making PowerShell scripts.

Bonus Round

If you’ve read my past blog post on “How I Learned Pester by Building a Domain Controller Infrastructure Test”, it should be pretty obvious that I’m a fan and love using Pester now. I even build small tests for small scripts like the one above.

I need a quick test with some visual output, since I typically run this script manually from a PowerShell terminal. So, with the same pseudo code used earlier, let's build a simple test that will verify all the actions in our script did what we expected:
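Here's a trimmed-down sketch of such a test, reusing the placeholder names from the script (Pester v4 syntax):

    Describe 'Maintenance task results' {
        Context 'IIS application pools' {
            It 'AppPool1 on WEBX is stopped' {
                Invoke-Command -ComputerName 'WEBX' -ScriptBlock {
                    Import-Module WebAdministration
                    (Get-WebAppPoolState -Name 'AppPool1').Value
                } | Should -Be 'Stopped'
            }
        }
        Context 'Application services' {
            It 'AppSvc1 on server A is stopped' {
                (Get-Service -ComputerName 'SRVA' -Name 'AppSvc1').Status |
                    Should -Be 'Stopped'
            }
        }
    }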

I left out tests for servers B, C & D because they would be identical to the test shown for server A in the above example. Now all that is required to run this test as part of the My-MaintenanceTask.ps1 script is to add this line at the end of that script:
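Assuming the tests live beside the script in a file named My-MaintenanceTask.Tests.ps1 (the file name is my placeholder), the line would be:

    Invoke-Pester -Script 'C:\PScripts\Maintenance\My-MaintenanceTask.Tests.ps1'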

Once the script has completed, you will then see output in the terminal showing the results of the tests.

If you really want to gussy up a script to include some progress bars and have your Pester results placed in a nice report you can give to a manager, then I would strongly recommend reading Adam Bertram's "A Better Way to Use Write-Progress" and watching Nick Rimmer's "How to Create a Simple Pester Test Report in HTML" to supercharge your maintenance scripts.

How I Decided to Join TechSnips and Became a Contributor


After the birth of my first son, I felt like I was at a crossroads in my career. I have been working in IT, on my own and for many others, for a while now (I built my first computer somewhere around 1999-2000) and have been exposed to many different types of environments and tasks. Since his arrival into my life, I have had a growing sense of responsibility to move beyond just having mentors to being the mentor and teaching; from being the apprentice to becoming the master.

The Journey

Back in May, I was following quite a few industry peers who were tech bloggers, presenters, and evangelists. One of those peers is Adam Bertram. I had seen a few tweets about an opportunity to get involved in a new venture that centered on the IT pro and career advancement. Having just finished reading the book "Be The Master" by Don Jones, I was inspired to take a leap and do more; I just wasn't sure exactly how. I responded to one of Adam's messages, and not long after, I received a message from him. He explained that he was developing a new business in which IT pros could gain valuable exposure to the IT community, teach others, and further their careers. Everyone had something worth teaching.

I left that conversation inspired and convinced that this was an opportunity worth investing time in. Therefore, I began to think of something that I knew I could teach. Having some experience with Group Policy, and a passion for PowerShell, I did something…

I Just Hit Record…

It sounds easy, and for the most part that is true. For many people, myself included, not so much. IT pros tend to be a little shy and afraid to put themselves out there. Aside from occasionally being very active on some forums, I had never recorded myself. I have taught some people one on one, but never a group of random strangers on the Internet.

This was an undiscovered country for me.

To Boldly Go…

I spent a period of reflection on what I could possibly contribute that was, in my mind, worthy of teaching. Side note: everything is worth teaching! Not everyone in IT is at the same point in his or her career as you. I remember having a hard time with some topics and not having someone there to teach me, but I digress.

I searched my geek stash for any supplies I could find to aid in making a recording. A spare monitor here, a junk microphone there. Some free screen-recording software. With zero initial budget, there were some struggles, specifically audio battles. With the help of @MichaelBender, I obtained a much higher-quality microphone that eliminated most of the poor sound quality that was keeping my demo from passing acceptance to become a snip contributor. I am forever grateful for that random act of kindness.

With newfound confidence and better audio, I regrouped and submitted another demo, and it was accepted. With the first hurdle passed, I learned a lot and was inspired to keep going. So I made a short snip on How to Create a Starter Group Policy Object with PowerShell on Windows Server 2016, which demonstrates how to quickly create an empty Starter GPO that can be configured with baseline settings, creating a template of sorts for future use. With some guidance from Adam, @_BryceMcDonald, and the TechSnips editing team, the final snip was polished and published. It was a milestone for my career:

I was now a professionally published contributor.

Conclusion

The feeling of knowing that I have left something valuable for someone to learn from has not faded. It’s driven me to also submit writings about Pester to the TechSnips.io blog, and to publish two additional snips since then with more in the works as time allows. I take great pride in telling people about what TechSnips has done for me and why my fellow IT pros should consider joining. We all have something worthy of teaching the next generation of IT pro. We all need to “Be the Master.” Help someone else. Just hit record, and see where it takes you.

How to Use Tags in Pester for Targeted Testing

“There’s no sense in being precise when you don’t even know what you’re talking about.” -John von Neumann


http://developers-club.com/posts/264697/

I thought this was a good quote as the theme for this post. Re-read the quote, take it in, and then continue reading.


During the construction of a set of Pester tests, it can become increasingly difficult to follow the flow of each test and the subjects against which those tests will be performed.

Since Pester tests are written in PowerShell code, you can use regions (#region / #endregion comment blocks) to separate sections of code. Regions allow you to collapse large sections of similar code, creating an overall more pleasant script-reading experience. While that may assist to some extent, it does not help you find and run specific tests.

Here’s an example of what using a region would look like:

Example 1: Regions
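In text form, the collapsed sections look something like this (the tests and server name are invented for illustration):

    #region Domain controller service tests
    Describe 'Domain Controller services' {
        It 'DNS service is running' {
            (Get-Service -Name 'DNS').Status | Should -Be 'Running'
        }
    }
    #endregion

    #region DNS resolution tests
    Describe 'DNS resolution' {
        It 'resolves the domain controller by name' {
            Resolve-DnsName -Name 'dc01.lab.local' | Should -Not -BeNullOrEmpty
        }
    }
    #endregion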

The problem is that this technique is only useful when reading code, and not nearly as helpful when executing code. A region doesn't allow you to pick out precisely which test you want to run. That's where using tags comes into play.

Tagging is a Pester feature that allows you to filter Describe blocks using a string or keyword value. When you run Invoke-Pester with the -Tag and -PassThru parameters, only the Describe blocks that carry the specified -Tag value will execute.
Here is a simple example:

Example 2: Tags
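In text form, it might look like this (server names and the test file path are hypothetical):

    Describe 'DNS' -Tag 'DNS' {
        It 'resolves the domain controller by name' {
            Resolve-DnsName -Name 'dc01.lab.local' | Should -Not -BeNullOrEmpty
        }
    }

    Describe 'Replication partners' -Tag 'Replication' {
        It 'can reach the replication partner' {
            Test-Connection -ComputerName 'dc02.lab.local' -Count 1 -Quiet | Should -Be $true
        }
    }

    # Runs only the Describe blocks tagged 'DNS'
    Invoke-Pester -Script '.\DC.Tests.ps1' -Tag 'DNS' -PassThru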

This is useful because you do not have to maintain multiple test files with only one Describe block in each; you can instead create a single master file with multiple Describe blocks, each with its own tag. This is very handy when you have an application or infrastructure stack you want to test, and it gives you the ability to add any new regression tests you may need in the future.

You can use multiple tags on a Describe block, but you cannot use multiple tags when running Invoke-Pester. This took me a little time to figure out, but after thinking about it, it does make sense: you want to run a single test or a suite of tests that share one tag; running multiple tests with multiple tags is what Invoke-Pester does by itself!

Since I first started learning about Pester, I have been building a few infrastructure tests for the environments I work in most often. One particular task involves working with domain controllers and occasionally investigating replication issues.

Taking what I know and turning it into tests that run quickly and uniformly has improved my response time to domain controller issues. Tags have been extremely helpful during a couple of troubleshooting tasks now that I can target the specific failed component(s) in my test set based on the results of the initial test.

There is no reason to keep running full tests if, for example, DNS is not working. I can target the DNS services and infrastructure without chasing other possibilities and wasting time. Another benefit is that I do not have to go hunting for additional scripts, because I already have the -Tag parameter set on the Describe block in question.

How I Learned Pester by Building a Domain Controller Infrastructure Test

“What is simple, is understood. What is understood, is executed.” -Anonymous

Let me start by saying that, for the longest time early in my PowerShell journey, Pester was very intimidating to me. I'm still wondering what exactly made me think that way. Was it being too busy just trying to learn what I needed for that moment, or was it that I didn't see how I could implement it in my suite of scripts? I don't have any good answers.

The Scenario

I’m a Sysadmin by trade and recently had experienced some system issues after performing some typical routine maintenance. Some of this work I had scripted by referencing the original checklists provided to me when I first started doing this work. This particular cycle, the checklist wasn’t enough.

Continue reading “How I Learned Pester by Building a Domain Controller Infrastructure Test”

Creating Starter Group Policy Objects for Quick Policy Baselines

If you are lucky enough to build a complete Active Directory infrastructure from scratch, then you know how much planning and consideration goes into the whole process. And it doesn't just stop with delivering the environment; you also have to consider its ongoing management.

That’s why you should consider using Starter Group Policy objects.

A Starter Group Policy Object is just a blank Group Policy Object, a clean slate if you will. The purpose of these objects is to let an administrator create and keep a pre-configured group of settings that represents a baseline for any future policy. These settings can then be copied into a more formal Group Policy Object that is applied to one or more organizational units (OUs for short). Copying these starter objects preserves your baseline strategy and allows you to dynamically add or remove settings that shouldn't be applied to future objects.
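In PowerShell terms, the workflow might look like this; the GroupPolicy module provides New-GPStarterGPO, and New-GPO can copy from a starter via -StarterGpoName (all names below are hypothetical):

    # Create the baseline Starter GPO
    New-GPStarterGPO -Name 'Baseline Workstation' -Comment 'Baseline settings for new workstation GPOs'

    # Later, stamp out a formal GPO from the starter and link it to an OU
    New-GPO -Name 'Finance Workstations' -StarterGpoName 'Baseline Workstation' |
        New-GPLink -Target 'OU=Finance,DC=corp,DC=local'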

Continue reading “Creating Starter Group Policy Objects for Quick Policy Baselines”