How to Build a Basic Report of Recently Installed Windows Updates


“Distrust and caution are the parents of security.” -Benjamin Franklin

If you’ve ever deployed Windows Updates to clients on your network, you have probably been asked by your managers which KBs were deployed, and when, whenever an issue comes up on a workstation or server. Unfortunately, the built-in WSUS reporting tool can leave you frustrated, and it doesn’t offer much functionality for generating reports outside of the WSUS management GUI. A problem I regularly encounter is a crashing MMC, which then crashes the WSUS services, forcing me to reset the node and start over. It’s very annoying.

Distrust & Caution

I was recently asked for some assistance by a group of managers who were validating a security vulnerability scan. The scan claimed that a set of systems was missing particular Microsoft KBs, KBs that had recently been approved, deadlined, and were showing as installed in the WSUS management console. I sent over some screenshots of the console status along with my sysadmin reply. I didn’t give it much thought at the time because I was busy with other projects and this seemed like a routine request.

A day or so went by, another vulnerability scan was run, and it produced the same results. Management was not convinced that the updates were installed. Having had issues with WSUS from time to time, I started to distrust the built-in reports and the management console. To be cautious, and a little more diligent, I decided to bypass the WSUS management console and go straight to the workstations and servers that were showing up in the security vulnerability scan.

Some Explicit Remoting Here, A Couple of Cmdlets There….

Luckily, the security vulnerability scan only flagged about 4 workstations and 12 servers with these supposedly missing KBs, so I created a simple list in a text file using the Fully Qualified Domain Name (FQDN) of each host. I also knew for a fact that the missing KBs would have been installed in the past 30 days, as I had just completed a maintenance cycle.

With this knowledge in hand, I jotted down some pseudo code to help me begin. Here’s what I outlined:

  • Store my text file that contains the list of hosts.
  • For each of the hosts in that file, run a command.
    • The command must gather the KBs installed in the last 30 days.
    • The output only needs to contain the hostname, KB/HotFix ID, and the install date.
    • The output needs to be readable, and just needs to be a simple file.
  • No fancy coding needed, just comparing visually to what WSUS reporting was displaying.

Based on my notes, I had a good idea of what I was looking for and which cmdlets I might need. The primary focus was on the Get-HotFix cmdlet, which queries all of the hotfixes (more commonly referred to as security updates) that have been applied to a Windows host. You can read more about this cmdlet and how to use it in Microsoft’s documentation.

Get-HotFix does not support implicit remoting, so I needed to come up with a method to run this cmdlet on the systems I needed to report on. Invoke-Command does, and you can pass multiple values to its -ComputerName parameter. I had already saved a list of the hosts I was targeting, so I saved myself some typing and stored those hosts in a variable. Get-Content reads the text file line by line, creating an array of sorts. Let’s call this array $Hosts. Now I have a command and some data to feed to the next set of commands, but I still need to make the resulting data readable and concise.
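
Here’s a minimal sketch of that first step (the path to the text file is just an example):

    $Hosts = Get-Content -Path 'C:\PScripts\hosts.txt'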

I want to take a moment here to emphasize “Filter First, Format Last.” Remembering this will help you when working with these types of scripts. Running the Get-HotFix cmdlet by itself will typically return a long list of updates that have been applied to a host. Filtering gathers just the information you need; without filtered data, formatting is useless. Think of filtering as your data requirements, and formatting as how you want that data displayed. For my purposes, I already had the requirements thought out: I needed the updates installed in the past 30 days.

To filter, I will need to use the Where-Object cmdlet and then pass along some member properties and comparison operators with a dash of math. In pseudo code: for every object returned ($_) by Get-HotFix, keep it where the InstalledOn property is greater than (-gt) today’s date (or whenever I run the script) minus 30 days. That gets the initial data I’m looking for, but I want to trim the returned objects and their properties a little more. This is where Select-Object helps, allowing me to reduce the data to just a couple of crucial properties.
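
A sketch of that filtering portion (the property names come straight from Get-HotFix; PSComputerName is only populated when the command is run remotely):

    Get-HotFix |
        Where-Object { $_.InstalledOn -gt (Get-Date).AddDays(-30) } |
        Select-Object -Property PSComputerName, HotFixID, InstalledOn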

Now that I have the data properly filtered, I can move on to formatting the results into a usable form. To do so I’ll pipe ( | ) the results from my previous filtering to Format-Table -AutoSize and write the output to a file type of my choosing. I’ll use the -Append and -ErrorAction SilentlyContinue parameters so that each result is written to the next line of the output file, and so that an error on one host won’t stop the remaining hosts from being contacted.
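
Assuming the filtered objects are stored in a variable such as $Results (a name used here just for illustration), the formatting and output step looks something like this; the path is an example:

    $Results | Format-Table -AutoSize | Out-File -FilePath 'C:\PScripts\UpdateReport.txt' -Append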

I chose to go with a text file because I didn’t require anything fancy. You can change the output to meet your needs. My output looked something similar to this:

Example Output text file

Here’s the final script I came up with and used:
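
A consolidated sketch of that script; the file paths are examples, and -ErrorAction SilentlyContinue keeps one unreachable host from stopping the loop:

    $Hosts = Get-Content -Path 'C:\PScripts\hosts.txt'

    foreach ($Computer in $Hosts) {
        Invoke-Command -ComputerName $Computer -ErrorAction SilentlyContinue -ScriptBlock {
            Get-HotFix | Where-Object { $_.InstalledOn -gt (Get-Date).AddDays(-30) }
        } |
            Select-Object -Property PSComputerName, HotFixID, InstalledOn |
            Format-Table -AutoSize |
            Out-File -FilePath 'C:\PScripts\UpdateReport.txt' -Append
    }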

For me, this was simple and concise, and it offered proof that the KBs were indeed installed. The report was well received by the management team and was in an easily read format.

Creating a PowerShell Script from Written Processes & Procedures


As a mentor, I’m often asked, “How do you get inspiration for a PowerShell script?”, followed by something like, “I just don’t know what I can script or where to start.” When I’m told that, the person saying it sounds defeated and about to give up. It was a question, and a feeling, I had myself early in my PowerShell journey too.

“So, what’s the answer, Bill?” you might ask. Well…the answer you seek, young grasshopper is…

Documentation.

How I Started

Let’s talk about how I started to approach scenarios and challenges by using existing documentation as my base of reference, or pseudo code. Many years ago, I struggled to write scripts. No matter the language, it was an awful feeling of imposter syndrome. I could read some code and stumble around, clumsily figuring out some bits here and there, but it was a constant struggle. It wasn’t until I started documenting my IT processes that I began to correlate the written word to small bits of pseudo code that I could then translate into PowerShell one-liners. Once I started doing that, things got a lot easier.

Sample Scenario

I have some maintenance tasks that I have to perform at least twice a month for the User Acceptance Testing (UAT), Quality Assurance (QA), and production environments. Lucky for me, these tasks are already written down and stored in a team KB article. With half the battle already won, I carefully read through the documented steps for taking systems and applications down gracefully for maintenance. The task progresses something like this:

  1. Place monitoring agents in maintenance mode (nothing like getting email alerts for known issues)
  2. Stop IIS application pools 1,2 & 3 on X server
  3. Stop IIS application pool 4 on Y server
  4. Stop services on A,B,C & D servers
  5. Log into WSUS, approve & deadline OS updates to specific groups
  6. Allow reboots to occur.

Looks pretty straightforward, right? My predecessors were manually performing these steps for years. Well, I’m not my predecessors. There’s enough information here to begin making a script. Let’s begin.

Task 1 could be automated, but for the purposes of this post I’m skipping it because not all monitoring platforms are the same. Moving on.

Task 2. Now we have something to work with. Using keywords, I begin by discovering what commands I have available that might stop an IIS application pool:

Get-Command -Module 'WebAdministration' -Verb 'Stop'

Awesome. Stop-WebAppPool appears to be exactly what I need to complete this task. Spend a minute or two reading the help if it’s the first time you’ve seen this cmdlet: Get-Help Stop-WebAppPool -Online

Now I know how to tackle Task 2 and Task 3. My code now looks like this:
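
Something along these lines; the server and application pool names below are placeholders for the real ones in my environment:

    # Task 2: stop application pools 1, 2 & 3 on server X
    Invoke-Command -ComputerName 'ServerX' -ScriptBlock {
        Import-Module WebAdministration
        'AppPool1', 'AppPool2', 'AppPool3' | ForEach-Object { Stop-WebAppPool -Name $_ }
    }

    # Task 3: stop application pool 4 on server Y
    Invoke-Command -ComputerName 'ServerY' -ScriptBlock {
        Import-Module WebAdministration
        Stop-WebAppPool -Name 'AppPool4'
    }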

On to Task 4. Now this one should be simple for anyone who is new to PowerShell, as it’s a common task that is demonstrated in a lot of training material. This task will make use of the Stop-Service cmdlet. There are a few ways this can be done, but I’ll keep it simple for now so we don’t get into the weeds and detract from the overall goal.

On each host, there are two services that work in concert with each other as part of an application hosted on the IIS servers in Tasks 2 & 3. Get-Service lets us enter multiple values for both its -ComputerName and -Name parameters, and its output pipes straight into Stop-Service. Since the naming scheme I’m using is short, it’s not a big deal to enter all of the hosts here. When finished assembling the code, it looks like this:
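
A minimal sketch of Task 4, again with placeholder server and service names:

    # Task 4: stop the application services on servers A, B, C & D
    Get-Service -ComputerName 'SrvA', 'SrvB', 'SrvC', 'SrvD' -Name 'AppService1', 'AppService2' |
        Stop-Service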

Great! I’ve just saved a few minutes of not having to RDP to each of these systems, or use Server Manager, or type this all out in a PowerShell terminal.

Let’s Add More Stuff!

The whole reason for shutting all these services down gracefully is to be able to apply Windows security patches to the server OS without screwing up the applications if they were still being used during a scheduled maintenance window (humor me for a moment and save the snark about Windows Updates).

How can I work with WSUS? There has to be a module I can use…

Enter PoshWSUS. This handy PowerShell module contains exactly what I need for the final component in my scripted task.  There are a lot of cmdlets available in this module and I’m not going to explain all of them right now.

In order to complete the last step, I need to:

  1. Connect to my WSUS server.
    Connect-PSWSUSServer -WsusServer localhost -Port 8530 -Verbose
  2. Store the KB’s to be deployed as a variable.
    $Updates = Get-Content 'C:\PScripts\Maintenance\updates.txt'
  3. Store a deadline of 1 hour ahead of the time the script executes as a variable.
    $Deadline = (Get-Date).AddHours(1)
  4. Get the updates, approve them for install, and set the deadline for the assigned groups (sketched below).
  5. Close the connection.
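
Steps 4 and 5 might look something like this. It’s a rough sketch, the group name is a placeholder, and you should check Get-Help for the exact parameter names in your version of PoshWSUS:

    # Approve each KB for install with a deadline, then disconnect
    foreach ($KB in $Updates) {
        Get-PSWSUSUpdate -Update $KB |
            Approve-PSWSUSUpdate -Action Install -Group (Get-PSWSUSGroup -Name 'Production Servers') -Deadline $Deadline
    }

    Disconnect-PSWSUSServer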

As you can see above, I’ve thought out the logical steps and created some pseudo code to get started. It’s the same process you’ll follow when trying to create your own scripts. It’s almost as if there’s a theme developing here!

Now on to what you’ve been waiting for. Let’s assemble all the bits into the final script:
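
Here’s a consolidated sketch. The server, pool, service, and group names are placeholders, and the WSUS portion should be checked against your own PoshWSUS version:

    # My-MaintenanceTask.ps1
    Import-Module PoshWSUS

    # Tasks 2 & 3: stop the IIS application pools
    Invoke-Command -ComputerName 'ServerX' -ScriptBlock {
        Import-Module WebAdministration
        'AppPool1', 'AppPool2', 'AppPool3' | ForEach-Object { Stop-WebAppPool -Name $_ }
    }
    Invoke-Command -ComputerName 'ServerY' -ScriptBlock {
        Import-Module WebAdministration
        Stop-WebAppPool -Name 'AppPool4'
    }

    # Task 4: stop the application services
    Get-Service -ComputerName 'SrvA', 'SrvB', 'SrvC', 'SrvD' -Name 'AppService1', 'AppService2' |
        Stop-Service

    # Task 5: approve and deadline the OS updates in WSUS
    Connect-PSWSUSServer -WsusServer localhost -Port 8530
    $Updates  = Get-Content 'C:\PScripts\Maintenance\updates.txt'
    $Deadline = (Get-Date).AddHours(1)

    foreach ($KB in $Updates) {
        Get-PSWSUSUpdate -Update $KB |
            Approve-PSWSUSUpdate -Action Install -Group (Get-PSWSUSGroup -Name 'Production Servers') -Deadline $Deadline
    }

    Disconnect-PSWSUSServer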

The key thing to remember here is, if you can write it down, you probably can script it. So go back and look at some of your documented processes and procedures, and you’ll soon discover that you’ll have enough inspiration to keep you busy for a while making PowerShell scripts.

Bonus Round

If you’ve read my past blog post on “How I Learned Pester by Building a Domain Controller Infrastructure Test”, it should be pretty obvious that I’m a fan and love using Pester now. I even build small tests for small scripts like the one above.

I need a quick test with some visual output, since I’m typically running this script manually from a PowerShell terminal. So, with the same pseudo code used earlier, let’s build a simple test that will verify that all the actions in our script did what we expected:
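
A minimal Pester sketch (Pester v4 syntax; the server, pool, and service names are the same placeholders used above):

    # MaintenanceTask.Tests.ps1
    Describe 'Post-maintenance checks' {

        Context 'IIS application pools' {
            It 'Application pool 1 on server X is stopped' {
                Invoke-Command -ComputerName 'ServerX' -ScriptBlock {
                    Import-Module WebAdministration
                    (Get-WebAppPoolState -Name 'AppPool1').Value
                } | Should -Be 'Stopped'
            }
        }

        Context 'Application services' {
            It 'AppService1 on server A is stopped' {
                (Get-Service -ComputerName 'SrvA' -Name 'AppService1').Status | Should -Be 'Stopped'
            }
        }
    }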

I left out tests for servers B, C & D because they would be identical to the test shown for server A in the above example. Now all that is required to run this test as part of the My-MaintenanceTask.ps1 script is to add this line at the end of that script:
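
Assuming the test file sits next to the script (the path is an example):

    Invoke-Pester -Script 'C:\PScripts\Maintenance\MaintenanceTask.Tests.ps1'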

Once the script has completed, you will then see output in the terminal showing the results of the tests.

If you really want to gussy up a script to include some progress bars and have your Pester results placed in a nice report you can give to a manager, then I would strongly recommend reading Adam Bertram’s “A Better Way to Use Write-Progress” and watching Nick Rimmer’s “How to Create a Simple Pester Test Report in HTML” to supercharge your maintenance scripts.

How I Decided to Join TechSnips and Became a Contributor

 

 

After the birth of my first son, I was feeling like I was at a crossroads in my career. I have been working in IT on my own and for many others for a while now (I built my first computer somewhere around 1999-2000) and have been exposed to many different types of environments and tasks. Since his arrival into my life, I have had a growing sense of responsibility to move beyond just having mentors, to being the mentor and teaching; from being the apprentice to becoming the master.

The Journey

Back in May, I was following quite a few industry peers who were tech bloggers, presenters, and evangelists. One of those peers is Adam Bertram. I had seen a few tweets about an opportunity to get involved in a new venture centered on the IT pro and career advancement. Having just finished reading the book “Be The Master” by Don Jones, I was inspired to take a leap and do more; I just wasn’t sure exactly how. I responded to one of Adam’s messages, and not long after, I received a message from him. He explained that he was developing a new business in which IT pros could gain some valuable exposure to the IT community, teach others, and further their careers. Everyone had something worth teaching.

I left that conversation inspired and convinced that this was an opportunity worth investing time in. Therefore, I began to think of something that I knew I could teach. Having some experience with Group Policy, and a passion for PowerShell, I did something…

I Just Hit Record…

It sounds easy, and for some people it is. For many others, including myself, not so much. IT pros tend to be a little shy and afraid to put themselves out there. Aside from occasionally being very active on some forums, I had never recorded myself. I have taught some people one-on-one, but never a group of random strangers on the Internet.

This was an undiscovered country for me.

To Boldly Go…

I spent a period of reflection on what I could possibly contribute that was, in my mind, worthy of teaching. Side note: everything is worth teaching! Not everyone in IT is at the same point in his or her career as you. I remember having a hard time with some topics and not having someone there to teach me, but I digress.

I searched my geek stash for any supplies I could find to aid in making a recording. A spare monitor here, a junk microphone there, some free screen-recording software. With zero initial budget, there were some struggles, specifically audio battles. With the help of @MichaelBender, I obtained a much higher-quality microphone that eliminated most of the poor sound quality that was keeping my demo from passing acceptance to become a snip contributor. I am forever grateful for that random act of kindness.

With newfound confidence and better audio, I regrouped and submitted another demo, and it was accepted. The first hurdle passed, I learned a lot and was inspired to keep going. So I made a short snip on How to Create a Starter Group Policy Object with PowerShell on Windows Server 2016 which demonstrates how to quickly create an empty Starter GPO that can be configured with baseline settings, creating a template of sorts for future use. With some guidance from Adam, @_BryceMcDonald and the TechSnips editing team, the final snip was polished and published. It was a milestone for my career:

I was now a professionally published contributor.

Conclusion

The feeling of knowing that I have left something valuable for someone to learn from has not faded. It’s driven me to also submit writings about Pester to the TechSnips.io blog, and to publish two additional snips since then with more in the works as time allows. I take great pride in telling people about what TechSnips has done for me and why my fellow IT pros should consider joining. We all have something worthy of teaching the next generation of IT pro. We all need to “Be the Master.” Help someone else. Just hit record, and see where it takes you.

How I Arrived at TechSnips

I Get Paid to do This?

There are two things I get paid to do: solve problems and learn new stuff.

I was fortunate to know very early on that I wanted to work with computers. The only problem was that I could not concentrate long enough to complete a formal school course.

I was kicked out of two community colleges for academic suspension. I tried to take the required courses like history, calculus, economics, etc., but I got bored and never did the required homework or studying.

It All Started in the Navy

I eventually ended up in the Navy. I signed up for an extra year to get the technical training I wanted. It was perfect: 8 hours a day, 5 days a week, for a year. Straight electronics. Resistors, capacitors, and circuit boards, oh my!

The best part? I was given a box of parts and spent 3 months putting it together to build a radio. If it worked, I passed; if not, I failed. (Hint: I passed.)

Oh, and the discipline, self-esteem, and doing things I never thought I could do turned a manchild into a proud, self-confident man. The Navy instilled in me the self-discipline I was sorely lacking.

My Consultant Years

I became an IT consultant right out of the Navy. My lack of a formal degree was an obstacle at times, but I never said no to a problem. I knew I could solve any problem thrown at me. After all, I had just spent 6 years in the Navy doing exactly that.

My love of learning also kept me up to speed on technology. I spent two hours every night either on a self-study course or learning a new technology that had just come out.

My first network security job even had me build my own Linux PC on the first day. Back then it was a very manual process with a lot of compiling. If you ever built a Linux PC using “Linux From Scratch” you will know what I mean. I learned a lot and I loved it.

A lot of what I learned did not immediately apply to my job. For example, I am not a programmer but I learned how to use Git just because I thought it was cool. It was the same for a lot of technology I learned. This lifelong learning kept me employed.

How did I arrive at TechSnips?

Well, after many years of IT consulting, it is getting a little uncomfortable being the oldest tech on the IT team. I have been asked in more than one interview why I am not an IT manager or why I don’t have any supervisory experience. Yes, age discrimination is a thing.

You know what IT managers do? Answer phones, create budgets, and develop strategic plans that no one will use, all while using words like “synergy,” “heterogeneous,” and “teamwork.” No, I need to be in front of a keyboard. I can make servers dance, sing, and do your dishes.

So while my peers were getting management jobs, I was designing networks for data centers, installing hundreds of servers, laying out cabling, and learning to break into systems. I was good at it.

I had been looking for something that I could do remotely and still generate income. I do have a family to support. I started (and failed) at several blogs. I realized that I did not have the business acumen to make it successful.

Feeding My ADHD

This is where TechSnips comes in. I actually came across TechSnips from a tweet about a snip for a problem that I had. I watched the snip, and it led me to the solution I needed.

I had been working from home for a while now and I was not looking to go back to the office again. I wondered if they were hiring and, even better, if they were hiring remote techs. I navigated to the contributor page and immediately knew I had to apply.

The more I read about the contributor role, the more excited I became. I was hesitant to submit a video audition (for the presenter role), but Anthony Howell said it did not matter: “Produce a snip and let me see what you got.”

The format of producing a snip fed very well into my ADHD: short, technical videos on a specific topic that I am interested in. If I get bored with a topic, I can choose another or suggest one. Each snip was to be no more than 15 minutes. I thought this was perfect. I could do this.

So, I produced my first snip and it was accepted. A couple of days later, I got a call from Anthony Howell welcoming me to the team. The more I heard about what TechSnips is all about, the more excited I became, and I knew I had made the right decision.

So, today, I am a producer of several snips and have many more in the works. Producing snips also has given me the confidence to improve my presenting skills.

I am not used to talking in front of people or teaching online. The team at TechSnips has provided valuable advice on how to present technical videos and engage an audience.

TechSnips is giving me the opportunity to not only do what I love but actually get paid to do it.

My TechSnips Saga

A little background

Do you ever sit back and look at the path you took in life and wonder how you ever managed to get to where you are? I sure do. Like every day. I’m currently involved heavily with TechSnips, but I sure wouldn’t be here if I hadn’t started getting into the twittersphere back in April, and I sure wouldn’t have been there if I hadn’t made it to the PowerShell Summit this year, and I sure wouldn’t have been there if a spot hadn’t opened up, and I sure wouldn’t have been there if I didn’t decide to start dictating my own career this year, and well, I’ll stop there.

2018 was not going to be an exciting year for me. I had a steady low stress job that paid me my worth, but worked me like an IT pro with half my experience. So I ended up being paid more than ever, but after a leadership change, I was also stuck answering more phone calls than ever, even compared to my short stint in a Geek Squad call center. I initially decided that the pay plus benefits were too good to walk away from, but like any independent spirit, I always asked myself: What would this look like if I were calling the shots?

Looking back, I can see that my career was treading water. In our industry, you might as well just call the lifeguard if you are content treading water. I, fortunately, was not. I requested to go to the PowerShell Summit, but that request was rejected. And so, in a move that was unprecedented in my career, I made the call to request PTO for that week and take myself. The problem? I missed registration by 3 days.

I dutifully put myself on the waiting list even after reading the statement about how the waiting list is rarely utilized. Kind of like the old ‘we’ll keep your resume on file in case something comes up’. A fun fact about me is that I live within driving distance of Bellevue and I decided to share this information with Don Jones in case something came up. He promised that if something came up last minute, that I would be the first to know. And, I’m sure you saw this coming, but a last minute opening did arise and within 15 minutes of receiving the notification via email, I was a registered attendee for the PowerShell Summit 2018, my first conference ever.

Due to the situation mentioned previously, I decided to put in my notice for my day job the week before the PowerShell Summit with the intention of going out on my own as a consultant. I had no clients, no plan, and no consulting experience, but I had a dream that I was prepared to burn through my savings trying to achieve.

Without venturing too far out of scope here, the PowerShell Summit was career changing for me, literally the best $2000 (admission + expenses) I’ve ever spent. If I hadn’t already put in my notice, I definitely would have the moment I got back, though by that point I didn’t have a job to come back to, not that I shed any tears over that.

While I was at the Summit, I realized just how much a topic like PowerShell thrives on its community. Heck, I sat in on Adam Bertram’s side session on how blogging increased his income two-fold because of how attractive a knowledge-sharing expert is to some businesses. One of his pieces of advice was to be active in the community, so I decided to take that to heart, and thus The PoSh Wolf was born.

After I got back from the Summit, I made my first tweet ever. Within two weeks I had a blog up and running, and I even managed to make my first pull request on a Git repository, specifically a simple typo fix in the README for PlatyPS. It wasn’t anything major, but it was a start. I had finally put myself in a position to start giving back to the community, and it felt good.

How I found TechSnips

It wasn’t long after that I responded to Adam’s tweet looking for content producers. TechSnips was looking for folks interested in sharing knowledge in snip format, an unproven how-to style. I didn’t realize it at the time, but this is revolutionary when compared to the rest of the technical training landscape. In a 3-minute snip, we can walk someone through how to create a Lightsail VM in AWS, and you don’t even need to know what AWS stands for! You’ll never want to sit through a full AWS course after one of those videos.

The nerve-racking part about applying to be a contributor is the audition process. You have to pick a topic and demonstrate your skills. For someone like me, this was tough! I felt like an imposter. Sure, I had 8 years of IT experience, but when put on the spot, it didn’t feel like it. If you want a laugh, check out my audition video (https://youtu.be/NYkZpE_IDjs). It’s obvious why it was never published. It was terrible! But after practice and some good feedback from the peer-reviewing stage in our pipeline, I’ve gone from being a shy imposter to a confident presenter. This has made me realize that the only difference between the well-known content producers and the rest of us is that they choose to share their experience. For the most part, they aren’t stuck-up jerks; they are just IT pros who are happy to share their knowledge.

Why I’m still here

Beyond the concise format, one of the things I really like about TechSnips is that they’ve fostered a community of like-minded IT professionals that are passionate about sharing their knowledge. These folks stick around because TechSnips has an amazingly efficient publishing pipeline that removes most barriers between an experienced expert and a polished how-to video. And this process improves as fast as you can make recommendations. After working in financial IT, I can verify that this level of nimbleness in a platform is insane.

Now, before you ask about this ‘efficient publishing pipeline’, let me ask you this: have you ever tried to produce training for YouTube or somewhere else? It takes a TON of time: preparation, recording, editing, and finally publishing. Well, TechSnips takes care of the editing and publishing for all of their snips. This means that I, as a contributor, just need to hit record and walk the viewer through a how-to, and the editors go back and add the flashy title and the highlights and remove my mistakes. So, with a few snips under my belt, I can submit a snip in an hour or two, depending on the depth of the content. Then it gets published after some review and the editing process. It is that simple, and that is what I love about it. You can keep putting off learning Adobe Premiere and focus on snipping.

One thing I scoffed at when I initially joined up was TechSnips calling itself a ‘career development’ platform. They are obviously just using that as a marketing gimmick to attract interest, right? But do you think improving your confidence develops your career? Or that having a portfolio of snips would look good on your resume? It sure looks good on mine.

Installing PowerShell Core Everywhere

DevOps is requiring that SysAdmins be experts in more than one operating system. That used to mean learning more than a few shell scripting languages. PowerShell Core is changing that.

With PowerShell Core, it is no longer necessary to learn a new scripting language to support heterogeneous environments.

PowerShell Core is a new edition of PowerShell that is cross-platform (Windows, macOS, and Linux), open-source, and built for heterogeneous environments and the hybrid cloud.

It has recently become available on Windows Internet of Things (IoT). The cross-platform nature of PowerShell Core means that scripts that you write will run on any supported operating system.

What’s the Difference?

The main difference is the platforms they are built on.

Windows PowerShell is built on top of the .NET Framework and, because of that dependency, is only available on Windows. It is launched as powershell.exe.

PowerShell Core is built on .NET Core, is available cross-platform, and is launched as pwsh.exe.

Installing PowerShell Core

To install on a Windows client or Windows Server, navigate to the GitHub repository – PowerShell Core – and download the .msi package appropriate for your system.

Windows IoT devices already have Windows PowerShell installed, which we can use to install PowerShell Core.

For Linux distributions such as Ubuntu, Debian, CentOS, Red Hat, openSUSE, and Fedora, it’s just a matter of adding the Microsoft package repository and then installing PowerShell Core with the distribution’s package manager.
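
As a rough sketch of what that looks like (the version numbers in the repository URLs are examples, so adjust them for your release, and use zypper or dnf on openSUSE and Fedora respectively):

    # Ubuntu/Debian: register the Microsoft repository, then install
    wget -q https://packages.microsoft.com/config/ubuntu/16.04/packages-microsoft-prod.deb
    sudo dpkg -i packages-microsoft-prod.deb
    sudo apt-get update
    sudo apt-get install -y powershell

    # CentOS/Red Hat: add the repo file, then install
    curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo
    sudo yum install -y powershell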

 

For macOS, Homebrew is the preferred package manager.

Installing the Homebrew package manager is a single command from a terminal; after that, installing PowerShell Core is just as simple.
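
Something like the following; the Homebrew install one-liner changes over time, so check brew.sh for the current command:

    # Install Homebrew, then install PowerShell Core via its cask
    /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
    brew cask install powershell    # on newer Homebrew versions: brew install --cask powershell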

Embracing DevOps means being able to manage different platforms and operating systems, and learning different shell scripting languages to maintain them. With PowerShell Core, you write once and deploy everywhere. It’s another tool in your toolbox.

If you don’t learn it, someone else will.

Duplicating SharePoint Farms with SharePointDSC.Reverse

 

SharePoint farm configurations are notoriously difficult not only to document accurately but also to migrate to a new SharePoint farm.

 

Commercial tools and utilities help, but each tool has its pluses and minuses, and some of them are ineffective and often buggy. Additionally, the tools can be expensive and come with a high learning curve.

SharePointDSC.Reverse

SharePointDSC.Reverse is a script developed by Nik Charlebois that uses the SharePointDSC resources to gather detailed information about the farm and output it into a configuration file that can be consumed by PowerShell DSC and the SharePointDSC resources.

The resulting PowerShell DSC configuration files can be used to create a near perfect copy of the farm to replicate in the new environment or can be used as a template for Azure automation.

SharePointDSC.Reverse currently supports SharePoint Server 2013 and 2016 (with SharePoint 2019 coming soon), running on Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, or higher.

Getting Started

There are a few prerequisites before running the script. PowerShell v5.1 is required, and two PowerShell DSC modules also need to be installed.
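
Those modules are typically SharePointDsc and ReverseDSC, installed from the PowerShell Gallery with something like:

    Install-Module -Name SharePointDsc
    Install-Module -Name ReverseDSC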

Log into the Central Administration server and open a PowerShell session as administrator. The SharePointDSC.Reverse script is installed with a similar command, but using Install-Script instead of Install-Module:
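
A sketch of that install, assuming the script is pulled from the PowerShell Gallery:

    Install-Script -Name SharePointDSC.Reverse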

How To Use

Now that we have all the necessary modules installed, it’s fairly easy to use. To start the process, run SharePointDSC.Reverse.ps1.

As the script runs, it asks for the credentials of the various managed accounts. Using the DSC resources provided by SharePointDSC, the script performs a detailed scan of the farm, gathering all of its settings and configurations.

For a large farm, this will take several minutes to complete. Once it’s finished, it prompts for a directory in which to save the results, and the resulting files can then be consumed by SharePointDSC.

To validate the configuration, compile the SPFarmConfig.ps1 file to create the .mof resources.

The resulting files from SharePointDSC.Reverse can be used to duplicate the SharePoint farm in different environments, on-premises or in the cloud. The configuration file, the error log, and the environment data file all contain detailed configuration settings of the farm. Custom solutions (.wsp files) are copied into the directory as well.

Duplicating the SharePoint farm

The SPFarmConfig.ps1 file can also be uploaded to Azure Automation to duplicate the farm configuration for an Azure-based SharePoint farm. To duplicate the SharePoint farm in a new environment, apply the configuration to the farm by starting the DSC configuration.
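
A minimal sketch of that push, assuming the compiled .mof files were written to a folder named SPFarmConfig:

    Start-DscConfiguration -Path .\SPFarmConfig -Wait -Verbose -Force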

Additional Details

In a multi-node farm, the ConfigurationData.ps1 file already has the node names, roles, and services that are running on each server in the farm. The file is formatted much like JSON, and editing it for the new environment can easily be done in Visual Studio Code.

The SPFarmConfig.ps1 file has the detailed farm configuration and also lists the products installed and their version numbers. It also has details about each web application, site collection, and the farm settings, along with the patches that have been applied.

One additional benefit of these files is that they can be part of a disaster recovery plan. Restoring the farm from a complete loss can now be accomplished in hours instead of days.

 

 

How To Deploy An Amazon Web Services (AWS) EC2 Instance Using Terraform

Terraform enables you to create, change, and improve infrastructure reliably and predictably. It is open source and lets you create declarative configuration files that can be treated as code (Infrastructure as Code). In this article, we are going to step through the process of creating an EC2 instance using Terraform.

The first step is to install Terraform. This is a very easy process and can be followed at https://www.terraform.io/intro/getting-started/install.html.

Next, we create an IAM account in AWS. Terraform will need these credentials, but we aren’t going to put them directly in the code. That would be reckless! Instead, we can create a local profile that Terraform reads the credentials from, so they never appear in the code itself and the code can be stored and shared safely.

Have a look at this video by Bryce McDonald:  How To Set Up Profiles To Manage Amazon Web Services (AWS) From The Command Line Using AWS CLI And PowerShell  to complete this configuration.

We now need to look at the configuration file that will create your EC2 instance. This is simply called a Terraform configuration file, and it has the extension .tf.

These files are made up of providers and resources. We populate the provider section with the configuration information used to define our AWS environment (our provider).

Next, we define our resources. Here we specify the Amazon Machine Image (AMI) that we will use. Please check the AMI ID for your region, as it differs from region to region. We have selected a Windows Server 2016 image to use in this case.
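
A minimal sketch of what such a configuration file might contain; the region, profile name, AMI ID, and tag below are placeholders, so look up the Windows Server 2016 AMI ID for your own region:

    provider "aws" {
      region  = "us-east-1"
      profile = "myAwsProfile"
    }

    resource "aws_instance" "demo" {
      ami           = "ami-0bf148826ef491d16"   # example Windows Server 2016 Base AMI; varies by region
      instance_type = "t2.micro"

      tags = {
        Name = "terraform-demo"
      }
    }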

At this stage we are ready to apply the configuration; however, Terraform will need the AWS plugin and will also need to initialize the Terraform environment. We do that with the command terraform init.

You can see from the output that Terraform has downloaded the AWS plugin and initialized the environment.

So now we are ready to execute the configuration and create our instance. Terraform uses the command terraform apply to do this, and it first shows you exactly what configuration will be executed and asks for confirmation, so at this point nothing has actually been created. (You can also run terraform plan on its own to preview the changes before applying.)
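
The basic workflow from the folder containing the .tf file looks like this (the commands work the same from a PowerShell or Bash prompt):

    terraform init      # downloads the AWS provider plugin and initializes the working directory
    terraform apply     # shows the execution plan, then prompts for confirmation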

By typing yes, the configuration is sent to AWS, and you can see the instance is now ‘creating’.

If we switch over to the Amazon console, we can see the instance. These few lines of code demonstrate how powerfully and easily infrastructure can be created using Terraform.

Search by the tag we set in the Terraform configuration file.

Use terraform show to view the configuration changes. This is a very rich output that gives you detail on all aspects of the resources you have created.

It is also just as easy to remove your configuration using the terraform destroy command. You must be careful with this command as it will analyze any Terraform scripts it finds in the same directory as candidates for removal.

Let’s run terraform destroy.

We now type ‘yes’

Back in the AWS console, we can see that the instance has been terminated.

I hope this article has given you some insight into how powerful Terraform is and how easy it is to get a basic configuration up and running!

 

 

How To Enumerate File Shares On A Remote Windows Computer With PowerShell

It can be challenging to keep track of just what file shares have been set up in your environment. This becomes even more difficult if you have to track this information across multiple servers. Adding to the tedium is remotely connecting to each server to find the list of shares. Thankfully, PowerShell makes this task a snap, whether you need to enumerate shares on just one server or on many.

Enumerate Shares on a Single File Server

Let’s start by connecting to a remote file server to gather this information from a single server. We will accomplish this by entering into a remote PowerShell session with our file server “FILE01”.
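A sketch of that connection, using the example server name from this post:

    Enter-PSSession -ComputerName FILE01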

Once connected, it takes a single cmdlet to get file share information:
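
That single cmdlet is Get-SmbShare:

    Get-SmbShare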

As you can see, this gives us a list of all of the shares on this server. This also includes the administrative shares, whose share names end with $.

This does accomplish the task of getting a list of shares, but it is a little cluttered. We can clean up this list by using the -Special parameter and setting it to $false to specify that we do not wish to see the administrative shares:
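
For example:

    Get-SmbShare -Special $false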

There, that gives us a much clearer view of the share information we are looking for.

Now that we have our share on this server identified, it might be useful to list all of the properties for this share, especially if we are looking for specific details about our share:
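
Assuming the share we found is named Data (substitute your own share name):

    Get-SmbShare -Name 'Data' | Format-List -Property *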

This allows us to view quite a bit of information about our share, including things like the type of share, folder enumeration mode, caching mode, and of course, our share name and path, to name a few.

It is also possible to view the share permissions for this share by switching to the Get-SmbShareAccess cmdlet:
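
Again assuming a share named Data:

    Get-SmbShareAccess -Name 'Data'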

This gives us a list of the users and groups, and their current level of access to the share.

We might also have a time where we need to enumerate the share permissions to find out who has full access to a share:
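
One way to do that is to filter on the AccessRight property:

    Get-SmbShareAccess -Name 'Data' | Where-Object { $_.AccessRight -eq 'Full' }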

With this information, it is easy to tell who has full access to the share and then take steps to remove that access if it isn’t appropriate for an individual or group.

Now that we are done enumerating shares on a single server, we need to make sure we close our remote PowerShell session:
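
That’s a single command:

    Exit-PSSession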

Enumerate Shares on Multiple File Servers

It is also possible to retrieve this same information from multiple file servers, which is an area where PowerShell really shines. Using Invoke-Command to run Get-SmbShare, we can list the shares on both the FILE01 and FILE02 servers. If we also pipe the output through Format-Table, we get a nicely organized list:
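
Something like the following (the property selection is just one sensible choice):

    Invoke-Command -ComputerName FILE01, FILE02 -ScriptBlock { Get-SmbShare -Special $false } |
        Format-Table -Property PSComputerName, Name, Path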

While entering the file server names manually is fine if there are only two or three servers, it becomes tedious if there are many dozens of servers to check. To get around this, we can assign the output of Get-ADComputer to the variable $FileServAD and get a list of all servers in the “File Servers” Organizational Unit (OU). From there, it’s easy to get the information:
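
A sketch of that approach; it assumes the ActiveDirectory module is available, and the OU’s distinguished name below is a placeholder for your own domain:

    $FileServAD = Get-ADComputer -Filter * -SearchBase 'OU=File Servers,DC=corp,DC=example,DC=com' |
        Select-Object -ExpandProperty Name

    Invoke-Command -ComputerName $FileServAD -ScriptBlock { Get-SmbShare -Special $false } |
        Format-Table -Property PSComputerName, Name, Path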

There we have it! A nice tidy list of all of the file shares on all of our file servers.

Additional Resources

Companion Video: “How To Enumerate File Shares On A Remote Windows Computer With PowerShell”

How to Manage Docker Volumes on Windows

This blog post was created from a snip created by Matt McElreath. You can check out the video Managing Docker Volumes on Windows if you’re more into video format.

Docker volumes are the preferred way for handling persistent data created by and used by Docker containers. Let’s take a look at how this works.

If you want to store persistent data for containers, there are a couple of options. First, I’ll show you how to use a bind mount. I’m currently in a folder called data on my C-drive. If I list the contents of this folder, you can see that I have five text files.
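
For example:

    Get-ChildItem -Path C:\data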

If I want to make this folder available to a container, I can mount it when starting the container. Let’s go ahead and run a container using docker run. I’m going to run this container in interactive mode, then specify -v. Here, I’m going to put the path to my data folder, followed by a colon, then I will specify the path inside the container where I would like this folder to be mounted.

For this, I’m going to specify the shareddata folder on the C-drive. Then I’ll specify the Windows server core image and finally, I’ll specify that I want to run PowerShell once I’m inside the container.
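
Put together, the command looks something like this (the image tag will vary with your host OS version):

    docker run -it -v C:\data:C:\shareddata mcr.microsoft.com/windows/servercore:ltsc2019 powershell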

Now that I’m inside the new container, if I list the contents of the C-drive, you can see that I have a shareddata folder.

Let’s go into that folder and list the contents. Here are my five test files that are located on my container host.

I can also create files in this folder, which will be available to other containers or to my container host. Let’s go ahead and run New-Item to create a file called containertest.
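
For example:

    New-Item -ItemType File -Name containertest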

We can see above that the new file has been created from within the container. Now I’ll exit this container, which will shut it down, by running exit.

If I run docker ps, you can see that there are currently no running containers.

Now let’s list the contents of the data folder again from my container host.

We can see the new file that was created from inside the container, called containertest. Bind mounts have somewhat limited functionality, however, so volumes are the preferred way to accomplish what we are trying to do. To get started with volumes, we can run the same command to start up a container, but this time with a couple of small differences. Where we specified the volume, instead of using a path on the container host’s file system, I’m going to use the word hostdata as the name of a volume I want to create and use.
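
That looks something like this (same image as before):

    docker run -it -v hostdata:C:\shareddata mcr.microsoft.com/windows/servercore:ltsc2019 powershell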

From inside the new container, if I list the contents of the C-drive, you can see again that I have a folder called shareddata.

If I list the contents of that folder, it is currently empty because we created a blank volume. Now let’s run Ctrl-P-Q which will take us out of the running container, but keep it running in the background.

From the container host, let’s run docker volume ls. This will list the current volumes on this container host. I have a volume called hostdata, which was created when I specified it in the docker run command.

If I run docker ps we can see our running container.

Let’s stop that container using docker stop. Now we have no running containers.

Let’s remove the stopped containers by running docker rm. If I list the volumes again, you can see that the hostdata volume is still available and can be mounted to new containers.

Another way to create a volume is to use the docker volume create command. If you don’t specify a name, docker will give it a name which is a long list of random characters. Otherwise, you can specify a name here. I’m going to call this volume logdata. Now we can see it is in the list when we list the volumes again.
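
For example:

    docker volume create logdata
    docker volume ls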

Now let’s go ahead and mount that to a new container. I’m going to use docker run again and for the volume I’m going to specify the volume that I just created and mount it to c:\logdata.
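
Along these lines (same image as before):

    docker run -it -v logdata:C:\logdata mcr.microsoft.com/windows/servercore:ltsc2019 powershell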

From inside the container, I’m going to go into the logdata folder and create a couple of files. Right now, there are no files in this directory, so let’s go ahead and create some.
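
The file names here are just examples:

    Set-Location C:\logdata
    New-Item -ItemType File -Name log1.log
    New-Item -ItemType File -Name log2.log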

Now I have two log files in this directory.

Let’s run Ctrl-P-Q again to exit this container while it is still running. While that container’s running, let’s start up a new container with the same volume mounted.

If we run a listing on the logdata folder in the new container we can see the two log files being shared.

Now let’s exit this container. I currently still have one running container and two exited containers.

I’m going to go ahead and stop all running containers, then run docker rm to remove all exited containers.
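
A common way to do both in one go from PowerShell is to feed the container IDs back into docker:

    docker stop (docker ps -q)
    docker rm (docker ps -aq)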

Let’s go ahead and list the volumes again. The logdata volume is still available to be mounted to future containers.

If I just run docker volume, I’ll get some usage help for the command.

We already looked at create, so let’s move on to inspect. If I run docker volume inspect against the logdata volume, it will return the properties for that volume, including the mount point which is the physical path to the volume on the container host.
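
For example:

    docker volume inspect logdata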

Let’s open that folder using Invoke-Item and have a look. Under the logdata folder, there’s a folder called _data. If we open that, we can see the files that were created from the container earlier.
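
The mount point below is the typical default for Docker on Windows; use the path reported by the inspect command on your own host:

    Invoke-Item 'C:\ProgramData\docker\volumes\logdata\_data'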

To delete a volume, we can run docker volume rm, followed by the name of the volume you want to delete.
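
For example:

    docker volume rm logdata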

Now if I list the volumes, logdata is no longer there.

Finally, we can use prune to remove all unused local volumes. This will delete all volumes that are not mounted to a running or stopped container.
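
That command is:

    docker volume prune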

You want to be careful with this command, so there’s a warning and a prompt to confirm that you really want to do this. If I type Y and hit Enter, it will show me which volumes were deleted.

And if I list my volumes again you can see that they have all been deleted.