Installing PowerShell Core Everywhere

DevOps requires SysAdmins to be experts in more than one operating system. That used to mean learning more than a few shell scripting languages. PowerShell Core is changing that.

With PowerShell Core, it is no longer necessary to learn a new scripting language to support heterogeneous environments.

PowerShell Core is a new edition of PowerShell that is cross-platform (Windows, macOS, and Linux), open-source, and built for heterogeneous environments and the hybrid cloud.

It has recently become available on Windows Internet of Things (IoT). The cross-platform nature of PowerShell Core means that scripts that you write will run on any supported operating system.

What’s the Difference?

The main difference between Windows PowerShell and PowerShell Core is the platform each is built on.

Windows PowerShell is built on top of the .NET Framework and, because of that dependency, is available only on Windows. It is launched as powershell.exe.

PowerShell Core is built on .NET Core, is available cross-platform, and is launched as pwsh.exe.

Installing PowerShell Core

To install on a Windows client or Windows Server, navigate to the GitHub repository – PowerShell Core – and download the .msi package appropriate for your system.
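If you would rather script the download and install, something along these lines works. The version number and URL below are only an example, so grab the latest release from the releases page:

    # Example only: substitute the current release version from the GitHub releases page
    $msiUrl  = 'https://github.com/PowerShell/PowerShell/releases/download/v6.1.0/PowerShell-6.1.0-win-x64.msi'
    $msiPath = "$env:TEMP\PowerShell-Core.msi"
    Invoke-WebRequest -Uri $msiUrl -OutFile $msiPath
    Start-Process -FilePath msiexec.exe -ArgumentList "/i `"$msiPath`" /qn" -Wait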

Windows IoT devices already have PowerShell installed, which we will use to install PowerShell Core.

For Linux distributions, it is just a matter of adding the Microsoft package repository and then installing PowerShell Core with the distribution's package manager, as sketched below.

The same steps apply to Ubuntu and Debian, CentOS and Red Hat, openSUSE, and Fedora; only the repository setup and the package manager command differ.
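As a rough sketch (the repository URLs and release numbers are examples only; check the PowerShell install documentation for your exact distribution and version):

    # Ubuntu / Debian (paths shown for Ubuntu 18.04)
    wget -q https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb
    sudo dpkg -i packages-microsoft-prod.deb
    sudo apt-get update
    sudo apt-get install -y powershell

    # CentOS / Red Hat
    curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo
    sudo yum install -y powershell

    # openSUSE and Fedora follow the same pattern: register the Microsoft repository,
    # then install with 'zypper install powershell' or 'dnf install powershell' respectively.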


For macOS, Homebrew is the preferred package manager.

Installing the Homebrew package manager is a single command from a terminal, and once it is in place, installing PowerShell Core is another one-liner.
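Something like this (copy the current Homebrew bootstrap command from https://brew.sh, as it changes occasionally; older Homebrew releases used brew cask install powershell instead):

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    brew install --cask powershell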

Embracing DevOps means being able to manage different platforms and operating systems, and traditionally that meant learning a different shell scripting language to maintain each of them. With PowerShell Core, you write once and deploy everywhere. It's another tool in your toolbox.

If you don’t learn it, someone else will.

Duplicating SharePoint Farms with SharePointDSC.Reverse


SharePoint farm configurations are notoriously difficult not only to document accurately but also to migrate to a new SharePoint farm.


Commercial tools and utilities help, but each tool has its pluses and minuses, and some are ineffective and often buggy. Additionally, the tools can be expensive and come with a steep learning curve.

SharePointDSC.Reverse

SharePointDSC.Reverse is a script developed by Nik Charlebois that utilizes the SharePointDSC resources to gather detailed information about the farm and output it into a configuration file that can be consumed by PowerShell DSC and the SharePointDSC resources.

The resulting PowerShell DSC configuration files can be used to create a near-perfect copy of the farm in a new environment, or can be used as a template for Azure Automation.

SharePointDSC.Reverse currently supports SharePoint Server 2013 and 2016 (with SharePoint 2019 support coming soon), running on Windows Server 2008 R2, Windows Server 2012, or Windows Server 2012 R2 and higher.

Getting Started

There are a few prerequisites before running the script. PowerShell 5.1 is required, and two PowerShell DSC modules also need to be installed.
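Assuming the standard prerequisites for SharePointDSC.Reverse, the two modules in question are SharePointDsc and ReverseDSC, both available from the PowerShell Gallery:

    Install-Module -Name SharePointDsc
    Install-Module -Name ReverseDSC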

Log into the Central Administration server and open a PowerShell session as administrator. The SharePointDSC.Reverse script itself is installed with a similar command, but using Install-Script instead of Install-Module. To install the SharePoint Reverse script, we'll use:
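Assuming the script is published to the PowerShell Gallery under its usual name, that command is:

    Install-Script -Name SharePointDSC.Reverse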

How To Use

Now that we have all the necessary modules installed, the script is fairly easy to use. To start the process, enter SharePointDSC.Reverse.

As the script runs, it asks for the credentials for the various managed accounts. Using the DSC resource provided by SharePointDSC, the script performs a detailed scan of the farm, gathering all the settings and configurations.

For a large farm, this will take several minutes to complete. Once it's complete, it prompts for a directory in which to save the results. The resulting files can then be consumed by SharePointDSC.

To validate the configuration, compile the SPFarmConfig.ps1 file to create the .mof resources.

The resulting files from SharePointDSC.Reverse can be used to duplicate the SharePoint farm in different environments, on-premises or in the cloud. The configuration file, the error log, and the environment data file all contain detailed configuration settings of the farm. Custom solutions (.wsp files) are copied into the directory as well.

Duplicating the SharePoint farm

The SPFarmConfig.ps1 file can also be uploaded to Azure Automation to duplicate farm configurations for your Azure-based SharePoint farm. To duplicate the SharePoint farm in a new environment, apply the configuration to the farm by starting the DSC configuration.
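A minimal sketch, assuming the compiled .mof files were written to a .\SPFarmConfig folder:

    Start-DscConfiguration -Path .\SPFarmConfig -Wait -Verbose -Force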

Additional Details

In a multi-node farm, the ConfigurationData.ps1 file already contains the node names, roles, and services that are running on each server in the farm. The file is formatted very similarly to JSON, and editing it for the new environment can easily be done in Visual Studio Code.

The SPFarmConfig.ps1 file holds the detailed farm configuration and also lists the installed products and their version numbers, along with details about each web application, site collection, and the farm settings. Applied patches are recorded as well.

One additional benefit of these files is that they can be part of a disaster recovery plan. Restoring the farm from a complete loss can now be accomplished in hours instead of days.


How To Deploy An Amazon Web Services (AWS) EC2 Instance Using Terraform

Terraform enables you to create, change, and improve infrastructure reliably and predictably. It is open source and lets you create declarative configuration files that can be treated as code (Infrastructure as Code). In this article, we are going to step through the process of creating an EC2 instance using Terraform.

The first step is to install Terraform. This is a very easy process and can be followed at https://www.terraform.io/intro/getting-started/install.html.

Next, we create an IAM account in AWS. Terraform will need these credentials to talk to AWS, but we are not going to put them in the code itself. That would be reckless! Instead, we can create a local profile which lets Terraform read those credentials without including them in the actual code, so the code can be stored and shared safely.

Have a look at this video by Bryce McDonald:  How To Set Up Profiles To Manage Amazon Web Services (AWS) From The Command Line Using AWS CLI And PowerShell  to complete this configuration.

We now need to look at the configuration file that will create your EC2 instance. This is simply called a Terraform configuration file, and it has a .tf extension.

These files are made up of providers and resources. We populate the provider section with the configuration information used to define our AWS environment (our provider).

Next, we are required to define our resources. We define the Amazon Machine Image (AMI) that we will use. Please check the AMI ID for your region, as it differs from region to region. We have selected a Windows Server 2016 image to use in this case.
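Here is a sketch of what such a configuration file might look like. The profile name, region, AMI ID, instance type, and tag below are placeholders, so adjust them for your own account and region:

    provider "aws" {
      profile = "myprofile"              # the local AWS CLI profile created earlier
      region  = "us-east-1"
    }

    resource "aws_instance" "windows2016" {
      ami           = "ami-0bf148826ef491d16"   # placeholder Windows Server 2016 AMI ID
      instance_type = "t2.medium"

      tags = {
        Name = "TerraformDemo"
      }
    }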

At this stage we are ready to apply the configuration; however, Terraform will need the AWS plugin and will also need to initialize the Terraform environment. We do this with the command terraform init.

Now you can see from the screenshot, we have the AWS plugin and some more information regarding the environment.

So now we are ready to execute the configuration and create our instance. Terraform uses the apply command to execute this, and before anything runs, it tells you exactly what configuration will be executed. At this point, you have not actually run anything. (You can also use terraform plan on its own to preview the configuration that is to be implemented.)

After typing yes, the configuration is sent to AWS, and you can see that the instance is now 'creating'.

If we switch over to the Amazon console we can see the instance. These few lines of code demonstrate how powerfully and easily infrastructure can be created using Terraform.

Search by the tag we set in the Terraform configuration file.

Use terraform show to view the configuration changes. This is a very rich output that gives you detail on all aspects of the resources you have created.

It is also just as easy to remove your configuration using the terraform destroy command. Be careful with this command, as it will treat any Terraform configuration it finds in the same directory as a candidate for removal.

Let’s run terraform destroy.

We now type ‘yes’

Back in the AWS console, we can see that the instance has been terminated.

I hope this article has given you some insight into how powerful Terraform is and how easy it is to get a basic configuration up and running!


How To Enumerate File Shares On A Remote Windows Computer With PowerShell

It can be challenging to keep track of just which file shares have been set up in your environment. This becomes even more difficult if you have to track this information across multiple servers. Adding to the tedium is remotely connecting to each server to find and list the shares. Thankfully, PowerShell makes this task a snap, whether you need to enumerate shares on just one server or on many.

Enumerate Shares on a Single File Server

Let’s start by connecting to a remote file server to gather this information from a single server. We will accomplish this by entering into a remote PowerShell session with our file server “FILE01”.
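A quick sketch of that connection:

    Enter-PSSession -ComputerName FILE01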

Once connected, it takes a single cmdlet to get file share information:
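That cmdlet is Get-SmbShare:

    Get-SmbShare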

As you can see, this gives us a list of all of the shares on this server, including the administrative shares, whose names end with $.

This does accomplish the task of getting a list of shares, but it is a little cluttered. We can clean up this list by using the -Special parameter and setting it to $false to specify that we do not wish to see the administrative shares:
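For example:

    Get-SmbShare -Special $false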

There, that gives us a much clearer view of the share information we are looking for.

Now that we have our share on this server identified, it might be useful to list all of the properties for this share, especially if we are looking for specific details about our share:
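Something like this, where the share name 'Data' is just a placeholder for whichever share you identified:

    Get-SmbShare -Name 'Data' | Format-List *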

This allows us to view quite a bit of information about our share, including things like the type of share, folder enumeration mode, caching mode, and of course, our share name and path, to name a few.

It is also possible to view the share permissions for this share by switching to the Get-SmbShareAccess cmdlet:
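Again using 'Data' as a placeholder share name:

    Get-SmbShareAccess -Name 'Data'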

This gives us a list of the users and groups, and their current level of access to the share.

We might also have a time where we need to enumerate the share permissions to find out who has full access to a share:
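One way to do that is to walk every non-administrative share and filter on the access right, roughly like this:

    Get-SmbShare -Special $false |
        ForEach-Object { Get-SmbShareAccess -Name $_.Name } |
        Where-Object { $_.AccessRight -eq 'Full' }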

With this information, it is easy to tell who has full access to the share and then take steps to remove that access if it isn’t appropriate for an individual or group.

Now that we are done enumerating shares on a single server, we need to make sure we close our remote PowerShell session:
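That's as simple as:

    Exit-PSSession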

Enumerate Shares on Multiple File Servers

It is also possible to retrieve this same information from multiple file servers, which is an area where PowerShell really shines. Using Invoke-Command to run Get-SmbShare, we can list the shares on both the FILE01 and FILE02 servers. If we also pipe the output through Format-Table, we can also get a nice organized list:
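Roughly like this (the property list passed to Format-Table is just one way to tidy the output):

    Invoke-Command -ComputerName FILE01, FILE02 -ScriptBlock { Get-SmbShare -Special $false } |
        Format-Table PSComputerName, Name, Path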

While entering the file server names manually is fine if there are only two or three servers, it becomes tedious if there are many dozens of servers to check. To get around this, we can assign the output of Get-ADComputer to the variable $FileServAD and get a list of all servers in the “File Servers” Organizational Unit (OU). From there, it’s easy to get the information:
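A sketch of that approach; the distinguished name of the OU below is an example, so substitute the path for your own domain:

    $FileServAD = Get-ADComputer -Filter * -SearchBase 'OU=File Servers,DC=corp,DC=ad'
    Invoke-Command -ComputerName $FileServAD.Name -ScriptBlock { Get-SmbShare -Special $false } |
        Format-Table PSComputerName, Name, Path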

There we have it! A nice tidy list of all of the file shares on all of our file servers.

Additional Resources

Companion Video: “How To Enumerate File Shares On A Remote Windows Computer With PowerShell”

David Lamb is a Systems Administrator who has been managing Windows servers and clients since 1995, spending a large portion of his career in the aviation industry. His first certification was the MCSE on Windows NT 4.0, earned in 2001. David lives in Alberta, Canada, and is currently spending his free time learning PowerShell, blogging, and pursuing the MCSE certification on Windows Server.

How to Manage Docker Volumes on Windows

This blog post was created from a snip created by Matt McElreath. You can check out the video Managing Docker Volumes on Windows if you’re more into video format.

Docker volumes are the preferred way for handling persistent data created by and used by Docker containers. Let’s take a look at how this works.

If you want to store persistent data for containers, there are a couple of options. First, I’ll show you how to use a bind mount. I’m currently in a folder called data on my C-drive. If I list the contents of this folder, you can see that I have five text files.

If I want to make this folder available to a container, I can mount it when starting the container. Let's go ahead and run a container using docker run. I'm going to run this container in interactive mode, then specify -v. Here, I'm going to put the path to my data folder, followed by a colon, then the path inside the container where I would like this folder to be mounted.

For this, I’m going to specify the shareddata folder on the C-drive. Then I’ll specify the Windows server core image and finally, I’ll specify that I want to run PowerShell once I’m inside the container.
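Putting that all together, the command looks something like this (the Server Core image name and tag are examples; use whichever Windows Server Core image you have pulled):

    docker run -it -v c:\data:c:\shareddata mcr.microsoft.com/windows/servercore:ltsc2019 powershell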

Now that I’m inside the new container, if I list the contents of the C-drive, you can see that I have a shareddata folder.

Let’s go into that folder and list the contents. Here are my five test files that are located on my container host.

I can also create files in this folder, which will be available to other containers or my container host. Let’s go ahead and run new item to create a file called containertest.
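For example (the .txt extension is just my choice here):

    New-Item -Name 'containertest.txt' -ItemType File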

We can see above that the new file has been created from within the container. Now I'll exit this container by running exit, which will also shut it down.

If I run docker ps, you can see that there are currently no running containers.

Now let’s list the contents of the data folder again from my container host.

We can see the new file that was created from inside the container called containertest. Bind mounts have some limited functionality, however, so volumes are the preferred way to accomplish what we are trying to do. To get started with volumes, we can run the same command to start up a container, but this time with a couple of small differences. Where we specified the volume, instead of using a path on the container host's file system, I'm going to use the word hostdata as the name of a volume I want to create and use.
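So the run command now looks something like this (same example image as before):

    docker run -it -v hostdata:c:\shareddata mcr.microsoft.com/windows/servercore:ltsc2019 powershell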

From inside the new container, if I list the contents of the C-drive, you can see again that I have a folder called shareddata.

If I list the contents of that folder, it is currently empty because we created a blank volume. Now let’s run Ctrl-P-Q which will take us out of the running container, but keep it running in the background.

From the container host, let’s run docker volume ls. This will list the current volumes on this container host. I have a volume called hostdata, which was created when I specified it in the docker run command.

If I run docker ps we can see our running container.

Let’s stop that container using docker stop. Now we have no running containers.

Let’s remove the stopped containers by running docker rm. If I list the volumes again, you can see that the hostdata volume is still available and can be mounted to new containers.

Another way to create a volume is to use the docker volume create command. If you don’t specify a name, docker will give it a name which is a long list of random characters. Otherwise, you can specify a name here. I’m going to call this volume logdata. Now we can see it is in the list when we list the volumes again.

Now let’s go ahead and mount that to a new container. I’m going to use docker run again and for the volume I’m going to specify the volume that I just created and mount it to c:\logdata.
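Those two steps look roughly like this:

    docker volume create logdata
    docker run -it -v logdata:c:\logdata mcr.microsoft.com/windows/servercore:ltsc2019 powershell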

From inside the container, I’m going to go into the logdata folder and create a couple of files. Right now, there are no files in this directory, so let’s go ahead and create some.

Now I have two log files in this directory.

Let’s run Ctrl-P-Q again to exit this container while it is still running. While that container’s running, let’s start up a new container with the same volume mounted.

If we run a listing on the logdata folder in the new container we can see the two log files being shared.

Now let’s exit this container. I currently still have one running container and two exited containers.

I’m going to go ahead and stop all running containers, then run docker rm to remove all exited containers.
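One way to do that from a PowerShell prompt on the container host:

    docker ps -q | ForEach-Object { docker stop $_ }
    docker ps -aq -f status=exited | ForEach-Object { docker rm $_ }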

Let’s go ahead and list the volumes again. The logdata volume is still available to be mounted to future containers.

If I just run docker volume, I’ll get some usage help for the command.

We already looked at create, so let’s move on to inspect. If I run docker volume inspect against the logdata volume, it will return the properties for that volume, including the mount point which is the physical path to the volume on the container host.
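For example:

    docker volume inspect logdata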

Let’s open that folder using Invoke-Item and have a look. Under the logdata folder, there’s a folder called _data. If we open that, we can see the files that were created from the container earlier.

To delete a volume, we can run docker volume rm, followed by the name of the volume you want to delete.

Now if I list the volumes, logdata is no longer there.

Finally, we can use prune to remove all unused local volumes. This will delete all volumes that are not mounted to a running or stopped container.

You want to be careful with this command, so there’s a warning and a prompt to make sure that you are sure that you want to do this. If I type Y and hit enter, it will show me which volumes were deleted.

And if I list my volumes again you can see that they have all been deleted.

Adam Bertram is a 20-year veteran of IT and experienced online business professional. He’s an entrepreneur, IT influencer, Microsoft MVP, blogger, trainer and content marketing writer for multiple technology companies. Adam is also the founder of the popular IT career development platform TechSnips.

How to Set the DNS Server Search Order on Windows with PowerShell

To follow along, you can find a copy of the code used in the SnipSnips GitHub repo.

Setting your DNS server search order with PowerShell is actually really easy. We'll start with the Get-DnsClientServerAddress cmdlet to take a look at our existing settings, as you can see below.
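A minimal version of that first look (filtering to IPv4 just keeps the output short):

    Get-DnsClientServerAddress -AddressFamily IPv4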

There you can see our existing settings on Ethernet interface index seven: the primary DNS server is 192.168.2.52 and the secondary server is at .51.

So we'll do a quick nslookup of file01.corp.ad to verify that our primary is in fact responding.

There we go. You can see above that the responding DNS server is our primary at .52, and it successfully returned .55 as the address of our file server.

Now, let's change the order of our DNS servers. To do that, we'll use the Set-DnsClientServerAddress cmdlet. We'll point it at interface index seven as listed above, and I'll change the order so that 192.168.2.51 is our primary and .52 is now our secondary.
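The command looks like this:

    Set-DnsClientServerAddress -InterfaceIndex 7 -ServerAddresses ('192.168.2.51', '192.168.2.52')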

We’ll do a quick verification. I’ll check interface index seven.

There, now you can see above, .51 is now our primary as it’s listed first, and .52 is our secondary.

Do another quick nslookup, and you can see that that now returns from .51, which is our primary DNS server.

Adam Bertram is a 20-year veteran of IT and experienced online business professional. He’s an entrepreneur, IT influencer, Microsoft MVP, blogger, trainer and content marketing writer for multiple technology companies. Adam is also the founder of the popular IT career development platform TechSnips.

How to Use Tags in Pester for Targeted Testing

“There’s no sense in being precise when you don’t even know what you’re talking about.” – John von Neumann


http://developers-club.com/posts/264697/

I thought this was a good quote as the theme for this post. Re-read the quote, take it in, and then continue reading.


During the construction of a set of Pester tests, it can be increasingly difficult to follow the flow of each of the tests and the subjects against which these tests will be performed.

Since Pester tests are written in PowerShell, you are able to use regions (#region/#endregion blocks, recognized by the PowerShell ISE and other editors) to separate sections of code. These regions allow you to collapse large sections of similar code to create an overall more pleasant script-reading experience. While that may assist to some extent, it does not help you find and run specific tests.

Here’s an example of what using a region would look like:

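(A minimal sketch; the service check inside the region is just an illustration.)

    #region Service tests
    Describe 'Windows services' {
        It 'BITS service exists' {
            Get-Service -Name 'BITS' | Should -Not -BeNullOrEmpty
        }
    }
    #endregion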

The problem is that this technique is only useful when reading code, but not nearly as helpful when executing code. A region doesn't allow you to pick out precisely which test you want to run. That's where using tags comes into play.

Tagging is done via a Pester parameter that allows you to filter Describe blocks using a string or keyword value. When using Invoke-Pester with the -Tag and -PassThru parameters, only the Describe blocks that carry the specified -Tag value will execute.
Here is a simple example:

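(The test content, file name, and tag values below are illustrative.)

    Describe 'DNS infrastructure' -Tag 'DNS' {
        It 'resolves the domain controller' {
            { Resolve-DnsName -Name 'dc01.corp.ad' -ErrorAction Stop } | Should -Not -Throw
        }
    }

    Describe 'Replication health' -Tag 'Replication' {
        # ...additional tests...
    }

    # Run only the Describe blocks tagged 'DNS'
    Invoke-Pester -Script .\Infrastructure.Tests.ps1 -Tag 'DNS' -PassThru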

This is useful because you do not have to have multiple test files with only one Describe block in each. Instead, you can create a single master file with multiple Describe blocks, each with its own tag. This is very useful when you have an application or infrastructure stack you want to test and want the ability to add any new regression tests you may need in the future.

You can use multiple tags for a Describe block, but you cannot use multiple tags when running Invoke-Pester. This took me a little time to figure out, but after thinking about it, it does make sense. You want to run a single test or a suite of tests that share the tag, not multiple tests with multiple tags, because that is what Invoke-Pester will do by itself!

Since I first started learning about Pester, I have been building a few infrastructure tests for use in the environments I work in often. One particular task involves working with domain controllers and occasionally doing some investigation into replication issues.

Taking what I know and turning it into some tests that run quickly and uniformly has improved my response time to domain controller issues. Tags have been extremely helpful during a couple of troubleshooting tasks now, since I could target the specific failed component(s) in my test set based on the results of the initial test.

There is no reason to keep running the full test suite if, for example, DNS is not working. I can then target DNS services and infrastructure without chasing other possibilities, thus wasting time. Another benefit is that I do not have to go find additional scripts, because I already have the -Tag parameter set for the Describe block in question.

Making all Objects in an AWS S3 Bucket Always Public

At TechSnips, we use Amazon S3 to store all of the stuff required to operate. One ability we need is to provide a publicly accessible repository of files. Luckily, S3 has the ability to set objects to public read access.

To set an object to public read access, you can right-click on the object inside of the S3 Management Console and click Make Public.


This is all well and good but if you’ve got tons of files constantly being uploaded to S3, I’m not about to manually make all of my objects public like this!

After a bit of digging, I was able to figure out how to make all objects public the moment they are added to a particular bucket. Doing this requires creating a bucket policy that applies to all GetObject actions. You can see the bucket policy I used below for our techsnips-public S3 bucket.
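The policy follows the standard public-read pattern: allow the s3:GetObject action on every object in the bucket for any principal (substitute your own bucket name in the Resource ARN):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::techsnips-public/*"
            }
        ]
    }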

This bucket policy can be assigned to the bucket via the Management Console.

Once you have the bucket policy set, you’ll then need to also assign Public Access to the Everyone group as well via the Access Control List.

Adam Bertram is a 20-year veteran of IT and experienced online business professional. He’s an entrepreneur, IT influencer, Microsoft MVP, blogger, trainer and content marketing writer for multiple technology companies. Adam is also the founder of the popular IT career development platform TechSnips.

How to Manage DNS Records with PowerShell

Most of the time, DNS records are managed dynamically by your DNS server. However, at times you may find that you need to manually create, edit, or remove various types of DNS records. It is at times like this that PowerShell is quite useful for managing these records.

Viewing DNS Records

You can view all of the resource records for a given DNS zone by simply using the Get-DnsServerResourceRecord cmdlet and specifying the zone name parameter:
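For example, for the corp.ad zone used throughout this article:

    Get-DnsServerResourceRecord -ZoneName 'corp.ad'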

As you can see, this generates quite a lengthy list of records. This nicely highlights one of the advantages of this particular cmdlet over the graphical DNS console. This view gives you all of the records for this zone, regardless of which folder they are in. In the graphical console, it would take quite some time to piece this information together.

Now, let’s thin out this list a bit. Using the same cmdlet, but adding the RRType parameter to search for A records (IPv4 hosts) and filtering for records where the Time To Live (TTL) is greater than 15 minutes gives us a bit more of a manageable list:
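One way to express that filter:

    Get-DnsServerResourceRecord -ZoneName 'corp.ad' -RRType 'A' |
        Where-Object { $_.TimeToLive -gt (New-TimeSpan -Minutes 15) }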

Taking this one step further, we can also search for records in a different DNS zone, on a different DNS server. In this example, we will search for A records in the “canada.corp.ad” zone on DNS server DC03:
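For example:

    Get-DnsServerResourceRecord -ComputerName 'DC03' -ZoneName 'canada.corp.ad' -RRType 'A'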

Adding and Removing Host Records (A and AAAA)

To add a host record, we will need to use the Add-DnsServerResourceRecordA cmdlet. In this example, we need to add a host record for a new printer that we are adding to the network. It will be added to the corp.ad zone with the name “reddeerprint01”, and its IP address is 192.168.2.56.
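The command looks like this:

    Add-DnsServerResourceRecordA -Name 'reddeerprint01' -ZoneName 'corp.ad' -IPv4Address '192.168.2.56'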

If it turns out that we need to remove a record, for example, if the printer has been decommissioned, we can use the following code to remove the host record that we just created:
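Something like this (add -Force if you want to skip the confirmation prompt):

    Remove-DnsServerResourceRecord -ZoneName 'corp.ad' -RRType 'A' -Name 'reddeerprint01'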

It is also just as easy to add an IPv6 host record. Of course, these records differ slightly, as they are listed as AAAA records. You may notice that we are now using the Add-DnsServerResourceRecordAAAA cmdlet. It’s a subtle change, but an important one. Let’s add a record to the “corp.ad” zone for the new IT Intranet server at “fc00:0128” and then quickly verify that it has been created:
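A sketch of both steps; the host name 'it-intranet' is an example, and I've written the address out as a full IPv6 literal:

    Add-DnsServerResourceRecordAAAA -Name 'it-intranet' -ZoneName 'corp.ad' -IPv6Address 'fc00::128'
    Get-DnsServerResourceRecord -ZoneName 'corp.ad' -RRType 'AAAA' -Name 'it-intranet'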

Adding Reverse Lookup Records (PTR)

A reverse lookup record allows the client to query a DNS server to request the hostname for a supplied IP address. Creating a PTR record is a relatively easy process, but there is one important bit of information you will need to know before you start adding PTR records. Reverse lookup zones are not created by default. You will need to set up your reverse lookup zone prior to adding records.

Fortunately, it is relatively easy to do. You just need to use the Add-DnsServerPrimaryZone cmdlet and provide it with the Network ID. In this example, I have also chosen to set the replication scope to the entire AD forest, and I have specifically targeted “DC03” as the preferred DNS server:
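For example:

    Add-DnsServerPrimaryZone -NetworkId '192.168.2.0/24' -ReplicationScope 'Forest' -ComputerName 'DC03'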

Now that our reverse lookup zone is in place, we can add our PTR record for a new printer called “CYQF-Printer-01.canada.corp.ad” that has an IP address of 192.168.2.56. As this record is for the “canada.corp.ad” zone, we will be targeting the DNS server “DC03”.

When using the Add-DnsServerResourceRecordPtr cmdlet, it is important to note a couple of things. First, that you need to specify the zone name using the network ID in reverse order, then add “.in-addr.arpa”. So for our “192.168.2.0/24” network ID, the zone name is “2.168.192.in-addr.arpa”. Second, the “Name” parameter is simply the host portion of the IP address. For our printer at 192.168.2.56, the “Name” is simply “56”.

Once you have those pieces of information, the code required to create the PTR record is relatively simple, if a bit long:
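Putting those pieces together:

    Add-DnsServerResourceRecordPtr -Name '56' -ZoneName '2.168.192.in-addr.arpa' -ComputerName 'DC03' -PtrDomainName 'CYQF-Printer-01.canada.corp.ad'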

Adding Alias Records (CNAME)

To finish off, we will create a host alias record or CNAME record using the Add-DnsServerResourceRecordCName cmdlet. These records allow you to specify an alias for an existing host record in the zone. This becomes especially useful, for example, if you want to provide your finance users with an address for their web-enabled finance app. You could create an alias called “finance” and point it to the web server “webapp25.corp.ad”. Then, when you need to migrate the app to a new web server with a new hostname, you simply change the CNAME record to point “finance” to the new host. This way, the users don’t have to update their bookmarks. They can continue to access their application using the address “finance.corp.ad”.
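Using the example names from above, the command looks like this:

    Add-DnsServerResourceRecordCName -Name 'finance' -HostNameAlias 'webapp25.corp.ad' -ZoneName 'corp.ad'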

Additional Resources

Companion video: “How To Manage DNS Records With PowerShell”

David Lamb is a Systems Administrator who has been managing Windows servers and clients since 1995, spending a large portion of his career in the aviation industry. His first certification was the MCSE on Windows NT 4.0, earned in 2001. David lives in Alberta, Canada, and is currently spending his free time learning PowerShell, blogging, and pursuing the MCSE certification on Windows Server.

Using PowerShell’s Test-Connection and Test-NetConnection Cmdlets

The PowerShell Test-Connection and Test-NetConnection cmdlets are the new way of pinging remote computers, but we've still got good ol' ping.exe.

Pinging a device is a basic skill of any 'IT Pro', and we all know the ping.exe command, but have you heard of Test-Connection? It's PowerShell's way of reinventing the old-school ping.exe command.

Pinging a computer is really easy to do in PowerShell but there are a set of options that really make it useful, and take it way beyond what ping can do.

We could just use Invoke-Expression to call ping.exe, but this is just putting lipstick on a pig.

ping.exe

Yeah, this works, and we'll get the normal ping results back of course, but it's a bit clunky and all we get is the text output from the ping command, so let's use the native PowerShell cmdlet Test-Connection instead.

PowerShell Test-Connection

That’s a much more native way of doing it in PowerShell, and it returns an object rather than just the text output of ping.exe.  So, if we pipe the object coming from the PowerShell Test-Connection cmdlet into Select * we get a mass of useful information.
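For example (the target address is just a placeholder for whatever you want to ping):

    Test-Connection -ComputerName '192.168.2.1'
    Test-Connection -ComputerName '192.168.2.1' | Select-Object *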


Like any good PowerShell cmdlet we have switches so we can set things like Count for the number of attempts, BufferSize for the size of the packet and Delay to define the delay between each attempt.
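For instance (the target name is an example):

    Test-Connection -ComputerName 'web01' -Count 2 -BufferSize 64 -Delay 2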

There are a lot of other switches of course, and I’m not going to go through them all but there are some that are really useful like Source.  This makes it possible to use the PowerShell Test-Connection cmdlet to connect to other machines on your network and initiate connection attempts from there.

Using Test-Connection with the Source parameter
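With placeholder computer names, that looks like this:

    Test-Connection -Source 'FILE01', 'DC01' -ComputerName 'web01'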

The output shows all the results from the hosts in the source list, in one nice neat table (object).  This is especially useful if you have a complicated network with lots of firewalls between you and the target.  As long as you can get to those source machines then you can test any of the connections from there.

The Quiet switch goes the other way and gives a really simple true/false result.  This is super useful when using it in If statements.

PowerShell Test-Connection Quiet
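For example:

    if (Test-Connection -ComputerName 'web01' -Count 1 -Quiet) {
        'Host is up'
    }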

If you have a lot of targets to test then the AsJob parameter might be useful for putting the list to a background job and getting the results using Get-Job | Receive-Job.

PowerShell Test-Connection AsJob
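Roughly like this:

    Test-Connection -ComputerName 'web01', 'web02', 'web03' -AsJob
    Get-Job | Receive-Job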

Test-NetConnection

Another cmdlet to look at is Test-NetConnection.  The Test-NetConnection cmdlet can test the connection to a device much like the PowerShell Test-Connection cmdlet but it’s a little more networking focused.  In the simplest sense, it gives much the same results.

Again this cmdlet has a load of really useful parameters like Port to test whether a remote port is open or not.
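For example, to check whether port 80 is open on a host:

    Test-NetConnection -ComputerName 'web01' -Port 80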

With the TraceRoute parameter we can do the same as we would with Tracert.exe, but the output is a PowerShell object with each of the hops on the route to the target.
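For example:

    Test-NetConnection -ComputerName 'web01' -TraceRoute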

Again, if we want to use the PowerShell cmdlet Test-NetConnection in an If statement to test if a device has port 80 open, we can use the -InformationLevel Quiet parameter and value to give us a simple true/false result from the test.
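For example:

    if (Test-NetConnection -ComputerName 'web01' -Port 80 -InformationLevel Quiet) {
        'Port 80 is open'
    }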

Whether you choose to use the PowerShell Test-Connection cmdlet or Test-NetConnection cmdlet, we’ve got you covered!

I’m a freelance SysAdmin with 20 years experience in IT. My focus is mainly on PowerShell, Automation and Azure Infrastructure. I’ve always had a fascination for anything techie and love learning and sharing that knowledge.