Installing PowerShell Core Everywhere

DevOps increasingly requires sysadmins to be proficient in more than one operating system. That used to mean learning several shell scripting languages. PowerShell Core is changing that.

With PowerShell Core, it is no longer necessary to learn a new scripting language to support heterogeneous environments.

PowerShell Core is a new edition of PowerShell that is cross-platform (Windows, macOS, and Linux), open-source, and built for heterogeneous environments and the hybrid cloud.

It has recently become available on Windows Internet of Things (IoT). The cross-platform nature of PowerShell Core means that scripts that you write will run on any supported operating system.

What’s the Difference?

The main difference between Windows PowerShell and PowerShell Core is the platform each is built on.

Windows PowerShell is built on top of the .NET Framework and, because of that dependency, is only available on Windows. It launches as powershell.exe.

PowerShell Core is built on .NET Core, is available cross-platform, and launches as pwsh.exe.

Installing PowerShell Core

To install on a Windows client or Windows Server, navigate to the GitHub repository – PowerShell Core – and download the .msi package appropriate for your system.

Windows IoT devices already have Windows PowerShell installed, which we will use to install PowerShell Core.

For Linux distributions, it’s just a matter of adding the Microsoft repository and installing with the package manager.

For Ubuntu, Debian
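A sketch of the steps (the repository URL assumes Ubuntu 18.04 — substitute the path for your release or for Debian):

```shell
# Register the Microsoft package repository for your release
wget -q https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb

# Refresh the package list and install PowerShell Core
sudo apt-get update
sudo apt-get install -y powershell
```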

For CentOS and RedHat
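A sketch using yum (the repo file path assumes RHEL/CentOS 7):

```shell
# Register the Microsoft RHEL repository, then install with yum
curl -s https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo
sudo yum install -y powershell
```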

For OpenSUSE
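A sketch using zypper (at the time of writing, openSUSE reuses the RHEL package feed — check the current docs for your version):

```shell
# Import the Microsoft signing key, register the repository, and install
sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
sudo zypper ar https://packages.microsoft.com/rhel/7/prod/ microsoft
sudo zypper refresh
sudo zypper install powershell
```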

and finally, for Fedora
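A sketch using dnf (repo file path is the same RHEL feed; adjust for your Fedora version):

```shell
# Register the Microsoft signature key and repository, then install with dnf
sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
curl -s https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo
sudo dnf install -y powershell
```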


For macOS, Homebrew is the preferred package manager.

Installing the Homebrew package manager is a single command from a terminal; then install PowerShell Core.
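A sketch of both steps (the installer one-liner is Homebrew’s official command at the time of writing and may change):

```shell
# Install Homebrew, then install PowerShell Core as a cask
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
brew cask install powershell
```

Note that newer versions of Homebrew use `brew install --cask powershell` instead.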

Embracing DevOps means managing different platforms and operating systems, and learning different shell scripting languages to maintain them. With PowerShell Core, you write once and deploy everywhere. It’s another tool in your toolbox.

If you don’t learn it, someone else will.

How to Manage Docker Volumes on Windows

This blog post was created from a snip created by Matt McElreath. You can check out the video Managing Docker Volumes on Windows if you’re more into video format.

Docker volumes are the preferred way for handling persistent data created by and used by Docker containers. Let’s take a look at how this works.

If you want to store persistent data for containers, there are a couple of options. First, I’ll show you how to use a bind mount. I’m currently in a folder called data on my C-drive. If I list the contents of this folder, you can see that I have five text files.

If I want to make this folder available to a container, I can mount it when starting the container. Let’s go ahead and run a container using docker run. I’m going to run this container in interactive mode, then specify -v. Here, I’m going to put the path to my data folder, followed by a colon, then I will specify the path inside the container where I would like this folder to be mounted.

For this, I’m going to specify the shareddata folder on the C-drive. Then I’ll specify the Windows Server Core image and finally, I’ll specify that I want to run PowerShell once I’m inside the container.
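Put together, the command looks something like this (the image name is an assumption — the Windows Server Core image tag varies with your host version):

```shell
# Bind-mount C:\data on the host to C:\shareddata in the container,
# run interactively, and start PowerShell
docker run -it -v c:\data:c:\shareddata microsoft/windowsservercore powershell
```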

Now that I’m inside the new container, if I list the contents of the C-drive, you can see that I have a shareddata folder.

Let’s go into that folder and list the contents. Here are my five test files that are located on my container host.

I can also create files in this folder, which will be available to other containers or my container host. Let’s go ahead and run new item to create a file called containertest.
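A minimal version of that command, run at the PowerShell prompt inside the container (the .txt extension is an assumption):

```shell
New-Item -ItemType File -Path C:\shareddata\containertest.txt
```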

We can see above that the new file has been created from within the container. Now I’ll exit this container which will shut it down by running exit.

If I run docker ps, you can see that there are currently no running containers.

Now let’s list the contents of the data folder again from my container host.

We can see the new file that was created from inside the container called containertest. Bind mounts have limited functionality, however, so volumes are the preferred way to accomplish what we are trying to do.

To get started with volumes, we can run the same command to start up a container, but this time with a couple of small differences. Where we specified the volume, instead of using a path on the container host’s file system, I’m going to use the word hostdata as the name of a volume I want to create and use.
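A sketch of the named-volume variant (again assuming the Windows Server Core image):

```shell
# Same run command, but with a named volume instead of a host path;
# Docker creates the hostdata volume if it doesn't already exist
docker run -it -v hostdata:c:\shareddata microsoft/windowsservercore powershell
```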

From inside the new container, if I list the contents of the C-drive, you can see again that I have a folder called shareddata.

If I list the contents of that folder, it is currently empty because we created a blank volume. Now let’s press Ctrl-P-Q, which will detach us from the running container but keep it running in the background.

From the container host, let’s run docker volume ls. This will list the current volumes on this container host. I have a volume called hostdata, which was created when I specified it in the docker run command.

If I run docker ps we can see our running container.

Let’s stop that container using docker stop. Now we have no running containers.

Let’s remove the stopped containers by running docker rm. If I list the volumes again, you can see that the hostdata volume is still available and can be mounted to new containers.

Another way to create a volume is to use the docker volume create command. If you don’t specify a name, docker will give it a name which is a long list of random characters. Otherwise, you can specify a name here. I’m going to call this volume logdata. Now we can see it is in the list when we list the volumes again.

Now let’s go ahead and mount that to a new container. I’m going to use docker run again and for the volume I’m going to specify the volume that I just created and mount it to c:\logdata.
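The two steps above, sketched together (image name is an assumption):

```shell
# Create a named volume, then mount it into a new container at C:\logdata
docker volume create logdata
docker run -it -v logdata:c:\logdata microsoft/windowsservercore powershell
```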

From inside the container, I’m going to go into the logdata folder and create a couple of files. Right now, there are no files in this directory, so let’s go ahead and create some.

Now I have two log files in this directory.

Let’s press Ctrl-P-Q again to detach from this container while it keeps running. While that container’s running, let’s start up a new container with the same volume mounted.

If we run a listing on the logdata folder in the new container we can see the two log files being shared.

Now let’s exit this container. I currently still have one running container and two exited containers.

I’m going to go ahead and stop all running containers, then run docker rm to remove all exited containers.
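One way to do that in a single pass (PowerShell syntax on the container host; in bash, use `$(...)` instead of `(...)`):

```shell
docker stop (docker ps -q)     # stop every running container
docker rm (docker ps -aq)      # remove all stopped containers
```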

Let’s go ahead and list the volumes again. The logdata volume is still available to be mounted to future containers.

If I just run docker volume, I’ll get some usage help for the command.

We already looked at create, so let’s move on to inspect. If I run docker volume inspect against the logdata volume, it will return the properties for that volume, including the mount point which is the physical path to the volume on the container host.
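For example:

```shell
# Returns a JSON description of the volume; the Mountpoint property
# is the physical path to the volume's data on the container host
docker volume inspect logdata
```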

Let’s open that folder using Invoke-Item and have a look. Under the logdata folder, there’s a folder called _data. If we open that, we can see the files that were created from the container earlier.

To delete a volume, we can run docker volume rm, followed by the name of the volume you want to delete.
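In this case:

```shell
# Fails if the volume is still in use by a container
docker volume rm logdata
```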

Now if I list the volumes, logdata is no longer there.

Finally, we can use prune to remove all unused local volumes. This will delete all volumes that are not mounted to a running or stopped container.

You want to be careful with this command, so there’s a warning and a confirmation prompt before anything is deleted. If I type Y and hit enter, it will show me which volumes were deleted.
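For reference:

```shell
# Deletes every local volume not attached to any container;
# prompts for confirmation first (pass -f to skip the prompt)
docker volume prune
```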

And if I list my volumes again you can see that they have all been deleted.

Adam Bertram is a 20-year veteran of IT and experienced online business professional. He’s an entrepreneur, IT influencer, Microsoft MVP, blogger, trainer and content marketing writer for multiple technology companies. Adam is also the founder of the popular IT career development platform TechSnips.