Deploying a Service Fabric cluster to run Windows containers

From a container perspective, Service Fabric is a container orchestrator that supports both Windows and Linux containers. In legacy application lift-and-shift scenarios, we usually containerize the legacy application with minimal code changes, and Service Fabric is a good platform for running these containers.

To deploy a Service Fabric cluster on Azure which is suitable for running containers, we can use an ARM template. I created a template with the following special settings:

1 – An additional data disk is attached to the VMs in the cluster to host the downloaded container images. We need this disk because, by default, all container images are downloaded to the C drive of the VMs, which may run out of space if several large images are pulled.

"dataDisks": [
    {
        "lun": 0,
        "createOption": "Empty",
        "caching": "None",
        "managedDisk": {
            "storageAccountType": "Standard_LRS"
        },
        "diskSizeGB": 100
    }
]

2 – A custom script extension is used to run a script that formats the data disk and changes the configuration of the dockerd service.

{
    "name": "VMCustomScriptVmExt_vmNodeType0Name",
    "properties": {
        "publisher": "Microsoft.Compute",
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.9",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "fileUris": [
                "<URL of config-docker.ps1>"
            ],
            "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File config-docker.ps1"
        }
    }
}

The custom script formats the data disk and reconfigures the Docker daemon to store images on it.
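A minimal sketch of what such a script might look like (the drive letter E: and the use of the data-root daemon option are assumptions for illustration, not the exact script from the template):

```powershell
# Initialize and format the raw data disk (assumes it is the only RAW disk attached)
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PassThru |
    New-Partition -DriveLetter E -UseMaximumSize |
    Format-Volume -FileSystem NTFS -Confirm:$false

# Point dockerd at the new disk for image storage (drive letter E: is an assumption)
Stop-Service docker
'{ "data-root": "E:\\docker" }' | Set-Content 'C:\ProgramData\Docker\config\daemon.json'
Start-Service docker
```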

Install Minikube on Ubuntu Server 17.10

I have some experience with Docker and containers, but had never played with Kubernetes before. I started to explore Kubernetes recently, as I may need a container orchestration solution in upcoming projects. Kubernetes is supported by Azure AKS, and even Docker has announced support for it. It looks like it will be the major container orchestration solution in the market for the coming years.

I started by deploying a local Kubernetes cluster with Minikube on an Ubuntu 17.10 server on Azure. Kubernetes has a document on its site about installing Minikube, but it is very brief. So in this post, I will document the step-by-step procedure, both for my own future reference and for others who are new to Kubernetes.

Install a Hypervisor

To install Minikube, the first step is to install a hypervisor on the server. On Linux, both VirtualBox and KVM are supported hypervisors. I chose to install KVM and followed the guidance here. The steps are as follows.

  • Make sure VT-x or AMD-V virtualization is enabled. In Azure, this means using a VM size that supports nested virtualization, such as the Dv3 or Ev3 series. To double-check, run the command egrep -c '(vmx|svm)' /proc/cpuinfo; if the output is non-zero, virtualization is enabled.
  • Install the KVM packages with the following command:
sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils
  • Use the following command to add the current user to the libvirt group, and then log out and log back in for it to take effect. Note: in the guidance the group name is libvirtd, but on Ubuntu 17.10, the name has changed to libvirt.
sudo adduser `id -un` libvirt
  • Test if your install has been successful with the following command:
virsh list --all
  • Install virt-manager so that we have a UI to manage VMs
sudo apt-get install virt-manager

Install kubectl

Follow the instructions here to install kubectl. The following are the commands:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

Install Minikube

Follow the instructions in the release notes of Minikube to install it. I used the following command:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

When you finish this step, according to the official document, the installation of Minikube is complete. But before you can use it, there are several other components which need to be installed as well.

Install Docker, Docker-Machine, and KVM driver

Minikube can also run natively on the Ubuntu server without a virtual machine. To do so, Docker needs to be installed on the server. Docker CE has its own installation procedure, and Docker has a document for it.

Docker Machine can be installed with the following commands:

curl -L https://github.com/docker/machine/releases/download/v0.13.0/docker-machine-`uname -s`-`uname -m` >/tmp/docker-machine && \
sudo install /tmp/docker-machine /usr/local/bin/docker-machine

Finally, we need to install a VM driver for the docker machine. The Kubernetes team ships a KVM2 driver which is supposed to replace the KVM driver created by others. However, I failed to get Minikube working with the KVM2 driver. There is a bug report for this issue, and I hope the Kubernetes team will fix it soon.

So I installed the KVM driver with the following commands:

curl -LO https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.10.0/docker-machine-driver-kvm-ubuntu16.04
sudo cp docker-machine-driver-kvm-ubuntu16.04 /usr/local/bin/docker-machine-driver-kvm
sudo chmod +x /usr/local/bin/docker-machine-driver-kvm

Test if Minikube Works

With the completion of all the above steps, we can test the Minikube now.

minikube start --vm-driver kvm

It creates a VM named minikube in KVM and configures a local Kubernetes cluster on it. With kubectl, you should be able to see the cluster info and node info.

kubectl cluster-info
kubectl get nodes

With that, you can start to explore Kubernetes.
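To go one step further and verify that the cluster can actually schedule workloads, you can apply a minimal Deployment manifest. The nginx-test name and the nginx image below are arbitrary choices for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

Save it as nginx-test.yaml, run kubectl apply -f nginx-test.yaml, and kubectl get pods should show the pod reaching the Running state.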

Running Linux Containers on Windows Server 2016

Update on 1 Jul, 2020:

The development of lcow has been stopped, which means it will stay experimental forever.

Docker Desktop has changed to leverage WSL 2 for running Linux containers on Windows 10. The plan for Docker EE is unclear, as Docker Inc. has sold it to Mirantis.

So if you plan to run both Linux and Windows containers in production, you may want to look for other options, such as Kubernetes.

Original post: 

I never thought running Linux containers on Windows Server was a big deal. One reason I run Docker for Windows on my Windows 10 laptop is to run some Linux-based containers. I assumed I just needed to install Docker for Windows on a Windows Server 2016 server with the Containers feature enabled, and then I would be able to run both Linux and Windows containers. I didn't realize that was not the case until I tried it yesterday.

It turns out that Linux Containers on Windows (lcow) is a preview feature of both Windows Server, version 1709 and Docker EE. It won't work on versions of Windows Server 2016 older than 1709. As a side benefit of exploring this topic, I also learned about the Windows Server Semi-Annual Channel. An interesting change.

So here is a summary of how to enable lcow on Windows Server, version 1709.

  1. First of all, you need to get a Windows Server, version 1709 up and running. You can get the installation media of Windows Server, version 1709 from here. As I use Azure, I provisioned a server based on the Windows Server, version 1709 with Containers image. Version 1709 is only offered as a Server Core installation; it doesn't have the desktop environment.
  2. Once you have the server up and running, you will have to enable the Hyper-V and Containers features on it, and install the Docker EE preview. It can be installed with the following PowerShell script.
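    A sketch based on Docker's documented preview setup at the time (the feature, module, and package names below come from that documentation, not from this post's original script):

```powershell
# Enable the required Windows features (a restart is needed afterwards)
Install-WindowsFeature -Name Hyper-V, Containers

# Install the Docker EE preview via the DockerProvider module
Install-Module DockerProvider -Force
Install-Package Docker -ProviderName DockerProvider -RequiredVersion preview -Force
```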

    As I use the Azure image, the Containers feature and Docker EE have already been enabled, and the Docker daemon has been configured as a Windows service, so I didn't have to run the above script.
  3. Now you can follow the instructions here to configure lcow. I also updated the configuration file in C:\ProgramData\Docker\config\daemon.json to enable the experimental LinuxKit feature when the Docker service starts.
  4. Once you finish all the above configuration, lcow is enabled on Windows Server, version 1709. To test it, simply run
docker run --platform linux --rm -ti busybox sh

That is it. If you want, you can also try to run Ubuntu containers by following the instructions here.
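For reference, the experimental flag mentioned in step 3 boils down to a one-line setting in C:\ProgramData\Docker\config\daemon.json (shown here on its own; merge it with any settings already in the file):

```json
{
    "experimental": true
}
```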