
v2.1.x and v2.2.x Windows Documentation (Experimental)

Available from v2.1.0 to v2.1.9 and v2.2.0 to v2.2.3

This section describes how to provision Windows clusters in Rancher v2.1.x and v2.2.x. If you are using Rancher v2.3.0 or later, please refer to the new documentation for v2.3.0 or later.

When you create a custom cluster, Rancher uses RKE (the Rancher Kubernetes Engine) to provision the Kubernetes cluster on your existing infrastructure.

You can provision a custom Windows cluster in Rancher by using a mix of Linux and Windows hosts as your cluster nodes.

Important: In versions of Rancher before v2.3, support for Windows nodes is experimental. Therefore, it is not recommended to use Windows nodes for production environments if you are using Rancher before v2.3.

This guide walks you through the creation of a custom cluster that includes three nodes:

  • A Linux node, which serves as a Kubernetes control plane node
  • Another Linux node, which serves as a Kubernetes worker used to support Ingress for the cluster
  • A Windows node, which is assigned the Kubernetes worker role and runs your Windows containers

For a summary of Kubernetes features supported in Windows, see Using Windows in Kubernetes.

OS and Container Requirements

  • For clusters provisioned with Rancher v2.1.x and v2.2.x, containers must run on Windows Server 1809 or above.
  • Containers must be built on a Windows Server core version 1809 or above host, and they can only run on hosts with the same Windows Server version (a quick version check follows this list).
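
A quick way to confirm that a Windows host meets the 1809 requirement is to read its release ID from the registry in the Command Prompt. This is a sketch; it queries a standard registry value, which should report 1809 or later.

```
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion" /v ReleaseId
```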

Objectives for Creating a Cluster with Windows Support

When setting up a custom cluster with support for Windows nodes and containers, complete the series of tasks below.

1. Provision Hosts

To begin provisioning a custom cluster with Windows support, prepare your host servers. Provision three nodes according to our requirements—two Linux, one Windows. Your hosts can be:

  • Cloud-hosted VMs
  • VMs from virtualization clusters
  • Bare-metal servers

The table below lists the Kubernetes node roles you'll assign to each host, although you won't enable these roles until further along in the configuration process—we're just informing you of each node's purpose. The first node, a Linux host, is primarily responsible for managing the Kubernetes control plane, although, in this use case, we're installing all three roles on this node. Node 2 is also a Linux worker, which is responsible for Ingress support. Finally, the third node is your Windows worker, which will run your Windows applications.

| Node | Operating System | Future Cluster Role(s) |
| --- | --- | --- |
| Node 1 | Linux (Ubuntu Server 16.04 recommended) | Control plane, etcd, worker |
| Node 2 | Linux (Ubuntu Server 16.04 recommended) | Worker (this node is used for Ingress support) |
| Node 3 | Windows (Windows Server core version 1809 or above) | Worker |

Requirements

  • You can view node requirements for Linux and Windows nodes in the installation section.
  • All nodes in a virtualization cluster or a bare metal cluster must be connected using a layer 2 network.
  • To support Ingress, your cluster must include at least one Linux node dedicated to the worker role.
  • Although we recommend the three-node architecture listed in the table above, you can add more Linux and Windows workers to scale up your cluster for redundancy.

2. Cloud-hosted VM Networking Configuration

Note: This step only applies to nodes hosted on cloud-hosted virtual machines. If you're using virtualization clusters or bare-metal servers, skip ahead to Create the Custom Cluster.

If you're hosting your nodes on any of the cloud services listed below, you must disable the private IP address checks for both your Linux and Windows hosts on startup. To disable this check for each node, follow the directions provided by each service below, or see the CLI sketch that follows the table.

| Service | Directions to disable private IP address checks |
| --- | --- |
| Amazon EC2 | Disabling Source/Destination Checks |
| Google GCE | Enabling IP Forwarding for Instances |
| Azure VM | Enable or Disable IP Forwarding |
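
If you prefer to script these changes, each provider's CLI can make them; the commands below are a minimal sketch with placeholder instance, NIC, and resource-group names. Note that on GCE, IP forwarding can only be enabled when an instance is created.

```
# AWS EC2: turn off the source/destination check for a node
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 --no-source-dest-check

# Google GCE: IP forwarding is set at instance creation (other flags omitted)
gcloud compute instances create node-1 --can-ip-forward ...

# Azure: enable IP forwarding on the node's NIC
az network nic update --resource-group my-rg --name node3-nic --ip-forwarding true
```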

3. Create the Custom Cluster

To create a custom cluster that supports Windows nodes, follow the instructions in Creating a Cluster with Custom Nodes, starting from 2. Create the Custom Cluster. While completing the linked instructions, look for steps that require special actions for Windows nodes, which are flagged with a note. These notes will link back here, to the special Windows instructions listed in the subheadings below.

Enable the Windows Support Option

While choosing Cluster Options, set Windows Support (Experimental) to Enabled.

After you select this option, resume Creating a Cluster with Custom Nodes from step 6.

Networking Option

When choosing a network provider for a cluster that supports Windows, the only option available is Flannel, as host-gw is needed for IP routing.
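
If you want to confirm the backend after the cluster is up, you can inspect flannel's network configuration with kubectl. This is a sketch: the configmap name below follows the upstream flannel deployment and is an assumption that may not match what Rancher deploys in your cluster.

```
# Print flannel's net-conf.json; expect "Type": "host-gw" in the Backend section.
# The configmap name is an assumption based on the upstream flannel manifests.
kubectl -n kube-system get configmap kube-flannel-cfg \
  -o jsonpath='{.data.net-conf\.json}'
```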

If your nodes are hosted by a cloud provider and you want automation support such as load balancers or persistent storage devices, see Selecting Cloud Providers for configuration info.

Node Configuration

The first node in your cluster should be a Linux host that fills the Control Plane role. This role must be fulfilled before you can add Windows hosts to your cluster. At minimum, the node must have this role enabled, but we recommend enabling all three. The following table lists our recommended settings (we'll provide the recommended settings for nodes 2 and 3 later).

| Option | Setting |
| --- | --- |
| Node Operating System | Linux |
| Node Roles | etcd, Control Plane, Worker |
When you're done with these configurations, resume Creating a Cluster with Custom Nodes from step 8.
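
When you select these roles, Rancher generates the registration command you will run on the node. Its general shape is sketched below; the agent version, server URL, token, and checksum are placeholders that Rancher fills in for your installation.

```
# Run on Node 1. The role flags match the settings in the table above.
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:<AGENT_VERSION> \
  --server https://<RANCHER_SERVER_URL> \
  --token <REGISTRATION_TOKEN> \
  --ca-checksum <CA_CHECKSUM> \
  --etcd --controlplane --worker
```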

4. Add Linux Host for Ingress Support

After the initial provisioning of your custom cluster, your cluster only has a single Linux host. Add another Linux host, which will be used to support Ingress for your cluster.

  1. Using the context menu, open the custom cluster you created in 2. Create the Custom Cluster.

  2. From the main menu, select Nodes.

  3. Click Edit Cluster.

  4. Scroll down to Node Operating System. Choose Linux.

  5. Select the Worker role.

  6. Copy the command displayed on screen to your clipboard.

  7. Log in to your Linux host using a remote terminal connection. Run the command copied to your clipboard (a sketch of the command's general shape follows this procedure).

  8. From Rancher, click Save.

Result: The worker role is installed on your Linux host, and the node registers with Rancher.
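
The command you copy in step 6 has the same general shape as the sketch in 3. Create the Custom Cluster, except that only the worker role flag is present (the placeholders again stand in for your installation's values):

```
# Run on Node 2. Only --worker is passed because only the Worker role was selected.
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:<AGENT_VERSION> \
  --server https://<RANCHER_SERVER_URL> \
  --token <REGISTRATION_TOKEN> \
  --ca-checksum <CA_CHECKSUM> \
  --worker
```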

5. Adding Windows Workers

You can add Windows hosts to a custom cluster by editing the cluster and choosing the Windows option.

  1. From the main menu, select Nodes.

  2. Click Edit Cluster.

  3. Scroll down to Node Operating System. Choose Windows.

  4. Select the Worker role.

  5. Copy the command displayed on screen to your clipboard.

  6. Log in to your Windows host using your preferred tool, such as Microsoft Remote Desktop. Run the command copied to your clipboard in the Command Prompt (CMD).

  7. From Rancher, click Save.

  8. Optional: Repeat these instructions if you want to add more Windows nodes to your cluster.

Result: The worker role is installed on your Windows host, and the node registers with Rancher.
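
To confirm the node joined with the expected operating system, you can list nodes along with their OS label. The label key below is the beta OS label used by Kubernetes versions from this era; treat it as an assumption to adjust for your cluster's Kubernetes version.

```
# List nodes with an extra column showing each node's operating system label
kubectl get nodes -o wide -L beta.kubernetes.io/os
```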

6. Cloud-hosted VM Routes Configuration

In Windows clusters, containers communicate with each other using the host-gw mode of Flannel. In host-gw mode, all containers on the same node belong to a private subnet, and traffic routes from a subnet on one node to a subnet on another node through the host network.
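
For example, on a Linux node in a host-gw cluster, the kernel routing table typically contains one route per peer node that sends that node's pod subnet to its host IP. The subnets and addresses below are illustrative placeholders:

```
# Inspect the host routing table on a Linux node
ip route
# Typical host-gw entries (illustrative):
#   10.42.1.0/24 via 172.16.0.12 dev eth0
#   10.42.2.0/24 via 172.16.0.13 dev eth0
```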

  • When worker nodes are provisioned on AWS, virtualization clusters, or bare metal servers, make sure they belong to the same layer 2 subnet. If the nodes don't belong to the same layer 2 subnet, host-gw networking will not work.

  • When worker nodes are provisioned on GCE or Azure, they are not on the same layer 2 subnet. Nodes on GCE and Azure belong to a routable layer 3 network. Follow the instructions below to configure GCE and Azure so that the cloud network knows how to route the host subnets on each node.

To configure host subnet routing on GCE or Azure, first run the following command to find out the host subnets on each worker node:

kubectl get nodes -o custom-columns=nodeName:.metadata.name,nodeIP:.status.addresses[0].address,routeDestination:.spec.podCIDR

Then follow the instructions for each cloud provider to configure routing rules for each node:

| Service | Instructions |
| --- | --- |
| Google GCE | For GCE, add a static route for each node: Adding a Static Route. |
| Azure VM | For Azure, create a routing table: Custom Routes: User-defined. |
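
As a sketch of what those instructions amount to, the commands below create one route per node, mapping the podCIDR reported by kubectl above to that node's host address. All resource names, subnets, and addresses are placeholders to replace with your own values.

```
# Google GCE: one static route per node
gcloud compute routes create node3-pods \
  --network=default \
  --destination-range=10.42.2.0/24 \
  --next-hop-instance=node3 \
  --next-hop-instance-zone=us-central1-a

# Azure: create a route table, add one user-defined route per node,
# then associate the route table with the nodes' subnet
az network route-table create --resource-group my-rg --name rancher-routes
az network route-table route create --resource-group my-rg \
  --route-table-name rancher-routes --name node3-pods \
  --address-prefix 10.42.2.0/24 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 172.16.0.13
az network vnet subnet update --resource-group my-rg \
  --vnet-name my-vnet --name my-subnet --route-table rancher-routes
```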