How To Install Kubernetes On VMware
Deploying Kubernetes (k8s) on vSphere 7 with Tanzu Kubernetes Grid (TKG)
For a change of gear away from our usual Osquery posts: over the last year we've done a number of Zercurity deployments onto Kubernetes, with the most common being done on-prem with VMware's vSphere. Fortunately, as of the most recent release of VMware's vCenter you can easily deploy Kubernetes with VMware's Tanzu Kubernetes Grid (TKG).
This post will form part of a series of posts on running Zercurity on top of Kubernetes in a production environment.
As with all things, there are a number of ways to deploy and manage Kubernetes on VMware. If you're running the latest release of vCenter (7.0.1.00100) you can actually deploy a TKG cluster straight from the Workload Management screen, right from the main dashboard, which has a full guide to walk you through the setup process. This also works alongside NSX-T Data Center edition for additional management functionality and networking.
However, for the purposes of this post, and to support older versions of ESXi (vSphere 6.7u3 and vSphere 7.0) and vCenter, we're going to be using the TKG client utility, which spins up its own simple-to-use web UI for deploying Kubernetes.
Installing Tanzu Kubernetes Grid (TKG)
Right, first things first. Visit the TKG download page. It's important that you download the following packages appropriate for your client platform (we'll be using Linux):
Note: You will require a VMware account to download these files.
- VMware Tanzu Kubernetes Grid 1.2.0 CLI
tkg is used to install, manage and upgrade the Kubernetes cluster running on top of vCenter.
- VMware Tanzu Kubernetes Grid 1.2.0 OVAs for Kubernetes
In order to deploy TKG, the setup requires the Photon container image photon-3-v1.17.3_vmware.2.ova, used for both the worker and management VMs.
- Kubectl 1.19.1 for VMware Tanzu Kubernetes Grid 1.2.0
kubectl is a command line tool used to administer your Kubernetes cluster from the command line.
- VMware Tanzu Kubernetes Grid Extensions Manifest 1.2.0
Additional services, configuration and RBAC changes that are applied to your cluster post installation.
Prerequisites
The following steps are using Ubuntu Linux. Notes will be added for additional platforms.
Installing kubectl
With the "Kubectl 1.19.1 for VMware Tanzu Kubernetes Grid 1.2.0" package downloaded, simply extract it and install the kubectl binary into your system or user PATH. If you're using Mac OSX you can use the same commands below but substitute darwin for linux.
wget https://download2.vmware.com/software/TKG/1.2.0/kubectl-linux-v1.19.1-vmware.2.gz
gunzip kubectl-linux-v1.19.1-vmware.2.gz
sudo mv kubectl-linux-v1.19.1-vmware.2 /usr/local/bin/kubectl
sudo chmod +x /usr/local/bin/kubectl
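For reference, applying the darwin substitution mentioned above, the Mac OSX steps would look something like this (the filename is assumed to follow the same pattern as the Linux build):
wget https://download2.vmware.com/software/TKG/1.2.0/kubectl-darwin-v1.19.1-vmware.2.gz
gunzip kubectl-darwin-v1.19.1-vmware.2.gz
sudo mv kubectl-darwin-v1.19.1-vmware.2 /usr/local/bin/kubectl
sudo chmod +x /usr/local/bin/kubectl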
Installing docker
Docker is required as the TKG installer spins up several Docker containers used to connect to and configure the remote vCenter server and its subsequent VMs.
If you're running Mac OSX you can make use of the Docker Desktop app.
sudo apt-get update
sudo apt-get install apt-transport-https \
  ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
Lastly, you'll need to give your current user permission to interact with the Docker daemon.
sudo usermod -aG docker <your-user>
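Note that the group change only applies to new login sessions. A quick way to pick it up immediately and confirm Docker is reachable without sudo (assuming the daemon is already running) is:
newgrp docker
docker ps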
Installing Tanzu (tkg)
The Tanzu tkg binary is used to install, upgrade and manage your Kubernetes cluster on top of VMware vSphere.
wget https://download2.vmware.com/software/TKG/1.2.0/tkg-linux-amd64-v1.2.0-vmware.1.tar.gz
tar -zxvf tkg-linux-amd64-v1.2.0-vmware.1.tar.gz
cd tkg
sudo mv tkg-linux-amd64-v1.2.0+vmware.1 /usr/local/bin/tkg
sudo chmod +x /usr/local/bin/tkg
Once installed you can run tkg version to check tkg is working and installed into your system PATH.
Importing the OVA images
From the hosts and clusters view, if you right click on your Datacenter you'll see the option "Deploy OVF template". Select the OVA downloaded from the VMware TKG downloads page. We're using photon-3-v1.17.3_vmware.2.ova. Then just follow the on-screen steps.
Once the OVA has been imported it's deployed as a VM. Do not power on the VM. The last step is to convert it back into a template so it can be used by the TKG installer.
Right click on the imported VM photon-3-kube-v1.19.1+vmware.2, select the "Template" menu item and choose "Convert to template". This will take a few moments and the template will then be visible under the "VMs and Templates" view.
Optional prerequisites
You may also choose to configure a dedicated network and/or resource pool for your k8s cluster. This can be done directly from the vSphere web UI. If you're configuring a new network please ensure nodes deployed to that network will receive an IP address via DHCP and can connect to the internet.
Installing Tanzu Kubernetes Grid
Once all the prerequisites are met, launch the tkg web installer:
tkg init --ui
Once you run the command your browser should automatically open and point to: http://127.0.0.1:8080/
Choose the "VMware vSphere" deploy option. The next series of steps will help configure the TKG deployment.
The first step is to connect to your vSphere vCenter instance with your administrator credentials. Upon clicking "connect" you'll see your available datacenters show up. TKG also requires your SSH RSA public key, which the management cluster will use for management access.
Your SSH RSA key is usually located inside your home directory:
cat .ssh/id_rsa.pub
If the file doesn't exist or you need to create a new RSA key you can generate one like so:
ssh-keygen -t rsa -C "your@email.com"
If you kept the default filename you'll see two files created once the command has run. You need to copy and paste the contents of your public key (the .pub file).
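If you'd rather send the public key straight to the clipboard rather than copying it by hand, something like the following works (assuming xclip is installed on Linux; pbcopy ships with Mac OSX):
cat ~/.ssh/id_rsa.pub | xclip -selection clipboard   # Linux
cat ~/.ssh/id_rsa.pub | pbcopy                       # Mac OSX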
The next stage is to name your cluster and provide the sizing details of both the management instances and worker instances for your cluster. The virtual IP address is the primary IP address of the API server that provides the load balancing service (aka the ingress server).
For the next stage you can provide some optional metadata or labels to make it easier to identify your VMs.
The next phase is to define the resource locations. This is where your Kubernetes cluster will reside and the datastore used by the virtual machines. We're using the root VM folder, our vSAN datastore and lastly, we've created a separate resource pool called k8s-prod to manage the cluster's CPU, storage and memory limits.
With the networking configuration, you can use the defaults provided here. However, we've created a separate distributed switch called VM Tanzu Prod which is connected via its own segregated VLAN back into our network.
The last and final stage is to select the Photon Kube OVA which we downloaded earlier as the base image for the worker and management virtual machines. If nothing is listed here, make sure you have imported the OVA and converted it from a VM into a template. Use the refresh icon to reload the list without starting over.
Finally, review your configuration and click "Deploy management cluster". This can take around 8-10 minutes, and even longer depending on your internet connection, as the setup needs to pull down and deploy multiple images for the Docker containers which are used to bootstrap the Tanzu management cluster.
Once the installation has finished you'll see several VMs within the vSphere web client named something similar to: tkg-mgmt-vsphere-20200927183052-control-plane-6rp25. At this point the Tanzu management plane has been deployed.
Success. \o/
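You can also confirm from the command line that the new management cluster has been registered with the tkg CLI:
tkg get management-cluster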
Configuring kubectl
We now need to configure the kubectl command (used to deploy pods and interact with the Kubernetes cluster) to use our new cluster as our primary context (shown by the asterisk). Grab the cluster credentials with:
tkg get credentials zercurity
Credentials of workload cluster 'zercurity' have been saved
You can now access the cluster by running 'kubectl config use-context zercurity-admin@zercurity'
Using the command from the output above, copy and paste it to set your new kubectl context. This is useful for switching between multiple clusters:
kubectl config use-context zercurity-admin@zercurity
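You can confirm which context is now active (the asterisk marks the current context) with:
kubectl config get-contexts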
With kubectl connected to our cluster, let's create our first namespace to check everything is working correctly.
kubectl version
kubectl create namespace zercurity
At this stage you're almost ready to go and you can start deploying non-persistent containers to test out the cluster.
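For example, a quick throwaway deployment is enough to prove that pods schedule and pull images correctly (the nginx image here is purely illustrative):
kubectl create deployment nginx-test --image=nginx --namespace zercurity
kubectl get pods --namespace zercurity
kubectl delete deployment nginx-test --namespace zercurity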
Installing the VMware TKG extensions
VMware provides a number of helpful extensions to add monitoring, logging and ingress services for web-based (HTTP/HTTPS) deployments via Contour. Note that TCP/IP ingress isn't supported.
The extensions archive should have been downloaded already from www.vmware.com/go/get-tkg.
wget https://download2.vmware.com/software/TKG/1.2.0/tkg-extensions-manifests-v1.2.0-vmware.1.tar.gz
tar -zxvf tkg-extensions-manifests-v1.2.0-vmware.1.tar.gz
cd tkg-extensions-v1.2.0+vmware.1/
I'd recommend applying the following extensions. There are a few more contained within the archive. However, I'd argue these are the primary extensions you're going to want to add.
kubectl apply -f cert-manager/*
kubectl apply -f ingress/contour/*
kubectl apply -f monitoring/grafana/*
kubectl apply -f monitoring/prometheus/*
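You can then watch the extension pods come up. The namespaces below are what the 1.2.0 manifests used in our case (cert-manager, tanzu-system-ingress and tanzu-system-monitoring); adjust them if your manifests differ:
kubectl get pods --namespace cert-manager
kubectl get pods --namespace tanzu-system-ingress
kubectl get pods --namespace tanzu-system-monitoring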
Configuring vSAN storage
This is the final stage, I promise. It's also critical if you intend on using persistent disks (persistent volume claims, PVCs) alongside your deployed pods. In this last part I'm also assuming you're using vSAN as it has native support for container volumes.
In order to let the Kubernetes cluster know to use vSAN as its storage backend we need to create a new StorageClass. To make this change, simply copy and paste the command below:
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: thin
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true
parameters:
  storagepolicyname: "vSAN Default Storage Policy"
EOF
You can then check your StorageClass has been correctly applied like so:
kubectl get sc
NAME             PROVISIONER     RECLAIM   BINDINGMODE   EXPANSION   AGE
thin (default)   csi.vsphere..   Delete    Immediate     true        2s
kubectl describe sc thin
You can also test your StorageClass config is working by creating a quick PersistentVolumeClaim. Again, copy and paste the command below.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testing
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
This PersistentVolumeClaim
will be created within the default namespace using 1Gi of disk space.
In the event the status shows the <pending> state for more than 30 seconds then this usually means some sort of issue has occurred. You can use the kubectl describe sc thin command to get additional information on the state of the StorageClass.
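It can also help to describe the test claim itself, as any provisioning errors are surfaced in its events:
kubectl describe pvc testing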
kubectl get pvc
NAME      STATUS   VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS   AGE
testing   Bound    pvc-3974d60f   1Gi        RWO            default        6s
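Once you're happy the claim has bound, you can tidy up the test claim:
kubectl delete pvc testing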
Upgrading Kubernetes on TKG
This is surprisingly easy using the tkg command. All you need to do is get the management cluster id using the tkg get management-cluster command. Then run tkg upgrade management-cluster with your management cluster id. This will automatically update the Kubernetes control plane and worker nodes.
tkg upgrade management-cluster tkg-mgmt-vsphere-20200927183052
Upgrading management cluster 'tkg-mgmt-vsphere-20200927183052' to TKG version 'v1.2.0' with Kubernetes version 'v1.19.1+vmware.2'. Are you sure? [y/N]: y
Upgrading management cluster providers...
Checking cert-manager version...
Deleting cert-manager Version="v0.11.0"
Installing cert-manager Version="v0.16.1"
Waiting for cert-manager to be available...
Performing upgrade...
Deleting Provider="cluster-api" Version="" TargetNamespace="capi-system"
Installing Provider="cluster-api" Version="v0.3.10" TargetNamespace="capi-system"
Deleting Provider="bootstrap-kubeadm" Version="" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.10" TargetNamespace="capi-kubeadm-bootstrap-system"
Deleting Provider="control-plane-kubeadm" Version="" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.10" TargetNamespace="capi-kubeadm-control-plane-system"
Deleting Provider="infrastructure-vsphere" Version="" TargetNamespace="capv-system"
Installing Provider="infrastructure-vsphere" Version="v0.7.1" TargetNamespace="capv-system"
Management cluster providers upgraded successfully...
Upgrading management cluster kubernetes version...
Verifying kubernetes version...
Retrieving configuration for upgrade cluster...
Create InfrastructureTemplate for upgrade...
Upgrading control plane nodes...
Patching KubeadmControlPlane with the kubernetes version v1.19.1+vmware.2...
Waiting for kubernetes version to be updated for control plane nodes
Upgrading worker nodes...
Patching MachineDeployment with the kubernetes version v1.19.1+vmware.2...
Waiting for kubernetes version to be updated for worker nodes...
updating 'metadata/tkg' add-on...
Management cluster 'tkg-mgmt-vsphere-20200927183052' successfully upgraded to TKG version 'v1.2.0' with kubernetes version 'v1.19.1+vmware.2'
All finished
Congratulations, you've now got a Kubernetes cluster up and running on top of your VMware cluster. In the next post we'll be looking at deploying PostgreSQL into our cluster ready for our instance of Zercurity.
If you've got stuck or have a few suggestions for us to add, don't hesitate to get in touch via our website or leave a comment below.
Source: https://zercurity.medium.com/deploying-kubernetes-k8s-on-vsphere-7-with-tanzu-kubernetes-grid-tkg-b9f8b8c2031e