Installing, Configuring and Day 0 Operations on Rancher Server — Series III
In Series II we talked about what Rancher Server is, how it works behind the scenes and why it matters for the Kubernetes (K8s) provisioning world. This time we’ll be installing it, configuring it and applying a few tweaks. This guide is really easy to follow.
Preparation
In order to run Rancher Server, we’ll need a Linux VM, preferably a Debian distro. You can obtain the latest Debian Server ISO, then make a regular Linux installation using vCenter.
Before the Linux VM provisioning phase, remember the network subnets I mentioned in Series I. The Linux VM that’s going to run Rancher Server must be deployed into the Kubernetes VM network, which is 192.168.100.0/24. It’s up to you, but we should leave at least 50 IP addresses for the DHCP server to assign to orphan VMs. So, the DHCP pool could be configured between 192.168.100.150 and 192.168.100.200, or any other convention you like.
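If you want to pin a static IP for this VM outside the DHCP pool, one way is to set it after installation in /etc/network/interfaces. A minimal sketch follows; the interface name (ens192), address and gateway are assumptions, adjust them to your own setup:
auto ens192
iface ens192 inet static
    address 192.168.100.10/24
    gateway 192.168.100.1
Restart networking (or simply reboot) for the change to take effect.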
I’m not going to teach you how to create a Linux VM step by step like you are a toddler. Pull your shit together or follow the guide below to make a simple Debian Server (No GUI) installation with SSH enabled and Networking ready.
There are several discussions about whether you should run Rancher Server in High-Availability (HA) mode or standalone. Rancher suggests that all production deployments should be HA; however, I have a different solution that saves our resources while enabling Hybrid Cloud. Still, no possible downtime for our clusters.
Debian doesn’t include sudo by default, so it would be nice to install it by switching to the root user first.
su - root
Then make sure you know your regular Linux user; mine was “cloud”, as in the screenshot above. Replace “cloud” with your Linux user wherever you see it, so we can add this user to the sudo group.
apt install sudo -y
usermod -aG sudo cloud
reboot
We’ll throw some magic into our terminal to update any outdated packages:
sudo apt update
sudo apt upgrade -y
Make sure we have the necessary drivers installed on our VM.
sudo apt install open-vm-tools -y
sudo reboot
Do you remember this command from Series I, TL;DR section? It’s time to throw that beauty in, LoL.
sudo su -c "curl -sSL https://get.docker.com/ | sh"
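A quick sanity check, just a habit of mine, to confirm the Docker Engine is actually up before moving on:
sudo docker version
sudo docker run --rm hello-world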
Now we are ready for some action.
Installation
We have a great option here: it’s simple, it’s elegant… What more do you want? We can deploy Rancher Server in a containerized way, without having to deal with lots of Linux commands.
Let’s start:
Get back to your shell and;
sudo mkdir -p /opt/rancher
sudo chown -R cloud:cloud /opt/rancher
—
cat << 'EOF' > /opt/rancher/run.sh
#!/bin/bash
# Stop and remove any existing Rancher Server container, then start a fresh one
docker stop rancher-server
docker rm -f rancher-server
docker run \
  --name=rancher-server \
  --restart=unless-stopped \
  -p 80:80 \
  -p 443:443 \
  -v rancher-data:/var/lib/rancher \
  --privileged \
  -d rancher/rancher:stable
EOF
—
chmod +x /opt/rancher/run.sh
/opt/rancher/run.sh
This way, Rancher Server starts running inside a Docker container built by Rancher themselves. Inside that container, Rancher runs a K3s cluster for its own services and their orchestration. We didn’t mention this in Series II because it’s out of our control (unless you want to contribute to Rancher Server). What we needed was to understand how Rancher communicates with vCenter or other peripherals to provision RKE.
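If you’re curious, you can sneak a peek at that embedded K3s directly from the host. This assumes the rancher/rancher image bundles kubectl (recent tags do); treat it as a read-only look, nothing more:
docker exec -it rancher-server kubectl get nodes
docker exec -it rancher-server kubectl get pods -A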
Rancher persists its data inside the container at the /var/lib/rancher path. The “-v” directive maps that path to a named Docker volume, so the data survives if the rancher-server container gets removed and recreated, whether by accident or for an upgrade.
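To convince yourself the container and its named volume are really there (prefix with sudo if your user isn’t in the docker group):
docker ps --filter name=rancher-server
docker volume inspect rancher-data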
Now let’s try reaching https://your-static-vm-ip/
If it doesn’t work as intended (there is no reason it shouldn’t, unless you are trying to be a hero), you can check the Rancher Server logs with the command below and look for errors:
docker logs -f rancher-server
In the end, we should land on a page like this:
Set a password, don’t touch the rest and “Continue”.
As you can see, there is a cluster named “local” deployed as K3s just for Rancher itself. You can use the Explorer button on the right-hand side and sneak a peek at the workloads, namespaces and other stuff.
You can even use this local Rancher K3s cluster to try out some features from your local machine, just like minikube.
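For example, you can download the kubeconfig for the local cluster from the Cluster Explorer header and point kubectl at it from your workstation; the file path below is just an assumption:
kubectl --kubeconfig ~/Downloads/local.yaml get nodes
kubectl --kubeconfig ~/Downloads/local.yaml create deployment hello --image=nginx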
That sums up the installation. Pretty neat, isn’t it?
Configuration
Actually, when you look into it, there isn’t much to configure when you deploy Rancher Server the containerized way. However, at this point we’ll configure our Cloud Credentials (vSphere) and Node Templates.
Cloud Credentials
After logging in, you land on the Rancher Server dashboard; at the top right corner you can see the user profile we’re on. Click the drop-down and go to the Cloud Credentials item.
Then, follow the “Add Cloud Credential” button on top again.
Fill in the information according to your vCenter details.
Hit “Create” to save your cloud credentials into Rancher K3s.
Now Rancher can communicate with vCenter API.
Node Templates
Now let’s hit the Node Templates item from the user drop-down menu at the top right.
Click on the “Add Template” button.
When you are asked, choose the “vSphere” option. We are making this Node Template for vSphere, using the vSphere credentials we have provided before.
There are 6 steps to configure here, and I’ll briefly go over how each of them is supposed to be set.
- Account Access: We will choose which vCenter Cloud Credential to use.
- Scheduling: Where the VMs made by this template will be created. Like, the vCenter Datacenter, the Datastore, Folder or Host.
- Instance Options: If this is a template for Master Nodes, then at least 2 cores and 4 GB of memory are required for a healthy Control Plane. I also want my Master Nodes to have 20 GB of disk space. As for the Worker Nodes, I’ve set 8 cores and 16 GB of memory with 60 GB of disk space. That’s how I prefer it; you can use lower or higher specs.
Don’t ask “Why?”:
Make sure: Your “Creation method” is set to “Install from boot2docker ISO (Legacy)” with the value (ISO Link) of:
https://releases.rancher.com/os/v1.5.8/vmware/rancheros-autoformat.iso
Make sure: You have set your network for the Kubernetes VM network.
Make sure: You have “disk.enableUUID = True” on all your vSphere templates (there is a CLI sketch for this right after this list).
- Tags: You can add tags here. I have no tags.
- Custom attributes (legacy): You got the gist from the name. I have no custom attributes.
- vApp Options: I set as “Do not use vApp”.
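As promised above, here is a rough sketch of setting disk.enableUUID from the CLI with govc, in case you prefer that over the vSphere UI. The vCenter URL, credentials and VM path below are placeholders, and a template may need to be converted to a VM before it can be changed:
export GOVC_URL='https://your-vcenter.example.local'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='your-password'
export GOVC_INSECURE=1   # only if your vCenter uses a self-signed certificate
govc vm.change -vm '/Datacenter/vm/Templates/debian-template' -e disk.enableUUID=TRUE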
Name your template, don’t touch the Engine Options, Taints or Labels and hit “Create”.
Repeat this for as many Node Templates as you need.
Day 0 Operations
So far we have done several things: filled in our Cloud Credentials and created the Node Templates we wanted. What we want now is to set a backup retention for our Rancher K3s cluster. At least, that’s what I wanted.
I’ll show you how to use Rancher Apps & Marketplace below. You can set a backup retention for your K3s cluster from this section. You can also handle other Day 0 Operations here if you want, like setting up an authentication mechanism, e.g. Active Directory or LDAP.
In our Rancher Dashboard, hit the “Explorer” button of local (K3s) cluster.
From the top left corner, where the drop-down for Cluster Explorer is, choose “Apps & Marketplace” item.
At this point, there are lots of Helm charts we can use via Rancher’s recommendation. I’ll go with “Rancher Backups” in my case.
Don’t touch any options and hit the “Install” button at the very bottom right of the page. In the terminal that pops up, you should see that the Helm package has deployed successfully.
After the successful “Rancher Backups” deployment, you should notice a new item in the Cluster Explorer menu. Before we hit that, we need to create an AWS IAM user with the appropriate AWS S3 permissions, obtain programmatic access keys, and feed this information to the Rancher Server local K3s cluster.
Don’t forget to create an S3 bucket from your AWS Console in the region you desire. The access key you made must have the rights to PutObject into this S3 Bucket.
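If you prefer the AWS CLI over the Console, a rough sketch could look like the following; the bucket name, user name and region are placeholders, and you may want an even tighter policy:
aws s3api create-bucket --bucket limon-rancher-backups --region eu-west-1 \
  --create-bucket-configuration LocationConstraint=eu-west-1

cat << 'EOF' > rancher-backup-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::limon-rancher-backups",
        "arn:aws:s3:::limon-rancher-backups/*"
      ]
    }
  ]
}
EOF

aws iam create-user --user-name rancher-backup
aws iam put-user-policy --user-name rancher-backup --policy-name rancher-backup-s3 \
  --policy-document file://rancher-backup-policy.json
aws iam create-access-key --user-name rancher-backup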
Assuming you have created all of this, we will encode the access keys with base64 so that we can create an Opaque Kubernetes Secret.
Pop your terminal and type these using your credentials:
echo -n 'AKIAAAAABBBBBCCCCCDD' | base64
echo -n 'Bv3ekOp7f9Warh9b32nO/A2lgmsMKeG+76zAoNme' | base64
Using the outputs from above, click on “Import YAML” button on the top-right corner on Rancher Server Cluster Explorer.
You can now feed the information below, leave the namespace as “default” and hit “Import”:
apiVersion: v1
kind: Secret
metadata:
  name: limon-s3-creds
type: Opaque
data:
  accessKey: QUtJQUFBQUFCQkJCQkNDQ0NDREQ=
  secretKey: QnYzZWtPcDdmOVdhcmg5YjMybk8vQTJsZ21zTUtlRys3NnpBb05tZQ==
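As an alternative to the YAML import, the same Secret can be created with kubectl (it does the base64 encoding for you), assuming you have a kubeconfig for the local cluster; the key values below are the same dummy ones from above:
kubectl create secret generic limon-s3-creds --namespace default \
  --from-literal=accessKey='AKIAAAAABBBBBCCCCCDD' \
  --from-literal=secretKey='Bv3ekOp7f9Warh9b32nO/A2lgmsMKeG+76zAoNme'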
Let’s get back to Rancher Backups from the Cluster Explorer menu and tap “Create” button to make a new backup job.
Name the rule, put up a schedule (I want to have my backups hourly with 72 hours of retention), set your storage location with the desired region information like below, and hit “Create”.
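For reference, the form roughly produces a Backup custom resource like the sketch below. The bucket name, folder, region and endpoint are placeholders for your own values, and the schedule/retention reflect my hourly-with-72-backups preference:
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: hourly-rancher-backup
spec:
  resourceSetName: rancher-resource-set
  schedule: "0 * * * *"
  retentionCount: 72
  storageLocation:
    s3:
      credentialSecretName: limon-s3-creds
      credentialSecretNamespace: default
      bucketName: limon-rancher-backups
      folder: rancher
      region: eu-west-1
      endpoint: s3.eu-west-1.amazonaws.com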
With this created, you should now be able to see backup files in your AWS S3 bucket and also within the Rancher Backups dashboard.
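A quick way to double-check from your terminal, assuming the same placeholder bucket and folder as before:
aws s3 ls s3://limon-rancher-backups/rancher/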
And that’s how it’s done. You can deploy different Helm charts from the Apps & Marketplace section of your Rancher Server, or use the existing Security or Tools drop-down menus in the Cluster Manager view to maintain your Day 0 Operations.
Up next: we will deploy a simple, production-ready Rancher Kubernetes Engine (RKE) cluster using the Rancher Server we’ve prepared today.
Thank you for reading and, as always, stay tuned!