Deploy a Kubernetes cluster (sys-admin nomination required)
Prerequisites
The user must be registered in the IAM system for INFN Cloud, https://iam.cloud.infn.it/login. Only registered users can log in to the INFN Cloud dashboard, https://my.cloud.infn.it/login.
User responsibilities
Important
The solution described in this guide deploys a Kubernetes cluster on top of virtual machines instantiated on the INFN Cloud infrastructure. Instantiating a VM comes with the responsibility of maintaining it and all of the services it hosts. In particular, be careful when updating operating system packages: an update could unintentionally change the installed cluster version and cause it to malfunction.
Please read the INFN Cloud AUP in order to understand the responsibilities you take on in managing this service.
Kubernetes cluster configuration
Note
If you belong to multiple projects (i.e. multiple IAM groups), after logging into the dashboard select, from the lower-left corner, the project to be used for the deployment you intend to perform. Not all solutions are available for all projects. The resources used for the deployment will be accounted to the selected project and will impact its available quota. See the figure below.
Select the “Kubernetes cluster” button and then “Configure”. The configuration menu lists only the projects allowed to instantiate this solution.
Once a project is selected, the configuration form appears. Parameters are split across two pages: “Basic” and “Advanced” configuration.
Basic configuration
The default parameters are ready for the submission of a cluster composed of 1 master and 1 slave. By default, the provider where the cluster will be instantiated is selected automatically by the INFN Cloud orchestrator service.
The user has to specify:
- the flavor of the master and slave nodes, choosing between medium (2 vCPUs, 4 GB RAM) and large (4 vCPUs, 8 GB RAM)
- the number of slaves, if more than one is needed
- admin_token: the password that will be used to access the Grafana dashboards
If needed, a single port or a port range to be opened on the master can be specified. By provider policy, only ports higher than 8000 can be opened.
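Once the deployment is complete, reachability of an opened port can be checked from outside the cluster. The following is a minimal sketch; the IP address is a placeholder to be replaced with the FloatingIP reported in the deployment output, and the port number is an arbitrary example above the 8000 floor.

```shell
# Placeholder: replace with the FloatingIP from the deployment "Output Values"
MASTER_IP="192.0.2.10"
# Arbitrary example port; must be higher than 8000, per provider policy
PORT=8080

# Probe the port with netcat, if available (5-second timeout)
if command -v nc >/dev/null 2>&1; then
  nc -zv -w 5 "$MASTER_IP" "$PORT" || echo "Port $PORT on $MASTER_IP is not reachable"
fi
```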
Advanced configuration
The user can select:
- the timeout for the deployment
- “no cluster deletion” in case of failure
- whether to skip the confirmation email on completion
- manual scheduling, i.e. selecting the provider where the cluster will be created; the list of available providers depends on the project
Deployment result
To check the status and details of the deployment, select the “Deployments” button. All the user’s deployments are listed with their “deployment identifier”, “status”, “creation time”, “resources provider” and a “Details” button.
For each deployment, the “Details” button allows you:
- to delete the cluster
- to show the TOSCA template of the cluster (with the default values)
- to retrieve the deployment log file, which contains error messages in case of failure
- to lock the deployment
Clicking on the “deployment identifier” or on the “Details” button shows the details of the deployed cluster:
- the “Overview” of the cluster
- the “Input Values” used for the cluster configuration
- the “Output Values” needed to access the cluster, such as the Kubernetes and Grafana dashboard endpoints, the kubeconfig file to download, and the FloatingIP to access the created VMs. To access the Kubernetes dashboard, use either the token contained in the kubeconfig or the kubeconfig file itself.
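The downloaded kubeconfig can be used with kubectl from your own machine. A minimal sketch, assuming the file from the “Output Values” tab was saved to the path below (adjust it to your actual download location):

```shell
# Hypothetical download path for the kubeconfig file; adjust as needed
export KUBECONFIG="$HOME/Downloads/kubeconfig"

# Verify access to the cluster by listing its nodes
# (requires kubectl installed locally)
if command -v kubectl >/dev/null 2>&1; then
  kubectl get nodes
fi
```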
Troubleshooting
In both cases (automatic and manual scheduling) the success of the creation depends on the availability of resources at the provider; if the quota is exhausted, a “no quota” failure reason is reported.
Client certificates generated by kubeadm expire after 1 year (see the official Kubernetes documentation). You can renew your certificates manually at any time with the following commands:
# Locate the kubeadm executable, in case it was not saved in your $PATH
$ which kubeadm
# Use the check-expiration sub-command to check when certificates expire
$ kubeadm certs check-expiration
# The renew command, with the sub-command all, renews all certificates
$ kubeadm certs renew all
# Export KUBECONFIG again (admin.conf has been modified) and try any kubectl command
$ export KUBECONFIG=/etc/kubernetes/admin.conf
Note
If you run an older cluster version (i.e. < 1.19), use the alpha sub-command instead, for example “kubeadm alpha certs check-expiration”.
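As an optional operational sketch (not part of the deployment itself), the expiration check above can be scheduled periodically on the master node via a root crontab entry; the kubeadm path and the log file location below are assumptions to be adapted:

```shell
# Crontab entry sketch: log certificate expiry at 03:00 on the 1st of each month.
# Assumes kubeadm is at /usr/bin/kubeadm; the log path is an arbitrary example.
0 3 1 * * /usr/bin/kubeadm certs check-expiration >> /var/log/kubeadm-certs.log 2>&1
```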