
Installing Epinio on RKE2

This guide will help you deploy a suitable RKE2 Kubernetes cluster for Epinio. More details on installing RKE2 can be found in the RKE2 quickstart guide.

Install RKE2 Kubernetes cluster

We're using a dedicated machine for the RKE2 server node. The following steps are performed using the root account on that machine.

  1. Run the installer, start and enable the rke2-server systemd service.

    curl -sfL https://get.rke2.io | sh -
    systemctl enable --now rke2-server.service
  2. Configure environment variables for operating the RKE2 cluster. Execute these commands, then add them to /root/.bashrc to make them persistent.

    export PATH=$PATH:/var/lib/rancher/rke2/bin:/opt/rke2/bin
    export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
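The persistence step above can be sketched as a single heredoc append (using $HOME here, which resolves to /root when run as the root account):

```shell
# Append the RKE2 environment variables to the shell profile so they
# survive new login sessions.
cat >> "$HOME/.bashrc" <<'EOF'
export PATH=$PATH:/var/lib/rancher/rke2/bin:/opt/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
EOF
```

Open a new shell (or `source ~/.bashrc`) for the variables to take effect.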

Make sure that you can communicate with your new RKE2 cluster by running:

kubectl get pods --all-namespaces

RKE2 cluster prerequisites

Perform the following steps on your RKE2 node before installing Epinio:

  1. Install the helm CLI.

    curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
  2. Configure rke2-ingress-nginx-controller. While rke2-ingress-nginx-controller is installed by default on RKE2 clusters, the IngressClass nginx needs to be set up as the default class. This is done by running:

    kubectl patch ingressClass nginx -p '{"metadata": {"annotations":{"ingressclass.kubernetes.io/is-default-class": "true"}}}'

    If required, you can specify a non-default IngressClass during the installation of Epinio with the helm argument --set ingress.ingressClassName=<className>.

  3. Deploy a dynamic storage provisioner. RKE2 clusters have no storage provisioner installed by default, but Epinio needs one. You can use any storage provisioner that provides the ReadWriteMany (RWX) access mode and a default StorageClass resource for dynamic storage provisioning.


    To verify that your cluster provides a default StorageClass run the command kubectl get storageclass. The default StorageClass is marked with the string (default) next to its name in the output list.

    As an example, you can deploy and configure a local-path dynamic storage provisioner by running:

    kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
    kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
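Putting the IngressClass note from step 2 to use, a non-default class can be passed to the Epinio chart at install time. A hedged sketch, assuming the chart is available from the epinio Helm repository and using a placeholder class name `custom-nginx`:

```shell
# Register the Epinio chart repository (skip if already added).
helm repo add epinio https://epinio.github.io/helm-charts
helm repo update

# Install Epinio, pointing it at a non-default IngressClass.
# "custom-nginx" is a placeholder; substitute your class name.
helm upgrade --install epinio epinio/epinio \
  --namespace epinio --create-namespace \
  --set ingress.ingressClassName=custom-nginx
```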


For evaluation environments we recommend that you set up Epinio Ingress resources with a wildcard DNS service, such as sslip.io or nip.io, that points to the INTERNAL-IP address of your Kubernetes node.
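As a sketch of how such a wildcard domain is derived, using sslip.io (nip.io works the same way) and a placeholder address; on a live cluster the INTERNAL-IP could instead be queried with `kubectl get nodes -o wide`:

```shell
# Placeholder INTERNAL-IP for illustration only. On a real cluster, e.g.:
#   kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'
NODE_IP="192.0.2.10"

# sslip.io resolves any <ip>.sslip.io name back to <ip>, giving a free
# wildcard domain for the node.
EPINIO_DOMAIN="${NODE_IP}.sslip.io"
echo "${EPINIO_DOMAIN}"
```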

For production environments you should configure an external load-balancer solution. It should listen on a public IP with an associated public FQDN domain. The load-balancer's role is to forward HTTP(S) traffic from the load-balancer endpoint to the internal Ingress resources of the Kubernetes cluster.

There are two ways of setting up DNS for Epinio:

  1. Wildcard DNS Service

    For test environments. This should work on any Kubernetes distribution. Epinio will try to create a magic wildcard DNS domain based on your cluster's IP address, using a wildcard DNS service such as sslip.io.

  2. DNS setup

    For test and production environments. You'll define a system domain, for example epinio.example.com.
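For the DNS setup case, the system domain is passed to the Epinio chart at install time. A hedged sketch, assuming the chart's `global.domain` value and the placeholder domain `epinio.example.com` (substitute a domain you own that resolves to your load balancer):

```shell
# Register the Epinio chart repository (skip if already added).
helm repo add epinio https://epinio.github.io/helm-charts
helm repo update

# Install Epinio with an explicit system domain.
# "epinio.example.com" is a placeholder for your real FQDN.
helm upgrade --install epinio epinio/epinio \
  --namespace epinio --create-namespace \
  --set global.domain=epinio.example.com
```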

Then continue with the Epinio installation process.