I’ve been running Uptime Kuma in a Podman container for a long time with no problems. Recently, though, I set up a mini Kubernetes cluster at home and decided to move this container to the cluster.
The following is to document how I deploy Uptime Kuma on a Kubernetes cluster.
As I’m new to the Kubernetes world, I broke the work down into six pieces:
$ ls -1
1.namespace.yaml
2.service.yaml
3.pvc.yaml
4.deployment.yaml
5.tls-secret.yaml
6.ingress.yaml
- The first step is to create a new namespace for this deployment; it’ll be called ‘uptime-kuma’.
$ cat 1.namespace.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: uptime-kuma
$ kubectl apply -f ./1.namespace.yaml
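An optional sanity check before applying the rest of the manifests (assumes kubectl’s current context points at the cluster):

```shell
# Confirm the namespace exists; should print "Active"
kubectl get namespace uptime-kuma -o jsonpath='{.status.phase}{"\n"}'
```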
- Create a new service, which will listen on port 3001, Uptime Kuma’s default web UI port.
$ cat 2.service.yaml
apiVersion: v1
kind: Service
metadata:
  name: uptime-kuma-svc
  namespace: uptime-kuma
spec:
  selector:
    app: uptime-kuma
  ports:
  - name: uptime-kuma
    port: 3001
    targetPort: 3001
    protocol: TCP
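No `type` is set in the spec, so this defaults to a ClusterIP service. As an optional check (assumes kubectl has the cluster context set), the service should list the pod’s IP as an endpoint once the deployment from step 4 is running:

```shell
kubectl apply -f ./2.service.yaml
# ENDPOINTS should show the pod IP on port 3001 once the deployment is up
kubectl get endpoints uptime-kuma-svc -n uptime-kuma
```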
- Next is to create a persistent volume claim for Uptime Kuma’s app data. Based on the current usage of my existing Podman container, 512 MiB is plenty. In my cluster, I use Longhorn as the persistent block storage.
$ cat 3.pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uptime-kuma-pvc
  namespace: uptime-kuma
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 512Mi
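An optional check after applying (again assuming kubectl is pointed at the cluster): the claim should report Bound once Longhorn provisions the backing volume.

```shell
kubectl apply -f ./3.pvc.yaml
# STATUS should read "Bound" (Longhorn provisions on demand, so this may take a moment)
kubectl get pvc uptime-kuma-pvc -n uptime-kuma
```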
- Here is the deployment file. It runs a single replica of the upstream louislam/uptime-kuma image, with modest resource requests and limits, TCP and HTTP probes, and the PVC mounted at /app/data, where Uptime Kuma keeps its data.
$ cat 4.deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma
  namespace: uptime-kuma
spec:
  selector:
    matchLabels:
      app: uptime-kuma
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      containers:
      - name: uptime-kuma
        image: louislam/uptime-kuma:1.23.11
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3001
          name: web-ui
        resources:
          limits:
            cpu: 200m
            memory: 512Mi
          requests:
            cpu: 50m
            memory: 128Mi
        livenessProbe:
          tcpSocket:
            port: web-ui
          initialDelaySeconds: 60
          periodSeconds: 10
        readinessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: web-ui
          initialDelaySeconds: 30
          periodSeconds: 10
        volumeMounts:
        - name: data
          mountPath: /app/data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: uptime-kuma-pvc
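After applying, an optional way to wait for the single replica to come up (the pod only turns Ready once the HTTP readiness probe passes):

```shell
kubectl apply -f ./4.deployment.yaml
# Blocks until the rollout finishes or times out
kubectl rollout status deployment/uptime-kuma -n uptime-kuma
kubectl get pods -n uptime-kuma
```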
- (Optional) I want to use SSL when accessing Uptime Kuma’s web UI, so I need to create a TLS secret; tls.crt and tls.key hold the base64-encoded certificate and private key.
$ cat 5.tls-secret.yaml
apiVersion: v1
data:
  tls.crt: VERY-LONG-BASE-64-ENCODED-STRING
  tls.key: ANOTHER-LONG-BASE-64-ENCODED-STRING
kind: Secret
metadata:
  name: wildcard-tls-secret
  namespace: uptime-kuma
type: kubernetes.io/tls
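A sketch of where those base64 strings come from; the file names (wildcard.crt / wildcard.key) are placeholders, and the openssl step only generates a throwaway self-signed cert for illustration:

```shell
# Placeholder only: a throwaway self-signed wildcard cert, standing in for my real one
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=*.home-lab-internal-domain.com" \
  -keyout wildcard.key -out wildcard.crt

# These single-line base64 strings are what goes into the Secret's tls.crt / tls.key
base64 -w0 < wildcard.crt; echo
base64 -w0 < wildcard.key; echo
```

In practice, `kubectl create secret tls wildcard-tls-secret --cert=wildcard.crt --key=wildcard.key -n uptime-kuma --dry-run=client -o yaml` produces the same manifest without hand-encoding anything.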
- The last file, 6.ingress.yaml, contains the Ingress manifest for ingress-nginx, which lets me access Uptime Kuma at https://uptime.home-lab-internal-domain.com:
$ cat 6.ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: uptime-kuma-ingress
  namespace: uptime-kuma
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - uptime.home-lab-internal-domain.com
    secretName: wildcard-tls-secret
  rules:
  - host: uptime.home-lab-internal-domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: uptime-kuma-svc
            port:
              number: 3001
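Finally, an optional end-to-end check from a machine that can resolve the internal hostname (curl’s -k skips CA verification, handy if the wildcard cert is signed by a private home-lab CA):

```shell
kubectl apply -f ./6.ingress.yaml
# A 200 here means ingress-nginx, the service, and the pod are all wired up
curl -sk -o /dev/null -w '%{http_code}\n' https://uptime.home-lab-internal-domain.com/
```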