<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Uptime-Kuma on Kenno&#39;s Open Note</title>
    <link>https://blog.khmersite.net/tags/uptime-kuma/</link>
    <description>Recent content in Uptime-Kuma on Kenno&#39;s Open Note</description>
    <generator>Hugo -- 0.154.0</generator>
    <language>en</language>
    <lastBuildDate>Fri, 15 Mar 2024 18:39:02 +1100</lastBuildDate>
    <atom:link href="https://blog.khmersite.net/tags/uptime-kuma/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>How to deploy Uptime Kuma on kubernetes</title>
      <link>https://blog.khmersite.net/p/how-to-deploy-uptime-kuma-on-kubernetes/</link>
      <pubDate>Fri, 15 Mar 2024 18:39:02 +1100</pubDate>
      <guid>https://blog.khmersite.net/p/how-to-deploy-uptime-kuma-on-kubernetes/</guid>
      <description>&lt;p&gt;I&amp;rsquo;ve been running Uptime Kuma in a Podman container for a long time with no problems. Recently, though, I set up a mini Kubernetes cluster at home and decided to move this container to that cluster.&lt;/p&gt;
&lt;p&gt;The following is to document how I deploy Uptime Kuma on a Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;As I&amp;rsquo;m new to the Kubernetes world, I broke the work down into six manifests:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ ls -1
1.namespace.yaml
2.service.yaml
3.pvc.yaml
4.deployment.yaml
5.tls-secret.yaml
6.ingress.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;ol&gt;
&lt;li&gt;The first step is to create a new namespace for this application or deployment. It&amp;rsquo;ll be called &amp;lsquo;uptime-kuma&amp;rsquo;.&lt;/li&gt;
&lt;/ol&gt;
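&lt;p&gt;For a quick one-off, the same namespace can also be created imperatively; I prefer keeping a manifest on disk so the whole stack stays reproducible:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ kubectl create namespace uptime-kuma
&lt;/code&gt;&lt;/pre&gt;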
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ cat 1.namespace.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: uptime-kuma

$ kubectl apply -f ./1.namespace.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;Create a new Service that listens on port 3001, Uptime Kuma&amp;rsquo;s default web port.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ cat 2.service.yaml
apiVersion: v1
kind: Service
metadata:
  name: uptime-kuma-svc
  namespace: uptime-kuma
spec:
  selector:
    app: uptime-kuma
  ports:
  - name: uptime-kuma
    port: 3001
    targetPort: 3001
    protocol: TCP
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;Next, create a PersistentVolumeClaim for Uptime Kuma&amp;rsquo;s app data. Based on the current usage in my existing Podman container, 512 MiB is plenty. My cluster uses Longhorn as the persistent block storage.&lt;/li&gt;
&lt;/ol&gt;
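&lt;p&gt;To sanity-check that sizing, the existing Podman volume&amp;rsquo;s usage can be measured before migrating (the volume name here is a placeholder for whatever the old container actually used):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ du -sh $(podman volume inspect uptime-kuma-data --format &#39;{{.Mountpoint}}&#39;)
&lt;/code&gt;&lt;/pre&gt;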
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ cat 3.pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uptime-kuma-pvc
  namespace: uptime-kuma
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 512Mi
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;4&#34;&gt;
&lt;li&gt;Here is the Deployment manifest: a single replica running a pinned image tag, with modest resource requests and limits, plus liveness and readiness probes against the web UI port.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ cat 4.deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma
  namespace: uptime-kuma
spec:
  selector:
    matchLabels:
      app: uptime-kuma
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      containers:
      - name: uptime-kuma
        image: louislam/uptime-kuma:1.23.11
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3001
          name: web-ui
        resources:
          limits:
            cpu: 200m
            memory: 512Mi
          requests:
            cpu: 50m
            memory: 128Mi
        livenessProbe:
          tcpSocket:
            port: web-ui
          initialDelaySeconds: 60
          periodSeconds: 10
        readinessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: web-ui
          initialDelaySeconds: 30
          periodSeconds: 10
        volumeMounts:
        - name: data
          mountPath: /app/data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: uptime-kuma-pvc
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;5&#34;&gt;
&lt;li&gt;(Optional) I want TLS when accessing Uptime Kuma&amp;rsquo;s web UI, so I need to create a TLS secret.&lt;/li&gt;
&lt;/ol&gt;
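&lt;p&gt;Rather than base64-encoding the certificate and key by hand, &lt;code&gt;kubectl&lt;/code&gt; can generate this manifest (the file paths below are placeholders for wherever your wildcard cert and key live):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ kubectl create secret tls wildcard-tls-secret \
    --namespace uptime-kuma \
    --cert=path/to/tls.crt --key=path/to/tls.key \
    --dry-run=client -o yaml &gt; 5.tls-secret.yaml
&lt;/code&gt;&lt;/pre&gt;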
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ cat 5.tls-secret.yaml
apiVersion: v1
data:
  tls.crt: VERY-LONG-BASE-64-ENCODED-STRING
  tls.key: ANOTHER-LONG-BASE-64-ENCODED-STRING
kind: Secret
metadata:
  name: wildcard-tls-secret
  namespace: uptime-kuma
type: kubernetes.io/tls
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;6&#34;&gt;
&lt;li&gt;The last file, &lt;code&gt;6.ingress.yaml&lt;/code&gt;, contains the Ingress manifest for ingress-nginx, which lets me reach Uptime Kuma at &lt;a href=&#34;https://uptime.home-lab-internal-domain.com&#34;&gt;https://uptime.home-lab-internal-domain.com&lt;/a&gt;:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ cat 6.ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: uptime-kuma-ingress
  namespace: uptime-kuma
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - uptime.home-lab-internal-domain.com
    secretName: wildcard-tls-secret
  rules:
  - host: uptime.home-lab-internal-domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: uptime-kuma-svc
            port:
              number: 3001
&lt;/code&gt;&lt;/pre&gt;
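&lt;p&gt;After applying all six manifests, one way to confirm everything came up is to watch the rollout, list the resources, and hit the Service directly; the port-forward is handy for testing before DNS and the ingress are in place:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ kubectl -n uptime-kuma rollout status deployment/uptime-kuma
$ kubectl -n uptime-kuma get pods,svc,pvc,ingress
$ kubectl -n uptime-kuma port-forward svc/uptime-kuma-svc 3001:3001
&lt;/code&gt;&lt;/pre&gt;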
    </item>
  </channel>
</rss>
