<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Kubernetes on Kenno&#39;s Open Note</title>
    <link>https://blog.khmersite.net/tags/kubernetes/</link>
    <description>Recent content in Kubernetes on Kenno&#39;s Open Note</description>
    <generator>Hugo -- 0.154.0</generator>
    <language>en</language>
    <lastBuildDate>Tue, 11 Mar 2025 20:24:00 +1100</lastBuildDate>
    <atom:link href="https://blog.khmersite.net/tags/kubernetes/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>ArgoCD CLI failed to invoke grpc call</title>
      <link>https://blog.khmersite.net/p/argocd-cli-failed-to-invoke-grpc-call/</link>
      <pubDate>Tue, 11 Mar 2025 20:24:00 +1100</pubDate>
      <guid>https://blog.khmersite.net/p/argocd-cli-failed-to-invoke-grpc-call/</guid>
      <description>&lt;p&gt;When running the &lt;code&gt;argocd&lt;/code&gt; CLI, the following warning message is shown:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;➜ argocd app list
WARN[0000] Failed to invoke grpc call. Use flag --grpc-web in grpc calls. To avoid this warning message, use flag --grpc-web.
NAME  CLUSTER  NAMESPACE  PROJECT  STATUS  HEALTH  SYNCPOLICY  CONDITIONS  REPO  PATH  TARGET
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;To suppress this warning message, re-run the login with the &lt;code&gt;--grpc-web&lt;/code&gt; option. For example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;➜ argocd login argocd.apps.tuaf.local.cluster --grpc-web
Username: admin
Password:
&amp;#39;admin:login&amp;#39; logged in successfully
Context &amp;#39;argocd.apps.tuaf.local.cluster&amp;#39; updated
&lt;/code&gt;&lt;/pre&gt;&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;➜ argocd app list
NAME  CLUSTER  NAMESPACE  PROJECT  STATUS  HEALTH  SYNCPOLICY  CONDITIONS  REPO  PATH  TARGET
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Reference:&lt;/p&gt;</description>
    </item>
    <item>
      <title>Setting Retention for Kube Prometheus Stack</title>
      <link>https://blog.khmersite.net/p/setting-retention-for-kube-prometheus-stack/</link>
      <pubDate>Tue, 25 Feb 2025 10:42:41 +1100</pubDate>
      <guid>https://blog.khmersite.net/p/setting-retention-for-kube-prometheus-stack/</guid>
      <description>&lt;p&gt;When deploying &lt;strong&gt;prometheus-community/kube-prometheus-stack&lt;/strong&gt; in a k8s cluster, the default retention for Prometheus scraped data is 10 days as of this writing (Helm chart &lt;code&gt;kube-prometheus-stack-69.4.1&lt;/code&gt;). As a result, the Longhorn storage filled up fast and caused Prometheus to crash.&lt;/p&gt;
&lt;p&gt;So I decided to change the retention period to 3 days and also set the retention size to 4GB. To do this, update the &lt;code&gt;values&lt;/code&gt; file for the Helm chart with the following values:&lt;/p&gt;</description>
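The summary ends before the values themselves. Based on the documented layout of the kube-prometheus-stack chart, the settings described above (3-day retention, 4GB size cap) would look roughly like this; a sketch, not the author's exact file:

```yaml
# Excerpt of a kube-prometheus-stack values override:
# limit Prometheus TSDB retention by both age and on-disk size.
prometheus:
  prometheusSpec:
    retention: 3d        # keep scraped data for 3 days (chart default: 10d)
    retentionSize: 4GB   # cap the TSDB on disk at 4GB
```

Apply it with `helm upgrade` against the existing release, passing this file via `-f`.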
    </item>
    <item>
      <title>Upgrading Longhorn with Helm</title>
      <link>https://blog.khmersite.net/p/upgrading-longhorn-with-helm/</link>
      <pubDate>Fri, 23 Aug 2024 13:55:07 +1000</pubDate>
      <guid>https://blog.khmersite.net/p/upgrading-longhorn-with-helm/</guid>
      <description>&lt;p&gt;For my test Kubernetes cluster, Longhorn is used as the persistent block storage. The installed version is 1.6.2, and version 1.7.0 is now available. So today, I&amp;rsquo;m going to share how to perform the Longhorn upgrade using Helm.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m going to do this live as usual. Of course, I already created snapshots of all my K8s nodes in case things go south.&lt;/p&gt;
&lt;p&gt;First, let&amp;rsquo;s verify the current installed version of Longhorn [1].&lt;/p&gt;</description>
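The summary stops at the verification step. Assuming the conventional release name `longhorn` in the `longhorn-system` namespace (not stated in this excerpt), the check-and-upgrade flow would look something like:

```shell
# Show the currently installed chart and app version of the release.
helm list -n longhorn-system

# Refresh the chart index, then upgrade to the target version.
helm repo update
helm upgrade longhorn longhorn/longhorn -n longhorn-system --version 1.7.0
```

These are illustrative commands against a live cluster; release name and namespace may differ in your setup.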
    </item>
    <item>
      <title>Deploy metrics-server in Kubernetes using Helm</title>
      <link>https://blog.khmersite.net/p/deploy-metrics-server-in-kubernetes-using-helm/</link>
      <pubDate>Wed, 24 Apr 2024 01:20:00 +1000</pubDate>
      <guid>https://blog.khmersite.net/p/deploy-metrics-server-in-kubernetes-using-helm/</guid>
      <description>&lt;p&gt;This is a short note on how to install &lt;a href=&#34;https://github.com/kubernetes-sigs/metrics-server&#34;&gt;metrics-server&lt;/a&gt; using Helm on my k8s cluster.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ kubectl version
Client Version: v1.29.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.3

$ helm version --short
v3.14.4+g81c902a
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Before we can install the chart, we will need to add the metrics-server repo to Helm.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Update the repo:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the &amp;#34;metrics-server&amp;#34; chart repository
Update Complete. ⎈Happy Helming!⎈
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;I also need to make a small customization to the chart&amp;rsquo;s default values since my k8s cluster uses self-signed certificates.&lt;/p&gt;</description>
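The summary cuts off before the customization itself. For clusters with self-signed kubelet certificates, the metrics-server chart is commonly configured with the `--kubelet-insecure-tls` flag; an assumption about the author's change, not a quote from the post:

```yaml
# values override for the metrics-server chart:
# skip verification of the kubelets' (self-signed) serving certificates.
args:
  - --kubelet-insecure-tls
```

Installed, for example, with `helm install metrics-server metrics-server/metrics-server -f values.yaml`.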
    </item>
    <item>
      <title>How to deploy Uptime Kuma on kubernetes</title>
      <link>https://blog.khmersite.net/p/how-to-deploy-uptime-kuma-on-kubernetes/</link>
      <pubDate>Fri, 15 Mar 2024 18:39:02 +1100</pubDate>
      <guid>https://blog.khmersite.net/p/how-to-deploy-uptime-kuma-on-kubernetes/</guid>
      <description>&lt;p&gt;I&amp;rsquo;ve been running Uptime Kuma in a Podman container for a long time with no problems. Recently, though, I&amp;rsquo;ve been running a mini Kubernetes cluster at home and decided to move this container to that cluster.&lt;/p&gt;
&lt;p&gt;The following is to document how I deploy Uptime Kuma on a Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;As I&amp;rsquo;m new to the Kubernetes world, I broke down the tasks into 6 pieces:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ ls -1
1.namespace.yaml
2.service.yaml
3.pvc.yaml
4.deployment.yaml
5.tls-secret.yaml
6.ingress.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;ol&gt;
&lt;li&gt;The first step is to create a new namespace for this application or deployment. It&amp;rsquo;ll be called &amp;lsquo;uptime-kuma&amp;rsquo;.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ cat 1.namespace.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: uptime-kuma

$ kubectl apply -f ./1.namespace.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;2&#34;&gt;
&lt;li&gt;Create a new service, which will listen on port 3001.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ cat 2.service.yaml
apiVersion: v1
kind: Service
metadata:
  name: uptime-kuma-svc
  namespace: uptime-kuma
spec:
  selector:
    app: uptime-kuma
  ports:
  - name: uptime-kuma
    port: 3001
    targetPort: 3001
    protocol: TCP
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;Next is to create a persistent volume claim for Uptime Kuma&amp;rsquo;s app data. Based on the current usage in my existing Podman container, 512 MiB is plenty. In my cluster, I use Longhorn as the persistent block storage.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ cat 3.pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uptime-kuma-pvc
  namespace: uptime-kuma
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 512Mi
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;4&#34;&gt;
&lt;li&gt;Here is the deployment file.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ cat 4.deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma
  namespace: uptime-kuma
spec:
  selector:
    matchLabels:
      app: uptime-kuma
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      containers:
      - name: uptime-kuma
        image: louislam/uptime-kuma:1.23.11
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3001
          name: web-ui
        resources:
          limits:
            cpu: 200m
            memory: 512Mi
          requests:
            cpu: 50m
            memory: 128Mi
        livenessProbe:
          tcpSocket:
            port: web-ui
          initialDelaySeconds: 60
          periodSeconds: 10
        readinessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: web-ui
          initialDelaySeconds: 30
          periodSeconds: 10
        volumeMounts:
        - name: data
          mountPath: /app/data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: uptime-kuma-pvc
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;5&#34;&gt;
&lt;li&gt;(Optional) I want to use SSL when accessing the web UI of Uptime Kuma, therefore I need to create a new TLS secret.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ cat 5.tls-secret.yaml
apiVersion: v1
data:
  tls.crt: VERY-LONG-BASE-64-ENCODED-STRING
  tls.key: ANOTHER-LONG-BASE-64-ENCODED-STRING
kind: Secret
metadata:
  name: wildcard-tls-secret
  namespace: uptime-kuma
type: kubernetes.io/tls
&lt;/code&gt;&lt;/pre&gt;&lt;ol start=&#34;6&#34;&gt;
&lt;li&gt;The last file &lt;code&gt;6.ingress.yaml&lt;/code&gt; contains the manifest for ingress-nginx, which allows me to access Uptime Kuma at &lt;a href=&#34;https://uptime.home-lab-internal-domain.com&#34;&gt;https://uptime.home-lab-internal-domain.com&lt;/a&gt;:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ cat 6.ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: uptime-kuma-ingress
  namespace: uptime-kuma
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - uptime.home-lab-internal-domain.com
    secretName: wildcard-tls-secret
  rules:
  - host: uptime.home-lab-internal-domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: uptime-kuma-svc
            port:
              number: 3001
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;References:&lt;/p&gt;</description>
    </item>
    <item>
      <title>What is in a Longhorn volume?</title>
      <link>https://blog.khmersite.net/p/what-is-in-a-longhorn-volume/</link>
      <pubDate>Sun, 15 Oct 2023 18:57:07 +1100</pubDate>
      <guid>https://blog.khmersite.net/p/what-is-in-a-longhorn-volume/</guid>
      <description>&lt;p&gt;Do you know? Well, before today, neither did I.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m pretty late to Kubernetes. I heard about it many years ago, but unfortunately, I had no need to use it at home or at work. Even today, I have no workload that requires a Kubernetes cluster to run on. However, I think I can&amp;rsquo;t ignore Kubernetes any longer, and I need to learn it now; well, &amp;ldquo;better late than never&amp;rdquo;, right?&lt;/p&gt;</description>
    </item>
    <item>
      <title>Installing Ingress-Nginx Controller with Helm</title>
      <link>https://blog.khmersite.net/p/installing-ingress-nginx-controller-with-helm/</link>
      <pubDate>Tue, 26 Sep 2023 22:35:22 +1000</pubDate>
      <guid>https://blog.khmersite.net/p/installing-ingress-nginx-controller-with-helm/</guid>
      <description>&lt;p&gt;In my previous &lt;a href=&#34;https://blog.khmersite.net/2023/09/install-helm-cli-on-almalinux-9/&#34;&gt;post&lt;/a&gt;, I showed how to install &lt;code&gt;helm&lt;/code&gt; CLI on a server/instance running AlmaLinux 9. Now, I will share how to use &lt;code&gt;helm&lt;/code&gt; to install the Ingress-Nginx Controller for a Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;We install a package, known as a &amp;ldquo;chart&amp;rdquo;, from a Helm repository. Initially, there are no repositories configured, as can be seen below:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ helm repo list
Error: no repositories to show
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In order to install Ingress-Nginx Controller, I first need to add a new repository.&lt;/p&gt;</description>
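The summary ends at the repo step. The ingress-nginx project publishes its chart at a well-known URL, so the step described above would be:

```shell
# Register the official ingress-nginx chart repository, then refresh the index.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
```

After this, `helm repo list` shows the new entry instead of the error above.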
    </item>
    <item>
      <title>Installing Helm CLI on AlmaLinux 9</title>
      <link>https://blog.khmersite.net/p/install-helm-cli-on-almalinux-9/</link>
      <pubDate>Tue, 26 Sep 2023 21:01:05 +1000</pubDate>
      <guid>https://blog.khmersite.net/p/install-helm-cli-on-almalinux-9/</guid>
      <description>&lt;p&gt;Recently, I decided to pick up learning Kubernetes again after being completely absent from this world for many years. In this post, I&amp;rsquo;ll document how to install the Helm CLI on AlmaLinux 9. &lt;a href=&#34;https://helm.sh/docs/&#34;&gt;Helm&lt;/a&gt; is like a package manager for Kubernetes. The Helm CLI can be installed on your local machine; it does not need to be installed on the Kubernetes nodes. However, in order to interact with your Kubernetes cluster, you&amp;rsquo;ll also need the &lt;code&gt;kubectl&lt;/code&gt; command, properly configured to talk to that cluster.&lt;/p&gt;</description>
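The summary ends before the installation itself. Helm's documentation describes an official installer script, which should work on AlmaLinux 9 as on most Linux distributions; a sketch of the likely steps, not the post's exact commands:

```shell
# Download and run Helm's official install script, then confirm the version.
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version --short
```

The script detects the OS/architecture and places the `helm` binary in a system path; inspect it before piping to `bash` if that is a concern.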
    </item>
  </channel>
</rss>
