ArgoCD CLI failed to invoke grpc call

When running the argocd CLI, the following warning message is shown:

➜ argocd app list
WARN[0000] Failed to invoke grpc call. Use flag --grpc-web in grpc calls. To avoid this warning message, use flag --grpc-web.
NAME  CLUSTER  NAMESPACE  PROJECT  STATUS  HEALTH  SYNCPOLICY  CONDITIONS  REPO  PATH  TARGET

To suppress this warning message, re-run the login with the --grpc-web option. For example:

➜ argocd login argocd.apps.tuaf.local.cluster --grpc-web
Username: admin
Password:
'admin:login' logged in successfully
Context 'argocd.apps.tuaf.local.cluster' updated

➜ argocd app list
NAME  CLUSTER  NAMESPACE  PROJECT  STATUS  HEALTH  SYNCPOLICY  CONDITIONS  REPO  PATH  TARGET

Reference: ...
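Logging in with --grpc-web persists the option in the CLI's local config. As a rough sketch only (the default path ~/.config/argocd/config and the exact field layout are assumptions about the CLI's YAML config, not taken from the post), the resulting entry might look like:

$ cat ~/.config/argocd/config
# Sketch only: assumed layout; the relevant part is grpc-web: true on the server entry
contexts:
- name: argocd.apps.tuaf.local.cluster
  server: argocd.apps.tuaf.local.cluster
  user: argocd.apps.tuaf.local.cluster
current-context: argocd.apps.tuaf.local.cluster
servers:
- grpc-web: true
  server: argocd.apps.tuaf.local.cluster
users:
- auth-token: <REDACTED>
  name: argocd.apps.tuaf.local.cluster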

March 11, 2025 · 1 min · 97 words · kenno

Setting Retention for Kube Prometheus Stack

When deploying the prometheus-community/kube-prometheus-stack in a k8s cluster, the default retention for Prometheus scraped data is 10d at the time of this writing (Helm chart kube-prometheus-stack-69.4.1). As a result, the Longhorn storage fills up fast, causing Prometheus to crash. So I decided to change the retention period to 3 days and also set the retention size to 4GB. To do this, update the values file for the Helm chart with the following values: ...
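As a rough sketch of the change (the keys sit under the chart's prometheus.prometheusSpec values; the release name kube-prometheus-stack, the monitoring namespace, and the values.yaml file name are assumptions, not from the post):

$ cat values.yaml
# Sketch only: cap retention at 3 days or 4GB of TSDB data, whichever is hit first
prometheus:
  prometheusSpec:
    retention: 3d
    retentionSize: 4GB

$ helm upgrade kube-prometheus-stack prometheus-community/kube-prometheus-stack \
    -n monitoring -f values.yaml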

February 25, 2025 · 1 min · 158 words · kenno

Upgrading Longhorn with Helm

For my test Kubernetes cluster, Longhorn is used as the persistent block storage. The installed version is 1.6.2, and version 1.7.0 is now available. So today, I'm going to share how to perform the Longhorn upgrade using Helm. I'm going to try to do this live as usual. Of course, I already created snapshots of all my K8s nodes in case things go south. First, let's verify the currently installed version of Longhorn [1]. ...
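As a minimal sketch of the Helm-based upgrade path (the longhorn release name, the longhorn/longhorn chart, and the longhorn-system namespace are assumptions based on a typical Longhorn install; the preview does not confirm them):

$ helm list -n longhorn-system                  # check the currently installed chart version
$ helm repo update
$ helm upgrade longhorn longhorn/longhorn -n longhorn-system --version 1.7.0
$ kubectl -n longhorn-system get pods -w        # watch the Longhorn components roll over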

August 23, 2024 · 4 min · 647 words · kenno

Deploy metrics-server in Kubernetes using Helm

This is a short note on how to install metrics-server using Helm on my k8s cluster.

$ kubectl version
Client Version: v1.29.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.3

$ helm version --short
v3.14.4+g81c902a

Before we can install the chart, we need to add the metrics-server repo to Helm.

$ helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/

Update the repo:

$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "metrics-server" chart repository
Update Complete. ⎈Happy Helming!⎈

I also need to make a small customization to the chart's default values, since my k8s cluster uses self-signed certificates. ...
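The preview is cut off before the customization itself. As a rough sketch of the common workaround for self-signed kubelet certificates (the args values key and the --kubelet-insecure-tls flag come from the metrics-server chart and component; the values.yaml file name and kube-system namespace are assumptions):

$ cat values.yaml
# Sketch only: skip kubelet TLS verification for self-signed certs
args:
  - --kubelet-insecure-tls

$ helm upgrade --install metrics-server metrics-server/metrics-server \
    -n kube-system -f values.yaml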

April 24, 2024 · 2 min · 262 words · kenno

How to deploy Uptime Kuma on Kubernetes

I've been running Uptime Kuma in a podman container for a long time with no problem. Recently, though, I've been running a mini Kubernetes cluster at home, and decided to move this container to the same cluster. The following documents how I deploy Uptime Kuma on a Kubernetes cluster. As I'm new to the Kubernetes world, I broke the tasks down into 6 pieces:

$ ls -1
1.namespace.yaml
2.service.yaml
3.pvc.yaml
4.deployment.yaml
5.tls-secret.yaml
6.ingress.yaml

The first step is to create a new namespace for this application or deployment. It'll be called 'uptime-kuma'.

$ cat 1.namespace.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: uptime-kuma

$ kubectl apply -f ./1.namespace.yaml

Create a new service, which will listen on port 3001.

$ cat 2.service.yaml
apiVersion: v1
kind: Service
metadata:
  name: uptime-kuma-svc
  namespace: uptime-kuma
spec:
  selector:
    app: uptime-kuma
  ports:
    - name: uptime-kuma
      port: 3001
      targetPort: 3001
      protocol: TCP

Next is to create a persistent volume claim for Uptime Kuma's app data. Based on the current usage of my existing podman container, 512 MiB is plenty. In my cluster, I use Longhorn as the persistent block storage.

$ cat 3.pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uptime-kuma-pvc
  namespace: uptime-kuma
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 512Mi

Here is the deployment file.

$ cat 4.deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma
  namespace: uptime-kuma
spec:
  selector:
    matchLabels:
      app: uptime-kuma
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma:1.23.11
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3001
              name: web-ui
          resources:
            limits:
              cpu: 200m
              memory: 512Mi
            requests:
              cpu: 50m
              memory: 128Mi
          livenessProbe:
            tcpSocket:
              port: web-ui
            initialDelaySeconds: 60
            periodSeconds: 10
          readinessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: web-ui
            initialDelaySeconds: 30
            periodSeconds: 10
          volumeMounts:
            - name: data
              mountPath: /app/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: uptime-kuma-pvc

(Optional) I want to use SSL when accessing the web UI of Uptime Kuma, so I need to create a new TLS secret.

$ cat 5.tls-secret.yaml
apiVersion: v1
data:
  tls.crt: VERY-LONG-BASE-64-ENCODED-STRING
  tls.key: ANOTHER-LONG-BASE-64-ENCODED-STRING
kind: Secret
metadata:
  name: wildcard-tls-secret
  namespace: uptime-kuma
type: kubernetes.io/tls

The last file, 6.ingress.yaml, contains the Ingress manifest for ingress-nginx, which allows me to access Uptime Kuma at https://uptime.home-lab-internal-domain.com:

$ cat 6.ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: uptime-kuma-ingress
  namespace: uptime-kuma
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - uptime.home-lab-internal-domain.com
      secretName: wildcard-tls-secret
  rules:
    - host: uptime.home-lab-internal-domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: uptime-kuma-svc
                port:
                  number: 3001

References: ...
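The preview ends at the references, but as a rough sketch (not from the post itself), applying the remaining manifests and verifying the rollout could look like:

$ kubectl apply -f ./2.service.yaml -f ./3.pvc.yaml -f ./4.deployment.yaml \
    -f ./5.tls-secret.yaml -f ./6.ingress.yaml
$ kubectl -n uptime-kuma get pods,svc,pvc,ingress    # everything should come up in the uptime-kuma namespace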

March 15, 2024 · 2 min · 416 words · kenno