#rancher

Continued thread

Here's the interesting thing about that, though: it is *not* currently possible to run an Elemental downstream cluster in Harvester, but it should be possible to deploy a TalosLinux cluster on Harvester - though not as a Rancher downstream cluster, by either provisioning or adoption, since the Rancher agent very much assumes you're running k3s/RKE2. But you could just spin up Talos VMs in Harvester with bridged networking, etc., and it should work.
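For the plain-VM route, the Talos side would just be the usual talosctl bootstrap sequence once the VMs exist. A rough Python sketch of what I mean, assuming three control-plane VMs already booted from the Talos ISO on a bridged network (the cluster name, IPs, and file names are all hypothetical):

```python
# Rough sketch only: cluster name, IPs, and file names below are hypothetical.
import subprocess

CONTROL_PLANE_IPS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # hypothetical bridged-network VM IPs
ENDPOINT = "https://10.0.0.11:6443"                          # hypothetical cluster endpoint

def talosctl(*args: str) -> None:
    """Shell out to talosctl, failing loudly on any error."""
    subprocess.run(["talosctl", *args], check=True)

# Generate controlplane.yaml, worker.yaml, and talosconfig in the working dir.
talosctl("gen", "config", "talos-on-harvester", ENDPOINT)

# Push the machine config to each node; --insecure is expected here, since
# the nodes have no PKI yet at this point.
for ip in CONTROL_PLANE_IPS:
    talosctl("apply-config", "--insecure", "--nodes", ip, "--file", "controlplane.yaml")

# Bootstrap etcd on exactly one node, then fetch a kubeconfig for kubectl.
first = CONTROL_PLANE_IPS[0]
talosctl("--talosconfig", "talosconfig", "--endpoints", first, "--nodes", first, "bootstrap")
talosctl("--talosconfig", "talosconfig", "--endpoints", first, "--nodes", first, "kubeconfig")
```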

#Rancher/#RKE2 #Kubernetes cluster question - I don't need Rancher, but in the past, with my RKE2 clusters, I'd normally deploy Rancher on a single VM using #Docker, just for the sake of having some sort of UI for my cluster(s) if need be - with this setup, I'm relying on importing the downstream (RKE2) cluster(s) into said Rancher deployment. That worked well.
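For anyone curious, that setup is only a couple of commands. A minimal sketch, with the import URL left as a placeholder since Rancher generates the real manifest URL when you register a cluster in the UI:

```python
# Sketch of the single-VM Rancher + imported downstream cluster setup.
# The import URL is a placeholder: Rancher generates the real one in the UI.
import subprocess

# 1) On the standalone VM: run Rancher itself in Docker
#    (--privileged is required by recent rancher/rancher images).
subprocess.run([
    "docker", "run", "-d", "--restart=unless-stopped",
    "-p", "80:80", "-p", "443:443",
    "--privileged",
    "rancher/rancher:latest",
], check=True)

# 2) With a kubeconfig pointed at the downstream RKE2 cluster: apply the
#    import manifest from Rancher's "Import Existing" flow.
IMPORT_MANIFEST_URL = "https://rancher.example.com/v3/import/<generated>.yaml"  # placeholder
subprocess.run(["kubectl", "apply", "-f", IMPORT_MANIFEST_URL], check=True)
```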

This time round though, I tried deploying Rancher on the cluster itself, instead of on an external VM, using #Helm. Rancher's pretty beefy and heavy to deploy even with a single replica, and from my limited testing I found that it's easier to deploy when your cluster is new and doesn't have many workloads running yet.
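The chart install itself is short; roughly this, assuming cert-manager is already on the cluster (the chart expects it for its default TLS setup) and with the hostname as a placeholder:

```python
# Sketch of the in-cluster Rancher install via the Helm chart.
# Assumes cert-manager is already installed; hostname is a placeholder.
import subprocess

def helm(*args: str) -> None:
    subprocess.run(["helm", *args], check=True)

helm("repo", "add", "rancher-latest", "https://releases.rancher.com/server-charts/latest")
helm("repo", "update")

# replicas=1 keeps it to a single (still beefy) pod; the chart defaults to 3.
helm("install", "rancher", "rancher-latest/rancher",
     "--namespace", "cattle-system", "--create-namespace",
     "--set", "hostname=rancher.example.com",  # placeholder hostname
     "--set", "replicas=1")
```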

What I'm curious about though are these errors - my cluster's fine, and I'm not seeing anything wrong with it, but ever since deploying Rancher a few days ago I've constantly been seeing Liveness/Readiness probe failed errors on all 3 of my master nodes (periodically most of the time, not all at once) - the same error also seems to include etcd failed: reason withheld. What does it mean, and how do I "address" it?
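For what it's worth, one way I know of to dig further is to hit the apiserver's health endpoints with verbose output, which lists each named check (etcd included) as pass/fail; as far as I can tell the actual error text only lands in the kube-apiserver logs, never in the HTTP response. A sketch with the official Python client, equivalent to kubectl get --raw '/readyz?verbose':

```python
# Sketch: ask the apiserver which named readiness check is failing.
# Needs a kubeconfig with permission to read the health endpoints.
from kubernetes import client, config

config.load_kube_config()  # uses the current kubeconfig context
api = client.ApiClient()

# /readyz?verbose returns one "[+]name ok" / "[-]name failed" line per check
# (etcd, etcd-readiness, informer sync, ...) instead of a bare "ok".
data, status, _headers = api.call_api(
    "/readyz", "GET",
    query_params=[("verbose", "true")],
    auth_settings=["BearerToken"],
    response_type="str",
)
print(status)
print(data)
```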

Uh... one of my #Proxmox nodes exploded... wtf 😂

I was running the #Rancher cleanup script to remove Rancher from my #Kubernetes cluster (I was experimenting with something), and left it running. After a while I noticed that the cluster wasn't reachable. Then I checked Proxmox and saw that the node (which is also the one hosting the primary master node of my cluster) was offline.

I went to the room and saw that the PC's power button's lights weren't on and the fans weren't running. I pressed it, and the fans started running for a while, then it sort of went quiet. I pressed it again, and then something just popped within the PC and all the motherboard lights went off.

I've no fucking idea what happened, or what damage was done (I really hope it's just the PSU), but I'm just gonna let it sit and check it later :blobfoxangrylaugh:

Continued thread

Got through a bunch of stuff without incident, until Elemental and a Dell got in a fight and grub2 lost. Interesting that the Gigabyte machines in the same cluster didn't have any issues with the same upgrade.

Hopefully a reprovision from an ISO boot is all it needs.

Continued thread

Deployed a #Rancher #k3s cluster with #Harvester, added a project/namespace for Jenkins, generated the kubeconfig with Rancher Manager, and then added it as a cloud provider in Jenkins. It was that easy. I made a basic pod template that runs a #Fedora image in the GUI, then created a Jenkinsfile in a #forgejo #git repo that ran 6 parallel tasks. #Jenkins picked it up, and it was fun to watch k8s pull the Fedora images and do its thing from there.
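If anyone wants to watch that from a terminal instead of the Rancher UI, a small sketch with the official Python client streams the agent pod events as the parallel stages run (the "jenkins" namespace is an assumption; match it to wherever your pod template schedules agents):

```python
# Tiny sketch: stream pod events while the parallel build runs (Ctrl-C to stop).
# The "jenkins" namespace is an assumption; match it to your pod template.
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace="jenkins"):
    pod = event["object"]
    print(f'{event["type"]:10} {pod.metadata.name:50} {pod.status.phase}')
```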