Helm how to delete bad deployment?
At some point everyone screws things up, so how do you fix it? I had done a Ceph deployment using Helm on our Kubernetes cluster, but then realized I had missed a setting, and deleted the namespace, which in turn removed all the pods out from underneath Helm. When I then tried to delete the release properly with
$ helm delete --purge ceph
it didn’t work; the command kept timing out with the error
Error: transport is closing
I could see that Helm still knew about my Ceph test deployment; it hadn’t been removed.
$ helm list
NAME          REVISION  UPDATED                   STATUS    CHART               NAMESPACE
ceph          1         Sun Mar 18 03:03:41 2018  DEPLOYED  ceph-0.1.0          ceph
cert-manager  1         Tue Mar 13 02:29:27 2018  DEPLOYED  cert-manager-0.2.2  kube-system
So, how do I get Helm to forget about an installation whose namespace I had already deleted without first doing a helm delete?
There’s a ConfigMap still stored by Helm in Kubernetes, which we can delete.
$ kubectl get cm --all-namespaces
NAMESPACE     NAME                                 DATA  AGE
kube-public   cluster-info                         2     15d
kube-system   calico-config                        3     15d
kube-system   ceph.v1                              1     1d
kube-system   cert-manager.v1                      1     6d
kube-system   coredns                              1     6d
kube-system   extension-apiserver-authentication   6     15d
kube-system   kube-proxy                           2     15d
kube-system   kubeadm-config                       1     15d
kube-system   kubernetes-dashboard-settings        1     3d
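As a side note, Tiller labels its release ConfigMaps with OWNER=TILLER, so (assuming the default Tiller namespace of kube-system) you can list just the Helm release state rather than every ConfigMap in the cluster:

$ kubectl get cm -n kube-system -l OWNER=TILLER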
You can see the ceph.v1 ConfigMap. If we remove that object, Helm will forget about the installation I screwed up.
$ kubectl delete cm ceph.v1 -n kube-system
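If the release had been upgraded a few times there would be one ConfigMap per revision (ceph.v1, ceph.v2, and so on). As an alternative to deleting them one by one, Tiller also sets a NAME label on each release ConfigMap, so a label selector can remove them all in one go:

$ kubectl delete cm -n kube-system -l OWNER=TILLER,NAME=ceph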
You’ll see that it’s deleted now.
$ helm list
NAME          REVISION  UPDATED                   STATUS    CHART               NAMESPACE
cert-manager  1         Tue Mar 13 02:29:27 2018  DEPLOYED  cert-manager-0.2.2  kube-system
We should now be good to redeploy my Ceph test once again.
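For example (the chart path, release namespace, and values file here are just placeholders for whatever was used the first time):

$ helm install --name ceph --namespace ceph ./ceph -f values.yaml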