Using kubectl on your local workstation

Updated October 27, 2018: Added link to follow-up article on creating a Service Account with a Cluster Role Binding instead of using the deployment token.

Now that we’ve got our Kubernetes cluster running, you’ll need to give your developers or admins access so they can remotely control their workloads in the cluster.

I’ve got a macOS laptop, so these are the steps I went through to set up my local workstation for controlling our production Kubernetes cluster.

Install kubectl

First, download and install kubectl following the official setup instructions

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
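
If you'd rather pin a specific release than track stable.txt, the same URL pattern works with an explicit version (v1.12.2 here is just an example; in general, keep the client within one minor version of your cluster):

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.12.2/bin/darwin/amd64/kubectl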

Make sure it’s executable

chmod +x ./kubectl

Then move it to a common location in your PATH environment

sudo mv ./kubectl /usr/local/bin/kubectl
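
To confirm the binary is on your PATH and runs, you can print the client version; this works without any cluster connection:

kubectl version --client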

Now that you have the executable, the next step is setting up remote access to the cluster.

Configure kubectl context

During this process you will create a ~/.kube/config settings file, which kubectl uses to remotely access the cluster server, or multiple clusters.
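
By default kubectl reads ~/.kube/config; if you keep your config somewhere else, you can point kubectl at it with the KUBECONFIG environment variable:

$ export KUBECONFIG=$HOME/.kube/config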

If you have multiple clusters you can switch between them; here’s my example of a dev and a prod cluster and the steps I went through.  Note that I’m using the deployment-controller-token right now, which gives me access, but you should be creating a ServiceAccount with ClusterRole permissions to properly manage each user’s access to the cluster.

So let’s get started.  First we need to log into the dev cluster and get the API URL.

$ kubectl cluster-info
Kubernetes master is running at https://10.81.236.201:6443
Heapster is running at https://10.81.236.201:6443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://10.81.236.201:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Make a note of the “running at” URL; we’ll need that in a minute.
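
If you’d rather not copy it by hand, while you’re logged into the cluster you can also pull the server URL straight out of the active kubeconfig (a small convenience, assuming your kubectl version supports jsonpath output):

$ kubectl config view --minify -o jsonpath='{.clusters[].cluster.server}'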

Next we need to get a token. This is where I’m using the deployment-controller-token; ideally this should be a token created for the user with cluster role permissions.

$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}'
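
To avoid copy/paste mistakes, you can capture the same pipeline into a shell variable (TOKEN is just a local variable name I’ve picked here):

$ TOKEN=$(kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}')
$ echo $TOKEN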

Now copy that token, and we’ll use it in the next couple of steps.

First we create the cluster entry with the name development

$ kubectl config set-cluster development --server=YOUR_MASTER_URL --insecure-skip-tls-verify=true
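
Note that --insecure-skip-tls-verify=true skips verification of the API server’s certificate. If you can copy the cluster’s CA certificate down to your workstation (kubeadm clusters keep it at /etc/kubernetes/pki/ca.crt on the master; your path may differ), a verified variant looks like this:

$ kubectl config set-cluster development --server=YOUR_MASTER_URL --certificate-authority=./ca.crt --embed-certs=true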

Add the credentials, which include the token.

$ kubectl config set-credentials admin-dev-token --token=YOUR_DEV_TOKEN

Create a context

$ kubectl config set-context admin-dev --cluster=development --namespace=default --user=admin-dev-token

Double-check your configuration.

$ kubectl config view
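
After the steps above, the output should be shaped roughly like this (kubectl redacts the token unless you pass --raw, and current-context stays empty until we switch below):

apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: YOUR_MASTER_URL
  name: development
contexts:
- context:
    cluster: development
    namespace: default
    user: admin-dev-token
  name: admin-dev
current-context: ""
kind: Config
preferences: {}
users:
- name: admin-dev-token
  user:
    token: REDACTED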

Now switch to the context so we can issue commands

$ kubectl config use-context admin-dev

Test by getting pods

$ kubectl get pods
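
If that errors out, first confirm you’re on the context you expect and that the node list comes back:

$ kubectl config current-context
$ kubectl get nodes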

Now add a second context for the production cluster.

$ kubectl config set-cluster production --server=https://10.230.107.137:6443 --insecure-skip-tls-verify=true
$ kubectl config set-credentials admin-prod-token --token=YOUR_PROD_TOKEN
$ kubectl config set-context admin-prod --cluster=production --namespace=default --user=admin-prod-token
$ kubectl config view
$ kubectl config use-context admin-prod

Now you have two contexts from which you can issue commands remotely from your workstation.
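
You can also skip switching entirely for one-off commands, since kubectl accepts a global --context flag:

$ kubectl --context=admin-dev get pods
$ kubectl --context=admin-prod get pods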

See the follow-up article on creating a cluster role binding for creating individual user tokens with access to either a namespace or the full cluster.
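
As a rough sketch of that approach (the names jane and jane-cluster-admin are hypothetical, and cluster-admin is the broadest role, so in practice you’d usually bind something narrower):

$ kubectl create serviceaccount jane -n kube-system
$ kubectl create clusterrolebinding jane-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:jane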