Overview
This guide walks you through the complete process of securing microservices in Kubernetes using the cert-manager CSI driver and GlobalSign's Atlas plugin. With this setup, certificates are mounted into pods automatically, without creating intermediate Kubernetes Secret objects. By the end of this article, you will know how to:
- Set up IAM roles and permissions in AWS
- Create and configure an EKS cluster
- Install cert-manager and Atlas plugin
- Install and configure the CSI driver
- Deploy pods and mount TLS certificates directly
- Enable secure HTTPS communication between microservices
Prerequisites
Ensure the following tools are installed and configured on your system:
- AWS account
- Nginx Ingress
- One valid domain name
- kubectl
- Helm package manager
- cert-manager and its CRDs
- cert-manager-Atlas plugin
What is the cert-manager csi-driver?
csi-driver is a Container Storage Interface (CSI) driver plugin for Kubernetes that works alongside cert-manager. Pods that mount a csi-driver volume request certificates from cert-manager without a Certificate resource having to be created. The certificates are mounted directly into the pod, with no intermediate Secret being created, as the sketch below illustrates.
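For illustration only, this is the shape of a csi.cert-manager.io volume inside a pod spec; the issuer and common name here are placeholders, and full, working manifests appear later in this guide:
volumes:
  - name: tls
    csi:
      driver: csi.cert-manager.io
      readOnly: true
      volumeAttributes:
        csi.cert-manager.io/issuer-name: my-issuer                        # placeholder issuer
        csi.cert-manager.io/common-name: my-app.default.svc.cluster.local # placeholder name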
Guidelines
- AWS Ubuntu EC2 instance: follow the AWS documentation to create an Ubuntu instance.
- Create a user in IAM.
a. Go to the IAM console and select Users.
b. Click Create user.
c. Enter a name for the user.
d. Select the checkbox "Provide user access to the AWS Management Console".
i. Select "I want to create an IAM user".
ii. For Console password, choose Autogenerated password or Custom password, as you prefer.
iii. Click Next in the bottom-right corner.
e. In Set permissions, choose:
i. Select Add user to a group if you already have policies defined for a particular user group; otherwise choose Attach policies directly.
ii. In Permissions policies, attach the following policies to the user. Note: these broad policies are for example purposes only; grant permissions based on your own requirements.
- AdministratorAccess
- AmazonEC2FullAccess
- AmazonEKSClusterPolicy
- AmazonEKSServicePolicy
- AmazonEventBridgeFullAccess
- AmazonRoute53FullAccess
- AmazonVPCFullAccess
- AWSCloudFormationFullAccess
- IAMFullAccess
iii. Click Next in the bottom-right corner.
iv. Review your User Permissions and Policies.
v. Retrieve the console sign-in URL and password.
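If you prefer to script this instead of clicking through the console, the same user can be created with the AWS CLI. This is a sketch: the user name and password are placeholders, and it assumes an environment where the CLI is already configured with IAM permissions:
aws iam create-user --user-name eks-demo-user
aws iam create-login-profile --user-name eks-demo-user --password 'ChangeMe123!' --password-reset-required
aws iam attach-user-policy --user-name eks-demo-user --policy-arn arn:aws:iam::aws:policy/AdministratorAccess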
- Provide programmatic access to your user.
a. Go to IAM, then Users.
b. Select the user you created.
c. Select the Security credentials tab.
d. Under Access keys, choose Create access key.
e. For Use case, select AWS CLI.
f. Click Next, then Create access key.
g. Copy the access key ID and secret access key shown here; these are your programmatic access keys.
- Connect to the AWS EC2 instance you created in Step 1.
- Once you are logged in to your instance, install the following tools:
a. Install Unzip.
sudo apt install unzip
b. Install the AWS CLI.
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
- Configure the AWS CLI with the following commands, using the programmatic access keys created in Step 3.
aws configure
#Enter the access key ID and secret access key.
#Provide the region, e.g., us-east-1.
#Set the output format to "json".
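#(Optional) Verify the CLI is authenticated as the intended user
aws sts get-caller-identity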
#Generate public and private SSH keys
ssh-keygen
- Install Helm.
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
- Install kubectl and eksctl (tools to manage and interact with the Kubernetes cluster).
a. Install the latest version of kubectl:
curl -LO"https://dl.k8s.io/release/$(curl -L -shttps://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
#make the downloaded file executable
chmod +x kubectl
#Move the executable to the /usr/local/bin
sudo mv kubectl /usr/local/bin
b. Install the latest version of eksctl:
#for ARM systems, set ARCH to: arm64, armv6 or armv7
ARCH=amd64
PLATFORM=$(uname -s)_$ARCH
curl -sLO "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$PLATFORM.tar.gz"
#(Optional) Verify checksum
curl -sL "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_checksums.txt" | grep $PLATFORM | sha256sum --check
tar -xzf eksctl_$PLATFORM.tar.gz -C /tmp && rm eksctl_$PLATFORM.tar.gz
sudo mv /tmp/eksctl /usr/local/bin
- Create the cluster with three worker nodes (EKS manages the control plane for you) using the command below.
eksctl create cluster --name test-cluster --version 1.29 --region eu-west-1 --nodegroup-name linux-nodes --node-type m4.large --nodes 3
It takes around 10 to 15 minutes for the cluster to be ready. After that, you can check its status by running the command below:
eksctl get cluster
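eksctl also writes the kubeconfig for the new cluster, so you can confirm the worker nodes joined and are Ready:
kubectl get nodes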
The cluster is ready when its three worker nodes are running in the eu-west-1 region, spread across its availability zones.
- Install cert-manager and its CRDs.
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.15.0/cert-manager.yaml
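Before continuing, confirm that the cert-manager, cainjector, and webhook pods are all Running:
kubectl get pods -n cert-manager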
- Install GlobalSign's cert-manager-Atlas plugin and its CRD. Once installed, it is ready to handle Atlas certificate requests.
kubectl apply -f https://github.com/globalsign/atlas-cert-manager/releases/download/v0.0.1/install.yaml
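To verify the plugin registered its resources, you can list the CRDs and look for the hvca.globalsign.com group used by the Issuer below (a quick check, assuming that group name):
kubectl get crds | grep hvca.globalsign.com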
- Create a GlobalSign Issuer to issue TLS certificates with the following steps.
a. Create a Secret to store your GlobalSign Atlas account API key and API secret, along with the mTLS certificate and its private key. Note: you can obtain these API credentials from the GlobalSign team.
kubectl create secret generic issuer-credentials --from-literal=apikey=$API_KEY --from-literal=apisecret=$API_SECRET --from-literal=cert="$(cat mTLS.pem)" --from-literal=certkey="$(cat privatekey.pem)" -n cert-manager
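Confirm the Secret exists before creating the Issuer:
kubectl get secret issuer-credentials -n cert-manager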
b. Create the GlobalSign Issuer.
cat <<EOF | kubectl apply -f -
apiVersion: hvca.globalsign.com/v1alpha1
kind: Issuer
metadata:
  name: gs-issuer
  namespace: cert-manager
spec:
  authSecretName: "issuer-credentials"
  url: "https://emea.api.hvca.globalsign.com:8443/v2"
EOF
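Check that the Issuer was accepted; since it lives in the hvca.globalsign.com API group, query it by its full resource name (a sketch; the exact status columns depend on the plugin):
kubectl get issuers.hvca.globalsign.com gs-issuer -n cert-manager
kubectl describe issuers.hvca.globalsign.com gs-issuer -n cert-manager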
Since csi-driver doesn't require creating a Certificate resource, certificates are mounted directly into the pod with no intermediate Secret being created.
- Install csi-driver using Helm.
helm repo add jetstack https://charts.jetstack.io --force-update
helm upgrade cert-manager-csi-driver jetstack/cert-manager-csi-driver \
--install \
--namespace cert-manager \
--wait
- Verify the installation.
kubectl get csidrivers
NAME                  CREATED AT
csi.cert-manager.io   2024-09-06T16:55:19Z
- After the successful installation of the cert-manager csi-driver, configure the first pod to use a GlobalSign certificate with the definition below it. The pod, my-csi-app-1, mounts the issued tls.crt and tls.key into /usr/local/apache2/conf/tls, via the mount path under .spec.containers[].volumeMounts and the CSI volume attributes under .spec.volumes. Both pods also mount their httpd configuration from a ConfigMap named httpd-config, which must exist first; a sketch follows.
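This guide does not show the httpd-config contents, so the following minimal ConfigMap is an assumption: it serves httpd's default page over TLS only, using the certificate and key mounted by the csi-driver. Adapt it to your own httpd configuration:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: httpd-config
  namespace: cert-manager
data:
  httpd.conf: |
    # Minimal httpd.conf sketch: serve the default htdocs over TLS only
    ServerRoot "/usr/local/apache2"
    Listen 443
    LoadModule mpm_event_module modules/mod_mpm_event.so
    LoadModule authz_core_module modules/mod_authz_core.so
    LoadModule unixd_module modules/mod_unixd.so
    LoadModule dir_module modules/mod_dir.so
    LoadModule ssl_module modules/mod_ssl.so
    LoadModule socache_shmcb_module modules/mod_socache_shmcb.so
    User daemon
    Group daemon
    ServerName localhost
    ErrorLog /proc/self/fd/2
    DocumentRoot "/usr/local/apache2/htdocs"
    DirectoryIndex index.html
    SSLEngine on
    # Certificate and key mounted by the cert-manager csi-driver
    SSLCertificateFile "/usr/local/apache2/conf/tls/tls.crt"
    SSLCertificateKeyFile "/usr/local/apache2/conf/tls/tls.key"
EOF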
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: my-csi-app-1
  namespace: cert-manager
  labels:
    app: my-csi-app-1
spec:
  containers:
    - name: httpd
      image: httpd:latest
      ports:
        - containerPort: 443
      volumeMounts:
        - name: tls
          mountPath: "/usr/local/apache2/conf/tls"
        - name: httpd-config
          mountPath: "/usr/local/apache2/conf/httpd.conf"
          subPath: httpd.conf
  volumes:
    - name: tls
      csi:
        driver: csi.cert-manager.io
        readOnly: true
        volumeAttributes:
          csi.cert-manager.io/issuer-kind: Issuer
          csi.cert-manager.io/issuer-group: hvca.globalsign.com
          csi.cert-manager.io/issuer-name: gs-issuer
          csi.cert-manager.io/common-name: my-csi-app-1-service.cert-manager.svc.cluster.local
    - name: httpd-config
      configMap:
        name: httpd-config
EOF
- Create the second pod with the definition below. This starts a pod named my-csi-app-2, configured identically except that its certificate is issued for the common name my-csi-app-2-service.cert-manager.svc.cluster.local.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: my-csi-app-2
  namespace: cert-manager
  labels:
    app: my-csi-app-2
spec:
  containers:
    - name: httpd
      image: httpd:latest
      ports:
        - containerPort: 443
      volumeMounts:
        - name: tls
          mountPath: "/usr/local/apache2/conf/tls"
        - name: httpd-config
          mountPath: "/usr/local/apache2/conf/httpd.conf"
          subPath: httpd.conf
  volumes:
    - name: tls
      csi:
        driver: csi.cert-manager.io
        readOnly: true
        volumeAttributes:
          csi.cert-manager.io/issuer-kind: Issuer
          csi.cert-manager.io/issuer-group: hvca.globalsign.com
          csi.cert-manager.io/issuer-name: gs-issuer
          csi.cert-manager.io/common-name: my-csi-app-2-service.cert-manager.svc.cluster.local
    - name: httpd-config
      configMap:
        name: httpd-config
EOF
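Watch the pods come up; the CSI driver requests a certificate when each pod is scheduled, so a pod stays in ContainerCreating until the Issuer returns one:
kubectl get pods -n cert-manager -w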
a. Once the pods are in the Ready state, it's time to expose them as Services.
This Service is for pod my-csi-app-1, exposed under the name my-csi-app-1-service.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-csi-app-1-service
  namespace: cert-manager
spec:
  selector:
    app: my-csi-app-1
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
  type: ClusterIP
EOF
b. This Service is for pod my-csi-app-2, exposed under the name my-csi-app-2-service.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-csi-app-2-service
  namespace: cert-manager
spec:
  selector:
    app: my-csi-app-2
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
  type: ClusterIP
EOF
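Confirm both Services exist and have endpoints backing them:
kubectl get svc,endpoints -n cert-manager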
- Exec into the second pod:
kubectl exec -it my-csi-app-2 -n cert-manager -- /bin/bash
- Look for the certificates in the directory specified in the pod definition under .spec.containers[].volumeMounts (/usr/local/apache2/conf/tls).
- Test the communication between the two pods.
root@my-csi-app-2:/usr/local/apache2# curl https://my-csi-app-1-service.cert-manager.svc.cluster.local:443
<html><body><h1>It works!</h1></body></html>
This confirms that pod my-csi-app-2 can communicate over HTTPS with pod my-csi-app-1.
- The issued certificate can also be inspected on both pods:
kubectl exec -n cert-manager -it my-csi-app-1 -- openssl x509 -in /usr/local/apache2/conf/tls/tls.crt -text -noout
kubectl exec -n cert-manager -it my-csi-app-2 -- openssl x509 -in /usr/local/apache2/conf/tls/tls.crt -text -noout