
This article provides a detailed guide on how to expose a Kubernetes Service to access an app through a Load Balancer. If you do not have a Kubernetes setup yet, you may read my earlier blog on installing Kubernetes on a Mac with the M1 chip, but this guide works with any Kubernetes distribution. The tasks to expose a Kubernetes Service are as follows:
Create the Kubernetes Deployment file
Provision the Kubernetes Deployment
Expose the Deployment using a Service
Test the app using the external IP of the Service of type LoadBalancer
Clean up the Kubernetes resources
For this simple example, we will use the HTTPD image. In my next blog, we will develop our own microservice using Python; meanwhile, for simplicity, we will stick with HTTPD.

Step 1: Create Kubernetes Deployments File
The first thing we need to create is a Deployment that uses the HTTPD image. To create the Deployment, run the command below.
$ kubectl create deployment httpd --image=httpd --port=80 --replicas=2 --dry-run=client -o yaml > httpd-deployment.yaml
The command tells Kubernetes to create a Deployment named httpd, using the httpd image from Docker Hub, with 2 replicas. The result is written to a file named httpd-deployment.yaml. No actual resources are provisioned because of the --dry-run flag.
If we open the newly created yaml file, it looks something like this. Note that I have already removed some unnecessary values, e.g. the creation timestamp and other null fields.
$ vi httpd-deployment.yaml
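For reference, a cleaned-up version of the generated manifest looks roughly like the following (the exact output may vary slightly depending on your kubectl version):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: httpd
  name: httpd
spec:
  replicas: 2
  selector:
    matchLabels:
      app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - image: httpd
        name: httpd
        ports:
        - containerPort: 80
```

Note the `app: httpd` label: the Deployment's selector uses it to find its Pods, and the Service we create later will use the same label to route traffic.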
Step 2: Provision Kubernetes Deployments
Now that you have the Deployment file, it is time to do the provisioning. Run the command below in the terminal to create the Deployment.
$ kubectl apply -f httpd-deployment.yaml
Check the status of the Deployments using the following command.
$ kubectl get deployments
Check the status of the Pods using the following command.
$ kubectl get pods
You should see something similar to the output below, which means the Deployment was created successfully.
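For illustration, the output should look roughly like this (the Pod names, hash suffixes, and ages are placeholders and will differ in your cluster):

```
$ kubectl get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
httpd   2/2     2            2           45s

$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
httpd-757fb56c8d-8fk2x   1/1     Running   0          45s
httpd-757fb56c8d-q9zwl   1/1     Running   0          45s
```

Two Pods appear because we asked for 2 replicas.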
Step 3: Provision Kubernetes Service
Remember that Kubernetes runs multiple Pods managed by a ReplicaSet. We therefore need a way to access these Pods without having to keep track of each Pod's individual IP. A Service takes care of this by exposing a single IP address that represents the whole set of Pods; in our case we created 2 replicas.
To expose the Deployment we will use a Service of type LoadBalancer. Run the command below to create the yaml file. Take note that the actual Service is not yet provisioned due to the dry-run flag.
$ kubectl expose deployment httpd --port=8080 --target-port=80 --type=LoadBalancer --dry-run=client -o yaml > httpd-service.yaml
This command creates the Service yaml file below. I have already deleted some unnecessary entries, such as null values, for simplicity.
$ vi httpd-service.yaml
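The cleaned-up Service manifest should look roughly like this (again, your kubectl version may emit slightly different fields):

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: httpd
  name: httpd
spec:
  ports:
  - port: 8080        # port the Service listens on
    protocol: TCP
    targetPort: 80    # port the HTTPD containers listen on
  selector:
    app: httpd        # matches the Pods created by the Deployment
  type: LoadBalancer
```

The selector is what ties the Service to the Deployment's Pods: any Pod carrying the `app: httpd` label becomes a backend for this Service.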
To provision the actual service run the command below.
$ kubectl apply -f httpd-service.yaml
To get the status of the service run the command below.
$ kubectl get services -o wide
You should see something similar to the output below if all went well.
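To finish the remaining tasks from the list at the top, test the app through the Service's external IP and then clean up. The values below are placeholders; use the EXTERNAL-IP your load balancer actually assigns (it may show `<pending>` at first):

```
$ kubectl get services -o wide
NAME    TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE   SELECTOR
httpd   LoadBalancer   10.98.42.17    10.0.0.5      8080:31234/TCP   1m    app=httpd

# Test the app through the Load Balancer IP and the Service port (8080).
# Replace 10.0.0.5 with your actual EXTERNAL-IP.
$ curl http://10.0.0.5:8080
<html><body><h1>It works!</h1></body></html>

# Cleanup: delete the Service and the Deployment when you are done.
$ kubectl delete -f httpd-service.yaml
$ kubectl delete -f httpd-deployment.yaml
```

The "It works!" page is the default index served by the stock HTTPD image, so seeing it confirms traffic is flowing through the Load Balancer to the Pods.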
Conclusion: How to Expose a Kubernetes Service
Exposing a Kubernetes Service is straightforward if you follow the step-by-step guide above. As part of this article, I'd like to emphasize that Pods are different from containers. A Pod is the smallest unit of compute in Kubernetes, and containers run inside Pods. There can be many containers running inside a single Pod, and the containers in a Pod can communicate with each other locally. A Deployment manages a set of Pods.
When you start to run multiple Pods via ReplicaSets, accessing them one by one or keeping track of Pod IPs is no longer feasible. That is where the Kubernetes Service comes in. In a nutshell, a Kubernetes Service groups Pods together so that they are accessible through a single IP, whether as a ClusterIP or a LoadBalancer. This matters because Pods are ephemeral: they come and go, and each time a new Pod is provisioned it gets a new IP. Accessing them via a Service is therefore the most efficient approach. The Service handles distributing traffic across the Pods, so you do not have to track Pod lifecycles yourself.