Exposing Kubernetes Services

Exposing Services in Kubernetes (short version)

Kubernetes is a tool for managing containerized applications, designed to make it easy to deploy and scale them. It works with a variety of container technologies like Docker and containerd. In a Kubernetes cluster, your application runs in a Pod. In Kubernetes, Pods are ephemeral: they are temporary resources that are created and destroyed as needed.

When Pods need to interact with other resources in a Kubernetes cluster, they could use the IP addresses the cluster assigns to each Pod. However, this approach would require developers to track and configure Pod IP addresses manually, and because Pods are temporary resources that are constantly created and destroyed, keeping such a mapping up to date is practically impossible. As a result, it is challenging for Pods to communicate with one another using raw IP addresses.

To solve this problem, Kubernetes has a resource called Service, which gives a set of Pods a stable IP address and makes interaction between Pods considerably more reliable. Services also provide a way to expose applications running on a Kubernetes cluster to the outside world, and they handle load balancing and routing traffic to the correct application instance. Services can be exposed using a variety of methods, such as a load balancer or an Ingress resource.

In this guide, you'll learn about Services and their types in Kubernetes, and how to define them using YAML files. By the end of the article, you'll have a good understanding of Services in Kubernetes.

An Overview of ReplicaSets in Kubernetes

We'll use ReplicaSets in this tutorial. ReplicaSets are Pod controllers in Kubernetes: they keep Pods fault tolerant and make them easy to scale up and down. A ReplicaSet ensures that a specific number of Pods (replicas) keeps running in the cluster. To make a ReplicaSet, create a new file my-replicaset.yml and populate it with the following configuration:

my-replicaset.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-pod
  template:
    metadata:
      labels:
        app: my-pod
    spec:
      containers:
      - name: my-server
        image: nginx
        ports:
        - containerPort: 80

Let’s go through the contents of the above YAML file:

  • The first line, apiVersion: apps/v1, specifies the version of the Kubernetes API that should be used to interpret this YAML file.

  • The second line, kind: ReplicaSet, specifies the type of object that this YAML file describes. In this case, it specifies that the file describes a ReplicaSet object.

  • The metadata section provides metadata about the ReplicaSet object. Here, it specifies that the ReplicaSet should be named “my-replicaset”.

  • The spec section specifies the details of the ReplicaSet. Here, it states that the ReplicaSet should manage three replicas of the pod and that Pods are selected based on the "app" label being set to "my-pod".

  • The template section specifies the details of the pod that the ReplicaSet will manage. The pod should have the app: my-pod label and it should contain a container named my-server based on the nginx image.

  • The containerPort field in the container spec of the template specifies the port on which the container can be reached from inside the cluster.

To create the ReplicaSet, open your terminal and run kubectl create -f my-replicaset.yml. Running this command should produce the following output.

>_replicaset.apps/my-replicaset created
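
You can confirm that the ReplicaSet actually spun up three Pods by querying it and listing the Pods by label (a quick check; the generated Pod name suffixes will differ in your cluster):

>_$ kubectl get rs my-replicaset          # DESIRED, CURRENT and READY should all show 3
>_$ kubectl get pods -l app=my-pod        # lists the three Pods created from the template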

Types of Services in Kubernetes

Kubernetes offers four primary service types that are each beneficial for a certain task. The following services are covered in more detail below:

  • ClusterIP Service

  • Headless Service

  • NodePort Service

  • Load Balancer Service

ClusterIP Services

A ClusterIP service exposes a set of Pods on a single virtual IP address that is reachable only from within the cluster; traffic sent to that IP is routed to one of the matching Pods. ClusterIP is the default service type: if you don't specify the type attribute in the YAML file, Kubernetes creates a ClusterIP service automatically.

To create a ClusterIP service, create a YAML file, rs-svc.yml, and add the following configuration:

apiVersion: v1
kind: Service
metadata:
  name: rs-service
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: my-pod

Let’s parse the contents of the YAML file:

  • apiVersion defines the version to be used. The API version must support this kind of resource.

  • metadata is another mandatory field which provides the resource’s fundamental details. In this example, you only enter the resource’s name, but you may also include labels and annotations.

  • The final mandatory part, spec, describes the requirements for the resource. Each resource type has its own specification.

  • spec.ports.port specifies the port on which the Service is exposed inside the cluster; it can take any arbitrary value.

  • spec.ports.targetPort identifies the port that the Pod opens.

  • spec.selector tells the Service which Pods requests should be forwarded to. The Service will send requests only to Pods that carry the label app: my-pod.

To create the service resource, run kubectl create -f rs-svc.yml in your console, with rs-svc.yml containing the above configuration. When the service is created, Kubernetes also creates an Endpoints resource that lists all the endpoints to which requests should be directed.

>_$ kubectl get endpoints
Output
NAME             ENDPOINTS                                      AGE
kubernetes       192.168.49.2:8443                              105d
rs-service       172.17.0.5:80,172.17.0.6:80,172.17.0.7:80      111s

In the above example, there are three endpoints, corresponding to the three Pods in the ReplicaSet. Note that these endpoints are internal to the cluster: the service routes traffic to the Pods through them, but end users cannot reach them directly.

You can check detailed information about the Service by running kubectl describe -f rs-svc.yml, which produces the following output:

Output
Name:             rs-service
Namespace:       default
Labels:          <none>
Annotations:     <none>
Selector:         app=my-pod
Type:             ClusterIP
IP Family Policy: SingleStack
IP Families:      IPv4
IP:               10.103.78.229
IPs:              10.103.78.229
Port:             <unset>  80/TCP
TargetPort:       80/TCP
Endpoints:        172.17.0.5:80,172.17.0.6:80,172.17.0.7:80
Session Affinity: None
Events:           <none>

In the above output, you can see the Events, IPs, Type, and Selector all in one place. This is helpful when you have to examine your Service thoroughly.

To check all the services currently running in your cluster, type kubectl get svc:

>_$ kubectl get svc
Output
NAME         CLUSTER-IP         EXTERNAL-IP     PORT(S)     AGE
kubernetes   10.111.240.1         <none>        443/TCP     30d
rs-service   10.96.206.29         <none>        80/TCP      6m

In the above output, you can see that EXTERNAL-IP is set to <none>, which means the service can only be used from inside the cluster: it is accessible to other Pods and resources within the cluster, but not to external clients. You'll soon learn how to expose a service externally.
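
You can verify this in-cluster access with a short-lived Pod that calls the service by its DNS name (a quick sketch: it assumes cluster DNS is running, which is standard, and uses the public curlimages/curl image):

>_$ kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -s http://rs-service

This should print the default nginx welcome page served by one of the Pods behind the service.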

You can delete the newly created service with the kubectl delete -f rs-svc.yml command, which returns a confirmation that the service has been deleted.

Every request to IP:<PORT> from inside the cluster is directed to the Service resource, which then forwards it to <TARGET_PORT> (in this case 80) on one of the Pods with matching labels.

When Do We Use ClusterIP Services?

Here are some common use cases for ClusterIP services in Kubernetes:

  • Load balancing traffic to a group of identical pods

  • Exposing a service to other services within the cluster

  • Providing a stable IP address and DNS name for a set of pods that can be used by other services within the cluster

  • Providing a single entry point for accessing multiple services within the cluster

Multiport Services

A ClusterIP service can also expose more than one port; this configuration is known as a multiport service. A multiport service maps several ports through a single Service object, which is useful when a single application or container exposes multiple services or APIs on different ports.

By creating a multiport service, you can map these different ports to a single service and make them accessible using a single IP address and DNS name. This simplifies the process of accessing the services and allows you to easily scale them up or down as needed.

To define a multiport service, create a YAML file, mul-svc.yml, and add the following configuration:

mul-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: rs-service
spec:
  ports:
  - name: server-main
    protocol: TCP
    port: 80
    targetPort: 80
  - name: server-logs
    protocol: TCP
    port: 25
    targetPort: 25
  selector:
    app: my-pod

In the above configuration, you can see that multiple ports are defined in the spec.ports field. This field contains an array of objects, each of which defines the port number, protocol, and target port for a specific service. Note that defining the name field is required for each port you define in multiport services. To create the Service from the above file, run the following command in your terminal:

>_kubectl create -f mul-svc.yml

After running the above command, your application can be reached on both port 80 and port 25 through the multiport service. Multiport services provide a convenient way to expose several ports of a single application or container, whereas single-port services would force you to create a separate Service for each port you want to expose.

Requests for both port 80 and port 25 are routed through the same service to the Pods.
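
One way to double-check which ports the Service registered is to print its port list with kubectl's jsonpath output (a quick sketch; the names and numbers come from mul-svc.yml above):

>_$ kubectl get svc rs-service -o jsonpath='{range .spec.ports[*]}{.name}{" -> "}{.port}{"\n"}{end}'

This should print server-main -> 80 and server-logs -> 25. Keep in mind that the Pods from my-replicaset.yml run plain nginx, which listens only on port 80, so the port 25 mapping here is illustrative; for it to serve traffic, the container would need a process listening on targetPort 25.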

Headless Services

You've learned that a ClusterIP Service routes each request to any one of the Pods behind it. A Headless Service operates differently: rather than load balancing requests across Pods, it enables clients to talk to a specific Pod directly. It accomplishes this by setting the Service's clusterIP to None, so a DNS lookup for the Service name returns the individual Pod IPs instead of a single Service IP. This also means the service has no stable virtual IP of its own.

When you need to access stateful applications such as databases, a headless service is employed. Accessing random Pods in a stateful application could lead to data inconsistencies, which is undesirable in a system.

These services are typically used when you want to reach the individual pods within a service directly, rather than going through the service as a whole via its load-balanced IP. A Headless Service is created by setting the clusterIP field to None in the Service YAML file (here, in a new file, rs-svc-headless.yml):

rs-svc-headless.yml
apiVersion: v1
kind: Service
metadata:
  name: rs-service-headless
spec:
  clusterIP: None
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: my-pod

Note that we’ve set the newly added spec.clusterIP field to None. After creating this service using kubectl create -f rs-svc-headless.yml, run the following command in your terminal.

>_$ kubectl get svc

Running the above command gives the following output:

Output
NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP    PORT(S)       AGE
kubernetes                ClusterIP   10.96.0.1         <none>       443/TCP       117d
rs-service                ClusterIP   10.103.78.229     <none>       80/TCP        3m51s
rs-service-headless       ClusterIP   None              <none>       80/TCP        82s

As seen in the output, Kubernetes did not allocate a CLUSTER-IP, as requested in the YAML file, so the column shows None. In most cases, this configuration is used alongside a regular ClusterIP Service, which still handles load balancing and normal communication between the Pods.

Headless services are often used with stateful Pods, which hold data that must stay consistent across replicas. Such Pods need to be addressed directly and consistently, and a headless service lets you reach each individual Pod without going through a load balancer, which helps keep the data consistent across all replicas.
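
You can observe this DNS behaviour with a short-lived Pod (a sketch assuming cluster DNS is enabled and the services live in the default namespace; the headless lookup should return one record per Pod rather than a single Service IP):

>_$ kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup rs-service-headless.default.svc.cluster.local

Compare this with the same lookup for rs-service, which resolves to its single ClusterIP (10.103.78.229 in the earlier output).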

You can also get detailed information about the service by running the following command:

>_$ kubectl describe -f rs-svc-headless.yml

Here’s the output:

Output
Name:                  rs-service-headless
Namespace:             default
Labels:                <none>
Annotations:           <none>
Selector:              app=my-pod
Type:                  ClusterIP
IP Family Policy:      SingleStack
IP Families:           IPv4
IP:                    None
IPs:                   None
Port:                  <unset>  80/TCP
TargetPort:            80/TCP
Endpoints:             172.17.0.5:80,172.17.0.6:80,172.17.0.7:80
Session Affinity:      None
Events:                <none>

In the above output, you can see that the IP and IPs fields are set to None: a headless Service has no cluster IP of its own, and DNS resolves its name directly to the Pod endpoints.

In a typical setup, a request to write data to the database (a stateful Pod) first goes through a DNS lookup for the headless service and is then sent to the specific database Pod. This is done to maintain consistency between the database replicas.

NodePort Services

A NodePort service in Kubernetes is an extension of the ClusterIP service type. Like a ClusterIP service, it allows the Pods within a cluster to communicate with each other. However, a NodePort service also exposes the application on a specific port (for example, 30080) on every node in the cluster, so clients outside the cluster can connect to it at <NodeIP>:<NodePort>.

Note that exposing ports with the NodePort service type is not considered secure: anyone who can reach a node's IP address can reach the exposed service, which risks giving unauthorized users access to the service and its data.

To create a NodePort service, create a new file, rs-np-svc.yml, and populate it with the following configuration.

rs-np-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: rs-service-nodeport
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30256
  selector:
    app: my-pod

In the above file, the type field sets the service type to NodePort, and the nodePort field exposes port 30256 for external use. Note that nodePort can take any value in the range 30000 to 32767. The port field is still required because a NodePort service automatically creates a ClusterIP service underneath it to load balance requests among the Pods.

Now create the above service by running kubectl create -f rs-np-svc.yml. To check if the Service has been created, run the following command in your terminal:

>_$ kubectl get svc
Output
NAME                   TYPE            CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes             ClusterIP       10.96.0.1        <none>        443/TCP        117d
rs-service             ClusterIP       10.103.78.229    <none>        80/TCP         7m25s
rs-service-headless    ClusterIP       None             <none>        80/TCP         4m56s
rs-service-nodeport    NodePort        10.105.4.133     <none>        80:30256/TCP   28s

In the above output, you can see that Kubernetes created the new NodePort service. You can also run the following command to get detailed information about it:

>_$ kubectl describe -f rs-np-svc.yml
Output
Name:                    rs-service-nodeport
Namespace:               default
Labels:                  <none>
Annotations:             <none>
Selector:                app=my-pod
Type:                    NodePort
IP Family Policy:        SingleStack
IP Families:             IPv4
IP:                      10.105.4.133
IPs:                     10.105.4.133
Port:                    <unset>  80/TCP
TargetPort:              80/TCP
NodePort:                <unset>  30256/TCP
Endpoints:               172.17.0.5:80,172.17.0.6:80,172.17.0.7:80
Session Affinity:        None
External Traffic Policy: Cluster
Events:                  <none>

To summarize how a NodePort service works: every node in the cluster opens the port specified in spec.ports.nodePort and forwards incoming requests to the internal ClusterIP service, which in turn forwards them to the Pods.
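
With the service in place, the application can be reached from outside the cluster at any node's address on port 30256. A minimal check might look like this (a sketch; replace <NODE_IP> with the INTERNAL-IP or EXTERNAL-IP of one of your nodes, and make sure no firewall blocks the port):

>_$ kubectl get nodes -o wide        # note a node's INTERNAL-IP or EXTERNAL-IP
>_$ curl http://<NODE_IP>:30256      # the nodePort defined in rs-np-svc.yml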

LoadBalancer Services

LoadBalancer services in Kubernetes expose a service to external traffic through a load balancer. A load balancer is a network appliance that distributes incoming traffic among multiple servers or nodes in a cluster. This can improve the performance, reliability, and scalability of your services by distributing the workload among multiple instances and allowing you to easily add or remove nodes as needed.

Additionally, LoadBalancer services automatically detect and route traffic away from unhealthy pods. This helps ensure that the service remains available and responsive, even in times of failures and disruptions.

To create a LoadBalancer Service, create a new file svc-load.yml and add the following to the manifest file:

svc-load.yml
apiVersion: v1
kind: Service
metadata:
  name: rs-service-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30056
  selector:
    app: my-pod

In the above file, spec.type is set to LoadBalancer, and the rest of the configuration mirrors the NodePort service you defined earlier.

This is because the LoadBalancer service is an extension of the NodePort service. Like a NodePort service, a LoadBalancer service exposes the application on a specific port on each node in the cluster. However, it also creates a load balancer with a stable IP address and DNS name that can be used to access the service from outside the cluster. When a LoadBalancer service is created, Kubernetes automatically creates a NodePort service to work in conjunction with it.

Now create the above service by running the below command:

>_kubectl create -f svc-load.yml

After running the above command, you can verify that the LoadBalancer service was created with kubectl get svc, which produces the following output:

Output
NAME                     TYPE            CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
kubernetes               ClusterIP       10.96.0.1        <none>          443/TCP        126d
rs-service               ClusterIP       10.103.78.229    <none>          80/TCP         8d
rs-service-headless      ClusterIP       None             <none>          80/TCP         8d
rs-service-loadbalancer  LoadBalancer    10.110.210.241   145.168.25.58   80:30056/TCP   60s

In the above output, you can see that the LoadBalancer service also exposed your application through an EXTERNAL-IP. Note that the LoadBalancer service type relies on a cloud provider such as AWS or GCP to provision the load balancer; minikube does not support it, so the EXTERNAL-IP column would show <pending> there. Overall, LoadBalancer services are the natural choice when exposing services to external traffic.

To summarize the flow: an external request first reaches the load balancer, which forwards it to the nodePort on one of the nodes; the internal service then forwards it to one of the Pods listening on targetPort.
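
As a final check, the application should now answer on the load balancer's external address (using the EXTERNAL-IP from the output above; your address will differ, and this step only applies where a cloud load balancer was actually provisioned):

>_$ curl http://145.168.25.58

You should see the same nginx welcome page as in the in-cluster test earlier.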

Configuring Services - What You Should Know

If services in Kubernetes are not configured properly, we may run into problems such as:

  • The containers within the pod may not be able to communicate with each other or with other pods in the cluster.

  • Pods may not be able to access the resources they need, such as memory, CPU, or storage.

  • Your application may not be able to reach the outside world, which can prevent it from accessing external services or being accessed by users.

  • The overall performance of the Kubernetes cluster may be degraded, which can affect the availability and reliability of the applications running on it.

It’s important to be careful when setting up services in Kubernetes because they act as the gateway to your application. If they are not configured properly, they could disrupt communication between pods and harm your system.
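
The most common symptom of a misconfigured Service is an empty endpoint list, which usually means the Service's selector does not match any Pod labels. A quick way to check, sketched with the names used in this guide:

>_$ kubectl get endpoints rs-service               # <none> here means no Pods matched the selector
>_$ kubectl get pods -l app=my-pod --show-labels   # confirm the Pods carry the label the Service selects on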

Conclusion

Services in Kubernetes provide a stable network endpoint for a specific set of Pods, simplifying inter-application communication. This article gave insights into creating and using such services, focusing on the most common types: ClusterIP, Headless, NodePort, and LoadBalancer. Understanding and correctly configuring these services is crucial for reliable communication within a Kubernetes cluster.

https://earthly.dev/blog/kubernetes-services/
https://cloud.hacktricks.xyz/pentesting-cloud/kubernetes-security/exposing-services-in-kubernetes