Understand the networking configuration on the cluster nodes.
- Fundamental requirements for any network implementation
- all containers can communicate with all other containers without NAT
- all nodes can communicate with all containers (and vice-versa) without NAT
- the IP that a container sees itself as is the same IP that others see it as
Understand Pod networking concepts.
Figure above: to enable communication between pods across two nodes with plain layer 3 networking, the node’s physical network interface needs to be connected to the bridge as well. Routing tables on node A need to be configured so all packets destined for 10.1.2.0/24 are routed to node B, whereas node B’s routing tables need to be configured so packets sent to 10.1.1.0/24 are routed to node A. With this type of setup, when a packet is sent by a container on one of the nodes to a container on the other node, the packet first goes through the veth pair, then through the bridge to the node’s physical adapter, then over the wire to the other node’s physical adapter, through the other node’s bridge, and finally through the veth pair of the destination container.
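The routing-table setup described above can be sketched with plain `ip route` commands. This is a sketch only; the node addresses (192.168.0.1/192.168.0.2) are hypothetical and assume both nodes sit on the same L2 network:

```shell
# On node A (pod subnet 10.1.1.0/24):
# send traffic for node B's pod subnet to node B's physical interface
ip route add 10.1.2.0/24 via 192.168.0.2

# On node B (pod subnet 10.1.2.0/24):
# send traffic for node A's pod subnet to node A's physical interface
ip route add 10.1.1.0/24 via 192.168.0.1
```

This only works when the nodes can reach each other directly at layer 2/3; across routers, an overlay (VXLAN, IP-in-IP) or a CNI plugin that programs these routes is needed.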
Understand service networking.
- a service is a resource you create to provide a single, constant point of entry to a group of pods providing the same service
- a service’s cluster IP is a virtual IP and only has meaning in combination with the service port (pinging the cluster IP fails because nothing answers on it outside the service port)
- services don’t link to pods directly; an Endpoints object sits between the service and its pods
- an ExternalName service points to an external DNS name, so internal clients can reach it through the service name
- expose services externally
- Ingress resource
- headless service: because DNS returns the pods’ IPs, clients connect directly to the pods instead of through the service proxy
- clusterIP: None
- when performing an nslookup of the service, DNS returns the IPs of all the pods backing the service instead of the cluster IP
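A minimal ExternalName service sketch (the service name and external hostname are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db              # hypothetical name
spec:
  type: ExternalName
  externalName: db.example.com   # DNS name the service resolves to (as a CNAME)
```

Clients inside the cluster can then connect to external-db as if it were an in-cluster service.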
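A headless-service sketch for the influxdb pods used later in these notes (the service name is illustrative; the selector assumes the name: influxdb label):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: influxdb-headless   # hypothetical name
spec:
  clusterIP: None           # makes the service headless
  ports:
  - port: 8086
  selector:
    name: influxdb
```

With clusterIP: None, an nslookup of influxdb-headless returns the pod IPs directly.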
Deploy and configure network load balancer.
- a LoadBalancer service is a NodePort service with an additional infrastructure-provisioned load balancer in front
- prevent the additional network hop and preserve the client IP by setting externalTrafficPolicy: Local
e.g. Create influxdb pod
apiVersion: v1
kind: Pod
metadata:
  name: influxdb
  labels:
    name: influxdb
spec:
  containers:
  - name: influxdb
    image: influxdb
    ports:
    - containerPort: 8086
Create influxdb loadbalancer service
kind: Service
apiVersion: v1
metadata:
  name: influxdb
spec:
  type: LoadBalancer
  ports:
  - port: 8086
  selector:
    name: influxdb
NAME       TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)          AGE
influxdb   LoadBalancer   10.0.232.7   18.104.22.168   8086:31909/TCP   2m
When running the following from external:
curl -v http://22.214.171.124:8086/ping
The influxdb pod sees the client coming from:
[httpd] 10.244.1.1 - - [07/Apr/2018:23:16:35 +0000] "GET /ping HTTP/1.1" 204 0 "-" "curl/7.45.0" b78ce3ea-3ab9-11e8-8003-000000000000 62
Edit the influxdb service and change externalTrafficPolicy to Local. When running the following from external:
curl -v http://126.96.36.199:8086/ping
The influxdb pod sees the client coming from my home IP:
[httpd] 188.8.131.52 - - [07/Apr/2018:23:18:42 +0000] "GET /ping HTTP/1.1" 204 0 "-" "curl/7.45.0" 03353264-3aba-11e8-8004-000000000000 48
Know how to use Ingress rules.
- Each LoadBalancer service requires its own load balancer with its own public IP address, whereas an Ingress only requires one, even when providing access to dozens of services
- Ingresses operate at L7 (HTTP)
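A sketch of an Ingress rule routing a hostname to the influxdb service from the LoadBalancer example (the hostname and resource name are illustrative; networking.k8s.io/v1 assumes a reasonably recent cluster, and an Ingress controller must be running for the rule to take effect):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: influxdb-ingress          # hypothetical name
spec:
  rules:
  - host: influxdb.example.com    # L7 routing on the Host header
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: influxdb
            port:
              number: 8086
```

One Ingress can carry many such host/path rules, each pointing at a different service, behind a single public IP.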
Know how to configure and use the cluster DNS.
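Inside the cluster, services resolve through the cluster DNS using names of the form servicename.namespace.svc.cluster.local. A sketch, run from a shell inside any pod (the default namespace is assumed):

```shell
# Fully qualified lookup of the influxdb service from the earlier example:
nslookup influxdb.default.svc.cluster.local

# From a pod in the same namespace, the short name also resolves:
nslookup influxdb
```

For a regular service these lookups return the cluster IP; for a headless service they return the individual pod IPs.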