
Friday, June 25, 2021

The ports required for a Kubernetes deployment

Ref: https://docs.oracle.com/en/operating-systems/olcne/1.1/start/ports.html

Setting up the Firewall Rules

Oracle Linux 7 installs and enables firewalld by default. The Platform CLI notifies you of any rules that you may need to add during the deployment of the Kubernetes module, and it also provides the commands to run to modify your firewall configuration to meet the requirements.

Make sure that all required ports are open. The ports required for a Kubernetes deployment are:

  • 2379/tcp: Kubernetes etcd server client API (on master nodes in multi-master deployments)

  • 2380/tcp: Kubernetes etcd server peer API (on master nodes in multi-master deployments)

  • 6443/tcp: Kubernetes API server (master nodes)

  • 8090/tcp: Platform Agent (master and worker nodes)

  • 8091/tcp: Platform API Server (operator node)

  • 8472/udp: Flannel overlay network, VxLAN backend (master and worker nodes)

  • 10250/tcp: Kubernetes kubelet API server (master and worker nodes)

  • 10251/tcp: Kubernetes kube-scheduler (on master nodes in multi-master deployments)

  • 10252/tcp: Kubernetes kube-controller-manager (on master nodes in multi-master deployments)

  • 10255/tcp: Kubernetes kubelet API server for read-only access with no authentication (master and worker nodes)

The commands to open the ports and to set up the firewall rules are provided below.

Single Master Firewall Rules

For a single master deployment, the following ports must be open in the firewall.

Operator Node

On the operator node, run:

$ sudo firewall-cmd --add-port=8091/tcp --permanent

Restart the firewall for these rules to take effect:

$ sudo systemctl restart firewalld

Worker Nodes

On the Kubernetes worker nodes, run:

$ sudo firewall-cmd --zone=trusted --add-interface=cni0 --permanent
$ sudo firewall-cmd --add-port=8090/tcp --permanent
$ sudo firewall-cmd --add-port=10250/tcp --permanent
$ sudo firewall-cmd --add-port=10255/tcp --permanent
$ sudo firewall-cmd --add-port=8472/udp --permanent

Restart the firewall for these rules to take effect:

$ sudo systemctl restart firewalld

Master Nodes

On the Kubernetes master nodes, run:

$ sudo firewall-cmd --zone=trusted --add-interface=cni0 --permanent
$ sudo firewall-cmd --add-port=8090/tcp --permanent
$ sudo firewall-cmd --add-port=10250/tcp --permanent
$ sudo firewall-cmd --add-port=10255/tcp --permanent
$ sudo firewall-cmd --add-port=8472/udp --permanent
$ sudo firewall-cmd --add-port=6443/tcp --permanent

Restart the firewall for these rules to take effect:

$ sudo systemctl restart firewalld
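
As an alternative to restarting the service, sudo firewall-cmd --reload also applies the permanent rules without a full restart. Either way, you can verify the result on each node:

$ sudo firewall-cmd --list-ports

On a master node the output should list 8090/tcp, 10250/tcp, 10255/tcp, 8472/udp and 6443/tcp (the exact list depends on any other rules already configured).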

Sunday, May 16, 2021

Nginx Ingress with rewrite annotation

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: abc-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-ciphers: "ALL"
    nginx.ingress.kubernetes.io/ssl-prefer-server-ciphers: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
  - hosts:
    - abc.com
    secretName: abc-secret
  rules:
  - host: abc.com
    http:
      paths:
      - path: /something(/|$)(.*)
        backend:
          serviceName: abc-service
          servicePort: 8080

With the config above:
  • A request to https://abc.com/something is sent to the backend as https://abc.com/
  • A request to https://abc.com/something/somepath is sent to the backend as https://abc.com/somepath

Here $2 in rewrite-target refers to the second capture group (.*) in the path regex /something(/|$)(.*).
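
A quick way to confirm the rewrite, assuming the backend logs the request paths it receives:

$ curl -k https://abc.com/something
$ curl -k https://abc.com/something/somepath

The first request should show up in the backend log as GET /, the second as GET /somepath.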

Wednesday, May 12, 2021

HTTPS backend-protocol not working - Ingress NGINX

The annotation nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" is not working.

Solution: install the Ingress NGINX controller from:

https://kubernetes.github.io/ingress-nginx/deploy/
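
That page installs the controller with a single manifest; the version tag below is only a placeholder, so take the exact command from the page:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.46.0/deploy/static/provider/cloud/deploy.yaml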


Ref: https://github.com/kubernetes/ingress-nginx/issues/6721

Tuesday, April 27, 2021

Use a separate certificate for each domain with NGINX ingress in Kubernetes

We have an NGINX ingress that terminates HTTPS and forwards to backend services in Kubernetes. We want each domain to be served with its own certificate: abc.com should use the abc.com certificate, and xyz.com should use the xyz.com certificate.

Step 1: Create TLS Secrets

Anytime we reference a TLS secret, we mean a PEM-encoded X.509, RSA (2048) secret.

You can generate a self-signed certificate and private key with:

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout abc.key -out abc.cer -subj "/CN=abc.com/O=abc.com"

Then create the secret in the cluster via:

$ kubectl create secret tls abc --key abc.key --cert abc.cer

The resulting secret will be of type kubernetes.io/tls.

Create the same kind of TLS secret for the xyz.com domain, as sketched below.
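
Mirroring the commands above (the file names xyz.key and xyz.cer and the secret name xyz are just illustrative):

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout xyz.key -out xyz.cer -subj "/CN=xyz.com/O=xyz.com"
$ kubectl create secret tls xyz --key xyz.key --cert xyz.cer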

Step 2: Add the Ingress resource

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    # Enable client certificate authentication
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    # Create the secret containing the trusted ca certificates
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/ca-secret"
    # Specify the verification depth in the client certificates chain
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    # Specify an error page to be redirected to verification errors
    nginx.ingress.kubernetes.io/auth-tls-error-page: "http://www.mysite.com/error-cert.html"
    # Specify if certificates are passed to upstream server
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
  name: nginx-test
  namespace: default
spec:
  rules:
  - host: abc.com
    http:
      paths:
      - backend:
          serviceName: http-svc
          servicePort: 80
        path: /
  tls:
  - hosts:
    - abc.com
    secretName: abc
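
Note that the auth-tls annotations above enable client certificate authentication (they come from the referenced example) and are independent of the per-domain server certificates, which come from the tls section. To give xyz.com its own certificate, the same Ingress can carry a second rule and a second tls entry. A minimal sketch of the spec, assuming a Service named xyz-svc and the xyz secret from Step 1:

spec:
  rules:
  - host: abc.com
    http:
      paths:
      - backend:
          serviceName: http-svc
          servicePort: 80
        path: /
  - host: xyz.com
    http:
      paths:
      - backend:
          serviceName: xyz-svc
          servicePort: 80
        path: /
  tls:
  - hosts:
    - abc.com
    secretName: abc
  - hosts:
    - xyz.com
    secretName: xyz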

Ref: https://kubernetes.github.io/ingress-nginx/examples/auth/client-certs/

Add a DNS entry to CoreDNS in Kubernetes

Edit the ConfigMap named coredns in the kube-system namespace:

kubectl edit cm coredns -n kube-system

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log
        health
        rewrite name foo.example.com foo.default.svc.cluster.local
        kubernetes cluster.local 10.0.0.0/24
        file /etc/coredns/cluster.db cluster.local
        proxy . /etc/resolv.conf
        cache 30
    }
  cluster.db: |
    cluster.local.               IN      SOA     ns.dns.cluster.local. hostmaster.cluster.local. 2015082541 7200 3600 1209600 3600
    something.cluster.local.     IN      A       10.0.0.1
    otherthing.cluster.local.    IN      CNAME   google.com.

Add the highlighted content above (the cluster.db file contains the DNS entries you want to add).

We also need to edit the volumes section of the Pod template spec in the coredns Deployment:

      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
          - key: cluster.db
            path: cluster.db
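
After saving the changes, the coredns Pods must reload the ConfigMap before the new records resolve. One way to restart them and test a record from inside the cluster (the busybox image and Pod name here are just an illustration):

kubectl rollout restart deployment coredns -n kube-system
kubectl run -it --rm dns-test --restart=Never --image=busybox:1.28 -- nslookup something.cluster.local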

Ref: https://coredns.io/2017/06/08/how-queries-are-processed-in-coredns/

Friday, April 23, 2021

Control plane node isolation

By default, your cluster will not schedule Pods on the control-plane node for security reasons. If you want to be able to schedule Pods on the control-plane node, for example for a single-machine Kubernetes cluster for development, run:

kubectl taint nodes --all node-role.kubernetes.io/master-

With output looking something like:

node "test-01" untainted
taint "node-role.kubernetes.io/master:" not found
taint "node-role.kubernetes.io/master:" not found

This will remove the node-role.kubernetes.io/master taint from any nodes that have it, including the control-plane node, meaning the scheduler will then be able to schedule Pods everywhere.
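
To restore the default isolation later, the taint can be re-applied (replace test-01 with your control-plane node name):

kubectl taint nodes test-01 node-role.kubernetes.io/master=:NoSchedule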

Ref: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
