
Cluster information:

kubectl version:
  Client Version: v1.29.14
  Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
  Server Version: v1.29.14
Cloud being used: bare-metal
Installation method:
Host OS: AlmaLinux 8
CNI and version: Flannel ver: 0.26.4
CRI and version: cri-dockerd ver: 0.3.16

I have a master node and created my first worker node. Before executing kubeadm join on the worker I could ping from the worker to the master and vice versa without trouble; now that I have executed the kubeadm join ... command I cannot ping between them anymore, and I get this error:

[root@worker-1 ~]# kubectl get nodes -o wide
E0308 19:38:31.027307   59324 memcache.go:265] couldn't get current server API group list: Get "https://198.58.126.88:6443/api?timeout=32s": dial tcp 198.58.126.88:6443: connect: connection refused
E0308 19:38:32.051145   59324 memcache.go:265] couldn't get current server API group list: Get "https://198.58.126.88:6443/api?timeout=32s": dial tcp 198.58.126.88:6443: connect: connection refused
E0308 19:38:33.075350   59324 memcache.go:265] couldn't get current server API group list: Get "https://198.58.126.88:6443/api?timeout=32s": dial tcp 198.58.126.88:6443: connect: connection refused
E0308 19:38:34.099160   59324 memcache.go:265] couldn't get current server API group list: Get "https://198.58.126.88:6443/api?timeout=32s": dial tcp 198.58.126.88:6443: connect: connection refused
E0308 19:38:35.123011   59324 memcache.go:265] couldn't get current server API group list: Get "https://198.58.126.88:6443/api?timeout=32s": dial tcp 198.58.126.88:6443: connect: connection refused
The connection to the server 198.58.126.88:6443 was refused - did you specify the right host or port?

Ping from the worker node to the master node:

[root@worker-1 ~]# ping 198.58.126.88
PING 198.58.126.88 (198.58.126.88) 56(84) bytes of data.
From 198.58.126.88 icmp_seq=1 Destination Port Unreachable
From 198.58.126.88 icmp_seq=2 Destination Port Unreachable
From 198.58.126.88 icmp_seq=3 Destination Port Unreachable

If I run this:

[root@worker-1 ~]# iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X 

The ping command starts to work:

[root@worker-1 ~]# ping 198.58.126.88
PING 198.58.126.88 (198.58.126.88) 56(84) bytes of data.
64 bytes from 198.58.126.88: icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from 198.58.126.88: icmp_seq=2 ttl=64 time=0.025 ms

(Ping works with the IPv6 address; it only fails with the IPv4 address.) But after about one minute it gets blocked again:

[root@worker-1 ~]# ping 198.58.126.88
PING 198.58.126.88 (198.58.126.88) 56(84) bytes of data.
From 198.58.126.88 icmp_seq=1 Destination Port Unreachable
From 198.58.126.88 icmp_seq=2 Destination Port Unreachable
[root@worker-1 ~]# cat /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.ipv6.conf.default.forwarding=1
net.ipv6.conf.all.forwarding=1
[root@worker-1 ~]# cd /etc/systctl.d/
-bash: cd: /etc/systctl.d/: No such file or directory

Port 6443/TCP is closed on the worker node, and I have tried to open it without success:

$ nmap 172.235.135.144 -p 6443
Starting Nmap 7.95 ( https://nmap.org ) at 2025-03-11 16:22 -05
Nmap scan report for 172-235-135-144.ip.linodeusercontent.com (172.235.135.144)
Host is up (0.072s latency).

PORT     STATE  SERVICE
6443/tcp closed sun-sr-https

Nmap done: 1 IP address (1 host up) scanned in 0.26 seconds
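For completeness, a way to double-check whether the apiserver is actually listening on 6443 on the master would be something like the following (a hedged sketch; ss is assumed to be installed, and docker is used here because the CRI is cri-dockerd):

# on the master: is anything listening on 6443?
ss -tlnp | grep 6443

# is the kube-apiserver container actually running?
docker ps | grep kube-apiserver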

master node:

[root@master ~]# iptables -nvL
Chain INPUT (policy ACCEPT 1312K packets, 202M bytes)
 pkts bytes target     prot opt in     out     source               destination
1301K  201M KUBE-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0
1311K  202M KUBE-IPVS-FILTER  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes ipvs access filter */
1311K  202M KUBE-PROXY-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-proxy firewall rules */
1311K  202M KUBE-NODE-PORT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes health check rules */
   40  3520 ACCEPT     icmp --  *      *       198.58.126.88        0.0.0.0/0
    0     0 ACCEPT     icmp --  *      *       172.233.172.101      0.0.0.0/0

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
  950  181K KUBE-PROXY-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-proxy firewall rules */
  950  181K KUBE-FORWARD  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
  212 12626 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
  212 12626 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      br-09363fc9af47  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
   20  1068 DOCKER     all  --  *      br-09363fc9af47  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  br-09363fc9af47 !br-09363fc9af47  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  br-09363fc9af47 br-09363fc9af47  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      br-05a2ea8c281b  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    4   184 DOCKER     all  --  *      br-05a2ea8c281b  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  br-05a2ea8c281b !br-05a2ea8c281b  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  br-05a2ea8c281b br-05a2ea8c281b  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      br-032fd1b78367  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      br-032fd1b78367  0.0.0.0/0            0.0.0.0/0
    9   504 ACCEPT     all  --  br-032fd1b78367 !br-032fd1b78367  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  br-032fd1b78367 br-032fd1b78367  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      br-ae1997e801f3  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      br-ae1997e801f3  0.0.0.0/0            0.0.0.0/0
  132  7920 ACCEPT     all  --  br-ae1997e801f3 !br-ae1997e801f3  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  br-ae1997e801f3 br-ae1997e801f3  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      br-9f6d34f7e48a  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
   14   824 DOCKER     all  --  *      br-9f6d34f7e48a  0.0.0.0/0            0.0.0.0/0
    4   240 ACCEPT     all  --  br-9f6d34f7e48a !br-9f6d34f7e48a  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  br-9f6d34f7e48a br-9f6d34f7e48a  0.0.0.0/0            0.0.0.0/0
   29  1886 FLANNEL-FWD  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* flanneld forward */

Chain OUTPUT (policy ACCEPT 1309K packets, 288M bytes)
 pkts bytes target     prot opt in     out     source               destination
1298K  286M KUBE-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0
1308K  288M KUBE-IPVS-OUT-FILTER  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes ipvs access filter */

Chain DOCKER (6 references)
 pkts bytes target     prot opt in     out     source               destination
   14   824 ACCEPT     tcp  --  !br-9f6d34f7e48a br-9f6d34f7e48a  0.0.0.0/0            172.24.0.2           tcp dpt:3001
    0     0 ACCEPT     tcp  --  !br-ae1997e801f3 br-ae1997e801f3  0.0.0.0/0            172.21.0.2           tcp dpt:3000
    4   184 ACCEPT     tcp  --  !br-05a2ea8c281b br-05a2ea8c281b  0.0.0.0/0            172.22.0.2           tcp dpt:4443
   12   700 ACCEPT     tcp  --  !br-09363fc9af47 br-09363fc9af47  0.0.0.0/0            172.19.0.2           tcp dpt:4443
    8   368 ACCEPT     tcp  --  !br-09363fc9af47 br-09363fc9af47  0.0.0.0/0            172.19.0.3           tcp dpt:443

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
 pkts bytes target     prot opt in     out     source               destination
  212 12626 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER-ISOLATION-STAGE-2 (0 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain FLANNEL-FWD (1 references)
 pkts bytes target     prot opt in     out     source               destination
   29  1886 ACCEPT     all  --  *      *       10.244.0.0/16        0.0.0.0/0            /* flanneld forward */
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            10.244.0.0/16        /* flanneld forward */

Chain DOCKER-USER (1 references)
 pkts bytes target     prot opt in     out     source               destination
  212 12626 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain KUBE-FORWARD (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-NODE-PORT (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* Kubernetes health check node port */ match-set KUBE-HEALTH-CHECK-NODE-PORT dst

Chain KUBE-PROXY-FIREWALL (2 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-SOURCE-RANGES-FIREWALL (0 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain KUBE-IPVS-FILTER (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-LOAD-BALANCER dst,dst
    2   104 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-CLUSTER-IP dst,dst
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-EXTERNAL-IP dst,dst
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-EXTERNAL-IP-LOCAL dst,dst
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-HEALTH-CHECK-NODE-PORT dst
    0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW match-set KUBE-IPVS-IPS dst reject-with icmp-port-unreachable

Chain KUBE-IPVS-OUT-FILTER (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-FIREWALL (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      *       !127.0.0.0/8         127.0.0.0/8          /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

Chain KUBE-KUBELET-CANARY (0 references)
 pkts bytes target     prot opt in     out     source               destination

worker node:

[root@worker-1 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
18469 1430K KUBE-IPVS-FILTER  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes ipvs access filter */
10534  954K KUBE-PROXY-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-proxy firewall rules */
10534  954K KUBE-NODE-PORT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes health check rules */
10767 1115K KUBE-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-PROXY-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-proxy firewall rules */
    0     0 KUBE-FORWARD  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
    0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
18359 1696K KUBE-IPVS-OUT-FILTER  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes ipvs access filter */
18605 1739K KUBE-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER-ISOLATION-STAGE-2  all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER-USER (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain KUBE-FIREWALL (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      *       !127.0.0.0/8         127.0.0.0/8          /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

Chain KUBE-KUBELET-CANARY (0 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-FORWARD (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-NODE-PORT (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* Kubernetes health check node port */ match-set KUBE-HEALTH-CHECK-NODE-PORT dst

Chain KUBE-PROXY-FIREWALL (2 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-SOURCE-RANGES-FIREWALL (0 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain KUBE-IPVS-FILTER (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-LOAD-BALANCER dst,dst
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-CLUSTER-IP dst,dst
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-EXTERNAL-IP dst,dst
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-EXTERNAL-IP-LOCAL dst,dst
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-HEALTH-CHECK-NODE-PORT dst
   45  2700 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW match-set KUBE-IPVS-IPS dst reject-with icmp-port-unreachable

Chain KUBE-IPVS-OUT-FILTER (1 references)
 pkts bytes target     prot opt in     out     source               destination

If I run iptables -F INPUT on the worker, the ping command starts working again:

[root@worker-1 ~]# iptables -F INPUT
[root@worker-1 ~]# ping 198.58.126.88
PING 198.58.126.88 (198.58.126.88) 56(84) bytes of data.
64 bytes from 198.58.126.88: icmp_seq=1 ttl=64 time=0.054 ms
64 bytes from 198.58.126.88: icmp_seq=2 ttl=64 time=0.043 ms
64 bytes from 198.58.126.88: icmp_seq=3 ttl=64 time=0.037 ms
64 bytes from 198.58.126.88: icmp_seq=4 ttl=64 time=0.039 ms
64 bytes from 198.58.126.88: icmp_seq=5 ttl=64 time=0.023 ms
64 bytes from 198.58.126.88: icmp_seq=6 ttl=64 time=0.022 ms
64 bytes from 198.58.126.88: icmp_seq=7 ttl=64 time=0.070 ms
64 bytes from 198.58.126.88: icmp_seq=8 ttl=64 time=0.072 ms
^C
--- 198.58.126.88 ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7197ms
rtt min/avg/max/mdev = 0.022/0.045/0.072/0.017 ms

strace output from the worker:

[root@worker-1 ~]# iptables -F INPUT
[root@worker-1 ~]# strace -eopenat kubectl version
openat(AT_FDCWD, "/sys/kernel/mm/transparent_hugepage/hpage_pmd_size", O_RDONLY) = 3
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
openat(AT_FDCWD, "/usr/bin/kubectl", O_RDONLY|O_CLOEXEC) = 3
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
openat(AT_FDCWD, "/usr/local/sbin", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/local/bin", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/sbin", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/bin", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/root/bin", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/root/.kube/config", O_RDONLY|O_CLOEXEC) = 3
Client Version: v1.29.14
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
The connection to the server 198.58.126.88:6443 was refused - did you specify the right host or port?
+++ exited with 1 +++

nftables before and after executing the kubeadm join command on the worker (screenshot not reproduced here):

Chain KUBE-IPVS-FILTER (0 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere             match-set KUBE-LOAD-BALANCER dst,dst
RETURN     all  --  anywhere             anywhere             match-set KUBE-CLUSTER-IP dst,dst
RETURN     all  --  anywhere             anywhere             match-set KUBE-EXTERNAL-IP dst,dst
RETURN     all  --  anywhere             anywhere             match-set KUBE-EXTERNAL-IP-LOCAL dst,dst
RETURN     all  --  anywhere             anywhere             match-set KUBE-HEALTH-CHECK-NODE-PORT dst
REJECT     all  --  anywhere             anywhere             ctstate NEW match-set KUBE-IPVS-IPS dst reject-with icmp-port-unreachable
[root@worker-1 ~]# sudo iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N KUBE-FIREWALL
-N KUBE-KUBELET-CANARY
-N KUBE-FORWARD
-N KUBE-NODE-PORT
-N KUBE-PROXY-FIREWALL
-N KUBE-SOURCE-RANGES-FIREWALL
-N KUBE-IPVS-FILTER
-N KUBE-IPVS-OUT-FILTER
-A INPUT -m comment --comment "kubernetes ipvs access filter" -j KUBE-IPVS-FILTER
-A INPUT -m comment --comment "kube-proxy firewall rules" -j KUBE-PROXY-FIREWALL
-A INPUT -m comment --comment "kubernetes health check rules" -j KUBE-NODE-PORT
-A FORWARD -m comment --comment "kube-proxy firewall rules" -j KUBE-PROXY-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A OUTPUT -m comment --comment "kubernetes ipvs access filter" -j KUBE-IPVS-OUT-FILTER
-A OUTPUT -j KUBE-FIREWALL
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-NODE-PORT -m comment --comment "Kubernetes health check node port" -m set --match-set KUBE-HEALTH-CHECK-NODE-PORT dst -j ACCEPT
-A KUBE-SOURCE-RANGES-FIREWALL -j DROP
-A KUBE-IPVS-FILTER -m set --match-set KUBE-LOAD-BALANCER dst,dst -j RETURN
-A KUBE-IPVS-FILTER -m set --match-set KUBE-CLUSTER-IP dst,dst -j RETURN
-A KUBE-IPVS-FILTER -m set --match-set KUBE-EXTERNAL-IP dst,dst -j RETURN
-A KUBE-IPVS-FILTER -m set --match-set KUBE-EXTERNAL-IP-LOCAL dst,dst -j RETURN
-A KUBE-IPVS-FILTER -m set --match-set KUBE-HEALTH-CHECK-NODE-PORT dst -j RETURN
-A KUBE-IPVS-FILTER -m conntrack --ctstate NEW -m set --match-set KUBE-IPVS-IPS dst -j REJECT --reject-with icmp-port-unreachable

The blocked connection from the worker to the master starts as soon as the kubelet service is running; if the kubelet service is stopped, I can ping the master from the worker again.
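Since the REJECT rule in KUBE-IPVS-FILTER is the only worker rule with a non-zero reject counter (45 packets) and it matches destinations in the KUBE-IPVS-IPS ipset, a check worth running on the worker would be something like the following (ipset is assumed to be installed, and kube-ipvs0 is the dummy interface kube-proxy creates in IPVS mode, which the KUBE-IPVS-* chains suggest is in use):

# does the master's address appear in the ipset that the REJECT rule matches?
ipset list KUBE-IPVS-IPS | grep 198.58.126.88

# has the address also been bound locally to kube-proxy's dummy interface?
ip -4 addr show kube-ipvs0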

What might be causing this blocking on the worker node? Thanks.

  • So clearing the iptables rulesets makes the ping work. What was in those rulesets? Commented Mar 11 at 21:39
  • @ChrisDavies hi! How may I share the result of the command sudo nft list ruleset? I'm asking because it has 800+ lines. Thanks. Commented Mar 11 at 22:33
  • I'd suggest iptables -nvL … rather than nft list …, since you've already used the iptables compatibility layer to flush the tables. Commented Mar 11 at 22:50
  • @ChrisDavies I edited my question with the output from the command iptables -nvL from the master and from the worker nodes. Commented Mar 11 at 23:19
  • On the worker, if you run (just) iptables -F INPUT, does the ping start working? If not, you need to add some more of the iptables rules to match the iptables -F options you used to make it work. Commented Mar 11 at 23:34

2 Answers


Two places to check are sysctl and the firewall.

Look in /etc/sysctl.conf as well as any files under /etc/sysctl.d/

for settings related to ICMP that turn it off or ignore it. For example, if net.ipv4.icmp_echo_ignore_all=1 exists, either delete it or change the 1 to 0.
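For example, roughly (the exact file holding the setting, if any, may differ):

# show the current runtime value
sysctl net.ipv4.icmp_echo_ignore_all

# see where, if anywhere, it is set persistently
grep -r icmp_echo /etc/sysctl.conf /etc/sysctl.d/ /usr/lib/sysctl.d/ 2>/dev/null

# temporarily allow echo replies again
sysctl -w net.ipv4.icmp_echo_ignore_all=0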

It could also be a firewall setting: do a simple service firewalld stop, and if the problem stops, then there's something in the firewall settings [also?] causing it.
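For instance, something along these lines (assuming firewalld is the firewall in use on AlmaLinux 8):

# see what the active zone currently allows
firewall-cmd --list-all

# stop firewalld temporarily and re-test the ping
systemctl stop firewalld
ping -c 3 198.58.126.88

# turn it back on afterwards
systemctl start firewalld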

reference: How to Disable Ping Response (ICMP echo) in Linux all the time?

  • I edited my question and added the content of /etc/sysctl.conf; I have also tried disabling the firewall without success. It seems that it is an nft table rule: if I run [root@worker-1 ~]# iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X, the ping command works, but then it fails again. Commented Mar 11 at 21:18

The final solution was to uninstall Kubernetes, tear down the cluster, and install it again. Now both nodes show as Ready from both the worker and the master.

  • I am glad this worked for you, but it doesn't really answer the question of why it happened in the first place. One of the things I learned is: a "solution" not only makes a symptom go away but also explains why it happened in the first place, so that the cause of the symptom can be avoided beforehand. Commented Mar 17 at 20:32
  • Hi @bakunin. What worked was to reinstall the cluster passing --pod-cidr matching the one from Flannel (which I think is 10.244.0.0/16). By passing this, Kubernetes uses the same network range and all the elements in the cluster can communicate. Thanks. Commented Mar 18 at 16:51
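For reference, a reinstall along the lines described in the comment above would look roughly like this. This is a hedged sketch, not the exact commands that were run: the kubeadm init flag for the pod network range is --pod-network-cidr, 10.244.0.0/16 is Flannel's default pod CIDR, the cri-dockerd socket path is the documented default, and the tokens/hashes are placeholders.

# on both nodes: tear down the old cluster state
kubeadm reset -f

# on the master: re-initialise with a pod CIDR matching Flannel's default
kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket unix:///var/run/cri-dockerd.sock

# install Flannel again
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# on the worker: join with the token printed by kubeadm init
kubeadm join 198.58.126.88:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --cri-socket unix:///var/run/cri-dockerd.sock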
