I'm experimenting with multi-node k3s on public VPSes and found a strange behavior using LoadBalancer with externalTrafficPolicy: Local.
That setting allows traffic reaching the pod behind a Service to keep its original (public) source IP address.
In my case, it only works about 50% of the time and I don't know why.
2 Nodes:
node1 (public_ip1)
node2 (public_ip2)
Every node forwards port 25 to a Service whose single pod (1 replica) is scheduled on node1.
Balancing is handled by the klipper-lb (svclb) pods on each host.
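For reference, the Service looks roughly like this (a minimal sketch; the name and selector are simplified for illustration, not the exact manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: smtp                  # illustrative name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: smtp                 # matches the single-replica workload on node1
  ports:
    - port: 25
      targetPort: 25
      protocol: TCP
```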
When I do a tcping from a third VPS (outside K3s network, totally unrelated vm):
If I do tcping public_ip1 25 (the node where the pod receiving traffic is scheduled), the pod sees the correct public IP of the third VPS (which is not part of the cluster).
If I do tcping public_ip2 25 (the node that doesn't have the pod scheduled), the pod sees an internal IP that corresponds to the svclb-service pod's IP.
Is that already addressed somehow? Could you point me to some documentation?
Thanks
Basically, externalTrafficPolicy: Local indicates that traffic reaching a load balancer (i.e. node) should be sent to a pod scheduled on that same node, which as a side effect preserves the client's source IP.
If no pod is scheduled on that node (e.g. node 2), then it has to add a hop to another node (e.g. node 1) on which a pod is running.
It therefore has to rewrite the packet's source IP to the load-balancing node's (e.g. node 2) address. Otherwise, return traffic wouldn't follow the same path back; it would go directly from node 1 to the client IP and would probably get dropped.
The solution is to use a DaemonSet to run one pod on each node that you expect clients to connect to.
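As a minimal sketch of that approach (the image and labels are placeholders, not your actual workload):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: smtp
spec:
  selector:
    matchLabels:
      app: smtp
  template:
    metadata:
      labels:
        app: smtp                          # must match the Service selector
    spec:
      containers:
        - name: smtp
          image: example/smtp-server:latest   # placeholder image
          ports:
            - containerPort: 25
```

With a pod on every node, klipper-lb always has a local endpoint, so the client IP is preserved regardless of which node the connection lands on.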
In this scenario, you might also want to use a NodePort service instead of Klipper LB when you have control over the port number, since it can achieve the same result in most cases.
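A sketch of the NodePort variant, assuming the same illustrative selector; note that nodePort must fall within the cluster's service-node-port-range (30000-32767 by default), so exposing plain port 25 this way requires changing that range or picking a different port:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: smtp-nodeport
spec:
  type: NodePort
  externalTrafficPolicy: Local
  selector:
    app: smtp
  ports:
    - port: 25
      targetPort: 25
      nodePort: 30025    # must be inside the service-node-port-range (default 30000-32767)
```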