
NAT not always working #20

Open
lucagervasi opened this issue Sep 19, 2021 · 1 comment


@lucagervasi

Hello everyone.

I'm experimenting with a multi-node k3s cluster on public VPSes and found strange behavior when using a LoadBalancer service with externalTrafficPolicy: Local.

That setting allows traffic reaching a pod behind a service to keep its original (public) source IP address.
In my case it only works about 50% of the time, and I don't know why.
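For reference, the Service looks roughly like this (the name and selector here are just placeholders; the relevant part is the traffic policy):

```yaml
# Illustrative manifest only: name, labels and selector are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: smtp
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # keep the client's original source IP
  selector:
    app: smtp
  ports:
    - name: smtp
      protocol: TCP
      port: 25
      targetPort: 25
```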

2 Nodes:
node1 (public_ip1)
node2 (public_ip2)

Every node balances port 25 to a service whose pod is scheduled on node1 (only 1 replica).
Balancing is handled by klipper-lb's pods on each host.

When I run tcping from a third VPS (outside the k3s network, a totally unrelated VM):
If I run tcping public_ip1 25 (the node hosting the pod that receives the traffic), the pod sees the correct public IP of the third VPS (which is not part of the cluster).
If I run tcping public_ip2 25 (the node that does not host the receiving pod), the pod sees an internal IP which corresponds to the svclb-service pod's IP.
Is that already addressed somehow? Could you point me to some documentation?

Thanks

@au2001

au2001 commented May 11, 2022

Not sure if you have found the solution since then, but for the sake of other people in the same situation, here's an explanation.

What you describe is the expected behavior when running the app with only 1 replica scheduled on node 1.
Here is the relevant documentation: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer and https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip

Basically, externalTrafficPolicy: Local indicates that traffic reaching a load balancer (i.e. a node) should be sent to a pod scheduled on that same node, which as a side effect preserves the client's IP.
If no pod is scheduled on that node (e.g. node 2), it has to add a hop to another node (e.g. node 1) on which a pod is running.
It must therefore rewrite the IP packet's source to the load-balancing node's (e.g. node 2) IP address; otherwise, return traffic wouldn't follow the same path and would go directly from node 1 to the client IP, where it would most likely get dropped.

The solution is to use a DaemonSet to run one replica (pod) on each node that you expect clients to connect to; a rough sketch follows below.
In this scenario, you might also want to use a NodePort service instead of Klipper LB when you have control over the port number, since it achieves the same result in most cases.
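As an illustration, the DaemonSet could look roughly like the following (the name, labels and image are hypothetical placeholders, not taken from this issue):

```yaml
# Hypothetical sketch: one replica per node, so every node's svclb
# has a local endpoint and never needs the extra SNAT hop.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: smtp
spec:
  selector:
    matchLabels:
      app: smtp
  template:
    metadata:
      labels:
        app: smtp  # must match the Service's selector
    spec:
      containers:
        - name: smtp
          image: example/smtp:latest  # placeholder image
          ports:
            - containerPort: 25
```

With a pod on every node, externalTrafficPolicy: Local always finds a local endpoint, so the client's source IP is preserved whichever node's public IP they connect to.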

Related issue: #31
