
Binding to localhost exclusively #305

Closed
flxs opened this issue Apr 4, 2019 · 12 comments
Labels
kind/question (No code change, just asking/answering a question), status/stale

Comments

@flxs

flxs commented Apr 4, 2019

I'm running k3s v0.3.0 on a small Ubuntu 18.04 VPS that is directly exposed to the internet, with no way to use cloud firewalls or the like.

It makes for a really neat dev environment, but I've noticed that port 6443 is reachable on the public interface, which I get is useful for reaching the cluster from the outside, but I'd still prefer to expose just 22 for SSH and have everything else listen on localhost, to be forwarded as necessary.

Filtering everything but ssh on the public interface with ufw causes all sorts of things within the cluster to break, apparently the kubernetes API isn't reachable from within the cluster anymore, so that doesn't seem to be an option.

Is there any way to configure k3s to bind to 127.0.0.1 exclusively? Preferably using the curl-sh-combo from https://k3s.io?
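
For context, the forwarding workflow in mind here is plain SSH local port forwarding; a minimal sketch (user@vps is a placeholder, ~/k3s-vps.yaml stands for a local copy of the server's kubeconfig, and it assumes the apiserver ends up listening on 127.0.0.1:6443 on the VPS):

# Forward local port 6443 to 127.0.0.1:6443 on the VPS over SSH.
$ ssh -N -L 6443:127.0.0.1:6443 user@vps
# Then point the local kubeconfig copy at https://127.0.0.1:6443 and use it as usual:
$ kubectl --kubeconfig ~/k3s-vps.yaml get nodes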

@uablrek

uablrek commented Apr 4, 2019

Please see:
https://github.com/rancher/k3s#open-ports--network-security

Port 6443 must be open on the server. And since you must firewall the VXLAN port anyway, an extra rule for external access to 6443 should not be too hard, IMHO.
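
As an illustration only (not taken from the linked docs): a ufw rule set along these lines keeps SSH open while cutting off outside access to the apiserver and VXLAN ports. The public interface name eth0, flannel's default VXLAN port 8472/udp, and the default k3s pod/service CIDRs are assumptions here.

# Allow SSH from anywhere.
$ sudo ufw allow 22/tcp
# Keep cluster-internal traffic working (k3s default pod and service CIDRs).
$ sudo ufw allow from 10.42.0.0/16 to any
$ sudo ufw allow from 10.43.0.0/16 to any
# Block outside access to the apiserver and flannel VXLAN ports on the
# public interface (eth0 assumed public; 8472/udp is flannel's VXLAN default).
$ sudo ufw deny in on eth0 to any port 6443 proto tcp
$ sudo ufw deny in on eth0 to any port 8472 proto udp
$ sudo ufw enable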

@warmchang
Contributor

Please see:
https://github.com/rancher/k3s#open-ports--network-security

Port 6443 must be open on the server. And since you must firewall the VXLAN port anyway, an extra rule for external access to 6443 should not be too hard, IMHO.

I think @flxs just wants to open the port on a specific IP (such as 127.0.0.1 / localhost), not on 0.0.0.0 (all IP addresses on the server).

The duplicate issue: #214

@flxs
Author

flxs commented Apr 4, 2019

Yes, that's exactly what I want; I definitely wouldn't want to expose this sort of thing to the whole internet. I've filtered the port for now (turns out I had botched the ufw config previously). Thanks for pointing out the existing issue for this!

@deniseschannon

@flxs That issue (#214) has been resolved and is ready to test with our latest RC (v0.4.0-rc1). Can you give it a try and let me know?

@erikwilson added the kind/question (No code change, just asking/answering a question) label Apr 17, 2019
@thomasheller

I'm facing the same issue as @flxs: I need to bind all k3s ports to 127.0.0.1.

I tried k3s v1.18.8 on Ubuntu Server 20.04.1 LTS with the following command line:

$ k3s server --disable=metrics-server --kube-controller-manager-arg=bind-address=127.0.0.1 --kube-apiserver-arg=bind-address=127.0.0.1 --kube-scheduler-arg=bind-address=127.0.0.1 --tls-san=127.0.0.1 --bind-address=127.0.0.1

The ports 6443, 6444 and 10010 are then bound to 127.0.0.1.

However, ports 10251 and 10252 are still publicly available, and I can fetch metrics from http://127.0.0.1:10251/metrics without authentication.

Am I missing something obvious here? Is there a built-in way to secure the services on these ports without resorting to iptables or ufw rules?
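
(For reference: on that Kubernetes version, 10251 and 10252 are the scheduler's and controller-manager's insecure metrics ports, governed by their own address/port flags rather than by bind-address. A sketch of passing those through k3s follows, with the caveat that these flags were deprecated upstream and removed in later releases, so treat them as an assumption to verify:)

$ k3s server \
    --disable=metrics-server \
    --bind-address=127.0.0.1 \
    --tls-san=127.0.0.1 \
    --kube-apiserver-arg=bind-address=127.0.0.1 \
    --kube-controller-manager-arg=bind-address=127.0.0.1 \
    --kube-scheduler-arg=bind-address=127.0.0.1 \
    --kube-controller-manager-arg=address=127.0.0.1 \
    --kube-scheduler-arg=address=127.0.0.1
# Or, to disable the insecure metrics listeners entirely:
#   --kube-controller-manager-arg=port=0 --kube-scheduler-arg=port=0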

@stale

stale bot commented Jul 31, 2021

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.

@stale stale bot added the status/stale label Jul 31, 2021
@stale stale bot closed this as completed Aug 14, 2021
@mcarbonneaux

I tried, but when you read the k3s logs you see the kubelet refusing to use the localhost IP.

$ k3s server --disable=metrics-server --kube-controller-manager-arg=bind-address=127.0.0.1 --kube-apiserver-arg=bind-address=127.0.0.1 --kube-scheduler-arg=bind-address=127.0.0.1 --tls-san=127.0.0.1 --bind-address=127.0.0.1

I've tried using a dummy interface, but then k3s waits indefinitely for kube-proxy to become ready...
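
A minimal sketch of the dummy-interface attempt described above, for anyone who wants to reproduce it (the interface name and address are arbitrary placeholders, and as noted, kube-proxy may still never report ready):

# Create a dummy interface with a non-loopback, host-local address.
$ sudo ip link add k3s0 type dummy
$ sudo ip addr add 192.168.255.1/32 dev k3s0
$ sudo ip link set k3s0 up
# Point k3s at that address instead of the public interface.
$ sudo k3s server \
    --bind-address=192.168.255.1 \
    --node-ip=192.168.255.1 \
    --flannel-iface=k3s0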

@cherusk

cherusk commented Dec 14, 2024

I think the only practically viable and efficient way to get this working is to run k3s in Docker.

For me, the solution was to base it on https://raw.githubusercontent.com/k3s-io/k3s/refs/heads/master/docker-compose.yml.

NB: there are even k3s components that do not allow the localhost address space to be used at all. You could solve this with a virtual interface (veth) that has a routable address bound to it, but it's far more efficient to spin up the Docker stack locally.
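
A rough sketch of what that looks like in practice, assuming the upstream compose file still expects a K3S_TOKEN variable and that its ports mapping is edited so the apiserver is published on the loopback address only:

# Fetch the upstream compose file...
$ curl -sfLo docker-compose.yml \
    https://raw.githubusercontent.com/k3s-io/k3s/refs/heads/master/docker-compose.yml
# ...edit the server's "ports:" entry to read "127.0.0.1:6443:6443"
# (published on loopback only), then bring the stack up:
$ K3S_TOKEN=$(openssl rand -hex 16) docker compose up -d
# Verify that 6443 is bound only to 127.0.0.1 on the host:
$ ss -tlnp | grep 6443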

All best!

@brandond
Member

If you bind the apiserver to localhost, how are you supposed to access it from within the cluster? It needs to be exposed on the node IP so that it is reachable from within pods.

@cherusk

cherusk commented Dec 14, 2024

@brandond Were you referring to my suggestion ❓

In my case and in the case of others, the use case is to use k3s on one system/node only.

Hence "other" nodes as it were are further docker containers (workers) on the same node.

That way the apiserver is not really bound directly to the localhost address; only the server Docker container publishes the apiserver port on the localhost address. Inside the container, the apiserver is bound to some other, non-localhost address.

@brandond
Member

I was referring to the larger conversation on this issue. Running K3s in a container with an isolated network namespace is definitely one way to prevent it from needing to bind to IPs on the host itself.

@mcarbonneaux

mcarbonneaux commented Dec 16, 2024

You can have a server with two interfaces, one public and one private. It's legitimate to want to bind all the internal network machinery to the private interface, and to expose only services (node ports / load balancers) on the public interface.

That's a safe-by-default network approach, and it's better than using a firewall to block the API, VXLAN, or other things like Prometheus metrics...

And for single-host usage, it's preferable not to expose the internal network ports by default...

With a dummy interface it works, but the VXLAN port still listens on all interfaces...
That's because flannel absolutely wants to bind on all interfaces, even if you try to force it to use a specific interface (with --iface)...

--iface="": interface to use (IP or name) for inter-host communication. Defaults to the interface for the default route on the machine. This can be specified multiple times to check each option in order. Returns the first match found.
