-
Good idea, PR more than welcome I think!
-
Cool, I will try this one.
-
An idea would be to extend this from worker nodes to control plane ones as well, given that there might be other ways to connect to the control planes, for example via a VPN and so on. That way we could basically close off completely any chance of receiving requests from outside.
-
Yeah, that makes sense as well. This one would require a bastion/jumphost to access the control plane, right?
-
Or, at the initial bootstrap of the cluster, we use public IPv4, and after we finish the job we drop those public IPs?
-
@ricristian that is a smart idea, really.
-
I'm not 100% sure how Terraform works, but I don't think we can run Terraform modules in a sequential manner so that a second step could be implemented. Even so, I see less hassle in that than in trying to use jump hosts and so on.
-
Yeah, right. It looks like Terraform cannot create a machine with a public IP, provision it, and then destroy the public IP in one go. What I have in mind:
create the control plane nodes with public IPs, then create the worker nodes without public IPs, then get inside the worker nodes through the control plane nodes to bootstrap them.
Correct me if I'm wrong, and feel free to add to this.
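A minimal sketch of that idea with the hcloud provider, assuming a private network (hcloud_network.private) and a control plane server with a public IPv4 (hcloud_server.control_plane) already exist; the names, image and IPs below are made up for illustration:

```hcl
# Hypothetical example: an agent/worker node with no public interface,
# attached only to an existing private network.
resource "hcloud_server" "agent" {
  name        = "agent-1"
  image       = "ubuntu-22.04"
  server_type = "cx22"
  location    = "nbg1"

  public_net {
    ipv4_enabled = false
    ipv6_enabled = false
  }

  network {
    network_id = hcloud_network.private.id
    ip         = "10.0.1.2"
  }
}

# Reach the private-only agent over SSH by jumping through a control
# plane node that still has a public IPv4 address.
resource "null_resource" "bootstrap_agent" {
  connection {
    type         = "ssh"
    host         = "10.0.1.2"
    user         = "root"
    bastion_host = hcloud_server.control_plane.ipv4_address
  }

  provisioner "remote-exec" {
    inline = ["echo 'bootstrap the agent here'"]
  }
}
```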
-
Hmmm, this might be something, but maybe this flow would be a little bit easier? 1st step:
The reason I'm trying to avoid jumphosts is that there are a lot of ways to have some kind of jumphost (VPN, proxy command, Cloudflare, another VM inside Hetzner, and so on), and each one of us might have a different use case. So maybe it could just be a global option to add a public IPv4 when we need to do some work on the cluster (bootstrap it, change a flavour) and to drop those public IPs afterwards. Yes, there is a drawback: for those few minutes while Terraform is doing its work, ports 6443 and 22 will be open to the internet if we don't limit them from the firewall, but I can still afford to have 2 ports exposed for 2 minutes; it would take some Tesla-grade processing power for someone to break in in under 3 minutes. But those are my thoughts, I hope they add some value.
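If it ended up being just a global toggle, a minimal sketch could look like the following; the variable name enable_public_ipv4 is hypothetical, and whether the provider can flip it in place without recreating the server would need to be verified:

```hcl
# Hypothetical toggle: attach a public IPv4 only while we need to reach the
# nodes from the internet (bootstrap, flavour change), drop it afterwards.
variable "enable_public_ipv4" {
  type    = bool
  default = false
}

resource "hcloud_server" "node" {
  name        = "node-1"
  image       = "ubuntu-22.04"
  server_type = "cx22"

  public_net {
    ipv4_enabled = var.enable_public_ipv4
    ipv6_enabled = false
  }

  network {
    network_id = hcloud_network.private.id
  }
}
```

One would then run `terraform apply -var=enable_public_ipv4=true` for the bootstrap and re-apply with `false` afterwards to drop the public IPs again.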
-
Thanks @ricristian, nice input. Let me check and try the approach.
-
I think such approaches run a bit against Terraform's spirit. But if someone gets that working with a single terraform apply invocation in a somewhat maintainable form, I'd gladly merge it 👍 In theory, I'd try to implement it by using a bastion host for SSH on tcp/22 and the Kubernetes API on tcp/6443.
What would you think about such an approach? Is there anything I am missing?
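Not an actual implementation, just a sketch of what such a bastion could look like, assuming an existing private network (hcloud_network.private); the admin range is a placeholder, and note that Hetzner Cloud firewalls only filter traffic on public interfaces:

```hcl
# Hypothetical bastion with a public IP, attached to the same private
# network as the otherwise unreachable cluster nodes.
resource "hcloud_server" "bastion" {
  name         = "bastion"
  image        = "ubuntu-22.04"
  server_type  = "cx22"
  location     = "nbg1"
  firewall_ids = [hcloud_firewall.bastion.id]

  network {
    network_id = hcloud_network.private.id
    ip         = "10.0.0.3"
  }
}

# Only allow SSH to the bastion from a trusted range; 22/6443 on the
# nodes themselves would only be reachable over the private network.
resource "hcloud_firewall" "bastion" {
  name = "bastion"

  rule {
    direction  = "in"
    protocol   = "tcp"
    port       = "22"
    source_ips = ["203.0.113.0/24"] # replace with your admin range
  }
}
```

The Kubernetes API on tcp/6443 would then be reached through the bastion, e.g. with an SSH tunnel or ProxyJump.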
-
Thanks, @jodhi, very good idea, and great input @ricristian. For what it's worth, I really like the last iteration on this idea by @phaer. On top of having the nodes only on the private network, it would provide a load balancer for the control plane Kubernetes API, which is a feature I was hoping to have for a long time.
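For reference, a rough sketch of what such a control plane load balancer could look like with the hcloud provider; the names, the label selector and the private network are assumptions, not the project's actual code:

```hcl
resource "hcloud_load_balancer" "control_plane" {
  name               = "control-plane"
  load_balancer_type = "lb11"
  location           = "nbg1"
}

# Attach the LB to the private network so it can reach private-only nodes.
resource "hcloud_load_balancer_network" "control_plane" {
  load_balancer_id = hcloud_load_balancer.control_plane.id
  network_id       = hcloud_network.private.id
}

# Forward the Kubernetes API port.
resource "hcloud_load_balancer_service" "kube_api" {
  load_balancer_id = hcloud_load_balancer.control_plane.id
  protocol         = "tcp"
  listen_port      = 6443
  destination_port = 6443
}

# Target all control plane servers via their private IPs.
resource "hcloud_load_balancer_target" "control_plane" {
  type             = "label_selector"
  load_balancer_id = hcloud_load_balancer.control_plane.id
  label_selector   = "role=control-plane"
  use_private_ip   = true

  depends_on = [hcloud_load_balancer_network.control_plane]
}
```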
-
Folks, FYI, slightly related to the above, starting in 1.4.3 we now have the possibility to load-balance CP nodes, thanks to @PurpleBooth!
-
I started putting together a spike on the limitations of hosts without a public interface, and it looks a bit more complicated than simply allowing access via a bastion. We enable recovery mode to install MicroOS, which requires a public interface, so we need to change how we install MicroOS in order to allow installation without one. See #261 for my spike.
-
Having given this a second thought, I see two more or less easy ways to implement this:
I would have chosen method 1. What do you think, @jodhi? Would that resolve your concern?
-
Moving this to a discussion as I do not see it as a priority for the time being.
-
Something important to realize, folks, is that with private IPs only on the agent nodes, automatic upgrades of the nodes and of k3s, as well as the fetching of container images, go away! Which is kind of bad; please correct me if I am missing something.
-
Hello,
-
I'd really like my whole cluster, both control planes and workers, to only have private IPs assigned. Regarding egress, I would assume there would be no need for any two-step Terraform setup as mentioned in earlier posts. That mainly would leave two concerns:
Regarding a bastion host, I assume there might be other use cases, e.g. with an LB placed only within the private network for the API and a bastion with a public IP, so that the Kubernetes API is not public at all. I would, however, not put too much thought into that, since for daily operations I could rather create a tunnel from within the cluster (or e.g. use Tailscale) and just spin up a bastion instance for rescue purposes if that tunnel went down for some reason.
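On the egress point, a minimal sketch of how a NAT gateway route is typically wired on Hetzner, assuming a gateway host at 10.0.0.2 that has a public IP and handles the masquerading itself:

```hcl
# Default route for the private network: send all outbound traffic to a
# NAT gateway host, which must have a public IP and masquerade the
# traffic itself (e.g. via iptables MASQUERADE or a pfSense appliance).
resource "hcloud_network_route" "nat_gateway" {
  network_id  = hcloud_network.private.id
  destination = "0.0.0.0/0"
  gateway     = "10.0.0.2"
}
```

With such a default route in place, private-only nodes could still pull images and run automatic upgrades, which would address the concern raised above.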
-
Did this issue ever get concluded? I do not see that we have any way yet to set up nodes with proper private IPs instead of public ones. I still see the worker nodes and control plane nodes having public IPs.
-
I have encountered the same desire to eliminate the use of public IPs. As I began refactoring the code to handle private-only control and agent nodes through configuration, I learned two key points: a) Hetzner requires NAT or public IPs to communicate with the internet, and b) more parts of the code need modification. Since I have already invested some effort into this, and since I can envision running a single-point-of-failure (SPOF) pfSense in my network (which also supports remote access) until some high-availability (HA) options are feasible, I wanted to ask whether there is any interest in extending the project. Specifically, this extension would make the assignment of public/private IPs configurable for those who manage the NAT component themselves.
-
Hi everyone, I've been working on this feature. Could you please give some feedback on #1567? Thanks!
-
Hello, I just saw the news on the Hetzner page regarding flexible networking.
And since kube-hetzner is already using private IPs by default, I think it should be straightforward to implement worker nodes with only a private IP (still not sure).
What do you guys think about worker nodes without a public IP?
PS: I can try to create a PR if that makes sense.