Hi All,
I've been investigating Consul for service discovery. We want to use it for services deployed in Kubernetes (on-prem clusters deployed via kubespray and kubeadm) as well as for services that live on bare-metal VMs outside Kubernetes. I'll detail our cluster setup and what I've configured thus far.
TL;DR - an HAProxy LB points to HAProxy ingress controller nodes across multiple clusters. Traffic is routed via host headers, with ingress objects using path prefixes. We want to use Consul purely for service discovery, with consul-template configured to loop through the services and map them to the respective ingress controller nodes.
Traffic flows into our clusters via an external load balancer (LB), HAProxy in our case. We have Polaris GSLB as the authoritative DNS server for the subdomain .dev.company.com. The parent domain .company.com is configured in AD DNS and handled by another tech department. Polaris has records for all the clusters (prod-cluster-1.dev.company.com, prod-cluster-2.dev.company.com, etc.) and some independent services (app.dev.company.com, app2.dev.company.com, etc.) that all just point back to the external HAProxy load balancer. Once traffic reaches the load balancer, we have config that maps host headers to backends (sketched below).
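That mapping is nothing more exotic than host-header ACLs, along these lines (hostnames and backend names here are illustrative, not our exact config):

```
frontend fe_http
    bind *:80
    # route on the Host header sent by the client
    acl host_app   hdr(host) -i app.dev.company.com
    acl host_app2  hdr(host) -i app2.dev.company.com
    use_backend be_app   if host_app
    use_backend be_app2  if host_app2
```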
With the introduction of Consul, I've deployed a Consul server on a Linux VM.
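It runs as a plain systemd unit pointing at a config directory, roughly like this (binary path, user and directories are the usual defaults rather than anything specific to us):

```
# /etc/systemd/system/consul.service
[Unit]
Description=Consul agent (server mode)
Requires=network-online.target
After=network-online.target

[Service]
User=consul
Group=consul
ExecStart=/usr/local/bin/consul agent -config-dir=/etc/consul.d/
ExecReload=/usr/local/bin/consul reload
Restart=on-failure

[Install]
WantedBy=multi-user.target
```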
The consul.hcl is also very standard.
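Roughly the following, with the datacenter name and addresses as placeholders:

```hcl
# /etc/consul.d/consul.hcl
datacenter       = "dc1"          # placeholder
data_dir         = "/opt/consul"
server           = true
bootstrap_expect = 1              # single server VM for now
bind_addr        = "10.0.0.10"    # placeholder: this VM's address
client_addr      = "0.0.0.0"
ui               = true
```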
For consul-k8s, I've deployed the catalog sync service via the Helm chart (currently syncing all services).
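The relevant Helm values look roughly like this (exact keys vary a little by chart version; the join address and k8sTag are placeholders, and k8sTag is what gives every synced service its cluster tag):

```yaml
global:
  enabled: false              # Consul servers run outside the cluster
  datacenter: dc1             # placeholder

client:
  enabled: true
  join:
    - "10.0.0.10"             # placeholder: the external Consul server VM

syncCatalog:
  enabled: true
  default: true               # sync every service unless annotated otherwise
  toConsul: true
  toK8S: false
  k8sTag: prod-cluster-1      # placeholder: the "cluster tag" referenced below
  nodePortSyncType: ExternalOnly
```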
Once the catalog sync on consul-k8s starts syncing services, I use consul-template on the HAProxy LB to essentially map the services to the ingress controller NodePort services that carry the same cluster tag.
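A stripped-down version of the template is below; the ingress controller service name, the cluster tag and the hostname convention are placeholders rather than our exact values:

```
# haproxy.cfg.ctmpl (sketch)
frontend fe_http
    bind *:80
{{ range services }}{{ if .Tags | contains "prod-cluster-1" }}    use_backend be_prod_cluster_1_ingress if {{ "{" }} hdr(host) -i {{ .Name }}.dev.company.com {{ "}" }}
{{ end }}{{ end }}

# The HAProxy ingress controller's NodePort service, also synced into Consul,
# resolves to the ingress controller nodes and their NodePort.
backend be_prod_cluster_1_ingress
    balance roundrobin
{{ range service "haproxy-ingress-prod-cluster-1" }}    server {{ .Node }} {{ .Address }}:{{ .Port }} check
{{ end }}
```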
All of this gives us the list of services we want discoverable in Consul, with the HAProxy LB picking up every service and mapping the ingress controller nodes and ports against it.
Enabling the ingress sync option on consul-k8s is great, but I've noticed it only exposes one of the hostnames of an ingress object (example below). Ideally, with a multi-cluster setup, we would want services accessible via friendly names like app.dev.company.com but also via app.dc1.dev.company.com. Most of the chatter online seems to be about using Consul DNS and the .consul domain for queries. I don't particularly like that approach; I don't want to introduce another arbitrary domain into our setup.
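For context, a typical ingress object on our clusters carries both hostnames, something like the sketch below (names and API version are illustrative); only the first host ends up in Consul:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
spec:
  rules:
    - host: app.dev.company.com        # friendly name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
    - host: app.dc1.dev.company.com    # cluster-scoped name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
```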
I've yet to see many others use Consul and Kubernetes in this way. Is what we're doing wrong, or are we missing something? How are others using Consul to expose services, and what other tooling is used to get traffic to those services on on-prem clusters?
Please let me know if I've missed any details.