Exporter running in a separate Docker container #25
A more elaborate and self-explanatory version: …
varnishstat -j works smoothly, but I'm receiving this error from the exporter: …
I'm using Ubuntu just for testing; it could be shipped with a much smaller FROM Docker image.
Sorry, forgot to respond to this! I still don't see this being very user-friendly. The use case of running Varnish and its exporter in containers is probably quite rare. I understand that many Prometheus exporters are provided as Docker packages, but there is still no simple way to set up the comms between my process and Varnish running in different containers, so I'm not going to focus on this. If someone does run Varnish in a container, they'll probably be better off running this exporter in the same container (there are probably supported ways of running multiple processes in a single container, though I don't know exactly what) and exposing its port to the outside world for scrapes. If you do make some progress here, you can add more info here in case someone else is interested.
Hi @jonnenauha,

Actually, I think most Prometheus exporters use the approach of running as a separate container; this is also Docker's best practice: a highly specialized container running one process and its threads. It is possible to run multiple services inside one container, but I would rather avoid it. I have several projects running on top of Kubernetes and Docker Compose, and I break my infrastructure up into layers: monitoring, logging, app, database, caching, and so on. What I basically showed previously is that you can access the varnishstat binary running in one container from another container, and I think that is basically what you need to collect Varnish metrics, isn't it?

I did another example to show how easy it would be for users (a sketch of the files follows these steps):
1 - Create a temporary folder and place the files below inside it.
2 - Create a Dockerfile (this would be your Docker image).
3 - Create a docker-compose.yaml file.
4 - Run the command docker-compose up.
5 - See the results.

In my example we are building the Docker image locally; a user would just need to point to your future Docker image and set the environment variable for the Varnish service container. Simple as that! :)

Cheers,
Ivan Pinatti
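The files themselves were not preserved in this thread. A sketch of what they might have looked like, assuming the exporter image shadows varnishstat with a wrapper that forwards every call into the Varnish container over a mounted Docker socket; the image names, the wrapper script, and the VARNISH_CONTAINER variable are all hypothetical:

#!/bin/sh
# varnishstat -- wrapper placed on the exporter's PATH; forwards every call
# into the varnish container named by VARNISH_CONTAINER (default "varnish")
exec docker exec "${VARNISH_CONTAINER:-varnish}" varnishstat "$@"

# Dockerfile -- exporter image (Ubuntu here only because that is what the
# comment above used for testing; the binary is assumed to be in the folder)
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y docker.io && rm -rf /var/lib/apt/lists/*
COPY varnishstat /usr/local/bin/varnishstat
COPY prometheus_varnish_exporter /usr/local/bin/prometheus_varnish_exporter
RUN chmod +x /usr/local/bin/varnishstat /usr/local/bin/prometheus_varnish_exporter
CMD ["prometheus_varnish_exporter"]

# docker-compose.yaml
version: "2"
services:
  varnish:
    image: varnish-image          # hypothetical; any image running varnishd
    container_name: varnish
  exporter:
    build: .
    environment:
      - VARNISH_CONTAINER=varnish
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # lets docker exec reach the varnish container
    ports:
      - "9131:9131"

Because the wrapper is found on the PATH before any real varnishstat, the exporter's own varnishstat invocations transparently run inside the Varnish container.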
I think Prometheus is also very often used with Kubernetes, and using exec isn't a working solution there. I tried to figure out how … So splitting …
I tried to implement this, but I failed. I don't know why the fork is diverging from this repo.
Looks good @svenwltr, I will try it next week. Thanks!
Yeah, that repo might work, but it's lacking a lot of features and support for new Varnish versions. You can, however, clone it and then just paste all the .go files from my repo into it; it might work if he did not modify the code heavily :)
It turns out that my previous try did not succeed because I had Varnish v4 for the actual proxy and Varnish v5 for the exporter. This means sharing /var/lib/varnish between the containers only works when both run the same Varnish version.

Here is an example (the exporter and the server shouldn't be built into the same image, but all the public Docker images still use Varnish v4):

FROM golang:1.10-alpine as exporter
RUN apk add --no-cache git
RUN go get github.com/jonnenauha/prometheus_varnish_exporter
WORKDIR /go/src/github.com/jonnenauha/prometheus_varnish_exporter
RUN git checkout 1.4
RUN go install # installs the checked-out 1.4 binary into /go/bin, which the COPY below expects (plain "go build" would leave it in the workdir and the COPY would pick up the older go-get build)
FROM alpine:3.7
RUN apk --no-cache add varnish
COPY config/default.vcl /etc/varnish/default.vcl
COPY --from=exporter /go/bin/prometheus_varnish_exporter /usr/local/bin
ENTRYPOINT /usr/sbin/varnishd

Kubernetes deployment:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: proxy
  labels:
    team: platform
    app: proxy
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      app: proxy
  template:
    metadata:
      name: proxy
      labels:
        app: proxy
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9131"
    spec:
      containers:
      - name: proxy
        image: "varnish-exporter"
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 80
        command:
        - /usr/sbin/varnishd
        args:
        - -f/etc/varnish/default.vcl
        - -F # foreground
        - -a0.0.0.0:80 # listen address
        - -smalloc,300M # storage backend
        resources:
          requests:
            cpu: 2
            memory: 1100Mi
          limits:
            cpu: 2
            memory: 1400Mi
        volumeMounts:
        - name: var
          mountPath: /var/lib/varnish
      - name: exporter
        image: "varnish-exporter"
        imagePullPolicy: Always
        command:
        - /usr/local/bin/prometheus_varnish_exporter
        ports:
        - name: metrics
          containerPort: 9131
        volumeMounts:
        - name: var
          mountPath: /var/lib/varnish
      volumes:
      - name: var
        emptyDir: {}
Hi, we just moved our exporter into a separate container: …
Initially it worked fine for about 1-2 days, then it failed.
After restarting the varnish container manually, it works again.
Ignore my previous comment… I forgot to export "VSM_NOPID" (https://varnish-cache.org/docs/trunk/reference/vsm.html#vsm-and-containers).
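For reference, the fix is just exporting that variable in the exporter container's environment; per the linked docs only its presence should matter, the value here is arbitrary. As a sketch, in a compose file for the exporter service (service layout hypothetical):

services:
  exporter:
    environment:
      - VSM_NOPID=true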
How is it going, running in a separate Docker container?
How did you correct the "Varnish version initialize failed: exit status 1" error?
Check out the readme; that is the now-supported way of doing this: the exporter runs on the host, but Varnish can run in a Docker container. https://github.com/jonnenauha/prometheus_varnish_exporter#docker
If you want to also run the exporter in a separate container, I'm not sure how to do this. Seems you are copying …
If you echo your test …
Hi, we have the exporter running in a separate container, and it's been working for about 1 year now. … Then run it like this: …
The exporter might restart a couple of times, because Varnish is not up fast enough, but after a couple of seconds you should be able to get the metrics from port 9131 (see the sketch below). Regards,
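The run commands were elided above; a sketch of what they might look like with plain docker run, assuming a single image that contains both varnishd and the exporter as described in this thread (the image name, volume, and hostname are hypothetical). The shared --hostname matters because varnishd writes its shared memory under /var/lib/varnish/<hostname>/, and varnishstat in the exporter container looks it up the same way:

docker volume create varnish-vsm
docker run -d --name varnish --hostname varnish \
  -v varnish-vsm:/var/lib/varnish \
  -p 80:80 my-varnish-image
docker run -d --name varnish-exporter --hostname varnish \
  -e VSM_NOPID=true \
  -v varnish-vsm:/var/lib/varnish \
  -p 9131:9131 \
  my-varnish-image /usr/local/bin/prometheus_varnish_exporter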
Cool, thanks. It looks like you are sharing … Do you have any insight into how the connection part works, the …? You can add …
edit: the .vsm file seems to have an IP:PORT combo and points to a secret file. Do the services share full networking with each other?
Also, does …
Yes, we've put the .vsm in a shared location; this way we could also plug in another varnishlog container, since we don't want to run a command like varnishlog manually inside the varnish container. Only the .vsm is needed, and the pid-namespace info has to be ignored (that is what VSM_NOPID above is for). This won't work with multiple Varnish instances on the same host, but that would be a bad idea anyway, I think. See here for reference: …
We build the exporter into the main image for simplicity, since you need the CLI commands for both etc. I've put up a cleaned-up version here: https://github.com/strowi/varnish
Nice, thanks for the explanation. One more thing: how do you manage to go back to the terminal after starting the binary, without killing the process?
What terminal? There is only one process running per container, as it should be per Docker best practice.
We chose to run the Varnish exporter as a daemon inside the varnish container; it's working fine so far (a sketch of such an entrypoint follows).
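A minimal sketch of such an entrypoint, assuming both binaries are baked into the image at these paths; varnishd daemonizes itself (no -F), so the exporter becomes the container's foreground process:

#!/bin/sh
# hypothetical entrypoint: start varnishd as a daemon, then the exporter in the foreground
/usr/sbin/varnishd -f /etc/varnish/default.vcl -a :80 -s malloc,256m
sleep 2   # give varnishd a moment to create its shared memory segment
exec /usr/local/bin/prometheus_varnish_exporter

One trade-off of this layout: if varnishd dies, the container keeps running as long as the exporter does, so health checks should watch Varnish itself rather than the container.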
This is a Dockerfile based on the comment(s) here: jonnenauha/prometheus_varnish_exporter#25 (comment), but adapted so that it can be run with Varnish 6.2 (sketched below).
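The adapted Dockerfile itself isn't reproduced in this thread; a sketch of how the earlier multi-stage build might be adapted, assuming an Alpine release whose varnish package is 6.2.x (the Alpine and Go versions are assumptions):

FROM golang:1.12-alpine as exporter
RUN apk add --no-cache git
RUN go get -d github.com/jonnenauha/prometheus_varnish_exporter
WORKDIR /go/src/github.com/jonnenauha/prometheus_varnish_exporter
RUN go install

FROM alpine:3.10
RUN apk --no-cache add varnish   # assumed to ship varnish 6.2.x on this Alpine release
COPY config/default.vcl /etc/varnish/default.vcl
COPY --from=exporter /go/bin/prometheus_varnish_exporter /usr/local/bin
ENTRYPOINT ["/usr/sbin/varnishd"]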
If you're trying to run Varnish and this exporter in separate containers within the same pod in Kubernetes, you need to enable process namespace sharing and share the Varnish working directory, as described by @strowi above; a sketch follows.
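A sketch of those two settings in a pod spec; the image names are placeholders and the rest of the deployment would look like the earlier example. shareProcessNamespace is a standard field on the pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: proxy
spec:
  shareProcessNamespace: true        # the exporter can see the varnishd process
  containers:
  - name: varnish
    image: my-varnish-image          # hypothetical
    volumeMounts:
    - name: var
      mountPath: /var/lib/varnish    # shared varnish working directory
  - name: exporter
    image: my-varnish-image          # hypothetical
    command: ["/usr/local/bin/prometheus_varnish_exporter"]
    ports:
    - containerPort: 9131
    volumeMounts:
    - name: var
      mountPath: /var/lib/varnish
  volumes:
  - name: var
    emptyDir: {}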
Is there anything new on this issue?
Don't know what you mean; there is a solution in here that works if used correctly. Not sure why this is still open…
Hi,
If I understood correctly, the Docker notes mention that you can't run the exporter in a separate container because you can't access varnishstat.
How about using docker exec to access the varnishstat binary from another container?
A little example of how it could be done:
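For illustration, presumably something along these lines, with "varnish" as a hypothetical container name:

docker exec varnish varnishstat -j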
In this manner you can access varnishstat from another container and have the metrics exposed for Prometheus scraping.
Cheers!