
Exporter running in a separate Docker container #25

Open
ivan-pinatti opened this issue Nov 7, 2017 · 23 comments
@ivan-pinatti

Hi,

If I understood correctly, the Docker notes mention that you can't run the exporter in a separate container because you can't access varnishstat.

How about using docker exec to access the varnishstat binary from another container?

A small example of how it could be done:

docker run -it \
           -v /var/run/docker.sock:/var/run/docker.sock \
           ubuntu:latest \
           sh -c "apt-get update ; apt-get install docker.io -y ; docker exec -it my-varnish-container varnishstat -h"

In this manner you can access varnishstat from another container and have the metrics exposed for Prometheus scraping.
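
For reference, here is the wrapper idea as a standalone script (a sketch; it assumes the exporter container has the Docker CLI installed and VARNISH_CONTAINER set, and it drops -it since a non-interactive caller has no TTY):

#!/usr/bin/env bash
# Hypothetical /usr/local/bin/varnishstat shim inside the exporter container.
# Forwards every invocation to the real varnishstat in the Varnish container.
exec docker exec "${VARNISH_CONTAINER:?set this to the Varnish container name}" varnishstat "$@"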

Cheers!

@ivan-pinatti
Copy link
Author

A more elaborate and self-explanatory version:

docker run -it \
           -v /var/run/docker.sock:/var/run/docker.sock \
           -e "VARNISH_CONTAINER=my-varnish.dev" \
           ubuntu:latest \
           sh -c " \
            apt-get update \
            && apt-get install docker.io curl -y \
            && echo '#!/usr/bin/env bash' > /usr/local/bin/varnishstat \
            && echo 'docker exec -it \${VARNISH_CONTAINER} varnishstat \$@' >> /usr/local/bin/varnishstat \
            && chmod +x /usr/local/bin/varnishstat \
            && varnishstat -j \
            && curl -L -O https://github.com/jonnenauha/prometheus_varnish_exporter/releases/download/1.3.4/prometheus_varnish_exporter-1.3.4.linux-amd64.tar.gz \
            && tar zxfv prometheus_varnish_exporter-1.3.4.linux-amd64.tar.gz \
            && cd prometheus_varnish_exporter-1.3.4.linux-amd64 \
            && ./prometheus_varnish_exporter -test \
           "

varnishstat -j works smoothly, but I'm receiving this error from the exporter:

2017/11/07 20:00:04 prometheus_varnish_exporter v1.3.4 (8aee39a) {
  "ListenAddress": ":9131",
  "Path": "/metrics",
  "HealthPath": "",
  "VarnishstatExe": "varnishstat",
  "Params": {
    "Instance": "",
    "VSM": ""
  },
  "Verbose": false,
  "NoExit": false,
  "Test": true,
  "Raw": false
}
2017/11/07 20:00:04 [FATAL] Varnish version initialize failed: exit status 1

I'm using Ubuntu just for testing; this could be shipped with a much smaller FROM image.

@jonnenauha
Owner

Sorry, I forgot to respond to this! I still don't see this being very user friendly. The use case of running Varnish and its exporter in containers is probably quite rare. I understand that many Prometheus exporters are provided as Docker packages, but there is still no simple way to set up the comms between my process and Varnish when they run in different containers.

So I'm not going to focus on this. If someone does run Varnish in a container, they'll probably be better off running this exporter in the same container (there are probably supported ways of running multiple processes in a single container, though I don't know exactly what they are) and exposing its port to the outside world for scrapes.

If you do make progress here, please add more info in case someone else is interested.

@ivan-pinatti
Author

ivan-pinatti commented Dec 14, 2017

Hi @jonnenauha,

Actually, I think most Prometheus exporters take the approach of running as a separate container, and this is also Docker best practice: a highly specialized container running one process and its threads. It is possible to run multiple services inside one container, but I would rather avoid it.

I have several projects running on top of Kubernetes and Docker Compose, and I break my infrastructure up into layers: monitoring, logging, app, database, caching, and so on.
Thus, when developers spin up their environment, they spin up only the layers they need to work on; most of the time there is no need for monitoring and logging. By doing this I'm saving time and reducing cost.

What I basically showed previously is that you can access the varnishstat binary running in one container from another container, and I think that is basically all you need to collect Varnish metrics, isn't it?

I wrote another example to show how easy it would be for users:

1 - Create a temporary folder and place the files below inside it.

2 - Create a Dockerfile (this would be your Docker image) with:

FROM ubuntu:latest

RUN apt-get update \
    && apt-get install docker.io curl -y

# Note: the backslashes from the docker run version are removed here so the
# variables expand at runtime, and -it is dropped because the exporter
# invokes this wrapper without a TTY.
RUN echo '#!/usr/bin/env bash' > /usr/local/bin/varnishstat \
    && echo 'docker exec ${VARNISH_CONTAINER} varnishstat "$@"' >> /usr/local/bin/varnishstat \
    && chmod +x /usr/local/bin/varnishstat

RUN mkdir -p /opt/prometheus_varnish_exporter \
    && cd /opt/prometheus_varnish_exporter \
    && curl -L -O https://github.com/jonnenauha/prometheus_varnish_exporter/releases/download/1.3.4/prometheus_varnish_exporter-1.3.4.linux-amd64.tar.gz \
    && tar zxfv prometheus_varnish_exporter-1.3.4.linux-amd64.tar.gz \
    && ln -s /opt/prometheus_varnish_exporter/prometheus_varnish_exporter-1.3.4.linux-amd64/prometheus_varnish_exporter /usr/local/bin/

ENTRYPOINT ["prometheus_varnish_exporter"]
CMD ["-h"]

3 - Create a docker-compose.yaml file

version: '3'

services:

  varnish:
    image: million12/varnish:latest
    container_name: varnish.dev
    tty: true
    stdin_open: true

  varnish-exporter:
    build: ./
    container_name: varnish-exporter.dev
    tty: true
    stdin_open: true
    depends_on:
      - varnish
    environment:
      VARNISH_CONTAINER: varnish.dev
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command:
      - -web.listen-address=:9131

4 - Run the command docker-compose up

5 - See the results

In my example we build the Docker image locally; a user would just need to point at your future Docker image and set the environment variable for the Varnish service container. Simple as that! :)
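
As a quick smoke test (a sketch; note that the compose file above does not publish the exporter's port, so a ports: entry such as "9131:9131" on the varnish-exporter service would be needed to reach it from the host):

docker-compose up --build -d
# assumes port 9131 has been published to the host
curl -s http://localhost:9131/metrics | head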

Cheers,

Ivan Pinatti

@svenwltr

I think Prometheus is also used very often with Kubernetes, and using exec isn't a workable solution there.

I tried to figure out how varnishstat actually gets its data. I also noticed the command line flag -N with a rather vague description. Since I am not very familiar with C, I ran strace varnishstat -1 and saw that varnishstat opens the file /var/lib/varnish/$hostname/_.vsm. I tested varnishstat -1 -N /var/lib/varnish/$hostname/_.vsm and it worked. Providing gibberish for -N fails with an error.

So splitting varnishd and varnishstat into two containers might be achieved by sharing that file. I will investigate this further.
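
Condensed, the investigation above looks like this (a sketch; the path assumes Varnish's default working directory):

# watch which files varnishstat opens
strace varnishstat -1 2>&1 | grep '_.vsm'

# point varnishstat at the shared-memory file explicitly
varnishstat -1 -N "/var/lib/varnish/$(hostname)/_.vsm"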

@svenwltr

I tried to implement this, but failed because varnishstat demanded a running varnishd. Fortunately this idea is already implemented in lswith/varnish_exporter.

I don't know why that fork has diverged from this repo.

@ivan-pinatti
Author

Looks good @svenwltr, I will try it next week.

Thanks!

@jonnenauha
Owner

Yeah, that repo might work, but it's lacking a lot of features and support for new Varnish versions. You can, however, clone it and just paste in all the .go files from my repo; it might work if he did not modify the code heavily :)

@svenwltr

It turns out that my previous try did not succeed because I had Varnish v4 for the actual proxy and Varnish v5 for the exporter. With matching versions, sharing /var/lib/varnish between the proxy and the exporter container just works.

Here is an example:

(the exporter and the server shouldn't be in the same image, but all the public Docker images still use Varnish v4)

FROM golang:1.10-alpine as exporter

RUN apk add --no-cache git
RUN go get github.com/jonnenauha/prometheus_varnish_exporter

WORKDIR /go/src/github.com/jonnenauha/prometheus_varnish_exporter
RUN git checkout 1.4
# Build the 1.4 checkout over the binary that go get installed, so the
# COPY below picks up the right version.
RUN go build -o /go/bin/prometheus_varnish_exporter

FROM alpine:3.7

RUN apk --no-cache add varnish 

COPY config/default.vcl /etc/varnish/default.vcl
COPY --from=exporter /go/bin/prometheus_varnish_exporter /usr/local/bin

ENTRYPOINT /usr/sbin/varnishd

Kubernetes deployment:

---
apiVersion: extensions/v1beta1
kind: Deployment

metadata:
  name: proxy
  labels:
    team: platform
    app: proxy

spec:
  replicas: 3

  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1

  selector:
    matchLabels:
      app: proxy

  template:
    metadata:
      name: proxy
      labels:
        app: proxy
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9131"

    spec:
      containers:
      - name: proxy
        image: "varnish-exporter"
        imagePullPolicy: Always

        ports:
        - name: http
          containerPort: 80

        command:
        - /usr/sbin/varnishd
        args:
        - -f/etc/varnish/default.vcl
        - -F            # foreground
        - -a0.0.0.0:80  # listen address
        - -smalloc,300M # storage backend

        resources:
          requests:
            cpu: 2
            memory: 1100Mi
          limits:
            cpu: 2
            memory: 1400Mi

        volumeMounts:
          - name: var
            mountPath: /var/lib/varnish

      - name: exporter
        image: "varnish-exporter"
        imagePullPolicy: Always

        command:
        - /usr/local/bin/prometheus_varnish_exporter

        ports:
        - name: metrics
          containerPort: 9131

        volumeMounts:
        - name: var
          mountPath: /var/lib/varnish

      volumes:
      - name: var
        emptyDir: {}
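
To try this out (a sketch; the manifest filename is a placeholder, and port-forwarding picks one of the three replicas):

kubectl apply -f proxy-deployment.yaml
kubectl port-forward deployment/proxy 9131:9131 &
curl -s http://localhost:9131/metrics | head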

@strowi

strowi commented Sep 20, 2018

Hi,

we just moved our exporter into a separate container:

version: '2'

services:
  varnish:
    image: xyz/sys/docker/varnish:latest
    restart: always
    ipc: host
    network_mode: host
    mem_limit: 240g
    volumes:
      - ./src:/etc/varnish
      - varnish_tmpfs:/usr/var/varnish
    environment:
      BIND_PORT: :80
      CACHE_SIZE: 180G
      VCL_CONFIG: /etc/varnish/default.vcl
      VARNISHD_PARAMS: -p default_ttl=120 -p default_grace=3600 -S /etc/varnish/secret -T 127.0.0.1:6082 -p vcl_dir=/etc/varnish/

  varnishexporter:
    image: xyz/sys/docker/varnish:latest
    restart: always
    mem_limit: 100m
    ipc: host
    network_mode: host
    depends_on:
      - varnish
    volumes:
      - ./src:/etc/varnish:ro
      - varnish_tmpfs:/usr/var/varnish/:ro
    command: prometheus_varnish_exporter

volumes:
  varnish_tmpfs:
    driver_opts:
      type: tmpfs
      device: tmpfs

Initially it worked fine for about 1-2 days.
Then Varnish reached its container memory limit, got OOM-killed, and restarted.
Since then we only get the basic Varnish metrics:

# HELP varnish_mgt_child_died Child process died (signal)
# TYPE varnish_mgt_child_died gauge
varnish_mgt_child_died 1
# HELP varnish_mgt_child_dump Child process core dumped
# TYPE varnish_mgt_child_dump gauge
varnish_mgt_child_dump 0
# HELP varnish_mgt_child_exit Child process normal exit
# TYPE varnish_mgt_child_exit gauge
varnish_mgt_child_exit 0
# HELP varnish_mgt_child_panic Child process panic
# TYPE varnish_mgt_child_panic gauge
varnish_mgt_child_panic 0
# HELP varnish_mgt_child_start Child process started
# TYPE varnish_mgt_child_start gauge
varnish_mgt_child_start 2
# HELP varnish_mgt_child_stop Child process unexpected exit
# TYPE varnish_mgt_child_stop gauge
varnish_mgt_child_stop 0
# HELP varnish_mgt_uptime Management process uptime
# TYPE varnish_mgt_uptime gauge
varnish_mgt_uptime 164803
# HELP varnish_up Was the last scrape of varnish successful.
# TYPE varnish_up gauge
varnish_up 1
# HELP varnish_version Varnish version information
# TYPE varnish_version gauge
varnish_version{major="6",minor="0",patch="1",revision="8d54bec5330c29304979ebf2c425ae14ab80493c",version="6.0.1"} 1

After restarting the Varnish container manually, it works again.
Not sure where exactly the problem lies...

@strowi

strowi commented Sep 21, 2018

Ignore my previous comment; I forgot to export VSM_NOPID (https://varnish-cache.org/docs/trunk/reference/vsm.html#vsm-and-containers).
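
In other words, the fix is just setting that variable in the exporter container before starting it (a sketch; per the linked docs, VSM_NOPID makes the Varnish tools skip the varnishd PID check, which fails across PID namespaces):

export VSM_NOPID=1   # skip the varnishd PID check across container namespaces
prometheus_varnish_exporter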

@jlvale

jlvale commented May 13, 2019

How is it going with running this in a separate Docker container?

@jlvale

jlvale commented May 13, 2019

(quoting @ivan-pinatti's earlier docker run example and the "[FATAL] Varnish version initialize failed: exit status 1" output in full)

How did you correct the "Varnish version initialize failed: exit status 1" error?

@jonnenauha
Owner

jonnenauha commented May 15, 2019

Check out the README; that is now the supported way of doing this. The exporter runs on the host, but Varnish can run in a Docker container.

https://github.com/jonnenauha/prometheus_varnish_exporter#docker

If you want to run the exporter in a separate container as well, I'm not sure how to do this. It seems you are making the varnishstat binary available in the exporter container. That alone does not cut it, as the binary will use unix sockets or some other mechanism to talk to the actual varnish process (not entirely sure what the route is here).

If you run your test's varnishstat -j by hand, you should see whether it also fails (it seems to return a success exit code)? Or does it get the JSON stats? If it does, then the exporter should as well. Try -verbose to get more info about the error.
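
One more thing worth checking (an assumption on my part, not confirmed in this thread): the wrapper script above uses docker exec -it, and the -t flag fails whenever no TTY is attached, which is exactly how the exporter invokes varnishstat as a subprocess. A quick way to reproduce that condition:

# pipe stdin so the wrapper runs without a TTY, as the exporter would run it
echo | varnishstat -j > /dev/null; echo "exit: $?"
# a non-zero exit here, while an interactive run works, suggests
# dropping -it (or at least -t) from the docker exec wrapper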

@strowi

strowi commented May 15, 2019

Hi,

we have had the exporter running in a separate container, and it's been working for about a year now.
Just build your Varnish image and add the exporter's binary.

Then run it like this:

version: '2'

services:
  varnish:
    image: cktest.net/sys/docker/varnish:latest
    restart: always
    network_mode: host
    mem_limit: 240g
    volumes:
      - ./src:/etc/varnish
      - varnish_tmpfs:/var/lib/varnish
    environment:
      MALLOC_CONF: lg_dirty_mult:8,lg_chunk:18
      BIND_PORT: :80
      CACHE_SIZE: 180G
      VCL_CONFIG: /etc/varnish/default.vcl
      VARNISHD_PARAMS: -p default_ttl=120 -p default_grace=3600 -p nuke_limit=99999 -S /etc/varnish/secret -T 127.0.0.1:6082 -p vcl_dir=/etc/varnish/

  varnishexporter:
    image: cktest.net/sys/docker/varnish:latest
    restart: always
    mem_limit: 100m
    network_mode: host
    environment:
      VSM_NOPID: 1
    depends_on:
      - varnish
    volumes:
      - ./src:/etc/varnish:ro
      - varnish_tmpfs:/var/lib/varnish/:ro
    command: prometheus_varnish_exporter

volumes:
  varnish_tmpfs:
    driver_opts:
      type: tmpfs
      device: tmpfs

The exporter might restart a couple of times because Varnish is not up fast enough, but after a couple of seconds you should be able to get the metrics from port 9131.

regards,
Roman

@jonnenauha
Owner

jonnenauha commented May 15, 2019

Cool, thanks.

It looks like you are sharing /var/lib/varnish with both service containers. That location has the _.vsm file that, afaik, varnishstat uses to connect to Varnish.

Do you have any insight into how the connection part works? The /socket/varnish.socket doesn't look shared, so how can the exporter image connect there? What am I missing, is there some secret sauce somewhere? 😄

You can add prometheus_varnish_exporter -no-exit to not exit on scrape errors; that should fix the restart cycle at the beginning.
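
Applied to the compose file above, that is just (a sketch):

    command: prometheus_varnish_exporter -no-exit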

edit: the _.vsm file seems to have an IP:PORT combo and points to a secret file. Do the services share full networking with each other?

@jonnenauha
Owner

Also, does VSM_NOPID: 1 tell the exporter container not to actually run Varnish itself? It looks like it's the same image, and you probably just want the varnishstat binary (and possibly others) from there?

@strowi

strowi commented May 15, 2019

Yes, we've put the _.vsm in a shared location; this way we could also plug in another varnishlog container, since we don't want to run a command like varnishlog manually inside the Varnish container.

The /socket/varnish.socket doesn't play a role in this (I removed it from the compose file above).
That was from a test connecting our nginx SSL terminator via socket instead of TCP.

ONLY the VSM is needed, and the PID namespace info has to be ignored. (This won't work with multiple Varnish instances on the same host, but that would be a bad idea anyway, I think.)

See the Varnish VSM docs linked above for reference; the -T parameter is for the CLI address.

We build the exporter into the main image for simplicity, since you need the CLI commands for both, etc.

I've put up a cleaned up version here: https://github.com/strowi/varnish

@jlvale

jlvale commented May 16, 2019

(quoting @strowi's docker-compose setup from above in full)

Nice, thanks for the explanation. One more thing: how do you manage to get back to the terminal after starting the binary, without killing the process?

@strowi

strowi commented May 16, 2019

What terminal? There is only one process running per container, as it should be per Docker best practice.

@jlvale

jlvale commented May 16, 2019

We chose to run the Varnish exporter as a daemon inside the Varnish container; it's been working fine so far.

squaremo added a commit to squaremo/varnish-prometheus-exporter-docker that referenced this issue Aug 28, 2019
This is a Dockerfile based on the comment(s) here:
jonnenauha/prometheus_varnish_exporter#25 (comment)

.. but adapted so that it can be run with Varnish 6.2.
@ironmike-au

ironmike-au commented Jul 12, 2021

If you're trying to run Varnish and this exporter in separate containers within the same pod in Kubernetes, you need to enable process namespace sharing and share the Varnish working directory, as described by @strowi above.
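
A minimal sketch of the relevant pod spec (image names are placeholders; shareProcessNamespace is the stock Kubernetes field for PID namespace sharing, and the in-memory emptyDir mirrors the tmpfs volume from the compose examples above):

spec:
  shareProcessNamespace: true    # exporter can see varnishd's PID namespace
  containers:
  - name: varnish
    image: my-varnish-image      # placeholder
    volumeMounts:
    - name: vsm
      mountPath: /var/lib/varnish
  - name: exporter
    image: my-varnish-image      # placeholder; must contain prometheus_varnish_exporter
    command: ["prometheus_varnish_exporter"]
    volumeMounts:
    - name: vsm
      mountPath: /var/lib/varnish
      readOnly: true
  volumes:
  - name: vsm
    emptyDir:
      medium: Memory             # tmpfs-backed working directory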

@autokilla47

Is there anything new on this issue?

@strowi

strowi commented Jun 2, 2023

Don't know what you mean; there is a solution in here that works if used correctly.

Not sure why this is still open...
