Merge pull request #662 from paulfantom/disable-remote-read

remove remote_read option and point to use direct queries for data retrieval

paulfantom authored Nov 28, 2022
2 parents (879a29d, 24d303e), commit 11b6ee0

Showing 4 changed files with 17 additions and 7 deletions.
chart/Chart.yaml (1 addition, 1 deletion)

@@ -12,7 +12,7 @@ keywords:
 - monitoring
 - tracing
 - opentelemetry
-version: 17.24.0
+version: 18.0.0
 # TODO(paulfantom): Enable after kubernetes 1.22 reaches EOL (2022-10-28)
 # kubeVersion: ">= 1.23.0"
 dependencies:
chart/README.md (2 deletions)

@@ -253,8 +253,6 @@ of the [Promscale](https://github.com/timescale/promscale) repo.
 | `kube-prometheus-stack.prometheus.prometheusSpec.scrapeTimeout` | Prometheus scrape timeout | `10s` |
 | `kube-prometheus-stack.prometheus.prometheusSpec.evaluationInterval` | Prometheus evaluation interval | `1m` |
 | `kube-prometheus-stack.prometheus.prometheusSpec.retention` | Prometheus data retention | `1d` |
-| `kube-prometheus-stack.prometheus.prometheusSpec.remoteRead[0].readRecent` | Whether reads should be made for queries for time ranges that the local storage should have complete data for. | `true` |
-| `kube-prometheus-stack.prometheus.prometheusSpec.remoteRead[0].url` | The Prometheus URL of the endpoint to query from. | `"http://{{ .Release.Name }}-promscale.{{ .Release.Namespace }}.svc:9201/read"` |
 | `kube-prometheus-stack.prometheus.prometheusSpec.remoteWrite[0].queueConfig.batchSendDeadline` | BatchSendDeadline is the maximum time a sample will wait in buffer. | `"30s"` |
 | `kube-prometheus-stack.prometheus.prometheusSpec.remoteWrite[0].queueConfig.capacity` | Capacity is the number of samples to buffer per shard before we start dropping them. | `100000` |
 | `kube-prometheus-stack.prometheus.prometheusSpec.remoteWrite[0].queueConfig.maxBackoff` | MaxBackoff is the maximum retry delay. | `"10s"` |
chart/values.yaml (3 additions, 4 deletions)

@@ -215,10 +215,9 @@ kube-prometheus-stack:
             operator: DoesNotExist
     # The remote_read spec configuration for Prometheus.
     # ref: https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#remotereadspec
-    remoteRead:
-      # - {protocol}://{host}:{port}/{endpoint}
-      - url: "http://{{ .Release.Name }}-promscale.{{ .Release.Namespace }}.svc:9201/read"
-        readRecent: false
+    # remoteRead:
+    #   - url: "http://{{ .Release.Name }}-promscale.{{ .Release.Namespace }}.svc:9201/read"
+    #     readRecent: false
     # The remote_write spec configuration for Prometheus.
     # ref: https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#remotewritespec
     remoteWrite:
docs/upgrades.md (13 additions)

@@ -8,6 +8,19 @@ Firstly upgrade the helm repo to pull the latest available tobs helm chart. We a

```sh
helm repo update
```

## Upgrading from 17.x to 18.x

To get the best performance out of Promscale, we recommend querying it directly. Since tobs already ships a Grafana datasource configured this way, there is no need to set the `remote_read` option in Prometheus, and it has been removed. This is a breaking change for anyone relying on `remote_read`. If you still need it, you can add it back by putting the following snippet into your `values.yaml` file:

```yaml
kube-prometheus-stack:
prometheus:
prometheusSpec:
remoteRead:
- url: "http://{{ .Release.Name }}-promscale.{{ .Release.Namespace }}.svc:9201/read"
readRecent: false
```
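For context, "querying Promscale directly" means pointing Grafana at Promscale's Prometheus-compatible HTTP API (served on port 9201) instead of routing reads through Prometheus via `remote_read`. The chart already provisions an equivalent datasource, so the following is purely an illustrative sketch; the release name `tobs` and namespace `default` are assumptions:

```yaml
# Illustrative Grafana datasource provisioning file (not the chart's own config).
# Assumes a release named "tobs" installed in the "default" namespace.
apiVersion: 1
datasources:
  - name: Promscale
    # Promscale speaks the Prometheus HTTP query API, so the standard
    # "prometheus" datasource type works against it.
    type: prometheus
    access: proxy
    url: http://tobs-promscale.default.svc:9201
```

With a datasource like this, Grafana issues PromQL queries straight to Promscale, which is the direct-query path this release standardizes on.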

## Upgrading from 16.x to 17.x
With `17.0.0` we decided to diverge from gathering metrics data only from
