
Feature: Use existing s3 backup buckets in firehose module #89

Closed · stefancarlpeiser opened this issue on Sep 8, 2023 · 5 comments

@stefancarlpeiser

Hello!

The firehose module has a variable s3_backup_custom_name. Would it be possible to update the module so that, if the backup bucket already exists, it uses that bucket instead of trying to create a new one?

Or perhaps add a new variable s3_backup_existing_bucket, plus an accompanying s3_backup_role_arn for the IAM role used to access that bucket?

In my case I want to use a centralized, pre-existing bucket for backups, but the module doesn't account for this scenario.
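For illustration only, the interface being suggested here might look roughly like the sketch below. The variable names s3_backup_existing_bucket and s3_backup_role_arn come from this comment; the conditional-creation wiring and the placeholder bucket name are just one assumed way a module could honour them, not how the Coralogix module is actually implemented.

variable "s3_backup_existing_bucket" {
  description = "Name of a pre-existing bucket to use for Firehose backups; null to let the module create one"
  type        = string
  default     = null
}

variable "s3_backup_role_arn" {
  description = "ARN of an IAM role Firehose can assume to write to the existing backup bucket"
  type        = string
  default     = null
}

# Only create a backup bucket when no existing bucket was supplied.
resource "aws_s3_bucket" "backup" {
  count  = var.s3_backup_existing_bucket == null ? 1 : 0
  bucket = "firehose-backup-example" # hypothetical name, standing in for s3_backup_custom_name
}

locals {
  # Either the caller-supplied bucket or the one created above.
  backup_bucket_name = coalesce(var.s3_backup_existing_bucket, one(aws_s3_bucket.backup[*].bucket))
}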

@MichaelBriggs-Coralogix
Contributor

Thanks for the feedback; I've opened an enhancement request to review this option. If this is a critical requirement for your business, please follow up with your Support or Sales representative so we can prioritize this request appropriately.

@stefancarlpeiser
Author

No worries, it's more of a suggestion, so take your time.

@Gershon-A

+1

@ryantanjunming
Contributor

Hi @stefancarlpeiser, it took a while, but the import block introduced in Terraform 1.5 looks useful here. Because Terraform is declarative, a conditionally created data block can't easily be referenced. The following works, but it does carry the risk of replacing your existing bucket.

variable "new_bucket_name" {
  type    = string
  default = "new-bucket-test"
}

variable "exisiting_bucket_name" {
  type    = string
  default = "existing-bucket-test"
}

# Import the pre-existing bucket into state when a name is supplied.
# Note: import blocks arrived in Terraform 1.5, but for_each on an import
# block needs a newer release (1.7+), and for_each expects a set or map.
import {
  for_each = var.existing_bucket_name != null ? toset([var.existing_bucket_name]) : toset([])
  to       = aws_s3_bucket.data_bucket
  id       = each.value
}

resource "aws_s3_bucket" "data_bucket" {
  bucket = var.exisiting_bucket_name != null ? var.exisiting_bucket_name : var.new_bucket_name
}

## Test referencing the bucket ID
resource "aws_s3_bucket_public_access_block" "firehose_bucket_bucket_access" {
  bucket = aws_s3_bucket.data_bucket.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
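If the shared bucket must never be torn down by this stack, one way to blunt the replacement risk mentioned above is a lifecycle guard on aws_s3_bucket.data_bucket. This is an addition to the example, not part of the original; it is the same resource restated with one extra block:

resource "aws_s3_bucket" "data_bucket" {
  bucket = var.existing_bucket_name != null ? var.existing_bucket_name : var.new_bucket_name

  lifecycle {
    # Refuse any plan that would destroy (and therefore replace) the shared backup bucket.
    prevent_destroy = true
  }
}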

@ryantanjunming
Contributor

fixed with #175
