In this setup we integrate the secrets exercise with AWS EKS and let Pods consume secrets from the AWS SSM Parameter Store and AWS Secrets Manager. We use managed node groups because we don't want the hassle of managing the EC2 instances ourselves, and Fargate doesn't suit our needs since we use a StatefulSet. If you want to know more about integrating secrets with EKS, check EKS and SSM Parameter Store and EKS and Secrets Manager. Please make sure that the account in which you run this exercise either has CloudTrail enabled, or is not linked to your current organization and/or DTAP environment.
Have the following tools installed:
- AWS CLI - Installation
- eksctl - Installation
- tfenv (optional) - Installation
- Terraform CLI - Installation
- Wget - Installation
- Helm - Installation
- Kubectl - Installation
- jq - Installation
Make sure you have an active AWS account and that its credentials are configured on the system where you will execute the steps below. In this example we stored the credentials under an AWS profile named `awsuser`.
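For example, you could set up and verify such a profile like this (`awsuser` is just the example profile name used in this guide):

```bash
# Create (or update) the profile and enter your access key, secret key and default region.
aws configure --profile awsuser

# Verify the profile resolves to the intended account.
aws sts get-caller-identity --profile awsuser
```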
Please note that this setup relies on bash scripts that have been tested on macOS and Linux. We have no intention of supporting vanilla Windows at the moment.
If you want to host a multi-user setup, you will probably want to share the state file so that everyone can try related challenges. We have provided a starter to easily do so using a Terraform S3 backend.
First, create an S3 bucket (optionally add `-var="region=YOUR_DESIRED_REGION"` to the apply to use a region other than the default eu-west-1):
cd shared-state
terraform init
terraform apply
The bucket name should be in the output. Please use that to configure the Terraform backend in `main.tf`.
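A minimal sketch of what that backend block in `main.tf` could look like; the bucket name comes from the output above, and the `key` and `region` values shown here are assumptions you should adjust to your setup:

```hcl
terraform {
  backend "s3" {
    bucket = "<bucket-name-from-the-shared-state-output>"  # taken from the apply output
    key    = "wrongsecrets/terraform.tfstate"              # illustrative key, pick your own
    region = "eu-west-1"                                   # region of the state bucket
  }
}
```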
The Terraform code is loosely based on this EKS Managed Node Group TF example.
Note: Applying the Terraform means you are creating cloud infrastructure which actually costs you money. The authors are not responsible for any cost coming from following the instructions below.
Note-II: Access to the cluster is bound to the public IP address of the machine from which you apply the Terraform. In other words: if you apply it locally, only your public IP address can reach the cluster.
- Export your AWS credentials (`export AWS_PROFILE=awsuser`).
- Check whether you have the right profile by running `aws sts get-caller-identity`; make sure the caller identity has sufficient rights and that the account number displayed is the account designated for you to apply this Terraform to.
- Run `terraform init` (if required, use tfenv to select TF 0.13.1 or higher).
- Run `terraform plan`.
- Run `terraform apply`. Note: the apply will take 10 to 20 minutes depending on the speed of the AWS backplane.
- When creation is done, run `aws eks update-kubeconfig --region eu-west-1 --name wrongsecrets-exercise-cluster --kubeconfig ~/.kube/wrongsecrets`.
- Run `export KUBECONFIG=~/.kube/wrongsecrets`.
- Run `./k8s-vault-aws-start.sh` (see the condensed sketch of these steps below).
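For convenience, here is the same sequence condensed into one shell session (the profile name `awsuser` is just an example):

```bash
export AWS_PROFILE=awsuser          # use your own profile name
aws sts get-caller-identity         # verify you are in the intended account

terraform init
terraform plan
terraform apply                     # takes roughly 10-20 minutes

aws eks update-kubeconfig --region eu-west-1 \
  --name wrongsecrets-exercise-cluster \
  --kubeconfig ~/.kube/wrongsecrets
export KUBECONFIG=~/.kube/wrongsecrets
./k8s-vault-aws-start.sh
```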
Your EKS cluster should be visible in eu-west-1 by default. Want a different region? You can modify `terraform.tfvars` or set it directly using the `region` variable in plan/apply.
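For example, to run everything in another region (us-east-1 is just an example value):

```bash
terraform plan  -var="region=us-east-1"
terraform apply -var="region=us-east-1"

# point update-kubeconfig at the same region afterwards:
aws eks update-kubeconfig --region us-east-1 \
  --name wrongsecrets-exercise-cluster \
  --kubeconfig ~/.kube/wrongsecrets
```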
Are you done playing? Please run `terraform destroy` twice to clean up.
Run `AWS_PROFILE=<your_profile> k8s-vault-aws-start.sh` and connect to http://localhost:8080 when it's ready to accept connections (you'll see the line `Forwarding from 127.0.0.1:8080 -> 8080` in your console). Now challenges 9 and 10 should be available as well.
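A quick way to confirm the port forward is up; the curl call is just an illustration:

```bash
AWS_PROFILE=<your_profile> ./k8s-vault-aws-start.sh

# in a second terminal, once "Forwarding from 127.0.0.1:8080 -> 8080" is printed:
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080   # prints an HTTP status (e.g. 200) when reachable
```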
If you have stopped the `k8s-vault-aws-start.sh` script and want to resume the port forward, run `k8s-vault-aws-resume.sh` instead. Running the start script again would replace the secret in the vault without updating the secret-challenge application with the new secret.
When you're done:
- Kill the port forward.
- Run `terraform destroy` to clean up the infrastructure.
- If you've deployed the `shared-state` S3 bucket, also `cd shared-state` and run `terraform destroy` there.
- Run `unset KUBECONFIG` to unset the KUBECONFIG env var.
- Run `rm ~/.kube/wrongsecrets` to remove the kubeconfig file.
- Run `rm terraform.tfstate*` to remove local state files (a condensed cleanup sketch follows below).
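The same cleanup condensed into a shell session, assuming you start in the directory where you ran the apply:

```bash
# stop the port forward first (Ctrl+C / kill the forwarding process)
terraform destroy
(cd shared-state && terraform destroy)   # only if you created the shared-state bucket
unset KUBECONFIG
rm ~/.kube/wrongsecrets
rm terraform.tfstate*
```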
- Does your worker node now have access as well?
- Can you easily obtain the instance profile of the Node?
- Can you get the secrets in the SSM Parameter Store and Secrets Manager easily? Which paths do you see?
- Which of the 2 (SSM Parameter Store and Secrets Manager) works cross-account?
- If you have applied the secrets to the cluster, you should see in the cluster's configuration details that secrets encryption is "Disabled". What does that mean?
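A few commands you could use as a starting point while exploring these questions. Whether they succeed, and which paths you see, depends on the roles and policies created by this setup; the deployment name and the availability of the AWS CLI or curl inside the container are assumptions:

```bash
# From your workstation:
aws ssm describe-parameters
aws secretsmanager list-secrets

# From inside a pod (deployment name taken from the secret-challenge application mentioned above):
kubectl exec -it deploy/secret-challenge -- sh
# inside the container, the node/pod identity can be inspected via the instance metadata service:
# curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
```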
We added additional scripts for adding an ALB and ingress so that you can use your cloud setup with multiple people. Do the following:
- Follow the installation section first.
- Run `k8s-aws-alb-script.sh`; the script will return the URL at which you can reach the application.
- When you are done, before you do cleanup, first run `k8s-aws-alb-script-cleanup.sh`.
Note that you might have to do some manual cleanups after that.
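A sketch of how you could verify the ALB-based setup; the ingress lookup is just one way to find the address:

```bash
./k8s-aws-alb-script.sh
kubectl get ingress -A      # the ADDRESS column should show the ALB hostname once it is provisioned

# ... share the returned URL with your participants, and when done:
./k8s-aws-alb-script-cleanup.sh
```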
Want to see if the setup still works? You can use Terratest to check whether the current setup works via automated tests. For this you need Terraform and Go version 1.21 installed. Next, install the modules and set up credentials:
- Run `go mod download`.
- Set up your AWS profile using `export AWS_PROFILE=<your-profile-here>`.
- Run `go test -timeout 99999s`. The default timeout is 10 minutes, which is too short for our purposes, so we override it (see the condensed sketch below).
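Condensed into one session; keep in mind that these tests will typically create and destroy real AWS infrastructure, so they incur cost as well (the profile name is a placeholder):

```bash
export AWS_PROFILE=<your-profile-here>
go mod download
go test -timeout 99999s     # overrides the default 10 minute timeout
```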
The documentation below is auto-generated to give insight into what's created via Terraform.
Requirements:

Name | Version |
---|---|
terraform | ~> 1.1 |
aws | ~> 5.39.0 |
http | ~> 3.4.0 |
random | ~> 3.6.0 |
Providers:

Name | Version |
---|---|
aws | 5.39.1 |
http | 3.4.2 |
random | 3.6.0 |
Modules:

Name | Source | Version |
---|---|---|
ebs_csi_irsa_role | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks | ~> 5.5 |
eks | terraform-aws-modules/eks/aws | 20.5.0 |
vpc | terraform-aws-modules/vpc/aws | ~> 5.5.1 |
Resources:

Name | Type |
---|---|
aws_iam_policy.secret_deny | resource |
aws_iam_policy.secret_manager | resource |
aws_iam_role.irsa_role | resource |
aws_iam_role.user_role | resource |
aws_iam_role_policy_attachment.irsa_role_attachment | resource |
aws_iam_role_policy_attachment.user_role_attachment | resource |
aws_secretsmanager_secret.secret | resource |
aws_secretsmanager_secret.secret_2 | resource |
aws_secretsmanager_secret_policy.policy | resource |
aws_secretsmanager_secret_policy.policy_2 | resource |
aws_secretsmanager_secret_version.secret | resource |
aws_ssm_parameter.secret | resource |
random_password.password | resource |
random_password.password2 | resource |
aws_availability_zones.available | data source |
aws_caller_identity.current | data source |
aws_iam_policy_document.assume_role_with_oidc | data source |
aws_iam_policy_document.secret_manager | data source |
aws_iam_policy_document.user_assume_role | data source |
aws_iam_policy_document.user_policy | data source |
http_http.ip | data source |
Inputs:

Name | Description | Type | Default | Required |
---|---|---|---|---|
cluster_name | The EKS cluster name | string | "wrongsecrets-exercise-cluster" | no |
cluster_version | The EKS cluster version to use | string | "1.29" | no |
region | The AWS region to use | string | "eu-west-1" | no |
tags | List of tags to apply to resources | map(string) | { ... } | no |
Outputs:

Name | Description |
---|---|
cluster_endpoint | Endpoint for EKS control plane. |
cluster_id | The id of the cluster |
cluster_security_group_id | Security group ids attached to the cluster control plane. |
irsa_role | The role ARN used in the IRSA setup |
secrets_manager_secret_name | The name of the secrets manager secret |