Provider types and their properties can be defined as the default configuration for a pipeline, as well as at the stage level of a pipeline to structure the source, build, test, approval, deploy, or invoke actions.
Provider types and properties defined at the stage level of a pipeline override the defaults that were defined for that pipeline. Provider types are the basic building blocks of the ADF pipeline creation process and allow for flexibility and abstraction over AWS CodePipeline providers and actions.
```yaml
default_providers:
  source:
    provider: codecommit|s3|codeconnections
    properties:
      # All provider specific properties go here.
```
Use CodeCommit as a source to trigger your pipeline. The repository can also be hosted in another account.
Provider type: `codecommit`.
- account_id - (String) (optional)
  - The AWS Account ID where the source repository is located. If the repository does not exist, it will be created via AWS CloudFormation on the source account, along with the associated cross-account CloudWatch event action to trigger the pipeline.
  - Additionally, the default account id for CodeCommit can be set in `adfconfig.yml`: `config/scm/default-scm-codecommit-account-id`.
  - If not set here in the provider and not set in `adfconfig.yml`, the deployment account id will be used as the default value.
- repository - (String) defaults to the name of the pipeline.
  - The AWS CodeCommit repository name.
- branch - (String) defaults to the configured `adfconfig.yml`: `config/scm/default-scm-branch`.
  - The branch of the CodeCommit repository to use to trigger this specific pipeline.
- poll_for_changes - (Boolean) default: `False`.
  - If CodePipeline should poll the repository for changes, defaults to `False` in favor of Amazon EventBridge events. As the name implies, when polling for changes it will check the repository for updates every minute or so. This will show up as actions in CloudTrail.
  - By default, it will not poll for changes but use the event triggered by CodeCommit when an update to the repository took place instead.
- owner - (String) default: `AWS`.
  - Can be either `AWS` (default), `ThirdParty`, or `Custom`. Further information on the use of the owner attribute can be found in the CodePipeline documentation.
- role - (String) default: `adf-codecommit-role`.
  - The name of the role to use to fetch the contents of the CodeCommit repository. Only specify this when you need a specific role to access it. By default, ADF will use its own role to access it instead.
  - Please read the user guide to learn more about creating custom roles.
- trigger_on_changes - (Boolean) default: `True`.
  - Whether CodePipeline should release a change and trigger the pipeline.
  - When set to False, you either need to trigger the pipeline manually, through a schedule, or through the completion of another pipeline. Setting this to False disables triggering on changes altogether, for when you don't want to rely on polling or event-based triggers of changes pushed into the repository.
  - By default, it will trigger on changes using the event triggered by CodeCommit when an update to the repository took place.
- output_artifact_format - (String) default: `CODE_ZIP`.
  - The output artifact format. Values can be either `CODEBUILD_CLONE_REF` or `CODE_ZIP`. If unspecified, the default is `CODE_ZIP`.
  - If you are using `CODEBUILD_CLONE_REF`, you need to ensure that the IAM role passed in via the role property has the `CodeCommit:GitPull` permission.
  - NB: The `CODEBUILD_CLONE_REF` value can only be used by CodeBuild downstream actions.
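As a minimal sketch, a CodeCommit source block in a deployment map could look like the following; the account id and repository name are illustrative:

```yaml
default_providers:
  source:
    provider: codecommit
    properties:
      account_id: "111111111111"  # Illustrative source account id
      repository: sample-repository
      branch: main
```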
S3 can be used as the source for a pipeline too. Please note: you can use S3 as both a source and a deployment provider. The properties that are available are slightly different.

The default role used to fetch the object from the S3 bucket is: `arn:${partition}:iam::${source_account_id}:role/adf-codecommit-role`.

Please add the required S3 read permissions to the `adf-codecommit-role` via the `adf-bootstrap/deployment/global-iam.yml` file in the `aws-deployment-framework-bootstrap` repository. Or, alternatively, grant the `adf-codecommit-role` S3 read permissions in the bucket policy of the source bucket.
Provider type: `s3`.
- account_id - (String) (required)
  - The AWS Account ID where the source S3 bucket is located.
- bucket_name - (String) (required)
  - The name of the S3 bucket that will be the source of the pipeline.
- object_key - (String) (required)
  - The specific object within the bucket that will trigger the pipeline execution.
- trigger_on_changes - (Boolean) default: `True`.
  - Whether CodePipeline should release a change and trigger the pipeline if a change was detected in the S3 object.
  - When set to False, you either need to trigger the pipeline manually, through a schedule, or through the completion of another pipeline.
  - By default, it will trigger on changes using the polling mechanism of CodePipeline, monitoring the S3 object so it can trigger a release when an update took place.
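For example, an S3 source could be configured as in the sketch below; the account id, bucket name, and object key are placeholders:

```yaml
default_providers:
  source:
    provider: s3
    properties:
      account_id: "111111111111"  # Illustrative source account id
      bucket_name: source-bucket-name
      object_key: deploy.zip
```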
Use CodeConnections as a source to trigger your pipeline. The source action retrieves code changes when a pipeline is manually executed or when a webhook event is sent from the source provider. AWS CodeConnections supports the following third-party repositories:

- Bitbucket Cloud
- GitHub
- GitHub Enterprise Cloud
- GitHub Enterprise Server
- GitLab.com
- GitLab self-managed

You can find an up-to-date list of the external source providers AWS CodeConnections supports here.
The AWS CodeConnections connection needs to exist and be in the "Available" status. To use AWS CodeConnections with ADF, its ARN needs to be stored in AWS Systems Manager Parameter Store in the deployment account's main region (see details below). Read the CodePipeline documentation for more information on how to set up the connection.
Provider type: `codeconnections`.
- repository - (String) defaults to the name of the pipeline.
  - The repository name. For example, for the ADF repository it would be `aws-deployment-framework`.
- branch - (String) defaults to the configured `adfconfig.yml`: `config/scm/default-scm-branch`.
  - The branch of the repository to use to trigger this specific pipeline.
- owner - (String) (required)
  - The name of the third-party user or organization who owns the third-party repository. For example, for the ADF repository that would be: `awslabs`.
- codeconnections_param_path - (String) (required)
  - The path in AWS Systems Manager (SSM) Parameter Store in the deployment account in the main region that holds the CodeConnections resource ARN that will be used to download the source code and create the webhook as part of the pipeline. Read the CodeConnections documentation for more information.
  - If you are relying on an existing CodeStar connection, the SSM parameter should contain the AWS CodeStar Connection ARN instead.
- output_artifact_format - (String) default: `CODE_ZIP`.
  - The output artifact format. Values can be either `CODEBUILD_CLONE_REF` or `CODE_ZIP`. If unspecified, the default is `CODE_ZIP`.
  - NB: The `CODEBUILD_CLONE_REF` value can only be used by CodeBuild downstream actions.
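As an illustration, a CodeConnections source might be configured as follows; the SSM parameter path shown is an assumed example, not a required naming convention:

```yaml
default_providers:
  source:
    provider: codeconnections
    properties:
      owner: awslabs
      repository: aws-deployment-framework
      # Illustrative SSM parameter holding the CodeConnections resource ARN:
      codeconnections_param_path: /adf/my_codeconnections_arn
```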
```yaml
default_providers:
  build:
    provider: codebuild|jenkins
    # Optional: enabled.
    # The build stage is enabled by default.
    # If you wish to disable the build stage within a pipeline, set it to
    # False instead, like this:
    enabled: False
    properties:
      # All provider specific properties go here.
```
CodeBuild is the default build provider. It is supplied with the assets produced by the source provider. At the end of the CodeBuild execution, output assets can be configured so that these can be deployed in the deployment phase.
CodeBuild can also be configured as a deployment provider. For more information on this, scroll down to Deploy / CodeBuild. The following properties apply when running CodeBuild as either a Build or Deploy provider.
Provider type: `codebuild`.
- image (String|Object)
  - It is required to specify the container image your pipeline requires.
  - Specify the image that AWS CodeBuild will use. Images can be found here.
  - Image can also take an object that contains a reference to a public Docker Hub image with a prefix of `docker-hub://`, such as `docker-hub://bitnami/mongodb`. This allows your pipeline to consume a public Docker Hub image if required. Along with the Docker Hub image name, we also support using a tag, which can be provided after the Docker Hub image name, such as `docker-hub://bitnami/mongodb:3.6.23`, in order to define which image should be used (defaults to `latest`).
  - For images hosted in Amazon ECR, you can define the repository and image to use by specifying an image object. This allows your pipeline to consume a custom image if required. For example, to configure a specific repository ARN, configure it as:

    ```yaml
    image:
      repository_arn: 'arn:${partition}:ecr:${region}:${source_account_id}:repository/your-repo-name'
      tag: 'latest'  # Optional, defaults to latest
    ```

    Alternatively, you can set the `repository_name` if the ECR repository is hosted in the deployment account in the main deployment region:

    ```yaml
    image:
      repository_name: 'your-repo-name'
      tag: 'latest'  # Optional, defaults to latest
    ```

    Along with `repository_arn` or `repository_name`, we also support a `tag` key. This can be used to define which image should be used (defaults to `latest`). An example of this setup is provided here.
- size (String) (small|medium|large) - default: `small`.
  - The compute type to use for the build; types can be found here.
- environment_variables (Object) defaults to empty object.
  - Any environment variables you wish to be available within the build stage for this pipeline. These are passed in as key/value pairs. For example:

    ```yaml
    environment_variables:
      MY_ENV_VAR: some value
      ANOTHER_ENV_VAR: another value
    ```
- role (String) default: `adf-codebuild-role`.
  - If you wish to pass a custom IAM role to use for the build stage of this pipeline. Alternatively, you can extend the `adf-codebuild-role` with additional permissions and conditions in the `global-iam.yml` file as documented in the User Guide. Please note: since the CodeBuild environment runs in the deployment account, the role you specify will be assumed in, and should be available in, the deployment account too.
  - Please read the user guide to learn more about creating custom roles.
- timeout (Number) in minutes, default: `20`.
  - If you wish to define a custom timeout for the build stage.
- privileged (Boolean) default: `False`.
  - If you plan to use this build project to build Docker images and the specified build environment is not provided by CodeBuild with Docker support, set privileged to `True`. Otherwise, all associated builds that attempt to interact with the Docker daemon will fail.
- spec_inline (String) defaults to using the Buildspec file instead.
  - If you wish to pass in a custom inline Buildspec as a string for the CodeBuild project, this would override any `buildspec.yml` file. Read more here.
  - Note: Either specify the `spec_inline` or the `spec_filename` in the properties block. If both are supplied, the pipeline generator will throw an error instead.
- spec_filename (String) default: `buildspec.yml`.
  - If you wish to pass in a custom Buildspec file that is within the repository. This is useful for custom deploy type actions where CodeBuild will perform the execution of the commands. The path is relative to the root of the repository, so `build/buildspec.yml` refers to the `buildspec.yml` stored in the `build` directory of the repository.
  - In case CodeBuild is used as a deployment provider, the default Buildspec file name is `deployspec.yml` instead. In case you would like to test a given environment using CodeBuild, you can rename it to `testspec.yml` or something similar using this property.
  - Note: Either specify the `spec_inline` or the `spec_filename` in the properties block. If both are supplied, the pipeline generator will throw an error instead.
- vpc_id (String) defaults to none.
  - Configure the `vpc_id` if the CodeBuild instance needs to connect through a VPC. You will need to set the `subnet_ids` property as well. Plus, optionally, you can configure the `security_group_ids` to specify what security groups the instance should use.
  - Please note: VPC support can be added to a CodeBuild step in the pipeline, but cannot be removed that easily. In case you want to remove VPC support after adding it first: you need to delete the pipeline CloudFormation stack of the pipeline that should be updated. Then release a change in the `aws-deployment-framework-pipelines` pipeline in CodePipeline to regenerate the stack without the VPC support.
  - An example of a `vpc_id` value: `vpc-01234567890abcdef`.
- subnet_ids (List of Strings) (with VPC usage only) defaults to none.
  - The list of subnet ids that the CodeBuild instance is configured to use. These subnets need to be part of the VPC that is configured by the `vpc_id` property of the same provider.
  - Please note: Only configure the `subnet_ids` when the `vpc_id` is also configured. Make sure there are multiple subnets listed that are hosted in separate availability zones to ensure a reliable service.
  - An example of a list of `subnet_ids` is: `["subnet-1234567890abcdef0", "subnet-bcdef01234567890a"]`.
- security_group_ids (List of Strings) (with VPC usage only) defaults to none.
  - The list of security group ids that the CodeBuild instance is configured to use. These security groups need to be part of the VPC that is configured by the `vpc_id` property of the same provider.
  - ADF will generate a default security group when you configured a `vpc_id` but did not configure any `security_group_ids`. The default security group has an allow-all egress traffic rule. It is recommended that you make use of specific security groups instead.
  - Typically, one security group would be sufficient, unless you need to combine multiple security groups to grant the build environment all the access it needs.
  - Please note: Only configure the `security_group_ids` when the `vpc_id` is also configured. To configure access securely, you need to create and specify the exact security group to use on a pipeline-per-pipeline basis, such that pipelines will only have access to the resources they are allowed to access and nothing more.
  - An example of a list of `security_group_ids` is: `["sg-234567890abcdef01", "sg-cdef01234567890ab"]`.
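Tying these properties together, a build stage with a custom image and VPC configuration could be sketched as follows; all ids are placeholders and the image reuses the Docker Hub example from the image property above:

```yaml
default_providers:
  build:
    provider: codebuild
    properties:
      image: docker-hub://bitnami/mongodb:3.6.23  # Illustrative image
      size: medium
      environment_variables:
        TARGET_ENV: production  # Illustrative variable
      vpc_id: vpc-01234567890abcdef
      subnet_ids:
        - subnet-1234567890abcdef0
        - subnet-bcdef01234567890a
```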
Jenkins can be configured as the build provider, where it will be triggered as part of the CodePipeline deployed by ADF.
To use Jenkins as a Build provider, you will need to install the Jenkins Plugin as documented here.
Provider type: `jenkins`.
- project_name (String) (required)
- The Project name in Jenkins used for this Build.
- server_url (String) (required)
- The Server URL of your Jenkins Instance.
- provider_name (String) (required)
- The provider name that was set up in the Jenkins plugin for AWS CodePipeline.
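A hedged sketch of a Jenkins build provider configuration; the project name, server URL, and provider name are placeholders for whatever was configured in your Jenkins instance and its AWS CodePipeline plugin:

```yaml
default_providers:
  build:
    provider: jenkins
    properties:
      project_name: sample-jenkins-project
      server_url: https://jenkins.example.com
      provider_name: jenkins-provider  # As set up in the Jenkins plugin
```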
```yaml
default_providers:
  deploy:
    provider: cloudformation|codedeploy|s3|service_catalog|codebuild|lambda
    properties:
      # All provider specific properties go here.
```
The approval provider enables you to pause further execution until a key decision maker (either a person or an automated process) approves continuation of the deployment.
```yaml
provider: approval
properties:
  # All provider specific properties go here.
```
- message (String) - default: `Approval stage for ${pipeline_name}`.
  - The message you would like to include as part of the approval stage.
- notification_endpoint (String)
  - An email or Slack channel (see User Guide docs) that you would like to send the notification to.
- sns_topic_arn (String) - default is no additional SNS notification.
  - An SNS Topic ARN on which you would like to receive a notification as part of the approval stage.
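For instance, mirroring the provider block above, an approval step could be sketched like this; the message and notification endpoint are illustrative:

```yaml
provider: approval
properties:
  message: Please review and approve the production deployment  # Illustrative
  notification_endpoint: ops-team@example.com  # Illustrative endpoint
```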
CodeBuild can also be configured as a deployment provider. However, it cannot be used to target specific accounts or regions. When you specify a CodeBuild deployment step, the step should not target multiple accounts or regions, as the CodeBuild tasks will run inside the deployment account only. Using CodeBuild as a deployment step enables you to run integration tests or deploy using CLI tools instead.

When CodeBuild is also configured as the build provider, it is useful to specify a different `spec_filename`, like `deployspec.yml` or `testspec.yml`.

In case you would like to use CodeBuild to target specific accounts or regions, you will need to make use of the environment variables to pass in the relevant target information, while keeping the logic to assume into the correct role, region, and account in the Buildspec specification file as configured by the `spec_filename` property.
Provider type: `codebuild`.
See Build / CodeBuild properties above.
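A minimal sketch of CodeBuild as a deployment provider, assuming a `deployspec.yml` file exists in the repository:

```yaml
default_providers:
  deploy:
    provider: codebuild
    properties:
      spec_filename: deployspec.yml
```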
Provider type: `codedeploy`.
- application_name (String) (required)
  - The name of the CodeDeploy application you want to use for this deployment.
- deployment_group_name (String) (required)
  - The name of the deployment group you want to use for this deployment.
- role - (String) default: `adf-cloudformation-role`.
  - Automatically assumes into the given role in the target account, i.e. `arn:${partition}:iam::${target_account_id}:role/adf-cloudformation-role`.
  - The role you would like to use on the target AWS account to execute the CodeDeploy action. The role should allow the CodeDeploy service to assume it, as is documented in the CodeDeploy service role documentation.
  - Please read the user guide to learn more about creating custom roles.
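As an illustration, a CodeDeploy deployment step could be configured like this; the application and deployment group names are placeholders:

```yaml
default_providers:
  deploy:
    provider: codedeploy
    properties:
      application_name: sample-application
      deployment_group_name: sample-deployment-group
```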
Useful to deploy CloudFormation templates using a specific or ADF generated IAM Role in the target environment.
When you are using CDK, you can synthesize the CDK code into a CloudFormation template and target that in this stage to get it deployed. This will ensure that the code is compiled with least privileges and can only be deployed using the specific CloudFormation role in the target environment.
CloudFormation is the default action for deployments. It will fetch the template to deploy from the previous stage's output artifacts. If you are specific about which files to include in the output artifacts, be sure to include the `params/*.json` files and the CloudFormation template that you wish to deploy.
Provider type: `cloudformation`.
- stack_name - (String) default: `${ADF_STACK_PREFIX}${PIPELINE_NAME}`.
  - The name of the CloudFormation stack to use. The default `ADF_STACK_PREFIX` is `adf-`. This is configurable as part of the `StackPrefix` parameter in the `deployment/global.yml` stack.
  - If the pipeline name is `some-pipeline`, the CloudFormation stack would be named `adf-some-pipeline` by default. Unless you overwrite the value using this property, in which case it will use the exact value as specified.
  - By setting this to a specific value, you can adopt a stack that was created using CloudFormation before. It can also help to name the stack according to the internal naming convention at your organization.
- template_filename - (String) default: `template.yml`.
  - The name of the CloudFormation template file to use. Changing the template file name allows you to generate multiple templates, where a specific template is used according to its specific target environment. For example: `template_prod.yml` for production stages.
- param_filename - (String) default: `${target_account_name}_${target_region}.yml`.
  - The name of the CloudFormation parameter file to use. Changing the parameter file name allows you to generate a single parameter file that is shared between many targets when required. The parameter file is read from inside the `${root_dir}/params/` folder.
  - Please note: Setting this parameter will not change the behavior of the generate_params.py script. It is recommended to copy the generated template that you would like to reuse after running generate_params.py, and use the name of the copied file as the configuration here when required.
- root_dir - (String) defaults to empty string.
  - The root directory in which the CloudFormation template and `params` directory reside. For example, when the CloudFormation template is stored in `infra/custom_template.yml` and parameter files in the `infra/params` directory, set `template_filename` to `custom_template.yml` and `root_dir` to `infra`.
  - Defaults to empty string: the root of the source repository or input artifact.
- role - (String) default: `adf-cloudformation-deployment-role`.
  - Automatically assumes into the given role in the target account, i.e. `arn:${partition}:iam::${target_account_id}:role/adf-cloudformation-deployment-role`.
  - The role you would like to use on the target AWS account to execute the CloudFormation action.
  - Ensure that the CloudFormation service is allowed to assume that role.
  - Additionally, make sure that the `adf-cloudformation-role` is allowed to perform an `iam:PassRole` action with the given role. Restrict this action to the CloudFormation service only. You can find an example of this in the `adf-bootstrap/deployment/global.yml` file, where it allows the CloudFormation role to perform `iam:PassRole` with the `adf-cloudformation-deployment-role`. Please grant this access in the `adf-bootstrap/deployment/global-iam.yml` file in the `aws-deployment-framework-bootstrap` repository.
  - Please read the user guide to learn more about creating custom roles.
- action - (`CHANGE_SET_EXECUTE|CHANGE_SET_REPLACE|CREATE_UPDATE|DELETE_ONLY|REPLACE_ON_FAILURE`) default: `CHANGE_SET_EXECUTE`.
  - The CloudFormation action type you wish to use for this specific pipeline or stage. For more information on actions, see the supported actions of CloudFormation.
- outputs - (String) (required when using parameter overrides) defaults to none.
  - The outputs from the CloudFormation stack creation. Required if you are using parameter overrides as part of the pipeline.
- change_set_approval - (Boolean) (Stage Level Only)
  - If the stage should insert a manual approval stage between the creation of the change set and the execution of it. This is only possible when the target region to deploy to is the same region as where the deployment pipelines reside. In other words, if the main region is set to `eu-west-1`, the `change_set_approval` can only be set on targets for `eu-west-1`.
  - In case you would like to target other regions, split it into three stages instead. First stage, using `cloudformation` as the deployment provider, with `action` set to `CHANGE_SET_REPLACE`. This will create the change set, but not execute it. Add an `approval` stage next, and the default `cloudformation` stage after. The latter will create a new change set and execute it accordingly.
- param_overrides - (List of Objects) (Stage Level Only) defaults to none.
  - inputs (String)
    - The input artifact name you want to pass into this stage to take a parameter override from.
  - param (String)
    - The name of the CloudFormation parameter you want to override in the specific stage.
  - key_name (String)
    - The key name from the stack output that you wish to use as the input in this stage.
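Putting some of these properties together, a CloudFormation deploy provider could be sketched as follows; the stack name, template file, and root directory are illustrative:

```yaml
default_providers:
  deploy:
    provider: cloudformation
    properties:
      stack_name: my-org-sample-stack  # Illustrative custom stack name
      template_filename: template_prod.yml
      root_dir: infra
```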
Invoke a Lambda function as a deployment step.
Only Lambda functions deployed in the deployment account can be invoked. Lambda cannot be used to target other accounts or regions.
Provider type: `lambda`.
- function_name (String) (required)
  - The name of the Lambda function to invoke. For example: `myLambdaFunction`.
- input (Object|List|String) defaults to empty string.
  - An object to pass into the Lambda function as the input event. This input will be stringified before it is passed to the function.
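A sketch of invoking a Lambda function as a deployment step; the function name and input event are placeholders:

```yaml
default_providers:
  deploy:
    provider: lambda
    properties:
      function_name: myLambdaFunction  # Must exist in the deployment account
      input:
        environment: production  # Illustrative input event
```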
Service Catalog deployment provider.

The role used to deploy the Service Catalog product is: `arn:${partition}:iam::${target_account_id}:role/adf-cloudformation-role`.
Provider type: `service_catalog`.
- product_id - (String) (required)
  - The product id of the Service Catalog product to deploy.
- configuration_file_path - (String) default: `params/${account-name}_${region}.json`.
  - If you wish to pass a custom path to the configuration file.
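For example, a Service Catalog deployment could be configured like this; the product id is a placeholder:

```yaml
default_providers:
  deploy:
    provider: service_catalog
    properties:
      product_id: prod-abcdef123example  # Illustrative product id
```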
S3 is available as a source and deployment provider. S3 cannot be used to target multiple accounts or regions in one stage, as the `bucket_name` property needs to be defined and bucket names are globally unique across all AWS accounts. In case you would like to deploy to multiple accounts, you will need to configure multiple stages in the pipeline manually instead, where each will target the specific bucket in the target account.

Please note: you can use S3 as both a source and a deployment provider. The properties that are available are slightly different.

When S3 is used as the deployment provider, the default role used to upload the object(s) to the S3 bucket is: `arn:${partition}:iam::${target_account_id}:role/adf-cloudformation-role`.

The `adf-cloudformation-role` is not granted access to write to S3 buckets yet. Please add the required S3 write permissions to the `adf-cloudformation-role` via the `adf-bootstrap/global-iam.yml` file in the `aws-deployment-framework-bootstrap` repository. Or, alternatively, grant the `adf-cloudformation-role` S3 write permissions in the bucket policy of the target bucket.
Provider type: `s3`.
- bucket_name - (String) (required)
  - The name of the S3 bucket to deploy to.
- object_key - (String) (required)
  - The object key within the bucket to deploy to.
- extract - (Boolean) default: `False`.
  - Whether CodePipeline should extract the contents of the object when it deploys it.
- role - (String) default: `adf-cloudformation-role`.
  - The name of the role you would like to use for this action.
  - Please read the user guide to learn more about creating custom roles.
- kms_encryption_key_arn - (String)
  - The ARN of the AWS KMS encryption key for the host bucket. The `kms_encryption_key_arn` parameter encrypts uploaded artifacts with the provided AWS KMS key. For a KMS key, you can use the key ID, the key ARN, or the alias ARN.
- cache_control - (String)
  - The `cache_control` parameter controls caching behavior for requests/responses for objects in the bucket.
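As a final illustration, an S3 deployment step could be sketched like this; the bucket name and object key are placeholders:

```yaml
default_providers:
  deploy:
    provider: s3
    properties:
      bucket_name: my-target-bucket  # Illustrative bucket in the target account
      object_key: release/app.zip
      extract: False
```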