IaC Security Code Review Guide
Introduction
Infrastructure-as-Code promises a reviewable, reproducible, version-controlled description of every resource your organization owns. What it delivers, in practice, is an authenticated admin API call hidden in a YAML or HCL diff. A single terraform apply can create a public S3 bucket holding production PII, attach an iam:* policy to a build runner, or replace your VPC flow-log destination with an attacker-controlled bucket — and the change looks like a small documentation PR to a tired reviewer.
Most IaC compromises are not novel vulnerabilities. They are the same handful of patterns repeating across Terraform, CloudFormation, Pulumi, and CDK codebases: secrets in .tfvars, 0.0.0.0/0 in a security group, Action: "*" in an IAM policy, unpinned module sources, unencrypted state backends, missing conditions on cross-account trust policies. This guide is organized around the review heuristics that catch those patterns consistently — with concrete code samples, the exact lines to grep for, and the controls that actually hold up in production.
IaC diffs ARE production changes
An approved PR to terraform/main.tf is equivalent to a human manually running aws iam put-role-policy — except that it comes with a better audit trail. Review IaC with the same rigor you would review a signed prod deploy, because that is exactly what you are approving.
- Review surface: `.tf`, `.tfvars`, `.yaml`/`.json` CloudFormation templates, Pulumi programs (TS/Python/Go), AWS CDK stacks, ARM/Bicep, Terraform module registries, and any `locals`/`variables` they transitively pull in.
- Trust boundary: anything in Git that a human with commit rights (or a compromised CI job) can change before the next `apply`.
- Impact: cloud admin, production data, customer PII, cross-account trust, and — via build-time hooks — arbitrary code execution on whatever runner holds the apply role.
Why is a reviewer's instinct for "this is just a config change" dangerous for IaC?
The IaC Attack Surface
Every IaC pipeline is a chain of privileged steps: source code lands in Git, a plan step renders modules and sometimes executes user code, an apply step assumes a cloud role and writes resources, and the resulting state file often contains secrets as a side effect. Each stage has its own class of bug — and each stage is worth grepping for independently during review.
IaC Attack Surface: Source → Plan → Apply → Cloud
- Source: `.tf` / `.yaml` / `.ts` · modules · `.tfvars` · secrets in commits · PR-level edits
- Plan/Apply: holds cloud admin · writes state · runs `local-exec` · assumes deploy role
- Cloud: S3 · IAM · RDS · VPC · Lambda · KMS · every resource your org owns

Key Insight: An IaC pipeline holds cloud admin. Any step of this chain — a poisoned module, a leaked state file, a malicious Pulumi program — translates directly into production resource changes. Reviewing .tf is reviewing an authenticated admin API call.
Where Each IaC Tool Puts the Attack Surface
| Stage | Terraform | CloudFormation | Pulumi / CDK |
|---|---|---|---|
| Source | .tf, .tfvars, modules, providers | .yaml / .json templates, parameters, macros | real code: TS / Python / Go / Java |
| Plan | Reads providers; renders modules | ChangeSet; macros may run Lambda | Executes user code to produce a plan |
| Apply | Holds cloud creds; runs local-exec / provisioners | Service-managed; IAM capabilities flag required | Holds cloud creds; @pulumi/command runs local binaries |
| State | terraform.tfstate (backend of your choice) | CloudFormation-managed (no plaintext secrets dump) | Pulumi state backend (Service / self-managed) |
| Review super-power | aws_iam_policy_document, sensitive, validate() | NoEcho, StackPolicy, cfn-guard | Typed config, runtime checks, CDK Aspects |
For a reviewer, two heuristics cover 80% of IaC bugs: (1) look at every place attacker-controlled or environment-sourced data flows into a resource definition, and (2) look at every resource that has a default — because the default is almost always less secure than what you would have written explicitly.
The Four Sinks Every IaC Reviewer Should Grep For
```bash
# 1. Wildcards in IAM — policies, trusts, and resource ARNs
grep -RIn 'Action.*\*\|Resource.*\*\|Principal.*"\*"' .

# 2. Open network ingress
grep -RIn '0.0.0.0/0\|::/0\|cidr_blocks.*\*' .

# 3. Secrets embedded as strings (first-pass filter; follow up with gitleaks)
grep -RInE '(password|secret|token|key)\s*=\s*"[A-Za-z0-9/+=]{16,}"' .

# 4. Unpinned remote sources
grep -RInE 'source\s*=\s*"(git::|github\.com|registry\.terraform\.io)' . | grep -v '?ref='

# Run these as pre-commit hooks before Checkov/tfsec — they catch the
# cheapest-to-miss, highest-impact findings first.
```

IaC is a data-flow problem
Treat every variable, local, and data source the same way you would treat request input in a web app. Where does it come from? Who can change it? What resource does it end up in? A reviewer who can trace a single var.allowed_cidr from terraform.tfvars all the way into an aws_security_group_rule will catch more bugs than one who reads each resource in isolation.
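A reviewer can mechanize that trace. Below is a minimal sketch (Python; the file contents and the `trace_variable` helper are hypothetical, and a real tool would use an HCL parser such as python-hcl2 rather than regexes) that follows one variable from its `.tfvars` assignment to the resource argument that consumes it:

```python
import re

# Hypothetical file contents standing in for a real repo checkout.
TFVARS = 'allowed_cidr = "0.0.0.0/0"\n'
MAIN_TF = '''
resource "aws_security_group_rule" "ssh" {
  type        = "ingress"
  from_port   = 22
  to_port     = 22
  protocol    = "tcp"
  cidr_blocks = [var.allowed_cidr]
}
'''

def trace_variable(name, tfvars, config):
    """Find where a variable is assigned in tfvars (the source) and
    every config line where var.<name> is consumed (the sinks)."""
    m = re.search(rf'^{name}\s*=\s*"([^"]*)"', tfvars, re.M)
    value = m.group(1) if m else None
    sinks = [ln.strip() for ln in config.splitlines() if f"var.{name}" in ln]
    return value, sinks

value, sinks = trace_variable("allowed_cidr", TFVARS, MAIN_TF)
print(value, "->", sinks)  # attacker-controllable source -> resource sink
```

Crude as it is, the output is exactly the source-to-sink pair a reviewer needs to see before approving the diff.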
A repo defines a reusable VPC module with `var.allow_all_cidrs = false`. A new PR adds `allow_all_cidrs = true` in a dev environment's tfvars. Why is the default-false pattern a good review signal even when the PR is approved?
Hardcoded Secrets & Variable Leaks
Secrets leaking into IaC are the most common class of IaC finding, and they show up in five distinct places: the resource argument itself, default values on module variables, locals, output values (which may be consumed by other configurations), and .tfvars files that somehow end up committed. The pattern is the same across tools — what changes is the feature the tool gives you to prevent it.
Terraform — The Classic Footgun
```hcl
# VULNERABLE: plaintext DB password in HCL, default on a variable,
# and a non-sensitive output that echoes it.
variable "db_password" {
  description = "RDS master password"
  type        = string
  default     = "Sup3rSecret!" # committed, searchable, unrotatable
}

resource "aws_db_instance" "main" {
  identifier        = "prod-main"
  engine            = "postgres"
  username          = "admin"
  password          = var.db_password # lands in state file
  allocated_storage = 100
}

output "db_password" {
  value = aws_db_instance.main.password # printed to stdout on apply
}

# Red flags:
# 1. variable "db_password" has a default — it should be required
# 2. aws_db_instance.password pulls from a variable with no sensitive = true
# 3. The output re-exports the password unprotected
```

Terraform — Safer Pattern
```hcl
# SAFER: secret stays in Secrets Manager; Terraform only holds a reference.
data "aws_secretsmanager_secret" "db" {
  name = "prod/rds/main/password"
}

data "aws_secretsmanager_secret_version" "db" {
  secret_id = data.aws_secretsmanager_secret.db.id
}

resource "aws_db_instance" "main" {
  identifier        = "prod-main"
  engine            = "postgres"
  username          = "admin"
  password          = data.aws_secretsmanager_secret_version.db.secret_string
  allocated_storage = 100
}

# Mark every interface that carries secret material.
variable "api_key" {
  type      = string
  sensitive = true # hides from CLI output (does NOT hide from state — see below)
}

output "db_endpoint" {
  value     = aws_db_instance.main.endpoint
  sensitive = false # endpoint is not a secret
}

# Still required: a KMS-encrypted backend so the state file itself is protected.
```

"sensitive = true" hides from the CLI, NOT from state
The sensitive flag only controls what Terraform prints during plan/apply. The actual secret is still written to terraform.tfstate in plaintext. If anyone can read the state file, they can read every secret Terraform has touched. The real fix is narrow IAM on the state backend plus KMS-at-rest — covered in the State File Security section.
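This is easy to verify by hand. A minimal sketch (Python, over a fabricated state excerpt; the structure is simplified relative to real tfstate) showing that a value marked `sensitive` is still plaintext to anyone who can read the file:

```python
import json

# Fabricated excerpt of a terraform.tfstate file. The shape loosely
# mirrors real state; the values are invented for illustration.
state = json.loads('''
{
  "resources": [{
    "type": "aws_db_instance",
    "name": "main",
    "instances": [{
      "attributes": {"identifier": "prod-main", "password": "Sup3rSecret!"},
      "sensitive_attributes": [["password"]]
    }]
  }]
}
''')

# "sensitive" only tags the attribute; the plaintext sits right beside it.
leaked = [
    inst["attributes"]["password"]
    for res in state["resources"]
    for inst in res["instances"]
    if "password" in inst["attributes"]
]
print(leaked)
```

Nothing here required Terraform, the CLI, or cloud credentials; read access to the state object is the whole attack.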
CloudFormation — Same Bug, Different Syntax
```yaml
# VULNERABLE: password parameter without NoEcho, stored in template
Parameters:
  DBPassword:
    Type: String
    Default: "Sup3rSecret!" # committed in the template file
    # Missing: NoEcho: true

Resources:
  MainDB:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: postgres
      MasterUsername: admin
      MasterUserPassword: !Ref DBPassword
      AllocatedStorage: 100

Outputs:
  DBPassword:
    Value: !Ref DBPassword # echoed into StackOutputs
```

CloudFormation — Dynamic Reference to Secrets Manager
```yaml
# SAFER: CFN never sees the plaintext; the dynamic reference is
# resolved at stack-operation time and not stored in the template.
Parameters:
  DBPassword:
    Type: String
    NoEcho: true # hides from Describe* APIs & console
    Default: '{{resolve:secretsmanager:prod/rds/main:SecretString:password}}'

Resources:
  MainDB:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: postgres
      MasterUsername: admin
      MasterUserPassword: !Ref DBPassword
      AllocatedStorage: 100
      StorageEncrypted: true
      KmsKeyId: !Ref RdsKmsKey

# Still required: StackPolicy preventing updates to sensitive resources;
# CloudTrail logs for the GetSecretValue call;
# resource-level IAM restricting who can DescribeStacks / GetStackOutputs.
```

Pulumi — Typed Config with Secrets
```typescript
// VULNERABLE: string constant, no secret marker
import * as aws from "@pulumi/aws";

const dbPassword = "Sup3rSecret!"; // lives in git and in Pulumi state

new aws.rds.Instance("main", {
    engine: "postgres",
    username: "admin",
    password: dbPassword,
    allocatedStorage: 100,
});
```

```typescript
// SAFER: require a Pulumi Secret — encrypted in state, never in source
import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

const config = new pulumi.Config();
const dbPassword = config.requireSecret("dbPassword");
// Set with: pulumi config set --secret dbPassword <value>

new aws.rds.Instance("main", {
    engine: "postgres",
    username: "admin",
    password: dbPassword, // stored encrypted in Pulumi state
    allocatedStorage: 100,
    storageEncrypted: true,
});
```

- Grep `default =` on any variable whose name contains `password`, `secret`, `token`, `key`, or `credential`. Defaults on secret variables are almost always bugs — either the value is real (catastrophic) or the value is a placeholder that implies the field will be silently empty in some environment.
- Grep `output` blocks for secret-shaped names. Module outputs are read by other modules and may end up in state files they shouldn't.
- Flag every `.tfvars` file that isn't `.gitignore`'d. The `*.auto.tfvars` pattern is particularly dangerous because it is loaded automatically — a contributor does not need to know it exists to activate it.
- Require `NoEcho: true` on every CloudFormation parameter that carries secret material. cfn-lint will catch most of these; make it a blocking PR check.
- For Pulumi and CDK, require `requireSecret()` / `SecretValue.secretsManager()` — never raw strings. Type-level enforcement beats docs-level enforcement.
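The first bullet is easy to automate as a pre-commit check. A sketch (Python; the `flag_secret_defaults` helper, its regexes, and the sample HCL are illustrative, not a complete scanner):

```python
import re

# Secret-shaped variable names, per the checklist above.
SECRET_NAMES = re.compile(r"password|secret|token|key|credential", re.I)

def flag_secret_defaults(hcl: str):
    """Flag variable blocks whose name looks secret-shaped AND that
    declare a default -- the pattern this section calls out as a bug."""
    findings = []
    for m in re.finditer(r'variable\s+"([^"]+)"\s*{([^}]*)}', hcl, re.S):
        name, body = m.group(1), m.group(2)
        if SECRET_NAMES.search(name) and re.search(r"\bdefault\s*=", body):
            findings.append(name)
    return findings

sample = '''
variable "db_password" {
  type    = string
  default = "Sup3rSecret!"
}
variable "region" {
  type    = string
  default = "us-east-1"
}
variable "api_token" {
  type = string
}
'''
print(flag_secret_defaults(sample))
```

Note that `api_token` is not flagged: a secret-shaped variable with no default is exactly what you want — the caller must supply it.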
The "rotate and move on" trap
When a secret is found in a repo, rotating it is necessary but not sufficient. Git history still contains the old value; anyone with a pre-rotation clone (including forks) has a usable leak. The full response is: rotate, force-remove from history (or tombstone the repo), audit for usage of the old value during the exposure window, and add a pre-commit secret scanner so it does not happen again.
A PR adds `sensitive = true` to a Terraform variable holding an API token. The reviewer signs off. What is still broken?
Insecure Resource Defaults
Cloud resources ship with defaults that optimize for "works on the first try," not "secure by default." A Terraform aws_s3_bucket is not public today, but its sibling aws_db_instance is unencrypted by default, aws_security_group accepts an explicit 0.0.0.0/0 with no warning, and aws_instance still honors IMDSv1 unless you opt into v2. A reviewer who only reads what the PR adds — without asking "what are we getting by omission?" — will miss all of these.
Terraform — A Composite of Insecure Defaults
```hcl
# VULNERABLE: every resource below relies on a field the author did not set.
resource "aws_s3_bucket" "logs" {
  bucket = "acme-prod-logs"
}
# Missing:
# - server_side_encryption_configuration (KMS-backed)
# - public_access_block (block all four: ACLs + policies)
# - versioning + MFA-delete
# - logging target

resource "aws_security_group" "web" {
  name   = "web"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # SSH open to the Internet
  }

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1" # all protocols, all ports
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_db_instance" "reporting" {
  engine              = "postgres"
  identifier          = "reporting"
  username            = "admin"
  password            = var.db_password
  allocated_storage   = 100
  publicly_accessible = true # accessible from the Internet
  # Missing: storage_encrypted, kms_key_id, backup_retention_period, deletion_protection
}

resource "aws_instance" "bastion" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t3.small"
  # Missing: metadata_options { http_tokens = "required" } to enforce IMDSv2
  # Missing: root_block_device { encrypted = true }
}
```

Terraform — Secure Baseline
```hcl
# SAFER: every setting that defaults insecure is set explicitly.
resource "aws_s3_bucket" "logs" {
  bucket = "acme-prod-logs"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "logs" {
  bucket = aws_s3_bucket.logs.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.logs.arn
    }
  }
}

resource "aws_s3_bucket_public_access_block" "logs" {
  bucket                  = aws_s3_bucket.logs.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_s3_bucket_versioning" "logs" {
  bucket = aws_s3_bucket.logs.id
  versioning_configuration { status = "Enabled" }
}

resource "aws_security_group" "web" {
  name        = "web"
  description = "HTTPS ingress from the load balancer only"
  vpc_id      = aws_vpc.main.id
}

resource "aws_security_group_rule" "web_https_from_alb" {
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  source_security_group_id = aws_security_group.alb.id # no CIDR, no 0.0.0.0/0
  security_group_id        = aws_security_group.web.id
}

resource "aws_db_instance" "reporting" {
  engine                              = "postgres"
  identifier                          = "reporting"
  username                            = "admin"
  password                            = data.aws_secretsmanager_secret_version.db.secret_string
  allocated_storage                   = 100
  storage_encrypted                   = true
  kms_key_id                          = aws_kms_key.rds.arn
  publicly_accessible                 = false
  backup_retention_period             = 14
  deletion_protection                 = true
  copy_tags_to_snapshot               = true
  performance_insights_enabled        = true
  performance_insights_kms_key_id     = aws_kms_key.rds.arn
  iam_database_authentication_enabled = true
}

resource "aws_instance" "bastion" {
  ami           = data.aws_ami.al2023.id
  instance_type = "t3.small"

  metadata_options {
    http_tokens                 = "required" # IMDSv2 only
    http_put_response_hop_limit = 1          # prevent container hops
    http_endpoint               = "enabled"
  }

  root_block_device {
    encrypted  = true
    kms_key_id = aws_kms_key.ebs.arn
  }
}
```

Insecure Defaults Cheat Sheet (AWS)
| Resource | Default that bites | Explicit fix |
|---|---|---|
| aws_db_instance | storage_encrypted = false | storage_encrypted = true + kms_key_id |
| aws_ebs_volume | encrypted = false | encrypted = true + account default KMS |
| aws_instance | http_tokens = "optional" (IMDSv1 allowed) | metadata_options { http_tokens = "required" } |
| aws_s3_bucket | No PublicAccessBlock | aws_s3_bucket_public_access_block — all four true |
| aws_security_group | Implicit egress 0.0.0.0/0 to everywhere | Replace default egress with explicit rules |
| aws_elasticsearch_domain | Plaintext node-to-node | node_to_node_encryption + encrypt_at_rest |
| aws_cloudtrail | Single-region; no log-file validation | is_multi_region_trail + enable_log_file_validation |
| aws_lambda_function | Env vars encrypted only with the default AWS-managed key | kms_key_arn (CMK) + VPC config |
| aws_eks_cluster | Public endpoint, no secret encryption | endpoint_public_access = false + envelope encryption with KMS |
The "missing field" review trick
Reading what a PR adds catches bad values. Reading what a PR does not say catches bad defaults. For every resource in a diff, open the provider docs and ask which fields default to an insecure value. If you are doing this by hand, you are slower than checkov or tfsec — run them in CI and make failures blocking.
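Checkov and tfsec encode exactly this idea at scale; a toy version makes the mechanism clear. A sketch (Python; the `REQUIRED` rule table and `missing_fields` helper are hypothetical, covering a subset of the cheat sheet above):

```python
import re

# Insecure-by-omission rules: resource type -> attributes that must be
# set explicitly (a subset of the cheat sheet above; illustrative only).
REQUIRED = {
    "aws_db_instance": ["storage_encrypted", "kms_key_id"],
    "aws_instance":    ["metadata_options"],
    "aws_ebs_volume":  ["encrypted"],
}

def missing_fields(hcl: str):
    """Report required attributes a resource block never mentions.
    Naive block splitting, not a real HCL parser -- a review-time sketch."""
    findings = []
    for block in re.split(r'(?=resource\s+")', hcl):
        m = re.match(r'resource\s+"(\w+)"\s+"(\w+)"', block)
        if not m:
            continue
        rtype, rname = m.groups()
        for attr in REQUIRED.get(rtype, []):
            if attr not in block:
                findings.append(f"{rtype}.{rname}: missing {attr}")
    return findings

sample = '''
resource "aws_db_instance" "reporting" {
  engine              = "postgres"
  allocated_storage   = 100
  publicly_accessible = true
}

resource "aws_instance" "bastion" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t3.small"
}
'''
print(missing_fields(sample))
```

The point is the rule shape: the checker fires on absence, which is exactly what a diff-reading reviewer cannot see.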
A new `aws_db_instance` block passes tests and lint. Which single omitted field, if un-flagged, is the highest-impact finding?
IAM Blast Radius in IaC
IAM policies are the highest-leverage text in any IaC codebase. A single * in the wrong place is the difference between "a Lambda that reads one bucket" and "a Lambda that can delete the organization." Reviewing IAM in IaC is disproportionately more valuable than reviewing any other resource type — and disproportionately harder, because the symptoms only show up on the day someone exploits it.
IAM Blast Radius — What Each Wildcard Costs You
- `Action: "*"`: every AWS API across every service (`iam:PutRolePolicy` · `ec2:*` · `s3:*` · `org:*`)
- `Resource: "*"`: every object the caller can see (every bucket · every key · every role · every VPC)
- Wildcard `iam:PassRole`: gateway to privilege escalation (hand this identity to Lambda/ECS/EC2)
- `NotAction` / `NotResource`: inverted logic, easy to widen silently (`NotAction: ["iam:DeleteUser"]` == everything else)
- `Principal: "*"`: any AWS account — or the public Internet (public S3, public KMS, cross-tenant Lambda)
- Missing conditions: no `aws:SourceArn` / `aws:PrincipalOrgID` (confused-deputy & cross-account abuse)

Review lens: Every * in an IaC-authored policy is a finding until someone writes down the specific actions and ARNs that justify it. Policies built with Terraform aws_iam_policy_document make this auditable; policies built with string-concatenated JSON hide it.
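That lens is mechanical enough to automate as a first pass over rendered policy JSON. A sketch (Python; `wildcard_findings` is a hypothetical helper, and nested shapes such as `Principal: {"AWS": [...]}` are only partially handled):

```python
import json

def wildcard_findings(policy: dict):
    """Flag statements where Action, Resource, or Principal is (or
    contains) a bare "*" -- each hit is a finding until justified."""
    findings = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        for field in ("Action", "Resource", "Principal"):
            value = stmt.get(field)
            if isinstance(value, dict):      # e.g. Principal: {"AWS": "*"}
                values = list(value.values())
            elif isinstance(value, list):
                values = value
            else:
                values = [value]
            if "*" in values:
                findings.append((i, field))
    return findings

policy = json.loads('''
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "*", "Resource": "*"},
    {"Effect": "Allow", "Action": ["s3:GetObject"],
     "Resource": "arn:aws:s3:::acme-raw/*"}
  ]
}
''')
print(wildcard_findings(policy))
```

Statement 0 produces two findings; statement 1 produces none, because `arn:aws:s3:::acme-raw/*` is a scoped ARN, not a bare wildcard.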
Terraform — A Menu of Bad IAM Patterns
```hcl
# VULNERABLE: admin-equivalent policy attached to a Lambda execution role.
resource "aws_iam_role" "ingest" {
  name               = "ingest"
  assume_role_policy = data.aws_iam_policy_document.assume_lambda.json
}

resource "aws_iam_role_policy" "ingest_admin" {
  role = aws_iam_role.ingest.id
  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Effect   = "Allow",
      Action   = "*", # every AWS API
      Resource = "*"  # on every object
    }]
  })
}

# Separate policy with an iam:PassRole wildcard — the classic privesc primitive
resource "aws_iam_role_policy" "ingest_passrole" {
  role = aws_iam_role.ingest.id
  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Effect   = "Allow",
      Action   = "iam:PassRole",
      Resource = "*" # pass any role in the account
    }]
  })
}

# Trust policy with a wildcarded principal — confused-deputy waiting to happen
data "aws_iam_policy_document" "assume_external" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "AWS"
      identifiers = ["*"] # any account
    }
    # Missing: condition { test = "StringEquals", variable = "aws:PrincipalOrgID", ... }
  }
}
```

Terraform — Structured IAM That Reviews Well
```hcl
# SAFER: generate policies from data sources so the structure is auditable
# and narrow the actions, resources, and conditions explicitly.
data "aws_iam_policy_document" "ingest" {
  statement {
    sid     = "ReadRawBucket"
    effect  = "Allow"
    actions = ["s3:GetObject", "s3:ListBucket"]
    resources = [
      aws_s3_bucket.raw.arn,
      "${aws_s3_bucket.raw.arn}/*",
    ]
    condition {
      test     = "StringEquals"
      variable = "aws:ResourceAccount"
      values   = [data.aws_caller_identity.current.account_id]
    }
  }

  statement {
    sid       = "PutToProcessedBucket"
    effect    = "Allow"
    actions   = ["s3:PutObject"]
    resources = ["${aws_s3_bucket.processed.arn}/*"]
  }

  statement {
    sid       = "WriteKmsEncrypted"
    effect    = "Allow"
    actions   = ["kms:Encrypt", "kms:GenerateDataKey"]
    resources = [aws_kms_key.processed.arn]
  }
}

resource "aws_iam_role_policy" "ingest" {
  role   = aws_iam_role.ingest.id
  policy = data.aws_iam_policy_document.ingest.json
}

# Cross-account trust with OrgID + external ID
data "aws_iam_policy_document" "assume_external" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::444455556666:role/partner-sync"]
    }
    condition {
      test     = "StringEquals"
      variable = "aws:PrincipalOrgID"
      values   = [var.org_id]
    }
    condition {
      test     = "StringEquals"
      variable = "sts:ExternalId"
      values   = [var.external_id]
    }
  }
}
```

- Every `Action: "*"` is a finding. Replace it with the minimum list of calls the workload actually makes; use IAM Access Analyzer policy generation to start.
- Every `Resource: "*"` is a finding unless the API is service-level (e.g., `s3:ListAllMyBuckets`). Most actions support a resource ARN — use it.
- Watch for `NotAction` / `NotResource`. Their inverted semantics are a common source of silent widening. "Allow everything except DeleteUser" is rarely what the author meant.
- Flag every `iam:PassRole` on `Resource: "*"` without a condition. PassRole with a wildcard resource + any compute service privilege = trivial privilege escalation to any role in the account.
- Cross-account trust needs `aws:PrincipalOrgID` or `sts:ExternalId`. A bare trust statement on `{"AWS": "*"}` means any AWS account in the world can assume your role.
- Prefer `aws_iam_policy_document` (or CDK IAM constructs) over hand-written JSON. Structure beats strings — reviewers can see the statement boundary.
- Permissions boundaries and SCPs are your safety net. Make sure every human-writable role has a permissions boundary that forbids `iam:*` and `org:*`.
A single `iam:PassRole *` is a full takeover
If any caller can iam:PassRole * and then lambda:CreateFunction (or ECS / EC2 RunInstances), they can pick any role in the account — including Organizations-admin roles — and attach it to a Lambda they control. The policy document makes this visible; the reviewer's job is to make it loud.
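The dangerous combination is checkable across an identity's attached policies: `iam:PassRole` on a wildcard resource plus any compute-launch action. A sketch (Python; `passrole_escalation` is a hypothetical helper that ignores Deny statements, conditions, and action wildcards for brevity):

```python
# Compute-launch actions that accept a role to attach (illustrative subset).
COMPUTE_LAUNCH = {"lambda:CreateFunction", "ecs:RunTask", "ec2:RunInstances"}

def passrole_escalation(policies):
    """True if, across all policies attached to one identity, some
    statement grants iam:PassRole on * AND some statement grants a
    compute-launch action -- the full-takeover combination."""
    has_passrole = has_launch = False
    for policy in policies:
        for stmt in policy.get("Statement", []):
            if stmt.get("Effect") != "Allow":
                continue
            actions = stmt.get("Action", [])
            actions = actions if isinstance(actions, list) else [actions]
            resources = stmt.get("Resource", [])
            resources = resources if isinstance(resources, list) else [resources]
            if "iam:PassRole" in actions and "*" in resources:
                has_passrole = True
            if COMPUTE_LAUNCH & set(actions):
                has_launch = True
    return has_passrole and has_launch

# Two separately innocent-looking policies on the same role.
ingest = [
    {"Statement": [{"Effect": "Allow", "Action": "iam:PassRole",
                    "Resource": "*"}]},
    {"Statement": [{"Effect": "Allow", "Action": ["lambda:CreateFunction"],
                    "Resource": "*"}]},
]
print(passrole_escalation(ingest))  # True -> full-takeover finding
```

Note that neither policy alone triggers the check; the finding only exists when you review all policies attached to an identity together, which is exactly why per-file review misses it.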
Which condition is most important on a cross-account `sts:AssumeRole` trust policy allowing a partner AWS account?
State File Security
The Terraform state file is the part of the IaC stack most reviewers never open — and therefore the part most likely to leak a secret. State contains every attribute Terraform knows about every resource: RDS passwords, SSM parameter values, private keys embedded in certificates, Cognito app secrets, and many outputs marked sensitive. A read of the state bucket is a read of every production secret Terraform has ever managed.
Terraform State — Where Secrets Quietly Live
The state file is plaintext JSON. It contains:
- RDS master passwords
- Cognito/Okta app secrets
- SSM parameter values
- Private keys & tokens

The backend must have:
- KMS/CMK encryption
- Block Public Access
- Versioning + MFA-delete
- Narrow IAM on the bucket

Locking prevents:
- Concurrent applies
- Split-brain state
- Partial rollouts
Threat model: If any identity can read the state bucket, they can read every secret Terraform has ever managed — even secrets that are marked sensitive = true. sensitive = true hides outputs from the CLI; it does not hide them from the state file. Encryption-at-rest and narrow bucket IAM are the actual controls.
Vulnerable Backend — Local or "Just S3"
```hcl
# VULNERABLE: local state committed to Git, or an S3 backend with
# no encryption, no locking, and broad bucket IAM.
terraform {
  # No backend configured — Terraform writes terraform.tfstate locally,
  # and someone commits it to the repo.

  # OR this minimal (and insufficient) S3 backend:
  backend "s3" {
    bucket = "acme-tfstate"
    key    = "prod/main.tfstate"
    region = "us-east-1"
    # Missing:
    # - encrypt        = true
    # - kms_key_id     = <CMK>
    # - dynamodb_table = <lock table>
    # - acl            = "private"
  }
}
```

Hardened Backend
```hcl
# SAFER: KMS-encrypted S3 backend + DynamoDB locking + narrow IAM.
terraform {
  backend "s3" {
    bucket         = "acme-tfstate-prod"
    key            = "prod/main.tfstate"
    region         = "us-east-1"
    encrypt        = true
    kms_key_id     = "arn:aws:kms:us-east-1:111122223333:key/abcdef01-...-..."
    dynamodb_table = "tfstate-locks"
    acl            = "private"
    use_lockfile   = true # TF 1.10+: also writes a .tflock file
  }
}

# The backing resources live in a dedicated "bootstrap" stack.
resource "aws_s3_bucket" "tfstate" {
  bucket = "acme-tfstate-prod"
}

resource "aws_s3_bucket_versioning" "tfstate" {
  bucket = aws_s3_bucket.tfstate.id
  versioning_configuration { status = "Enabled" }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "tfstate" {
  bucket = aws_s3_bucket.tfstate.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.tfstate.arn
    }
  }
}

resource "aws_s3_bucket_public_access_block" "tfstate" {
  bucket                  = aws_s3_bucket.tfstate.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_dynamodb_table" "tfstate_locks" {
  name         = "tfstate-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"
  attribute {
    name = "LockID"
    type = "S"
  }
  server_side_encryption { enabled = true }
  point_in_time_recovery { enabled = true }
}

# Separate KMS key; key policy enforces that only the pipeline role
# (and break-glass admin) can call Encrypt/Decrypt.
```

State Backend Security Checklist
| Control | Why |
|---|---|
| Never commit .tfstate / .tfstate.backup | Plaintext secrets in Git |
| Backend encrypts at rest (KMS/CMK) | Stops bucket-read from becoming secret-read |
| Backend blocks public access | S3 public buckets holding state are a recurring incident |
| Backend has versioning + MFA-delete | Ransomware / corruption recovery |
| Bucket IAM narrow to the pipeline role | Readers of the state = holders of the secrets |
| DynamoDB locking (or native lockfile) | Prevents concurrent applies and split-brain state |
| CloudTrail data events on the state bucket | Anomalous GetObject = exfiltration signal |
| `.gitignore` for `terraform.tfstate*`, `.terraform/`, `*.tfvars` | Stops casual commits |
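Several rows of this checklist reduce to string checks on the backend block, which makes them cheap CI gates. A sketch (Python; `backend_gaps` is a hypothetical helper using string matching, not real HCL parsing):

```python
import re

# Backend settings the checklist above treats as mandatory.
REQUIRED_KEYS = ("encrypt", "kms_key_id", "dynamodb_table")

def backend_gaps(hcl: str):
    """Return required settings missing from a backend "s3" block.
    Naive string check -- a sketch, not an HCL parser."""
    m = re.search(r'backend\s+"s3"\s*{([^}]*)}', hcl, re.S)
    if not m:
        return ["no s3 backend configured (local state?)"]
    body = m.group(1)
    return [k for k in REQUIRED_KEYS if k not in body]

# The minimal "just S3" backend from the vulnerable example above.
minimal = '''
terraform {
  backend "s3" {
    bucket = "acme-tfstate"
    key    = "prod/main.tfstate"
    region = "us-east-1"
  }
}
'''
print(backend_gaps(minimal))
```

A non-empty result here is a blocking finding: each missing key maps directly to a row in the checklist.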
State is the crown jewels
If you had to pick a single IaC artifact to protect, pick the state backend. An attacker with read on state does not need RCE, does not need the cloud console, does not need the pipeline — they already have every secret Terraform manages. Treat state-bucket IAM with the same scrutiny you give root account access.
A reviewer sees `sensitive = true` on an RDS password output and a plain S3 backend (no encryption). What is the practical security posture?