Cloud Misconfiguration: Security Code Review Guide
1. Introduction to Cloud Misconfigurations
Cloud misconfigurations are the leading cause of cloud data breaches. Unlike traditional vulnerabilities that require code-level exploits, a misconfigured cloud resource — a public S3 bucket, an overly permissive IAM role, or an open security group — can expose your entire infrastructure with a single setting. As organizations adopt Infrastructure as Code (IaC), these misconfigurations increasingly appear in pull requests as Terraform files, CloudFormation templates, and Kubernetes manifests.
Why This Matters
According to Gartner, through 2025, 99% of cloud security failures will be the customer's fault — primarily due to misconfigurations. The 2023 Verizon DBIR found that misconfiguration errors were involved in 21% of all breaches. A single public S3 bucket can expose millions of records, and an overly permissive IAM policy can give an attacker lateral movement across your entire AWS account.
In this guide, you'll learn:
- How to spot dangerous cloud storage exposures during code review
- Why IAM wildcard policies are as dangerous as SQL injection
- How to review security group and firewall rules for network exposure
- What to look for in Terraform and CloudFormation templates
- How to detect missing encryption, logging, and compliance controls
- How real-world breaches like Capital One happened through misconfiguration
Cloud Misconfiguration Attack Surface
[Figure: Cloud infrastructure layers — each has misconfiguration risks]
[Figure: Top cloud misconfigurations]
Why are cloud misconfigurations fundamentally different from traditional application vulnerabilities?
2. Storage Misconfigurations
Cloud storage services (AWS S3, Azure Blob Storage, GCP Cloud Storage) are the most commonly misconfigured cloud resources. Public bucket exposures have caused some of the largest data breaches in history, exposing customer records, credentials, and internal documents.
Terraform: Insecure S3 bucket configuration
```hcl
# ❌ INSECURE: Multiple critical misconfigurations
resource "aws_s3_bucket" "data" {
  bucket = "company-customer-data"
}

# Public access NOT blocked — bucket can be made public
# (missing aws_s3_bucket_public_access_block)

resource "aws_s3_bucket_acl" "data_acl" {
  bucket = aws_s3_bucket.data.id
  acl    = "public-read" # Anyone on the internet can read!
}

# No encryption configured — data stored in plaintext
# No versioning — deleted data is gone forever
# No logging — no audit trail for access
# No lifecycle rules — data retained indefinitely
```

Terraform: Secure S3 bucket configuration
```hcl
# ✅ SECURE: Hardened S3 bucket
resource "aws_s3_bucket" "data" {
  bucket = "company-customer-data"
}

# Block ALL public access at the bucket level
resource "aws_s3_bucket_public_access_block" "data" {
  bucket = aws_s3_bucket.data.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# Enable server-side encryption with KMS
resource "aws_s3_bucket_server_side_encryption_configuration" "data" {
  bucket = aws_s3_bucket.data.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.s3_key.arn
    }
    bucket_key_enabled = true
  }
}

# Enable versioning for recovery from accidental deletion
resource "aws_s3_bucket_versioning" "data" {
  bucket = aws_s3_bucket.data.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Enable access logging
resource "aws_s3_bucket_logging" "data" {
  bucket        = aws_s3_bucket.data.id
  target_bucket = aws_s3_bucket.logs.id
  target_prefix = "s3-access-logs/customer-data/"
}

# Enforce TLS-only access
resource "aws_s3_bucket_policy" "enforce_tls" {
  bucket = aws_s3_bucket.data.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "EnforceTLS"
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:*"
        Resource = [
          aws_s3_bucket.data.arn,
          "${aws_s3_bucket.data.arn}/*"
        ]
        Condition = {
          Bool = { "aws:SecureTransport" = "false" }
        }
      }
    ]
  })
}
```

Storage Security Rules
| Rule | Bad Practice | Secure Practice | Risk |
|---|---|---|---|
| Public access | acl = "public-read" | Block all public access + private ACL | Anyone on the internet can download every object |
| Encryption | No encryption configured | SSE-KMS with customer-managed key | Data readable if disk or backup is compromised |
| Versioning | Versioning disabled | Enable versioning + MFA delete | Ransomware can permanently delete data |
| Logging | No access logging | Enable S3 access logging to separate bucket | No audit trail for data access or exfiltration |
| Transport | Allow HTTP access | Deny non-TLS requests via bucket policy | Data intercepted in transit (MITM) |
| Lifecycle | No lifecycle policy | Auto-transition to Glacier, expire old versions | Cost explosion and stale data exposure |
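The first two rows of this table can be checked mechanically during review. Below is a minimal regex sketch (the helper name and messages are hypothetical; it is not a substitute for Checkov or tfsec, which actually parse HCL):

```python
import re

def s3_findings(tf_source):
    """Flag two common S3 red flags in raw Terraform source text.

    Illustrative heuristic only: a public-read(-write) ACL, and a bucket
    defined without any aws_s3_bucket_public_access_block resource.
    """
    findings = []
    if re.search(r'acl\s*=\s*"public-read(-write)?"', tf_source):
        findings.append("bucket ACL is public")
    if ('aws_s3_bucket' in tf_source
            and 'aws_s3_bucket_public_access_block' not in tf_source):
        findings.append("no aws_s3_bucket_public_access_block resource")
    return findings

insecure = '''
resource "aws_s3_bucket" "data" { bucket = "company-customer-data" }
resource "aws_s3_bucket_acl" "data_acl" { acl = "public-read" }
'''
print(s3_findings(insecure))
# → ['bucket ACL is public', 'no aws_s3_bucket_public_access_block resource']
```

Both findings fire on the insecure snippet above; a configuration that adds the public-access-block resource and a private ACL produces no findings.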
A Terraform PR adds an S3 bucket with no aws_s3_bucket_public_access_block resource. The developer says it's fine because the ACL is set to 'private'. Is this secure?
3. IAM & Identity Misconfigurations
Identity and Access Management (IAM) misconfigurations are the most impactful class of cloud security issues. An overly permissive IAM policy doesn't just expose one resource — it can grant an attacker control over your entire cloud account. IAM policies are essentially code, and they deserve the same rigorous review as application logic.
AWS IAM: Dangerous policies to flag during review
```json
// ❌ CRITICAL: "God mode" policy — full access to everything
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}

// ❌ HIGH: Overly broad S3 access
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}

// ❌ CRITICAL: IAM privilege escalation — can create admin users
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:CreateUser",
        "iam:CreateAccessKey",
        "iam:AttachUserPolicy",
        "iam:PutUserPolicy"
      ],
      "Resource": "*"
    }
  ]
}

// ❌ HIGH: Assumable by any AWS account (confused deputy)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Terraform: Secure IAM role with least privilege
```hcl
# ✅ SECURE: Scoped IAM role for a Lambda function
resource "aws_iam_role" "lambda_processor" {
  name = "order-processor-lambda"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Principal = {
          Service = "lambda.amazonaws.com"
        }
        Action = "sts:AssumeRole"
        # Restrict to specific account
        Condition = {
          StringEquals = {
            "aws:SourceAccount" = data.aws_caller_identity.current.account_id
          }
        }
      }
    ]
  })
}

resource "aws_iam_role_policy" "lambda_processor" {
  name = "order-processor-policy"
  role = aws_iam_role.lambda_processor.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        # Read from specific DynamoDB table only
        Effect = "Allow"
        Action = [
          "dynamodb:GetItem",
          "dynamodb:Query"
        ]
        Resource = aws_dynamodb_table.orders.arn
      },
      {
        # Write to specific S3 prefix only
        Effect   = "Allow"
        Action   = ["s3:PutObject"]
        Resource = "${aws_s3_bucket.processed.arn}/orders/*"
      },
      {
        # Publish to specific SNS topic only
        Effect   = "Allow"
        Action   = ["sns:Publish"]
        Resource = aws_sns_topic.order_notifications.arn
      }
    ]
  })
}
```

- Never use "Action": "*" or "Resource": "*" — Wildcard policies grant unlimited access. Always specify exact actions and resource ARNs.
- Restrict Principal in trust policies — "Principal": {"AWS": "*"} lets any AWS account assume the role. Always specify exact account IDs or use conditions.
- Watch for IAM privilege escalation paths — Permissions like iam:CreateUser, iam:AttachUserPolicy, iam:PutRolePolicy, or iam:PassRole can be chained to escalate to admin.
- Use conditions to limit scope — Add aws:SourceAccount, aws:SourceArn, and aws:PrincipalOrgID conditions to prevent confused deputy attacks.
- Prefer managed policies over inline — Managed policies are version-controlled and reusable, making them easier to audit and update.
- Require MFA for sensitive operations — Add "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}} for destructive or administrative actions.
A Terraform PR creates an IAM role for a CI/CD pipeline with "Action": "s3:*", "Resource": "*". The developer says the pipeline needs to deploy to multiple S3 buckets. What should you recommend?
4. Network & Security Group Misconfigurations
Cloud network misconfigurations — open security groups, permissive firewall rules, and publicly accessible databases — are a top attack vector. Unlike on-premises networks protected by physical firewalls, cloud resources are one misconfigured rule away from being internet-facing.
Terraform: Insecure vs secure security groups
```hcl
# ❌ INSECURE: SSH open to the entire internet
resource "aws_security_group" "web_server" {
  name   = "web-server-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # SSH from anywhere!
  }

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"          # ALL traffic
    cidr_blocks = ["0.0.0.0/0"] # From anywhere!
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"] # Unrestricted outbound
  }
}

# ❌ INSECURE: RDS publicly accessible
resource "aws_db_instance" "database" {
  engine              = "postgres"
  instance_class      = "db.t3.medium"
  publicly_accessible = true # Database on the internet!
  skip_final_snapshot = true # No backup on deletion!
}
```

```hcl
# ✅ SECURE: Properly scoped security groups
resource "aws_security_group" "web_server" {
  name   = "web-server-sg"
  vpc_id = aws_vpc.main.id

  # Only allow HTTPS from the load balancer
  ingress {
    from_port       = 443
    to_port         = 443
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }

  # Restrict outbound to required services only
  egress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # HTTPS outbound (APIs, updates)
  }

  egress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.database.id]
  }

  tags = { Name = "web-server-sg" }
}

# ✅ SECURE: Database in private subnet, no public access
resource "aws_db_instance" "database" {
  engine               = "postgres"
  instance_class       = "db.t3.medium"
  publicly_accessible  = false
  db_subnet_group_name = aws_db_subnet_group.private.name

  storage_encrypted = true
  kms_key_id        = aws_kms_key.rds.arn

  skip_final_snapshot     = false
  backup_retention_period = 7

  vpc_security_group_ids = [aws_security_group.database.id]
}

resource "aws_security_group" "database" {
  name   = "database-sg"
  vpc_id = aws_vpc.main.id

  # Only allow connections from web server SG
  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.web_server.id]
  }
}
```

Network Security Red Flags in Code Review
| Pattern | What It Means | Risk | Fix |
|---|---|---|---|
| cidr_blocks = ["0.0.0.0/0"] | Open to entire internet | Any attacker can reach the port | Restrict to specific IPs/CIDRs or security groups |
| protocol = "-1" (all) | All protocols allowed | No port restriction at all | Specify exact protocol (tcp/udp) and ports |
| publicly_accessible = true | Database/resource has public IP | Direct internet access to data layer | Set to false, use private subnets + bastion |
| from_port = 0, to_port = 65535 | All ports open | Every service on the instance is exposed | Open only required ports (443, 8080, etc.) |
| ipv6_cidr_blocks = ["::/0"] | Open to all IPv6 addresses | Often overlooked — same risk as 0.0.0.0/0 | Apply same restrictions to IPv6 as IPv4 |
| SSH (22) or RDP (3389) from 0.0.0.0/0 | Remote admin from anywhere | Brute force, credential stuffing | Use SSM Session Manager, VPN, or bastion host |
The 0.0.0.0/0 Problem
Every time you see 0.0.0.0/0 or ::/0 in an ingress rule, it means "anyone on the internet." For SSH, RDP, databases, or any management port, this is never acceptable in production. Even for web servers, traffic should flow through a load balancer — not directly to the instance. In code review, treat any 0.0.0.0/0 ingress rule on non-HTTP/HTTPS ports as a critical finding.
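This rule of thumb can be encoded with the standard ipaddress module. A sketch for triaging ingress rules (the port deny-list and the rule dictionary shape are assumptions for illustration):

```python
import ipaddress

# Assumption: a typical deny-list of ports that must never be world-open
ADMIN_PORTS = {22, 3389, 5432, 3306, 6379, 27017}  # SSH, RDP, Postgres, MySQL, Redis, Mongo

def is_world_open(cidr):
    """True if the CIDR covers the entire IPv4 or IPv6 address space."""
    net = ipaddress.ip_network(cidr)
    return net.num_addresses == 2 ** (32 if net.version == 4 else 128)

def review_ingress(rule):
    """Classify one ingress rule.

    Hypothetical rule shape mirroring the Terraform attributes:
    {"from_port": 22, "to_port": 22, "cidr_blocks": ["0.0.0.0/0"]}
    Protocol "-1" (all traffic) would need special-casing on top of this.
    """
    if not any(is_world_open(c) for c in rule.get("cidr_blocks", [])):
        return None  # no world-open CIDR in this rule
    ports = range(rule["from_port"], rule["to_port"] + 1)
    if any(p in ADMIN_PORTS for p in ports):
        return "CRITICAL: admin/database port open to the internet"
    return "WARN: world-open ingress -- confirm traffic goes through a load balancer"

print(review_ingress({"from_port": 22, "to_port": 22, "cidr_blocks": ["0.0.0.0/0"]}))
# → CRITICAL: admin/database port open to the internet
```

Note that the same check catches ::/0, which reviewers often miss when they only grep for 0.0.0.0/0.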
A developer adds a security group with SSH (port 22) open to 0.0.0.0/0 'temporarily for debugging.' They promise to remove it before merging to main. Should you approve the PR?
5. Compute & Serverless Security
Compute resources — EC2 instances, Lambda functions, Azure Functions, and GCP Cloud Run — have their own set of misconfiguration risks. Serverless functions are particularly dangerous because developers often grant them broad permissions "just to get things working" and the lack of a persistent server creates a false sense of security.
Terraform: Insecure vs secure Lambda function
```hcl
# ❌ INSECURE: Overly permissive Lambda
resource "aws_lambda_function" "processor" {
  function_name = "data-processor"
  runtime       = "nodejs20.x"
  handler       = "index.handler"
  filename      = "lambda.zip"

  # Using an overly broad role (see IAM section)
  role = aws_iam_role.lambda_admin.arn

  environment {
    variables = {
      # Secrets in environment variables — visible in console/API
      DB_PASSWORD    = "super-secret-password"
      API_KEY        = "sk-live-abc123xyz"
      ENCRYPTION_KEY = "my-encryption-key-2024"
    }
  }

  # No VPC — Lambda has unrestricted internet access
  # No reserved concurrency — can be DoS'd
  # No dead letter queue — failed events lost
}
```

```hcl
# ✅ SECURE: Hardened Lambda function
resource "aws_lambda_function" "processor" {
  function_name = "data-processor"
  runtime       = "nodejs20.x"
  handler       = "index.handler"
  filename      = "lambda.zip"

  role = aws_iam_role.lambda_scoped.arn

  environment {
    variables = {
      # Reference secrets by ARN — resolved at runtime
      DB_SECRET_ARN = aws_secretsmanager_secret.db_creds.arn
      ENVIRONMENT   = "production"
    }
  }

  # Deploy inside VPC for network isolation
  vpc_config {
    subnet_ids         = aws_subnet.private[*].id
    security_group_ids = [aws_security_group.lambda.id]
  }

  # Prevent runaway invocations
  reserved_concurrent_executions = 100

  # Send failed events to DLQ for investigation
  dead_letter_config {
    target_arn = aws_sqs_queue.lambda_dlq.arn
  }

  # Enable X-Ray tracing
  tracing_config {
    mode = "Active"
  }

  # Set a reasonable timeout
  timeout     = 30
  memory_size = 256

  # Enable code signing for integrity
  code_signing_config_arn = aws_lambda_code_signing_config.signing.arn
}
```

Common EC2 misconfigurations in Terraform
```hcl
# ❌ INSECURE: EC2 with dangerous defaults
resource "aws_instance" "web" {
  ami           = "ami-12345678"
  instance_type = "t3.medium"

  # No IMDSv2 — vulnerable to SSRF-based credential theft
  # (IMDSv1 allows a single unauthenticated HTTP request
  # to steal IAM role credentials)

  # User data with secrets in plaintext
  user_data = <<-EOF
    #!/bin/bash
    export DB_PASSWORD="production-password"
    export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
    ./start-app.sh
  EOF

  # EBS volume not encrypted
  root_block_device {
    volume_size = 50
    encrypted   = false
  }
}
```

```hcl
# ✅ SECURE: Hardened EC2 instance
resource "aws_instance" "web" {
  ami           = "ami-12345678"
  instance_type = "t3.medium"

  # Enforce IMDSv2 — prevents SSRF credential theft
  metadata_options {
    http_endpoint               = "enabled"
    http_tokens                 = "required" # Requires session token (IMDSv2)
    http_put_response_hop_limit = 1
  }

  # Use IAM instance profile instead of hardcoded keys
  iam_instance_profile = aws_iam_instance_profile.web.name

  # Encrypted root volume
  root_block_device {
    volume_size = 50
    encrypted   = true
    kms_key_id  = aws_kms_key.ebs.arn
  }

  # Enable detailed monitoring
  monitoring = true

  tags = {
    Name        = "web-server"
    Environment = "production"
  }
}
```

IMDSv2 Is Critical for EC2
The Instance Metadata Service (IMDS) at 169.254.169.254 provides IAM credentials to EC2 instances. IMDSv1 allows a single unauthenticated GET request to steal these credentials, so any SSRF vulnerability in your application can escalate instantly to full AWS account compromise. Always enforce IMDSv2 (http_tokens = "required"), which requires a session token and thereby blocks SSRF-based credential theft. The Capital One breach exploited exactly this chain: SSRF → IMDSv1 → IAM credentials → S3 data exfiltration.
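During review, this reduces to a single attribute check. A regex-based sketch (hypothetical helper; a real check should parse HCL, and remember that AWS defaults http_tokens to "optional", i.e. IMDSv1 allowed):

```python
import re

def enforces_imdsv2(tf_source):
    """True only if the Terraform source sets http_tokens = "required".

    Illustrative heuristic: absence of metadata_options means the
    AWS default applies, which still permits IMDSv1.
    """
    return bool(re.search(r'http_tokens\s*=\s*"required"', tf_source))

hardened = '''
resource "aws_instance" "web" {
  metadata_options {
    http_endpoint = "enabled"
    http_tokens   = "required"
  }
}
'''
print(enforces_imdsv2(hardened))                            # → True
print(enforces_imdsv2('resource "aws_instance" "web" {}'))  # → False (defaults to IMDSv1)
```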
A Lambda function has DB_PASSWORD stored directly in its environment variables. The developer argues this is fine because 'Lambda environment variables are encrypted at rest by default.' Is this secure?
6. Infrastructure as Code Security
Infrastructure as Code (Terraform, CloudFormation, Pulumi, ARM templates) is the primary way cloud infrastructure is defined and deployed. Every IaC pull request is a potential security change — adding, modifying, or removing security controls. Reviewing IaC requires understanding both the code syntax and the security implications of each resource attribute.
CloudFormation: Common misconfigurations
```yaml
# ❌ INSECURE CloudFormation patterns

# Public S3 bucket via CloudFormation
Resources:
  DataBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: company-data
      AccessControl: PublicRead # Public!
      # Missing: BucketEncryption
      # Missing: PublicAccessBlockConfiguration
      # Missing: LoggingConfiguration

  # RDS without encryption or backup
  Database:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: postgres
      MasterUsername: admin
      MasterUserPassword: !Ref DatabasePassword # From parameter
      PubliclyAccessible: true # On the internet!
      StorageEncrypted: false # Plaintext storage
      BackupRetentionPeriod: 0 # No backups!
      DeletionProtection: false # Can be accidentally deleted

---
# ✅ SECURE CloudFormation patterns
Resources:
  DataBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: company-data
      AccessControl: Private
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: aws:kms
              KMSMasterKeyID: !Ref DataKMSKey
      VersioningConfiguration:
        Status: Enabled
      LoggingConfiguration:
        DestinationBucketName: !Ref LogBucket
        LogFilePrefix: s3-access-logs/

  Database:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: postgres
      MasterUsername: !Sub "{{resolve:secretsmanager:${DBSecret}:SecretString:username}}"
      MasterUserPassword: !Sub "{{resolve:secretsmanager:${DBSecret}:SecretString:password}}"
      PubliclyAccessible: false
      StorageEncrypted: true
      KmsKeyId: !Ref RDSKMSKey
      BackupRetentionPeriod: 7
      DeletionProtection: true
      EnableCloudwatchLogsExports:
        - postgresql
```

IaC scanning tools for CI/CD
```bash
# Checkov — comprehensive IaC scanner
checkov -d .                  # Scan all IaC files
checkov -f main.tf            # Scan specific file
checkov --framework terraform # Terraform only
checkov --check CKV_AWS_18    # Specific check (S3 logging)

# tfsec — Terraform-specific scanner
tfsec .
tfsec --minimum-severity HIGH # Only high+ findings

# Trivy config scanning (Terraform, CloudFormation, Dockerfile)
trivy config .
trivy config --severity HIGH,CRITICAL .

# terrascan — policy-as-code for IaC
terrascan scan -i terraform -d .

# KICS (Keeping Infrastructure as Code Secure)
kics scan -p .

# cfn-lint — CloudFormation linter
cfn-lint template.yaml

# OPA/Conftest — custom policy checks
conftest test main.tf --policy policy/
```

IaC Security Scanning Comparison
| Tool | Frameworks | Key Strength | Integration |
|---|---|---|---|
| Checkov | TF, CFN, K8s, ARM, Helm | 1000+ built-in policies, custom policies | GitHub Actions, GitLab, Jenkins |
| tfsec | Terraform | Deep Terraform understanding, fast | GitHub Actions, pre-commit |
| Trivy | TF, CFN, Docker, K8s | Also scans container images and code | GitHub Actions, GitLab, CI/CD |
| Terrascan | TF, CFN, K8s, Helm | OPA-based custom policies | GitHub Actions, admission controller |
| KICS | TF, CFN, Docker, K8s, ARM | Broad framework support, fast | GitHub Actions, GitLab, Jenkins |
Terraform State Contains Secrets
Terraform state files (terraform.tfstate) contain the plaintext values of every resource attribute — including database passwords, IAM access keys, and TLS private keys. Never commit state files to git. Store them in an encrypted backend (S3 with SSE-KMS + DynamoDB locking, Terraform Cloud, or Azure Blob with encryption). Restrict access to the state backend using IAM policies. Treat the state file as a critical secret.
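To see why the state file is so sensitive: every attribute value sits in plain JSON under each resource's instances. An illustrative walker (the key heuristic and helper name are assumptions) that lists secret-looking attribute paths in a tfstate document:

```python
import json

# Assumption: a simple heuristic list of secret-bearing key names
SECRET_KEYS = ("password", "secret", "private_key", "access_key", "token")

def secret_paths(state_json):
    """Walk a terraform.tfstate document and return attribute paths whose
    key name looks secret-bearing. Heuristic sketch, not exhaustive."""
    found = []

    def walk(node, path):
        if isinstance(node, dict):
            for key, value in node.items():
                if any(s in key.lower() for s in SECRET_KEYS) and isinstance(value, str):
                    found.append(f"{path}.{key}")
                walk(value, f"{path}.{key}")
        elif isinstance(node, list):
            for i, item in enumerate(node):
                walk(item, f"{path}[{i}]")

    walk(json.loads(state_json), "state")
    return found

state = json.dumps({
    "resources": [{"instances": [{"attributes": {
        "engine": "postgres",
        "password": "hunter2",  # Terraform stores this in plaintext
    }}]}]
})
print(secret_paths(state))
# → ['state.resources[0].instances[0].attributes.password']
```

The point: anyone who can read the state backend can read every such value, which is why backend access must be locked down as tightly as the secrets themselves.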
A PR adds a Terraform configuration that passes the database password as a variable: variable 'db_password' {} and uses it in the RDS resource. The .tfvars file is gitignored. Is this approach secure?
7. Detection During Code Review
When reviewing pull requests that include cloud infrastructure changes, use a systematic approach. Cloud misconfigurations are often a single attribute or missing resource — easy to overlook but catastrophic if deployed.
Code Review Detection Patterns for Cloud Config
| File Type | What to Review | Red Flags |
|---|---|---|
| Terraform (.tf) | S3 buckets, IAM policies, security groups, RDS | Missing public_access_block, "Action": "*", cidr 0.0.0.0/0, publicly_accessible = true |
| CloudFormation (.yaml/.json) | Same resources in CFN syntax | AccessControl: PublicRead, PubliclyAccessible: true, StorageEncrypted: false |
| IAM policies (.json) | Actions, resources, principals, conditions | Wildcard actions/resources, Principal: "*", no conditions, iam:* permissions |
| .tfvars / .env | Should not contain secrets | Passwords, API keys, access keys in variable files |
| terraform.tfstate | Should never be committed | State file in git = all secrets exposed |
| Helm values.yaml | Security contexts, resource limits, images | Disabled security settings, :latest tags, no limits |
| docker-compose.yml | Environment variables, volumes, ports | Secrets in environment, host ports exposed, privileged mode |
GitHub Actions: Automated IaC scanning in CI
```yaml
name: IaC Security Scan
on:
  pull_request:
    paths:
      - '**.tf'
      - '**.yaml'
      - '**.yml'
      - '**.json'

jobs:
  checkov:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run Checkov
        uses: bridgecrewio/checkov-action@v12
        with:
          directory: infrastructure/
          framework: terraform
          soft_fail: false # Fail PR on violations
          output_format: sarif
          download_external_modules: true

      - name: Upload SARIF
        if: always()
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: results.sarif

  tfsec:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run tfsec
        uses: aquasecurity/tfsec-action@v1.0.3
        with:
          working_directory: infrastructure/
          soft_fail: false
```

- Check every S3/storage resource for public access blocks — The aws_s3_bucket_public_access_block resource is separate from the bucket; it is easy to create a bucket without adding the block.
- Search for wildcard IAM patterns — Look for "*" in Action, Resource, and Principal fields. Each wildcard is a potential privilege escalation.
- Verify encryption is enabled everywhere — S3 (SSE-KMS), RDS (StorageEncrypted), EBS (encrypted = true), and DynamoDB (server_side_encryption) should all have encryption configured.
- Check for hardcoded secrets — Search for patterns like password =, secret =, api_key =, and access_key = in IaC files.
- Verify logging and monitoring — CloudTrail, VPC Flow Logs, S3 access logging, and RDS audit logs should be enabled for production resources.
- Look at what is NOT there — Missing resources are as dangerous as misconfigured ones. No NetworkPolicy, no public access block, no encryption config — all are findings.
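The hardcoded-secrets item in this checklist is a good candidate for an automated first pass. A hedged sketch (the regex and names are illustrative; dedicated tools such as gitleaks or truffleHog do this far more thoroughly):

```python
import re

# Only quoted literals match, so references like
# aws_secretsmanager_secret.db_creds.arn are ignored.
SECRET_ASSIGNMENT = re.compile(
    r'(password|secret|api_key|access_key)\s*=\s*"[^"]+"', re.IGNORECASE
)

def hardcoded_secrets(iac_source):
    """Return each secret-looking literal assignment found in IaC source."""
    return [match.group(0) for match in SECRET_ASSIGNMENT.finditer(iac_source)]

snippet = '''
  DB_PASSWORD   = "super-secret-password"
  DB_SECRET_ARN = aws_secretsmanager_secret.db_creds.arn
'''
print(hardcoded_secrets(snippet))
# → ['PASSWORD   = "super-secret-password"']
```

Wired into pre-commit or CI, even a crude check like this catches the most common mistake before it reaches review.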
A PR adds 15 new Terraform resources for a microservice. You notice there are no aws_security_group_rule resources with 0.0.0.0/0, no public S3 buckets, and no wildcard IAM. Should you approve it?