How to Create an S3 Bucket in AWS
The volume of data being generated and shared has grown exponentially in recent years, so a proper storage solution for all that data is essential. AWS S3 provides scalable, secure, and cost-effective cloud storage for managing data in the cloud. Its ability to handle vast volumes of data and its easy integration with other AWS services make S3 a cornerstone of modern cloud storage. If you want to get started with AWS, learning how to create an S3 bucket is a step in the right direction. In this blog, we will guide you through the process of creating a new S3 bucket in AWS so that you know how to use this powerful tool for your data storage needs. For more, don't forget to enroll in the AWS course today.
How to Set Up AWS Access Credentials?
Working with Terraform and AWS requires handling your AWS Access Key and Secret Key properly. These credentials are static, plain-text secrets, so storing them directly inside your Terraform files is risky.
There are two secure ways to manage your AWS credentials:
1. Using Spacelift with IAM Roles: Spacelift has seamless integration for AWS. You can follow their detailed guide to connect to AWS safely: AWS Integration Tutorial.
2. Using HashiCorp Vault: Use Vault's AWS secrets engine to generate dynamic, short-lived AWS credentials backed by IAM policies. Refer to creating IAM policies using Terraform here.
Spacelift Setup
If you’re using Spacelift, you need to configure it with Terraform. Here’s a snippet of code for setting up Spacelift and AWS IAM roles:
# Creating a Spacelift stack
resource "spacelift_stack" "managed-stack" {
  name       = "Stack managed by Spacelift"
  repository = "my-awesome-repo"
  branch     = "master"
}

# Creating an IAM role for the stack to assume
resource "aws_iam_role" "managed-stack-role" {
  name = "spacelift-managed-stack-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      jsondecode(spacelift_stack.managed-stack.aws_assume_role_policy_statement)
    ]
  })
}

# Attaching an administrative policy to the role
resource "aws_iam_role_policy_attachment" "managed-stack-role" {
  role       = aws_iam_role.managed-stack-role.name
  policy_arn = "arn:aws:iam::aws:policy/PowerUserAccess"
}

# Linking the AWS role to the Spacelift stack
resource "spacelift_stack_aws_role" "managed-stack-role" {
  stack_id = spacelift_stack.managed-stack.id
  role_arn = aws_iam_role.managed-stack-role.arn
}
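A note on the policy choice above: PowerUserAccess is a broad AWS-managed policy, used here only to keep the example short. In practice, you would typically attach a policy scoped down to the specific services your stack actually manages.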
HashiCorp Vault Setup
For those using HashiCorp Vault, here’s how you can set up AWS IAM roles to manage S3 Buckets:
variable "aws_access_key" {}
variable "aws_secret_key" {}

variable "name" {
  default = "dynamic-aws-creds-vault-admin"
}

terraform {
  backend "local" {
    path = "terraform.tfstate"
  }
}

provider "vault" {}

resource "vault_aws_secret_backend" "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  path       = "${var.name}-path"

  default_lease_ttl_seconds = "120"
  max_lease_ttl_seconds     = "240"
}

resource "vault_aws_secret_backend_role" "admin" {
  backend         = vault_aws_secret_backend.aws.path
  name            = "${var.name}-role"
  credential_type = "iam_user"

  policy_document = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:*",
        "ec2:*",
        "s3:*"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}

output "backend" {
  value = vault_aws_secret_backend.aws.path
}

output "role" {
  value = vault_aws_secret_backend_role.admin.name
}
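Note the lease settings in this configuration: every set of credentials Vault issues expires after 120 seconds by default and 240 seconds at most, so even a leaked credential is short-lived. This is the main advantage of dynamic credentials over static access keys.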
Using either of these methods, you can handle AWS credentials securely and keep them out of your Terraform configuration files.
How to Create an S3 Bucket Using Terraform?
To start using Terraform with AWS, you must first set up your credentials securely. Once that’s done, you can create your first S3 bucket. For this example, we’ll create a bucket named spacelift-test1-s3.
Here’s what you’ll need:
- Region: Choose your AWS region.
- Bucket: Name your bucket (spacelift-test1-s3).
- ACL: Set the access control to private.
Create a file named main.tf and add the following Terraform code:
variable "name" {
  default = "dynamic-aws-creds-operator"
}

variable "region" {
  default = "eu-central-1"
}

variable "path" {
  default = "../vault-admin-workspace/terraform.tfstate"
}

variable "ttl" {
  default = "1"
}

terraform {
  backend "local" {
    path = "terraform.tfstate"
  }
}

data "terraform_remote_state" "admin" {
  backend = "local"

  config = {
    path = var.path
  }
}

data "vault_aws_access_credentials" "creds" {
  backend = data.terraform_remote_state.admin.outputs.backend
  role    = data.terraform_remote_state.admin.outputs.role
}

provider "aws" {
  region     = var.region
  access_key = data.vault_aws_access_credentials.creds.access_key
  secret_key = data.vault_aws_access_credentials.creds.secret_key
}

resource "aws_s3_bucket" "spacelift-test1-s3" {
  bucket = "spacelift-test1-s3"
  acl    = "private"
}
Also, create a version.tf file to specify the versions of AWS and Vault providers:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.23.0"
    }
    vault = {
      source  = "hashicorp/vault"
      version = "2.17.0"
    }
  }
}
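A version note: this guide pins AWS provider 3.23.0, where the inline acl argument on aws_s3_bucket is still supported. If you upgrade to AWS provider v4 or later, that argument is deprecated and the ACL moves to its own resource. A minimal sketch, assuming the bucket defined in main.tf:

resource "aws_s3_bucket_acl" "spacelift-test1-s3" {
  bucket = aws_s3_bucket.spacelift-test1-s3.id
  acl    = "private"
}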
To apply this configuration, run these commands:
$ terraform init    # Initializes the working directory
$ terraform plan    # Prepares the execution plan
$ terraform apply   # Creates the S3 bucket in AWS
Uploading Files to Your S3 Bucket
With the S3 bucket created, the next step is uploading files. For this, you’ll use the aws_s3_bucket_object resource.
Here’s how to extend your Terraform script to include file uploads:
resource "aws_s3_bucket_object" "object1" {
  for_each = fileset("uploads/", "*")

  bucket = aws_s3_bucket.spacelift-test1-s3.id
  key    = each.value
  source = "uploads/${each.value}"
}
Place your files (e.g., test1.txt and test2.txt) in a directory named uploads next to your Terraform files. The fileset function enumerates everything in that directory, and for_each creates one bucket object per file.
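If you expect these files to change, it can also help to let Terraform detect content changes. A minimal sketch, extending the resource above with the built-in filemd5 function so that editing a local file triggers a re-upload:

resource "aws_s3_bucket_object" "object1" {
  for_each = fileset("uploads/", "*")

  bucket = aws_s3_bucket.spacelift-test1-s3.id
  key    = each.value
  source = "uploads/${each.value}"

  # Re-upload the object whenever the local file's content changes
  etag = filemd5("uploads/${each.value}")
}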
Managing Public Access to Your S3 Bucket
To control public access, use the aws_s3_bucket_public_access_block resource. This helps you manage the bucket’s public access settings:
resource "aws_s3_bucket_public_access_block" "app" {
bucket = aws_s3_bucket.spacelift-test1-s3.id
block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}
This configuration blocks public ACLs, policies, and access, ensuring your bucket remains private.
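If you want to verify the result, the AWS CLI exposes the same settings: running aws s3api get-public-access-block --bucket spacelift-test1-s3 should show all four flags set to true.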
Deleting Your S3 Bucket
When you no longer need the S3 bucket, you can delete it with Terraform. Simply run:
$ terraform destroy
This command removes all resources, including the S3 bucket and any objects Terraform manages within it (see the note on non-empty buckets below). By following these steps, you can efficiently manage S3 buckets and their contents using Terraform. If you are preparing for a job interview, practice these AWS Interview Questions now.
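One caveat: S3 refuses to delete a non-empty bucket, so if your bucket also contains objects created outside Terraform, terraform destroy will fail. A minimal sketch of the opt-in, adding the force_destroy argument to the bucket resource:

resource "aws_s3_bucket" "spacelift-test1-s3" {
  bucket = "spacelift-test1-s3"
  acl    = "private"

  # Let terraform destroy empty the bucket first, including objects
  # that were not created by Terraform
  force_destroy = true
}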
Conclusion
Mastering Terraform alongside AWS helps you manage your cloud infrastructure efficiently. You can create S3 buckets, upload files, manage credentials securely, control public access, and clean up resources, all from code. This keeps your cloud infrastructure robust and well maintained. Note, however, that while Terraform is well suited to provisioning S3 buckets, it is not recommended for highly data-intensive tasks such as moving large volumes of objects.