Overview

S3 buckets provide scalable object storage for your Fjall applications. While Fjall automatically handles infrastructure like ECR repositories and logs, you can add S3 buckets when you need dedicated storage for user content, static websites, or data sharing. Use S3 buckets for:
  • User file uploads (images, documents, videos)
  • Static website hosting
  • Application backups and archives
  • Sharing data between applications
  • Media asset storage

When to Use

Fjall’s factories automatically provision storage for most needs:
  • ComputeFactory - Creates ECR repositories for container images
  • DatabaseFactory - Manages RDS storage and backups
  • Logs - Automatically stored in CloudWatch
Add explicit S3 buckets when you need:
  • User-uploaded content storage
  • Static website hosting with custom requirements
  • Cross-application file sharing
  • Specific compliance or lifecycle rules

Accessing from Fjall Applications

Lambda Function Access

import { ComputeFactory } from "@fjall/components-infrastructure";
import { Runtime, Code } from "aws-cdk-lib/aws-lambda";
import { Bucket } from "aws-cdk-lib/aws-s3";

// Create bucket (S3 bucket names must be globally unique)
const uploadBucket = new Bucket(this, "Uploads", {
  bucketName: "myapp-uploads",
});

// Lambda with S3 access
const processor = app.addCompute(
  ComputeFactory.build("FileProcessor", {
    type: "lambda",
    handler: "index.handler",
    runtime: Runtime.NODEJS_20_X,
    code: Code.fromAsset("./lambda"),
    environment: {
      BUCKET_NAME: uploadBucket.bucketName,
    },
  })
);

// Grant read/write access
uploadBucket.grantReadWrite(processor.role);

ECS Container Access

// Versioned bucket: overwrites and deletes keep prior object versions
const uploadBucket = new Bucket(this, "UserContent", {
  bucketName: "myapp-user-content",
  versioned: true,
});

const api = app.addCompute(
  ComputeFactory.build("API", {
    type: "ecs",
    ecsType: "fargate",
    containerEnvironment: {
      S3_BUCKET: uploadBucket.bucketName,
    },
  })
);

// Grant access to ECS task role
uploadBucket.grantReadWrite(api.taskRole);
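Inside the container, read the bucket name from the environment variable set above. A minimal sketch of application code, assuming the image bundles @aws-sdk/client-s3 (the saveUserContent helper is illustrative, not part of Fjall):

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// The ECS task role granted above supplies credentials automatically
const s3 = new S3Client();

export async function saveUserContent(userId: string, body: Buffer): Promise<string> {
  const key = `users/${userId}/${Date.now()}`;
  await s3.send(
    new PutObjectCommand({
      Bucket: process.env.S3_BUCKET,
      Key: key,
      Body: body,
    })
  );
  return key;
}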

Common Patterns

User Upload Bucket

import { Bucket, BucketEncryption, BlockPublicAccess } from "aws-cdk-lib/aws-s3";
import { RemovalPolicy } from "aws-cdk-lib";

const uploadBucket = new Bucket(this, "Uploads", {
  bucketName: "myapp-user-uploads",
  versioned: true,
  encryption: BucketEncryption.S3_MANAGED,
  blockPublicAccess: BlockPublicAccess.BLOCK_ALL,
  removalPolicy: RemovalPolicy.RETAIN,
});

Static Website Hosting

const websiteBucket = new Bucket(this, "Website", {
  websiteIndexDocument: "index.html",
  websiteErrorDocument: "error.html",
  publicReadAccess: true,
  blockPublicAccess: new BlockPublicAccess({
    blockPublicAcls: false,
    blockPublicPolicy: false,
    ignorePublicAcls: false,
    restrictPublicBuckets: false,
  }),
});
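To publish site files at deploy time, the aws-s3-deployment module can copy a local build directory into the bucket. A sketch, assuming the built site lives in ./site:

import { BucketDeployment, Source } from "aws-cdk-lib/aws-s3-deployment";

// Copies the contents of ./site into the bucket on every deploy
new BucketDeployment(this, "DeployWebsite", {
  sources: [Source.asset("./site")],
  destinationBucket: websiteBucket,
});

The site is then served from websiteBucket.bucketWebsiteUrl.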

Lifecycle Rules for Cost Optimization

import { StorageClass } from "aws-cdk-lib/aws-s3";
import { Duration } from "aws-cdk-lib";

const archiveBucket = new Bucket(this, "Archives", {
  lifecycleRules: [
    {
      // Move to cheaper storage after 30 days
      transitions: [
        {
          storageClass: StorageClass.INFREQUENT_ACCESS,
          transitionAfter: Duration.days(30),
        },
        {
          storageClass: StorageClass.GLACIER,
          transitionAfter: Duration.days(90),
        },
      ],
      // Delete after 1 year
      expiration: Duration.days(365),
    },
  ],
});

Lambda Integration Examples

Process Uploaded Files

// Lambda function code
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client();

export const handler = async (event: any) => {
  const bucket = process.env.BUCKET_NAME;
  // S3 event keys are URL-encoded; decode before using them
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
  
  // Get file from S3
  const command = new GetObjectCommand({ Bucket: bucket, Key: key });
  const response = await s3.send(command);
  
  // Process the file (response.Body is a readable stream)...
  
  return { statusCode: 200 };
};

// Infrastructure
import { EventType } from "aws-cdk-lib/aws-s3";
import { LambdaDestination } from "aws-cdk-lib/aws-s3-notifications";

const processor = app.addCompute(
  ComputeFactory.build("FileProcessor", {
    type: "lambda",
    handler: "index.handler",
    runtime: Runtime.NODEJS_20_X,
    code: Code.fromAsset("./lambda"),
    environment: {
      BUCKET_NAME: uploadBucket.bucketName,
    },
  })
);

uploadBucket.grantRead(processor.role);

// Trigger on uploads
uploadBucket.addEventNotification(
  EventType.OBJECT_CREATED,
  new LambdaDestination(processor.function)
);
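If the bucket also receives objects the function should ignore, the notification can be scoped with a key filter. A sketch, assuming user uploads land under the uploads/ prefix:

// Only invoke the Lambda for objects created under uploads/
uploadBucket.addEventNotification(
  EventType.OBJECT_CREATED,
  new LambdaDestination(processor.function),
  { prefix: "uploads/" }
);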

Generate Presigned URLs

// Lambda for presigned upload URLs
export const handler = async (event: any) => {
  const { getSignedUrl } = await import("@aws-sdk/s3-request-presigner");
  const { S3Client, PutObjectCommand } = await import("@aws-sdk/client-s3");
  
  const s3 = new S3Client();
  const command = new PutObjectCommand({
    Bucket: process.env.BUCKET_NAME,
    Key: `uploads/${Date.now()}-${event.fileName}`,
  });
  
  const url = await getSignedUrl(s3, command, { expiresIn: 3600 }); // URL valid for 1 hour
  
  return { statusCode: 200, body: JSON.stringify({ url }) };
};
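A presigned URL can only grant what the signing identity is itself allowed to do, so the Lambda's role needs put access to the bucket. A sketch, assuming urlGenerator is the compute wrapping this handler:

// The presigned URL inherits the Lambda role's permissions
uploadBucket.grantPut(urlGenerator.role);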

CORS Configuration

For browser uploads, the bucket must accept cross-origin requests from your site:

import { HttpMethods } from "aws-cdk-lib/aws-s3";

const uploadBucket = new Bucket(this, "Uploads", {
  cors: [
    {
      allowedMethods: [HttpMethods.GET, HttpMethods.PUT, HttpMethods.POST],
      allowedOrigins: ["https://myapp.com"],
      allowedHeaders: ["*"],
      maxAge: 3000,
    },
  ],
});
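On the client, the presigned URL from the earlier Lambda can be used directly with fetch. A browser-side sketch (the /upload-url route is hypothetical and would be backed by the presigned-URL Lambda above):

// Request a presigned URL, then PUT the file straight to S3
async function uploadFile(file: File): Promise<void> {
  const res = await fetch("/upload-url", {
    method: "POST",
    body: JSON.stringify({ fileName: file.name }),
  });
  const { url } = await res.json();

  // Succeeds only if the bucket's CORS rules allow this origin
  await fetch(url, { method: "PUT", body: file });
}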

Access Patterns

Read-Only Access

uploadBucket.grantRead(processor.role);

Write-Only Access

uploadBucket.grantWrite(uploader.role);

Full Access

uploadBucket.grantReadWrite(api.taskRole);

Public Read

const publicBucket = new Bucket(this, "Public", {
  publicReadAccess: true,
  blockPublicAccess: new BlockPublicAccess({
    blockPublicPolicy: false,
    restrictPublicBuckets: false,
  }),
});
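Grants can also be scoped to a key pattern when a service only needs part of a bucket. A sketch, assuming processed reports live under reports/:

// Read access limited to objects under the reports/ prefix
uploadBucket.grantRead(processor.role, "reports/*");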

Best Practices

  • Enable versioning for user-uploaded content
  • Use lifecycle rules to transition old files to cheaper storage
  • Block public access by default for security
  • Use presigned URLs for secure browser uploads
  • Enable encryption for sensitive data
  • Set RemovalPolicy.RETAIN for production buckets
  • Grant minimum permissions needed for each service
  • Use environment variables to pass bucket names to compute

See Also