I wanted a project that would force me to learn AWS infrastructure properly: not just clicking through the console, but building repeatable, automated deployments with real services wired together. Ghost CMS seemed like a good candidate because it needs a database, persistent file storage, SSL, DNS, and email, which is enough moving parts to make the infrastructure interesting.

The goal was a production-grade application deployment on EKS, fully automated through GitHub Actions and CloudFormation. No manual steps, no console clicking, everything as code. I chose CloudFormation so the toolchain would stay entirely AWS-native.

The architecture

The deployment spans roughly a dozen AWS services. Ghost runs as a container on EKS, backed by RDS MySQL for the database and EFS for persistent content storage. Lambda functions handle automated backups to S3. Secrets Manager stores credentials, ACM provides SSL certificates, and Route 53 manages DNS, with ECR holding the container image and SES handling outbound email.

[Architecture diagram: a VPC in eu-west-2 containing the EKS cluster running the Ghost deployment, RDS MySQL, and EFS; Route 53 provides DNS, ACM the SSL certificates, Secrets Manager is mounted via the CSI driver, and Lambda backup functions write to S3.]

Everything sits inside a single VPC in eu-west-2. RDS and EFS are in private subnets with security groups limiting access to the EKS nodes. Ghost connects to Secrets Manager via the Secrets Store CSI driver, so credentials are mounted as volumes rather than passed as environment variables.
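As a concrete example of the security-group wiring, a rule like the following admits MySQL traffic only from the EKS node group (the export names here are my own illustration, not the repo's):

```yaml
# Hypothetical ingress rule: MySQL traffic allowed only from the
# EKS worker-node security group, nothing from the wider VPC.
RDSIngressFromEKS:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !ImportValue ghost-rds-sg
    IpProtocol: tcp
    FromPort: 3306
    ToPort: 3306
    SourceSecurityGroupId: !ImportValue ghost-eks-node-sg
```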

The CI/CD pipeline

The entire deployment is triggered by a single GitHub Actions workflow dispatch. The workflow contains around a dozen jobs with dependency chains so services are created in the right order: VPC first, then RDS, EFS, EKS, and ECR in parallel, then Ghost deploys once everything it depends on is ready.

[Pipeline diagram: build jobs for VPC, RDS, EFS, EKS, certificates, SES, ECR, and S3, converging on a final job that deploys Ghost to EKS.]
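A trimmed sketch of what that dependency graph looks like in the workflow file (job and stack names are illustrative, not copied from the repo):

```yaml
name: deploy-ghost-stack
on:
  workflow_dispatch:

jobs:
  build-vpc:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # OIDC credentials step omitted here; see the snippet below
      - run: aws cloudformation deploy --stack-name ghost-vpc --template-file vpc-create.yaml

  build-rds:
    needs: build-vpc        # RDS imports the VPC exports
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: aws cloudformation deploy --stack-name ghost-rds --template-file rds-create-mysql.yaml

  build-eks:
    needs: build-vpc        # runs in parallel with build-rds
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: aws cloudformation deploy --stack-name ghost-eks --template-file eks-create.yaml --capabilities CAPABILITY_NAMED_IAM

  deploy-ghost:
    needs: [build-rds, build-eks]   # the real workflow also waits on EFS, certs, etc.
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy-ghost.sh   # hypothetical wrapper; applies the manifests
```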

Authentication uses OIDC: no long-lived AWS keys in GitHub. Each job checks out the repo, assumes the IAM role via OIDC, and deploys its CloudFormation stack. The final deploy job waits for RDS, EFS, EKS, and the SSL certificate to be ready, then applies the Kubernetes manifests, using envsubst to inject the runtime values.
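The credentials step itself is short. A minimal version of the deploy job, with an assumed role ARN, cluster name, and export name, looks like this:

```yaml
# Relevant excerpt of the deploy job; `permissions` is what allows
# the runner to request an OIDC token instead of using stored keys.
permissions:
  id-token: write
  contents: read

jobs:
  deploy-ghost:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-actions-deploy  # assumed name
          aws-region: eu-west-2
      - name: Render and apply the Ghost manifests
        run: |
          # Point kubectl at the cluster (cluster name assumed)
          aws eks update-kubeconfig --name ghost-cluster --region eu-west-2
          # Resolve a runtime value from a CloudFormation export
          # (export name illustrative), then render with envsubst
          export DB_HOST=$(aws cloudformation list-exports \
            --query "Exports[?Name=='ghost-rds-endpoint'].Value" --output text)
          envsubst < k8s/ghost-deployment.yaml | kubectl apply -f -
```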

There's also a separate delete workflow that tears everything down in reverse order. Being able to spin up and destroy the entire stack on demand was one of the original requirements because it keeps costs down and forces the infrastructure to be truly repeatable.

Infrastructure as code

Each AWS service has its own CloudFormation template: vpc-create.yaml, rds-create-mysql.yaml, efs-create.yaml, eks-create.yaml, and so on. Parameters like database passwords and domain names are passed in from GitHub secrets at deploy time.
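Passing those through is a single deploy step per stack; a sketch, with parameter names that are assumptions rather than the repo's:

```yaml
# Hypothetical step: GitHub secrets become CloudFormation parameters.
- name: Deploy RDS stack
  run: |
    aws cloudformation deploy \
      --stack-name ghost-rds \
      --template-file rds-create-mysql.yaml \
      --parameter-overrides \
        DBPassword="${{ secrets.DB_PASSWORD }}" \
        DomainName="${{ secrets.DOMAIN_NAME }}"
```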

The EKS setup includes installing the Secrets Store CSI driver via Helm and configuring IRSA (IAM Roles for Service Accounts) so the Ghost pods can pull credentials from Secrets Manager without needing static access keys. This was one element that caused a lot of head scratching: I'd missed the private VPC endpoint for Secrets Manager, without which the pods couldn't reach it from their private subnets. The Kubernetes manifests use envsubst for variable substitution: things like the RDS endpoint, EFS mount target, and certificate ARN are all resolved dynamically from CloudFormation stack exports.
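The visible end of the IRSA wiring is a single annotation on the service account (role ARN assumed); the role's trust policy then has to allow the cluster's OIDC provider to assume it for exactly this namespace and service account:

```yaml
# Hypothetical manifest: Ghost pods run under this service account,
# which IRSA maps to an IAM role scoped to just the needed secrets.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ghost
  namespace: ghost
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/ghost-secrets-reader
```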

One thing I'd highlight: the VPC template exports its outputs (subnet IDs, security group IDs) and downstream templates reference those exports. That keeps the templates loosely coupled; each one can be deployed or updated independently as long as the exports exist.
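In template terms the pattern is an Export on one side and Fn::ImportValue on the other (the export names here are illustrative):

```yaml
# vpc-create.yaml: publish the private subnet IDs for downstream stacks
Outputs:
  PrivateSubnetA:
    Value: !Ref PrivateSubnetA
    Export:
      Name: ghost-vpc-private-subnet-a
  PrivateSubnetB:
    Value: !Ref PrivateSubnetB
    Export:
      Name: ghost-vpc-private-subnet-b

# rds-create-mysql.yaml: consume them without hard-coding any IDs
Resources:
  GhostDBSubnetGroup:
    Type: AWS::RDS::DBSubnetGroup
    Properties:
      DBSubnetGroupDescription: Private subnets for the Ghost database
      SubnetIds:
        - !ImportValue ghost-vpc-private-subnet-a
        - !ImportValue ghost-vpc-private-subnet-b
```

CloudFormation also refuses to delete a stack while another stack imports its exports, which is a useful guard rail for the teardown workflow.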

Backup and restore

I built Lambda functions for both database and filesystem backups, so that the content, not just the infrastructure, can be redeployed easily. The RDS backup function uses pymysql to connect to the MySQL instance and dump each database to S3. The EFS backup function is more unusual: it mounts the EFS filesystem using Lambda's native filesystem support, creates a compressed archive of the Ghost content directory, and uploads it to S3.
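Lambda's filesystem support is configured rather than coded. A sketch of the relevant template section (function name, exports, and mount path are assumptions): the filesystem is mounted before the handler runs, so the function can simply archive the mount path and push the result to S3.

```yaml
# Hypothetical excerpt: attach an EFS access point to the backup function.
EfsBackupFunction:
  Type: AWS::Lambda::Function
  Properties:
    FunctionName: ghost-efs-backup
    Runtime: python3.12
    Handler: backup.handler
    Role: !GetAtt BackupFunctionRole.Arn
    Code:
      S3Bucket: !Ref DeploymentBucket
      S3Key: efs-backup.zip
    VpcConfig:                      # must run inside the VPC to reach EFS
      SubnetIds:
        - !ImportValue ghost-vpc-private-subnet-a
      SecurityGroupIds:
        - !ImportValue ghost-vpc-lambda-sg
    FileSystemConfigs:
      - Arn: !ImportValue ghost-efs-access-point-arn
        LocalMountPath: /mnt/ghost-content   # Lambda requires a /mnt/ prefix
```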

Both functions run on schedules via CloudWatch Events (now EventBridge) and can also be triggered manually for on-demand backups or restores. The restore functions reverse the process: pull the latest archive from S3 and either replay the SQL dump or extract the files back to EFS.
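The schedule itself is a two-resource affair (the cron expression and IDs are assumptions):

```yaml
# Hypothetical nightly trigger for the EFS backup function.
NightlyBackupRule:
  Type: AWS::Events::Rule
  Properties:
    ScheduleExpression: cron(0 2 * * ? *)   # 02:00 UTC daily
    Targets:
      - Arn: !GetAtt EfsBackupFunction.Arn
        Id: efs-backup

# EventBridge also needs explicit permission to invoke the function.
BackupInvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref EfsBackupFunction
    Action: lambda:InvokeFunction
    Principal: events.amazonaws.com
    SourceArn: !GetAtt NightlyBackupRule.Arn
```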

Having automated backups was important because EKS pods are ephemeral. If a pod restarts, the EFS mount ensures content persists. But if the EFS filesystem itself has issues, the S3 backups provide a safety net.

Security

The deployment follows a few AWS security patterns that were good learning exercises in themselves. IRSA gives pods fine-grained IAM permissions via Kubernetes service accounts rather than node-level roles. RDS and EFS sit in private subnets with no public access, with security groups limiting traffic to the EKS nodes. All storage (RDS, EFS, and S3) is encrypted at rest. SSL terminates at the load balancer using ACM certificates, with automatic DNS validation through Route 53.
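Automatic DNS validation is one of the nicer fits between ACM and Route 53. A sketch (parameter names assumed): because the hosted zone ID is supplied, CloudFormation creates the validation CNAME record itself and waits for the certificate to be issued.

```yaml
# Hypothetical certificate resource with hands-off DNS validation.
GhostCertificate:
  Type: AWS::CertificateManager::Certificate
  Properties:
    DomainName: !Ref DomainName
    ValidationMethod: DNS
    DomainValidationOptions:
      - DomainName: !Ref DomainName
        HostedZoneId: !Ref HostedZoneId
```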

Credentials flow through Secrets Manager and are mounted into pods via the CSI driver, so they never appear in Kubernetes ConfigMaps, environment variables, or workflow logs.
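On the Kubernetes side that comes down to a SecretProviderClass plus a CSI volume. A trimmed sketch, with the secret name and namespace as assumptions:

```yaml
# Hypothetical SecretProviderClass: tells the AWS provider which
# Secrets Manager entry to fetch when the volume is mounted.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: ghost-db-credentials
  namespace: ghost
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: ghost/db-password
        objectType: secretsmanager
---
# Volume section of the Ghost deployment (the rest is omitted):
# the pod mounts the secret like any other read-only volume.
volumes:
  - name: db-credentials
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: ghost-db-credentials
```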

Reflection

Ghost is not really meant to be deployed this way. It's designed to run on a single server with a local MySQL instance and local file storage. Spreading it across EKS, RDS, and EFS with Lambda backup functions and a full GitHub Actions workflow is massive overkill for what is essentially a blogging platform.

But I guess that was never really the point. I picked Ghost because it needed enough infrastructure to make the exercise worthwhile: a database, persistent storage, email, SSL, DNS. The app itself was almost secondary. What I actually learned was how to wire up a VPC with public and private subnets, deploy and configure EKS with IRSA and the CSI driver, build CloudFormation templates that export and reference each other, automate the whole thing through GitHub Actions with OIDC, and handle backup and restore across managed services.

If I were deploying Ghost for real I'd probably use a single EC2 instance, or better yet, Ghost's own managed hosting. But as an infrastructure learning project it was the right level of complexity. Sometimes picking the wrong app architecture is the best way to learn a platform.