Development

Deploying a Spring Boot App on a $0 Budget: My Charity Clinic Journey

How I squeezed a Java, Spring Boot, Postgres, and React stack onto a free-tier AWS EC2 instance.

Azhar · April 11, 2026 · 5 min read

I’ve created a clinic management system for a charity organisation in my locality (I’ll share the link here). It’s a standard but robust stack: Java, Spring Boot, Postgres for the backend, and Vite/React for the frontend.

Here I’ll explain exactly how I deployed it, and more importantly, what my actual thought process and considerations were when trying to launch a full-stack application on an absolute $0 starting budget.

The Reality Check

While I’m pretty familiar with AWS EC2, DigitalOcean, and bouncing around various VPS providers over the years, I didn’t actually spin up a production-ready t3.medium right out of the gate. I’m building for a charity project. Their budget starting out is literally zero.

Their requirements seemed modest but strict:

  • Traffic Profile: Around 100 concurrent users performing moderate updates, and maybe 1,000 peak read users just viewing dashboards. Eventually, we want to scale to 10k users.
  • Data Safety: Zero Data Loss is a hard requirement. We are dealing with clinic data—backups and disaster recovery are essential.
  • Cost: As cheap as humanly possible without sacrificing that database safety.

I looked at the grand architectures—doing it “the proper way” with an AWS Application Load Balancer, an ECS Fargate cluster for the Java backend, and a Multi-AZ RDS instance for Postgres. That’s a bulletproof setup, but it would easily cost $50-$80 a month baseline.

I also considered a Hybrid strategy: hosting the frontend on Vercel for free, grabbing a $7 Hetzner VPS for the Java backend, and throwing $15 at a Managed Postgres Database so I wouldn’t have to worry about backups. That’s about $22 a month. Usually, that’s my sweet spot.

But then I looked at the $0 budget again and said, “You know what? Free tier it is.”

I decided to deploy this entire stack onto a single AWS EC2 t3.micro free-tier instance just for testing and initial feedback. Yes, the one with 1GB of RAM. Yes, I’m cramming a Spring Boot backend, a Vite frontend (via Nginx), and a PostgreSQL database all onto that poor little server.

Does the JVM cry when it starts up? Absolutely, it takes its sweet time. Am I dangerously close to an OutOfMemoryError if three people start batch-updating records simultaneously? Perhaps.
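When I get nervous about that 1GB ceiling, I keep a couple of one-liners handy to see who is actually eating the RAM (plain coreutils/procps, nothing exotic):

```shell
# Overall memory picture in megabytes (total / used / available)
free -m

# Top 5 processes by resident memory; the JVM is usually #1 here
ps -eo pid,rss,comm --sort=-rss | head -n 6
```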

But I need the stakeholders to interact with the system now. Waiting for funding or approval to buy “proper” servers just delays getting real user feedback. The free tier EC2 proves the concept works over the internet, not just on my localhost. It gives them a real URL they can click on their phones today.

Once the clinic starts using it seriously and we actually have steady traffic, I’ll migrate to a beefier droplet. But for now, here is exactly how I bullied a free-tier Amazon Linux 2023 instance into hosting my entire clinic management stack.


My Step-by-Step EC2 Survival Guide

Because I’m using Amazon Linux 2023 (AL2023), things are a bit different than the standard Ubuntu tutorials. AL2023 uses dnf instead of yum or apt, and it requires a few modern workarounds. Here is my personal runbook.

Step 0: The Baseline Security

Before touching the terminal, I set up the EC2 Security Group. Since this is just a single instance doing everything, I had to open it up:

  • Port 22 (SSH): Locked down to strictly my IP address.
  • Port 80/443 (HTTP/HTTPS): Open to the world (0.0.0.0/0) for the web app.

Step 1: Prepping the Machine

Once I SSH’d in, the first thing was to get the system packages up to date:

sudo dnf update -y

AL2023’s dnf is noticeably faster than the old yum.

Step 2: Grappling with Java

Since we’re on AWS, the path of least resistance is Amazon Corretto. It’s an AWS-supported LTS JDK that costs nothing. I needed Java 21 for this project:

sudo dnf install java-21-amazon-corretto -y
java -version

Step 3 & 4: Getting Docker Ready for the Database

I didn’t want to install Postgres directly onto the host OS—that’s a disaster to migrate later. I use Docker exclusively for my databases. Fortunately, AL2023 ships Docker in its default repos:

sudo dnf install docker -y
sudo systemctl enable docker --now
sudo usermod -aG docker $USER
newgrp docker

Now, getting Docker Compose was a bit weird. AL2023 doesn’t have docker-compose-plugin in its repos like Ubuntu does. I had to install the v2 CLI plugin manually from GitHub:

mkdir -p ~/.docker/cli-plugins
curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose
chmod +x ~/.docker/cli-plugins/docker-compose
docker compose version

Step 5 & 6: Pulling the Code

With the environment ready, I installed Git and pulled my repository.

(A quick note: For a brief second I thought about SCP-ing the files from my Windows machine, but I quickly realized that’s a terrible habit. Git is auditable and repeatable).

sudo dnf install git -y
cd ~
git clone https://github.com/my-username/palliative-project.git
cd palliative-project

Step 7: Hiding the Secrets

Obviously, I wasn’t going to check my database passwords into GitHub. I created a humble .env file right there on the server:

nano .env

POSTGRES_DB=palliative_db
POSTGRES_USER=postgres
POSTGRES_PASSWORD=my_super_secret_password
# The backend runs on the host via systemd, so it connects through the container's published port
SPRING_DATASOURCE_URL=jdbc:postgresql://localhost:5432/palliative_db
SPRING_DATASOURCE_USERNAME=postgres
SPRING_DATASOURCE_PASSWORD=my_super_secret_password

Step 8: Waking up Postgres

With the secrets in place, I booted the database first. I always do this before starting Spring Boot to avoid those annoying connection timeouts on startup:

docker compose up -d db
docker compose logs db --follow

Once I saw “database system is ready to accept connections”, I knew it was time for the heavy lifter.
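For context, the compose file defining that db service is only a few lines. Mine looks roughly like this (a sketch; the image tag and volume name are my choices, and the port is published because Spring Boot runs on the host rather than inside Docker):

```yaml
services:
  db:
    image: postgres:16-alpine        # pin whatever major version you tested against
    container_name: db               # matches the `docker exec db ...` backup command later
    env_file: .env                   # POSTGRES_DB / USER / PASSWORD from Step 7
    ports:
      - "5432:5432"                  # published so the host-side Spring Boot can reach it
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  pgdata:
```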

Step 9 & 11: The Spring Boot Balancing Act

Running the backend was the moment of truth. Would the 1GB of RAM hold up during the Maven build?

cd ~/palliative-project/palliative-backend
chmod +x mvnw
./mvnw clean package -DskipTests

The system groaned, but it built the .jar successfully!

I didn’t want to just run it with nohup because if the EC2 instance rebooted (or crashed from OOM), my API would stay dead. I wrote a quick systemd service for it.

First, I had to find the exact Java path since systemd is notoriously picky:

which java
# Outputs: /usr/bin/java

Then I created the service:

sudo nano /etc/systemd/system/palliative-backend.service

Notice the -Xms256m -Xmx512m? That was my attempt to put the JVM on a strict diet so there’s enough RAM left for Postgres and Nginx.

[Unit]
Description=Palliative Care Backend (Spring Boot)
After=network.target docker.service

[Service]
User=ec2-user
WorkingDirectory=/home/ec2-user/palliative-project/palliative-backend
ExecStart=/usr/bin/java -Xms256m -Xmx512m -jar /home/ec2-user/palliative-project/palliative-backend/target/palliative-backend-0.0.1-SNAPSHOT.jar
EnvironmentFile=-/home/ec2-user/palliative-project/.env
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target

sudo systemctl daemon-reload
sudo systemctl enable palliative-backend --now
sudo journalctl -u palliative-backend -f

Seeing Spring Boot start up successfully on a free-tier instance is a deeply satisfying feeling.

Step 10: Building the Frontend (Avoiding Node Traps)

Our frontend is React/Vite. My initial instinct was to just install Node via dnf, but AL2023 ships Node 18, which is too old for some modern Vite setups (Vite 8 throws peer-dependency conflicts).

To save myself endless headaches, I sidestepped the system package manager entirely and used NVM to install Node 20:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.2/install.sh | bash
source ~/.bashrc
nvm install 20
nvm use 20

Then I just let Vite do its thing:

cd ~/palliative-project/palliative-ui
npm install --legacy-peer-deps
npm run build

(The --legacy-peer-deps was a necessary evil because of a vite-plugin-pwa conflict. It works fine at runtime, but NPM is very strict these days).
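One small thing I added later (not strictly required): a .nvmrc in the project root, so any future SSH session picks the same Node version with a bare nvm use instead of me remembering the number:

```shell
# Pin the Node major version in the repo root so `nvm use` (no args) picks it up
echo "20" > .nvmrc
# Then, inside the project directory:
#   nvm use    # reads .nvmrc and switches to Node 20
```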

Step 12: Stitching it Together with NGINX

I needed NGINX to sit at the front door, serve the compiled React files, and forward /api/ traffic to the Spring Boot app sitting on port 8080.

sudo dnf install nginx -y
sudo systemctl enable nginx --now
sudo cp -r dist/* /usr/share/nginx/html/

I swapped out the default config:

sudo nano /etc/nginx/conf.d/palliative.conf

server {
    listen 80;
    server_name _;

    root /usr/share/nginx/html;
    index index.html;

    location /api/ {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location / {
        try_files $uri $uri/ /index.html;
    }
}

sudo nginx -t
sudo systemctl reload nginx

Boom. I could type the EC2’s public IP into my phone and see the clinic dashboard loading data from my backend.
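Since Vite fingerprints every bundle filename, I also considered adding a cache block to the Nginx config so repeat visits cost the tiny instance almost nothing. This is optional, and the /assets/ path assumes Vite's default output layout:

```nginx
# Hashed Vite bundles are safe to cache aggressively; the filename changes on every build
location /assets/ {
    expires 1y;
    add_header Cache-Control "public, immutable";
}
```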

Step 13 & 14: Polishing for the Real World

Eventually, the IP address wasn’t going to cut it, so I bought a cheap domain name and ran Certbot to secure everything:

sudo dnf install certbot python3-certbot-nginx -y
sudo certbot --nginx -d clinic.example.com

The absolute final, non-negotiable step was data backups. Because the Postgres database is sitting right there on the EC2’s disk, if AWS terminates the instance, the user data vanishes.

I wrote a quick script that takes a pg_dump and ships it off to an S3 bucket:

nano ~/backup-db.sh

#!/bin/bash
DATE=$(date +%Y-%m-%d_%H-%M-%S)
BACKUP_FILE="/tmp/palliative_backup_$DATE.sql"

# Dump the database from the docker container
docker exec db pg_dump -U postgres palliative_db > "$BACKUP_FILE"

# Throw it over to S3
aws s3 cp "$BACKUP_FILE" s3://my-clinic-backups/db-backups/

rm "$BACKUP_FILE"

chmod +x ~/backup-db.sh
crontab -e

I scheduled it to run every 6 hours (0 */6 * * * /home/ec2-user/backup-db.sh). A 6-hour data loss window isn’t perfect, but for a newly launched charity app running on entirely free infrastructure, it’s pretty solid peace of mind.
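Backups are only as good as your ability to restore them, so I keep a restore runbook next to the backup script. This is a sketch using the bucket and database names from above; pick_latest is just a little helper I wrote to grab the newest dump out of an aws s3 ls listing:

```shell
#!/bin/bash
# pick_latest: given `aws s3 ls` output on stdin, print the newest backup's filename.
# The listing's first column is a date, so a plain sort puts the newest line last.
pick_latest() {
  sort | tail -n 1 | awk '{print $4}'
}

# On the instance, the actual restore looks like this (commented out here so the
# script is safe to source without touching the live database):
#   LATEST=$(aws s3 ls s3://my-clinic-backups/db-backups/ | pick_latest)
#   aws s3 cp "s3://my-clinic-backups/db-backups/$LATEST" /tmp/restore.sql
#   docker exec -i db psql -U postgres -d palliative_db < /tmp/restore.sql
```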

My Redeploys Made Easy

Every time I add a feature to the tracking side of the app, repeating all of this is tedious. So I left myself a redeploy.sh script on the server that handles the Git pull, Maven build, systemctl restart, Vite build, and Nginx copy in one command.

#!/bin/bash
set -e

cd ~/palliative-project
git pull

cd palliative-backend
./mvnw clean package -DskipTests
sudo systemctl restart palliative-backend

cd ../palliative-ui
npm install --legacy-peer-deps
npm run build
sudo cp -r dist/* /usr/share/nginx/html/
sudo systemctl reload nginx

echo "Redeploy complete!"

The View from Here

So, that’s the whole process. I took a robust enterprise architecture and shoved it into a single memory-starved Linux box.

Does this setup have downsides? Of course. The biggest tradeoff is the single point of failure—if the EC2 instance goes down, the entire app goes offline until it reboots. And I am asking a lot from that free-tier CPU.

But it accomplished the most critical goal: the charity clinic has a working, secure, data-backed system accessible from their mobile phones right now, and their monthly hosting bill remains effectively $0 until they get the traction to justify an upgrade.
