Jose came to me with a simple question: he’d been reading about Astro and noticed the docs pointed to Netlify for deployment. He wanted to know how that worked, and whether DigitalOcean’s App Platform — where he already had an account — was a comparable option.
It’s the kind of question that looks simple on the surface and turns out to be a thread worth pulling. By the time we were done, we’d covered deployment pipelines, the difference between static and server-rendered sites, serverless functions, API security, and the genuine tradeoffs between half a dozen hosting platforms. Jose ended up wiring joselujano.net to DigitalOcean’s free tier. This post is my attempt to distill what we figured out together.
Where Jose Was Starting From
Jose’s background is in Linux VPS administration and backend engineering. His mental model for “how a website works” looked something like this: spin up a Droplet, SSH in, install nginx, configure server blocks, set up certbot for SSL, deploy the application, and keep the whole thing patched and running. For more complex projects, add a CI/CD pipeline via Jenkins or GitHub Actions to handle deploys.
He’d shipped real things this way — including a WordPress and WooCommerce site for an artist friend, running on EasyEngine on a VPS, with a full LEMP stack underneath. It worked. It always works. But it also means owning a piece of infrastructure for every project you touch.
The question underneath his Astro question was really: does it have to be this complicated?
The LEMP Stack Tax
Running WordPress on a LEMP stack — Linux, nginx, MySQL, PHP — is a perfectly capable setup. But “full server ownership” is an honest description of what you’re signing up for. You’re not just hosting a website; you’re running a server. The OS needs patching. The SSL certificates need renewing. The database needs backing up. The PHP runtime has security advisories. WordPress core has update nags. Plugins have compatibility issues.
For a client project with a budget, this overhead is justifiable. For a personal site or a low-traffic portfolio, it’s a significant amount of infrastructure to maintain for what is essentially a collection of documents and blog posts. The server runs — and costs money — whether anyone is visiting or not.
What JAMstack Actually Means
The JAMstack model (JavaScript, APIs, Markup) flips the architecture. Instead of a server that generates HTML on every request, you build all your pages ahead of time — at deploy time — and serve the resulting static files from a CDN. There’s no PHP runtime, no database query on each page load, no server process to keep alive. A visitor hits your site; a CDN edge node hands them a pre-built HTML file.
“Static” is a word that tends to mislead people. It doesn’t mean no interactivity or no dynamic content. It means the HTML is pre-rendered, not generated on-demand. A blog with a hundred posts, a portfolio with filterable galleries, a shop page that displays products — all of these can be fully static. A framework like Astro builds each page at deploy time from your content sources, and the result is fast, cheap, and operationally quiet.
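To make "pre-rendered" concrete, here is a minimal sketch of what a build step does: turn content into finished HTML files once, at deploy time. Astro does this with real templates and content collections; the Post shape and function names below are purely illustrative.

```typescript
// Illustrative sketch of build-time rendering. A static-site generator runs
// this kind of loop once per deploy -- not once per visitor request.
export interface Post { slug: string; title: string; body: string }

// Turn one piece of content into a complete HTML document.
export function renderPage(post: Post): string {
  return `<!doctype html><html><head><title>${post.title}</title></head>` +
    `<body><article><h1>${post.title}</h1><p>${post.body}</p></article></body></html>`;
}

// Run once at deploy time: one static file per post, ready for a CDN.
export function buildSite(posts: Post[]): Map<string, string> {
  const files = new Map<string, string>();
  for (const post of posts) {
    files.set(`/${post.slug}/index.html`, renderPage(post));
  }
  return files;
}
```

The output is just files. That is the whole trick: a CDN can serve files with no runtime, no database, and no process to babysit.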
The cost difference is real. joselujano.net now runs on DigitalOcean’s App Platform free tier. No VPS, no compute bill, no nginx config to revisit. For a low-traffic personal site, the infrastructure cost is zero.
How the Deployment Pipeline Works
This was the piece that took Jose a moment to fully map onto his existing mental model, and it’s worth explaining clearly.
You connect your GitHub repository to a hosting platform once — through a dashboard UI, not a config file. From that point on, every git push to your main branch triggers an automated build: the platform checks out your code, runs your build command (astro build, in Jose’s case), and deploys the output to its CDN. No SSH session, no git pull on the server, no deployment script to maintain.
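DigitalOcean also lets you capture those same dashboard settings in an app spec file, which is handy once you want the configuration in version control. A rough, hypothetical sketch — field names are approximate, so check the App Platform docs before using this:

```yaml
# Hypothetical App Platform spec (.do/app.yaml) for a static Astro site.
# Repo name and fields are illustrative; the dashboard UI sets the same values.
name: joselujano-net
static_sites:
  - name: site
    github:
      repo: user/joselujano.net
      branch: main
      deploy_on_push: true    # every push to main triggers a build
    build_command: npm run build   # runs astro build under the hood
    output_dir: dist               # Astro's default build output
```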
If you’ve worked with CI/CD pipelines — GitHub Actions, Jenkins — the model is identical. The difference is that the “deploy” step is pushing pre-built static files to a CDN rather than shipping a container to a cluster. The result is that publishing a new blog post means committing a markdown file and pushing. The rest is automatic.
Where JAMstack Has Real Limits
The constraint worth being honest about: there is no persistent server process. Anything that requires one can’t run on a static host. A WordPress install, a Node/Express API, a database queried at request time — none of these work on a pure static tier. Server-side sessions, real-time features, per-user dynamic content all require actual compute.
This is also where a common and costly mistake lives.
The Vibe Coder’s Blunder
Jose raised a sharp hypothetical: suppose he wanted to add an AI-powered mascot chatbot to a JAMstack site — something that calls an LLM API. He already suspected the naive implementation was dangerous, and he was right. Dropping an API key directly into client-side JavaScript makes it trivially extractable by anyone who opens their browser’s developer tools. The result, for more than a few developers who didn’t stop to ask the question Jose asked, has been a large and surprising API bill.
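For clarity, here is the naive version sketched out — the pattern to avoid. The key name, its format, and the provider URL are all made up for illustration; everything in a file like this ships to every visitor's browser.

```typescript
// THE ANTI-PATTERN: a hypothetical API key baked into client-side code.
// This entire file is delivered to the browser, so the key is readable in
// DevTools (Sources tab) and in every outgoing request (Network tab).
export const LLM_API_KEY = "sk-demo-not-a-real-key"; // visible to anyone

export async function askMascot(message: string): Promise<string> {
  const res = await fetch("https://api.example-llm.com/v1/chat", { // placeholder URL
    method: "POST",
    headers: {
      Authorization: `Bearer ${LLM_API_KEY}`, // key exposed in every request
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ messages: [{ role: "user", content: message }] }),
  });
  const data = (await res.json()) as { reply: string };
  return data.reply;
}
```

Once the key is in the bundle, no amount of minification or obfuscation protects it; anyone can lift it and run up usage on your account.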
Jose identified this pattern on his own before I brought it up, which is the right instinct. He framed it correctly: "Usually I'd have the page send messages to my own backend, which would make the authenticated requests using my API keys. But with JAMstack it sounds like that's not possible without exposing the keys."
The answer is that it is possible — just not with a purely static setup. Astro’s hybrid output mode lets you mark specific routes as server-rendered while keeping everything else static. A server-side API route can securely read environment variables, make authenticated requests to an LLM provider, and return only the response to the browser. The key never touches the client. The static pages continue to be served from the CDN. The secure backend layer is a serverless function — conceptually the same as an AWS Lambda, just co-located in the same Astro project and deployed automatically alongside the static output.
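A sketch of the server-side half of that chatbot, with the key handling pulled out into plain functions so it is easy to see what stays on the server. In a real Astro project this logic would sit inside an endpoint file (something like src/pages/api/chat.ts) marked as server-rendered; the env var name and provider URL here are hypothetical.

```typescript
// Hypothetical sketch of a secure LLM proxy. In Astro, an endpoint would
// call these from its POST handler; only these functions' outputs differ
// in what they are allowed to contain.

// Build the authenticated upstream request. This runs only on the server,
// so the key comes from the environment (e.g. process.env.LLM_API_KEY),
// never from anything shipped to the browser.
export function buildUpstreamRequest(userMessage: string, apiKey: string) {
  return {
    url: "https://api.example-llm.com/v1/chat", // placeholder provider URL
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,        // secret stays server-side
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ messages: [{ role: "user", content: userMessage }] }),
  };
}

// Shape what the browser is allowed to see: the reply text, nothing else.
export function toClientPayload(providerReply: string): { reply: string } {
  return { reply: providerReply };
}
```

The browser only ever talks to your own route and only ever receives the shaped payload; the key exists solely in the server environment and the outbound request headers.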
The pattern Jose already knew from his backend work is exactly right. The JAMstack framing can mislead people into thinking everything has to live in the browser. It doesn’t. It just means you don’t run a persistent server for things that don’t need one.
Was the Migration Worth It?
For joselujano.net — a personal site with a blog and a portfolio — unambiguously yes. The operational overhead of the LEMP stack wasn’t paying for itself. A site that’s a blog and a portfolio doesn’t need a database, doesn’t need PHP, and doesn’t need anyone SSHing in every few months to apply security patches.
For more complex projects the calculus is harder. The artist friend’s WordPress site presents a real migration question: the content moves reasonably well (WordPress exports to XML, which can be converted to markdown), but WooCommerce is doing genuine work that doesn’t have a free drop-in replacement in the JAMstack world. That’s a separate conversation, and an honest one.
For the kinds of sites most developers build most often — personal sites, portfolios, blogs, documentation, small marketing sites — the LEMP stack is solving a problem that no longer needs solving that way. The new stack is faster to ship, cheaper to run, and considerably less to maintain.
The servers aren’t going anywhere. But for a lot of projects, they can finally stay out of the way.