Moving a Production Laravel App to Laravel Cloud


A Laravel app I work on as a freelance lead developer recently moved to Laravel Cloud from a hand-rolled Docker setup. We've been on the new platform a few months and it's a big improvement. Here's the story: what we did, how it went, what works, and a few minor downsides.

The Old Setup, and Why It Had to Go

Before the move, the app ran on a single VPS with a separate database server, using Docker Compose: PHP-FPM, MariaDB, Redis, Nginx Proxy Manager out front. It worked. It also nibbled away at our time in three different ways.

Engineering time was the biggest one. Every small infrastructure niggle (a new env var, a logging tweak, a backup script change, a container update) needed time and attention to QA and roll out. That's all time we didn't spend on the product. The job is to ship features, not to engineer our own hosting.

Peaky workload. The app does a lot of scheduled and event-driven work: background jobs running on timers, plus integrations that hit us in waves. On a single server, queue workers, scheduled tasks, and the public app shared CPU and memory. We were forever tuning the balance between giving HTTP traffic enough headroom and not starving the workers.

Resilience issues. The most notable was a long-running backup that caused a brief outage, and the recovery from it caused a couple of hours of degraded performance. Beyond that, smaller bumps (including planned things like rolling out schema changes) showed up as monitoring blips that were too close to user-visible for comfort. The platform was getting fragile and harder to operate.

Why Laravel Cloud Over the Alternatives

We considered a range of options: managed-VPS tooling like Forge, general-purpose container PaaS, rolling our own on AWS. What pushed us toward Laravel Cloud is that we wanted a true PaaS, not another way to manage servers. Forge and similar tools take the pain out of provisioning, but you still own a server underneath: OS updates, container drift, the rest of it. We wanted to stop owning a server.

Among the PaaS options, Laravel Cloud unsurprisingly fits Laravel better than the alternatives. Queues, scheduled tasks, cache, deployment, logging, disks, environment variables - everything is built around how Laravel actually works, not around treating the app as a generic PHP container that happens to have Laravel inside.

Laravel Cloud, in Practice

Laravel Cloud is Laravel's own managed platform. You hand it your repo and a build/deploy script, and it gives you a managed Laravel app: HTTP serving, queue workers, scheduled tasks, cache, and database, with auto-scaling and a CLI plus dashboard to manage it.

Our build script does the boring stuff: composer install, yarn build, php artisan optimize, php artisan filament:optimize. The deploy script runs migrations. That's it. No Dockerfiles, no nginx config, no host SSH access to manage.
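For the curious, both scripts are short. Roughly what ours look like, as pasted into Laravel Cloud's build and deploy fields (the exact flags are our choices; adjust to taste):

```shell
# Build script (runs for every deploy build)
composer install --no-dev --optimize-autoloader
yarn install
yarn build
php artisan optimize
php artisan filament:optimize

# Deploy script (runs as the new release goes live)
php artisan migrate --force
```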

The architecture we landed on has three separately scaling instance pools.

Web instances (HTTP)

These serve the public-facing app: 2 vCPU / 2 GB instances, scaling 1-10 replicas, triggered at 60% CPU or memory. If a peak hits, the pool grows. When it eases off, it shrinks back. We pay for what we use, not for a big server sized to our highest workload.

Queue cluster

A separate pool of instances dedicated to processing background jobs. Same machine size as the web instances, also scaling 1-10 replicas, but triggered by queue wait time rather than CPU or memory: when jobs start backing up, the pool grows. Critically, busy queue work cannot exhaust the resources serving HTTP traffic and cause slowdowns. That solves an occasional problem we used to hit.

Worker cluster (scheduled tasks)

Smaller, cheaper instances (1 vCPU / 512 MB) for scheduled work and longer-running services. We split this out from the queue cluster so scheduled tasks would run on schedule even if the queue was buried.

Everything else is managed for us: Valkey (a Redis-compatible service) for the cache and queue backend, Cloudflare R2 for object storage, a managed database cluster. Migrating data was a non-event. We barely store any user files, and the bits we did move went over with S3-compatible CLI tooling (R2 speaks the S3 API).
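For anyone doing the same, the file move amounts to one sync command against R2's S3-compatible endpoint. A sketch (the bucket name and account ID are placeholders, not ours):

```shell
# One-off copy of local user files into R2, which speaks the S3 API.
aws s3 sync ./storage/app/public \
  s3://my-app-bucket \
  --endpoint-url "https://<account-id>.r2.cloudflarestorage.com"
```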

Database backups happen automatically, with no visible performance impact while they run. Given the incident that started this whole conversation was a backup-induced outage, that matters.

Deploys are wired through our existing CI: Bitbucket Pipelines runs the test suite on push, and on green, the production or staging branch fires a deploy webhook at Laravel Cloud. Laravel Cloud also offers a deploy-on-push option, but I deliberately don't use it. Failing QA shouldn't deploy. Letting the pipeline finish before the webhook fires means a red build never makes it to production.
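In pipeline terms it's only a few lines. A sketch of the shape (step names, branch name, and the secured DEPLOY_HOOK_URL repository variable are illustrative, not our exact config):

```yaml
# bitbucket-pipelines.yml (sketch)
pipelines:
  branches:
    production:
      - step:
          name: Run test suite
          script:
            - composer install
            - php artisan test
      - step:
          name: Notify Laravel Cloud
          script:
            # Steps run in order and stop on failure,
            # so this only fires on a green build.
            - curl --fail -X POST "$DEPLOY_HOOK_URL"
```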

For more on why queue-based design pays for itself in startups, see Laravel queues for startups and scaling Laravel backends. Laravel Cloud's separation of pools is the operational version of that thinking.

Migration: Staging First, for a Month

We didn't rush to migrate all at once. The whole migration was staged over a few weeks; the final production cutover took a couple of hours on the day. We provisioned the staging environment on Laravel Cloud first and ran it there for about a month, with production still on the old Docker setup. Staging is throwaway by definition, so it was a low-risk place to shake out the platform's quirks: build and deploy script tweaks, env-var differences, queue and scheduler config, deploy-hook wiring from CI.

First impression: getting the app running on Laravel Cloud was easier than I'd expected. The build and deploy scripts came together quickly and the platform mostly just worked. No code changes were needed. It's the same Laravel app it was before; the move was just an infrastructure change. By the time we cut production over, the surprises were behind us.

If you're moving a real Laravel app to any new hosting platform, run staging on the new platform first - long enough that a normal release and QA cycle has flushed things out.

Launch Day Runbook

On the day itself, the cutover was boring. The dry runs did most of the work. We rehearsed the database cutover, the application bring-up, and the checks that confirmed every background process and scheduled task was running. By launch day we knew the timings and had a solid plan to migrate it all, with an accurate maintenance window communicated to clients in advance.

The order on the day:

  1. DNS TTLs dropped on the relevant records in advance, so changes would propagate fast.
  2. Old application taken down to stop writes.
  3. Database cutover, with timings already known from the dry runs.
  4. DNS change to point public traffic at Laravel Cloud.
  5. Application up on Laravel Cloud, with all background processes and scheduled tasks confirmed running.
  6. Old application reconfigured to point at the new database and brought back up. A safety net: any traffic that hit the old box during DNS propagation still wrote to the canonical database. No split-brain, no lost writes.
  7. Monitor. Dashboards, logs, queues, integrations. Watch for anything that didn't come back up cleanly.

Nothing went wrong. Most of the real work had already happened on staging.

Bumps Along the Way

Laravel Cloud is still a relatively new platform, so I was nervous about being early. So far it's gone well. We hit two minor issues. First, a bug in bandwidth accounting early on, which Cloud addressed quickly. Second, we overwhelmed their logging service: a bug in Laravel Cloud's injected logging config was firing far more output than it should have. Cloud's team worked through it with us and shipped a fix.

Neither caused a customer-visible problem. Both were handled quickly. I'm more confident in the platform now than when we started.

The HTTP-Only Client Problem

One specific and unusual issue we encountered: Laravel Cloud is HTTPS-only. HTTP requests get redirected to HTTPS - totally normal, and a sensible default for the platform. However, we have a hardware supplier whose devices use SOAP with their own encryption inside the payload, and they can't speak HTTPS at the transport layer. The vendor isn't going to change that any time soon, so we needed to accept HTTP traffic from those devices.

The solution was a Cloudflare Worker that accepts HTTP requests and proxies them to our Laravel Cloud HTTPS backend. The device thinks it's still talking HTTP. The worker doesn't peek at the payload - the supplier's encryption is inside SOAP, which we don't touch. It's a straight scheme shift, not a rewrite or redirect.
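The worker itself is tiny. A minimal sketch of the idea (the origin hostname is a placeholder; in a real Worker you'd `export default` the handler object):

```javascript
// Placeholder for the real Laravel Cloud hostname.
const ORIGIN = "app.example.com";

// Rewrite only the scheme and host; path, query, headers, and body
// (including the supplier's encrypted SOAP payload) pass through untouched.
function toOriginUrl(requestUrl) {
  const url = new URL(requestUrl);
  url.protocol = "https:";
  url.hostname = ORIGIN;
  return url.toString();
}

// Workers-style handler: a proxy, not a redirect, so the device
// never has to speak HTTPS itself.
const worker = {
  async fetch(request) {
    return fetch(toOriginUrl(request.url), request);
  },
};
```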

What's Coming: Octane

Octane is Laravel's persistent-process application server: instead of bootstrapping the framework on every request, the app boots once, stays in memory, and serves requests directly. For apps where the bootstrap phase is a meaningful chunk of the total request time, Octane can shave significant latency.

We have it running on staging and we'll move production over once we've satisfied ourselves there are no surprises. Laravel Cloud supports Octane natively, so flipping it on is a configuration change, not a re-architecture.

What I Don't Love

A few honest gripes.

No SSH. I get why - shell access on a managed platform would defeat half the value proposition. But running ad-hoc commands through a dashboard is awkward compared to dropping into a shell. Laravel Cloud has recently added tinker to the cloud CLI, which is a big improvement. More tooling like that, please.

No Horizon equivalent. I miss the at-a-glance overview Laravel Horizon gives you for queue health: throughput, failed jobs, runtime distribution, all in one place. Cloud has its own queue and job UI, but it's not at parity yet. If your team relies on Horizon's dashboards, plan for that gap.

Log tooling is clunky. Reading logs through the dashboard is a step down from grep'ing a file on a server you control. The right answer is to ship logs and errors to a dedicated tool. We use Nightwatch; Sentry or anything similar would do the same job. Once you've got that, day-to-day debugging stops being log-file driven and the gap mostly disappears. It's still a minor frustration when you just want to tail something.

That's about it. The platform feels well-made and I'd recommend it.

The Cost Conversation

Cost is broadly similar to the old setup. The dramatic savings aren't in the hosting bill. They're in the engineering time we no longer spend keeping the lights on. I'm spending the saved time working on the product, rather than solving infrastructure issues.

Should You Move?

If you're maintaining your own Docker Compose setup for a production Laravel app and noticing the maintenance creep, take a serious look at Laravel Cloud. The migration is straightforward, the auto-scaling is genuinely useful, and you stop being your own ops team.

If you're hosting Laravel by hand on a single VPS and finding it fragile, the same applies, more so.

If your app is simpler (a smaller site, an internal tool with predictable load), Laravel Forge is a great option, and one I use daily on other Laravel and Symfony projects. (I don't only use Laravel ecosystem products: I also use Runcloud and Cloudways on current projects, plus self-managed servers.) Laravel Cloud earns its keep when your app has shape: separate web, queue, and worker concerns; peaky load; real scaling needs.

I'm a freelance Laravel developer based in Wiltshire, near Bath, working with startups across Bath, Bristol, and the UK. If you're thinking about moving a Laravel app to a managed platform, or you've already moved and need someone to make the architecture sing, let's talk. There's some related context in my notes on hiring a freelance backend developer.

Ben Lumley · StackOverflow · GitHub · LinkedIn
