Serverless Drupal: Supercharge and Automate Deployments with AWS Cloud Infrastructure (Part 1)

Introduction

When it comes to building highly customisable websites, few platforms offer the same richness of features as Drupal. At Zyrous, we use Drupal for most of our websites, and over the course of a number of client projects we've honed our approach to deploying and running sites on AWS. In this article, you'll learn how we create highly redundant, scalable architecture with repeatable, dependable deployments. To get the most out of it, you'll need:

  • working knowledge of Drupal,

  • a good understanding of the AWS components involved in the solution, and

  • some experience with Docker.

Why Deploy Drupal in this Way?

The classic method of running a Drupal site is on a dedicated server (virtual or otherwise, on-premises or in the cloud) that remains in place to serve requests. But what happens when the server goes down? What about when your site experiences a huge increase in traffic that your single server (or cluster of servers) wasn't designed to handle? How do you manage drift in the configuration of one server over time, ensuring that it stays in sync with other servers and environments? By taking a serverless approach, you'll achieve the following outcomes:

Robustness: Nobody wants the 3am call to reboot a server because the site is unresponsive. With a container management system, unhealthy site instances are replaced automatically as needed.

Scalability: Although forecasting and planning for load is important, things often happen in production that we don't expect. Especially for sites that experience regular peaks and troughs in demand, being able to scale out dynamically ensures your site continues to work under any conditions.

Reliability: Relying on static Drupal instances with manual configuration is a risky proposition that can lead to unexpected differences in behaviour between environments. By replacing entire instances rather than updating them, we produce a repeatable and dependable deployment outcome.

Cost Saving: Paying for computing resources that you’re not using (or that you only use sporadically) is wasteful. A system that can dynamically scale in and out ensures that resources are released when they aren’t needed, saving you money.

Drupal as Cattle (Not Pets)

The change to running Drupal in a serverless way starts with a mind-shift. If you're the kind of developer or engineer who names their servers and likes to know their individual IP addresses, or if you regularly use a terminal to run drush commands on a deployed site, you'll need to start thinking of Drupal instances less as pets and more like cattle. This means that they are expendable, identical (or nearly identical) to one another, and easily replaced when necessary. This can be a tough change to make if your current infrastructure or deployment and management methods depend on access to specific machines, but it's fundamental to getting the most out of your cloud platform of choice. It gives you the freedom to address some issues quickly (by automatically destroying instances that are unresponsive), to scale out and in easily when necessary (by creating or destroying instances) and to make your deployments painless and dependable (by replacing healthy instances with new versions). At Zyrous, we explicitly disallow SSH connections to any of our running instances to enforce this approach. Our developers didn't thank us for it at first, but I assure you that we've reaped the benefits of this approach many times.

The Moving Parts

Our architecture begins with Docker. A custom container (described below) is designed to host a Drupal site either locally (for development) or in a deployed environment. When run locally, the site is configured to use the local file system (sites/default/files) and a local MySQL instance (also in a Docker container), along the lines of the sketch below.
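As a rough illustration, the local environment can be described with Docker Compose. This is a minimal sketch; the image versions, credentials and port mappings are placeholder assumptions, not our exact configuration.

```yaml
# docker-compose.yml — minimal local development sketch (values are illustrative).
version: "3.8"

services:
  drupal:
    build: .                  # the custom Drupal container described below
    ports:
      - "8080:80"             # browse the site at http://localhost:8080
    environment:
      DB_HOST: db             # resolves to the MySQL container on the Compose network
      DB_NAME: drupal
      DB_USER: drupal
      DB_PASSWORD: drupal
    volumes:
      # Local development only: uploaded files live on the host, not in S3.
      - ./files:/var/www/html/sites/default/files
    depends_on:
      - db

  db:
    image: mysql:8.0
    environment:
      MYSQL_DATABASE: drupal
      MYSQL_USER: drupal
      MYSQL_PASSWORD: drupal
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - db_data:/var/lib/mysql  # persist the database between restarts

volumes:
  db_data:
```

In a deployed environment, however, the architecture looks more like this (dotted lines indicate subnets):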

[Architecture diagram: AWS server farm for serverless Drupal, showing public and private subnets]

Our public subnet contains only two things: the load balancer for our site instances and a VPN server for admin access. Meanwhile, one of our private subnets hosts an Aurora cluster of MySQL instances (here we're showing three instances in total, assuming a region with three Availability Zones for redundancy). These components could be used for any Drupal site (or any other application, for that matter), regardless of the hosting method.
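To make the database tier concrete, it could be declared in CloudFormation along these lines. This is a hedged sketch: the subnet references, instance class and Parameter Store path are assumptions for illustration, not our production template.

```yaml
# Sketch of the Aurora MySQL cluster spread across three private subnets.
# PrivateSubnetA/B/C are assumed to be defined elsewhere in the template.
Resources:
  DatabaseSubnetGroup:
    Type: AWS::RDS::DBSubnetGroup
    Properties:
      DBSubnetGroupDescription: Private subnets for the Aurora cluster
      SubnetIds:
        - !Ref PrivateSubnetA   # one subnet per Availability Zone
        - !Ref PrivateSubnetB
        - !Ref PrivateSubnetC

  AuroraCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      Engine: aurora-mysql
      DatabaseName: drupal
      MasterUsername: drupaladmin
      # Resolved from Parameter Store at deploy time; the path is hypothetical.
      MasterUserPassword: '{{resolve:ssm-secure:/drupal/db/password}}'
      DBSubnetGroupName: !Ref DatabaseSubnetGroup

  # One writer instance shown; readers in the other two AZs follow the same
  # pattern, and Aurora promotes a reader automatically if the writer fails.
  AuroraInstanceA:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: aurora-mysql
      DBClusterIdentifier: !Ref AuroraCluster
      DBInstanceClass: db.r6g.large
```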

It’s the middle subnet where things get more interesting. We use ECS (Zyrous prefers Fargate over EC2) to host our Drupal containers (the same ones that our developers create) and configure a scaling policy to ensure that each one remains at a stable utilisation level (we found that scaling on CPU works better than scaling on memory usage, but your mileage may vary).

The containers each require a number of settings to configure themselves correctly for the environment they’re in (database credentials, API keys and so on), and these are stored in Systems Manager Parameter Store; task definitions for the ECS services are designed to inject them as environment variables when a container is created.

Finally, we must ensure that our containers remain stateless, and the biggest challenge here is file storage. By default, Drupal stores files in its internal file system. In an environment where several containers could exist and any one of them could be destroyed at any time, we must ensure that files are stored outside the containers themselves. This is where S3 comes in: a dedicated S3 bucket is used as the storage mechanism for all Drupal instances, effectively replacing their file systems. This also gives us a performance benefit: files are served to users through the CloudFront CDN (rather than each instance needing to read and return the file to the browser), which reduces the load on our containers and dramatically improves response times.
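As a concrete (but hypothetical) example of that wiring, a Fargate task definition and scaling policy might look like the following CloudFormation sketch. The ECR image name, Parameter Store path, task sizes and the 60% CPU target are all illustrative assumptions; TaskExecutionRole and DrupalScalableTarget are assumed to be defined elsewhere.

```yaml
Resources:
  DrupalTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: drupal
      RequiresCompatibilities: [FARGATE]
      NetworkMode: awsvpc
      Cpu: "512"
      Memory: "1024"
      # The execution role needs ssm:GetParameters to resolve the secrets below.
      ExecutionRoleArn: !Ref TaskExecutionRole
      ContainerDefinitions:
        - Name: drupal
          Image: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/drupal:latest"
          PortMappings:
            - ContainerPort: 80
          Environment:
            # Non-sensitive settings can be passed directly...
            - Name: DB_HOST
              Value: !GetAtt AuroraCluster.Endpoint.Address
          Secrets:
            # ...while sensitive values are injected from Parameter Store at launch.
            - Name: DB_PASSWORD
              ValueFrom: !Sub "arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/drupal/db/password"

  # Target tracking adds or removes tasks to keep average CPU near the target.
  DrupalScalingPolicy:
    Type: AWS::ApplicationAutoScaling::ScalingPolicy
    Properties:
      PolicyName: drupal-cpu-target
      PolicyType: TargetTrackingScaling
      ScalingTargetId: !Ref DrupalScalableTarget
      TargetTrackingScalingPolicyConfiguration:
        PredefinedMetricSpecification:
          PredefinedMetricType: ECSServiceAverageCPUUtilization
        TargetValue: 60
```

The same pattern extends to API keys and any other per-environment setting: add a parameter in Parameter Store, reference it in the task definition, and the container picks it up as an environment variable on startup.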

Note that there are some components left out of this model that you’ll need to consider as well:

  • A firewall to filter traffic to the load balancer (we use AWS WAF or Cloudflare).

  • A Docker image repository for ECS to use (we use AWS ECR).

  • A Domain Name System service to direct traffic to the load balancer and to CloudFront (we have used both AWS Route 53 and Cloudflare).

  • A log aggregation and analysis tool (we use AWS Elasticsearch with Kibana).

Conclusion

In this article, we’ve shared an approach to deploying and managing Drupal that works well for us. By embracing serverless principles and thinking of your site as a collection of expendable instances, you’ll be able to run sites that are scalable, cost-efficient and painless to deploy. In future articles, we’ll show you how to set up your Docker environment, which Drupal modules you’ll need, how to use source control and how to automate site installation and upgrades.

Who we are

Zyrous is a digital agency providing website development and management services alongside custom development, user experience, design, and marketing, with teams based in Perth, Abu Dhabi, and Dubai. Zyrous is one of the best Drupal development agencies, with a focus on digital strategy and eCommerce development. If you need help building a great website, get in touch!

Mason Yarrick - Technical Director

Mason is a digital builder. As an Enterprise Architect, he loves using technology to solve real business issues and drive improvement across the value chain. Mason models business processes and layers the architecture of the underlying software and infrastructure to support them. He has worked on a myriad of projects, from fleet and shipping management systems to inventory tracking, for clients such as BHP.

City: Perth

Country: Australia

Website: www.zyrous.com
