1. Introduction
Amazon Web Services (AWS) offers a vast array of cloud services for running applications, but harnessing them typically requires significant cloud expertise. With more than 200 services to choose from, configuring the right mix of networking, compute, and storage for a production deployment can be daunting. Defang is a tool that bridges this gap by enabling developers to take an application described in a succinct Docker Compose file and deploy it to their AWS account with a single command. This whitepaper explores how Defang works with AWS, how it maps Docker Compose definitions to AWS resources, and how to integrate Defang into CI/CD workflows. We also consider a case study of deploying the Defang Playground on AWS with Defang and discuss future enhancements on the Defang roadmap specific to AWS.
2. What is Defang?
Defang is a cloud deployment tool for developers that takes your Docker Compose application – written in any language or stack – and deploys it to a secure, scalable configuration on your favorite cloud, including AWS. Defang abstracts away the complexities of provisioning and managing cloud services. Instead of manually configuring AWS services (or writing extensive CloudFormation/Terraform/Pulumi infrastructure code), you define your application (services, databases, caches, etc.) in a standard compose.yaml file, and Defang maps those components to corresponding AWS resources in your own AWS account.
Defang supports configuring custom domain names, advanced networking, scalable compute (including provisioning GPUs), managed storage options such as databases and caches, and even building your project from source. It also supports multiple deployment modes optimized for cost, availability, or a balance of both. Under the hood, Defang uses a rules-based workflow to generate deployments that conform to AWS best practices and are reliable, repeatable, and predictable. This empowers small teams and individual developers to achieve cloud deployments without a dedicated DevOps effort, and allows larger teams to accelerate delivery by automating their deployment processes.
3. Why Use Defang to Deploy to AWS?
There are several reasons why Defang is an ideal tool for deploying applications to AWS:
3.1. Ease of Use
Single-Command Deployment: Defang provides a frictionless developer experience. Once your application is described in a Docker Compose file, you can deploy it to AWS with a single CLI command (e.g. defang compose up --provider=aws). Defang will automatically build your containers and provision the necessary AWS infrastructure, dramatically simplifying the usual multi-step AWS deployment process.
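For example, assuming you have AWS credentials configured in your environment (the defang login step is shown here as typically required before first use):

```bash
# One-time: authenticate the Defang CLI
defang login

# Build and deploy everything defined in ./compose.yaml to your AWS account
defang compose up --provider=aws
```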
Compose Compatibility: Because Defang uses the standard Docker Compose format as the application definition, developers can leverage existing Compose files and familiarity. There is no new proprietary format to learn – if you know how to define services in Compose (images, ports, volumes, etc.), you can use Defang. This is especially attractive to teams that already containerize their apps or use Compose for local development. When you define services in Compose, you have already expressed the networking, storage, and dependencies of your app; Defang reuses that information instead of requiring you to redefine it in cloud-specific templates.
IDEs & AI Integration: Defang integrates with modern development workflows, including AI-assisted coding tools and popular IDEs. Using Defang’s Model Context Protocol (MCP) server, developers can even trigger deployments via natural language prompts from environments like VS Code, Cursor, Windsurf, or Claude Desktop. In practice, a developer can tell their AI pair-programmer “deploy my project to AWS”, and Defang’s backend will translate that into a deterministic deployment operation. For fast-moving teams and “vibe coders” experimenting with AI-generated apps, this removes even the context switch of running CLI commands – deployment becomes part of the development conversation.
3.2. Flexibility
Any Language, Any Stack: Defang is language-agnostic. Your Compose file can define services in Node.js, Python, Go, Java, or any runtime – as long as it can be containerized, Defang can deploy it. This frees teams to use the best tools for the job without worrying if the deployment platform supports that tech stack. Polyglot architectures (e.g. a Python backend, Node frontend, Redis cache) are first-class citizens in Defang.
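As a minimal sketch (service names, images, and directory layout are illustrative), such a polyglot stack is just an ordinary Compose file:

```yaml
services:
  frontend:
    build: ./frontend   # e.g. a Node.js web app
  api:
    build: ./api        # e.g. a Python backend
  cache:
    image: redis:7      # an off-the-shelf image, no build step needed
```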
Use of AWS Free Tier & Credits: Using Defang with AWS doesn’t mean leaving the AWS ecosystem – on the contrary, Defang deploys everything into your own AWS account. This means you can leverage AWS’s Free Tier and any credits you have. For example, you might use AWS’s startup credits or free usage tier to cover the resources Defang provisions. All resources run under your ownership, allowing you to use AWS’s billing, monitoring, and IAM security controls as usual.
Multi-Cloud Portability: While our focus here is AWS, Defang’s cloud-agnostic design gives teams cloud portability. Startups especially value the option to deploy on different providers without rewriting infrastructure code. For instance, a team could prototype on the free Defang Playground, then deploy to DigitalOcean for production, and later decide to migrate to AWS to take advantage of a service like Amazon Bedrock – all using the same Compose file. This flexibility assures organizations (especially those wary of vendor lock-in) that they can avoid being tied to a single cloud’s deployment scripts or syntax. In fact, Defang is an official partner on multiple clouds (including AWS) and adheres to each cloud’s best practices, making cross-cloud moves feasible with minimal changes.
3.3. Quality (Best Practices: Security, Performance, Cost)
Defang doesn’t just simplify deployments – it enforces cloud best practices for you. Defang’s architecture and implementation have been developed in line with AWS’s Well-Architected Framework (encompassing security, reliability, performance efficiency, and cost optimization).
Security: Defang sets up AWS resources following the principle of least privilege and modern security standards. For example, it will create dedicated IAM roles for building and running your application, each with only the permissions required. Containers in your ECS tasks assume an IAM task role that grants access only to the specific AWS services they need (e.g. access to RDS, ElastiCache, or Bedrock), without embedding any AWS credentials in your code. Network security is handled via AWS networking best practices: Defang deploys your services into private subnets, and only services explicitly marked for ingress are exposed via a public Application Load Balancer. When you expose a service, Defang also automatically provisions an HTTPS listener. For custom domains, it requests an AWS Certificate Manager SSL certificate and attaches it to the load balancer, so your traffic is encrypted end-to-end without extra effort. Internally, Defang manages sensitive configuration like passwords or API keys using AWS Systems Manager Parameter Store (with encryption), so you don’t have to bake secrets into container images or code. The net effect is that even small teams automatically deploy in a secure manner akin to what an expert cloud architect would configure.
Performance & Scalability: Applications deployed via Defang on AWS benefit from the scalable architecture of AWS’s managed services. Each service in your Compose file typically becomes an ECS service running on AWS Fargate – a serverless container engine. Fargate automatically manages the underlying compute, and can scale your services by launching more container tasks as needed behind the scenes. Defang can also enable auto-scaling policies for your services: by adding a simple Compose extension, you allow AWS to monitor CPU and scale out the number of task replicas during traffic spikes (and scale in when load diminishes). Conversely, in idle periods you only pay for minimal running resources. Defang can even accommodate GPU-based workloads: if a service is marked as requiring a GPU, Defang will provision an appropriate EC2-based ECS instance with GPU capacity for that container. In summary, your app can scale horizontally and make use of specialized hardware automatically, all configured by Defang.
Cost Efficiency: By automating resource selection and scaling, Defang helps enforce cost-effective practices. It chooses managed AWS services with pay-as-you-go pricing (Fargate, RDS, etc.), so you generally pay only for what you actually use. Defang's deployment modes (detailed later) also allow tuning for cost: for example, in the affordable mode, Defang uses smaller instance sizes or even Spot instances to save money. By contrast, high_availability mode uses on-demand, multi-AZ deployments for maximum resilience, accepting higher cost for production uptime. Defang also provides tooling for cost planning: you can run defang estimate to get a projection of what your AWS deployment would cost before you deploy.
Overall, using Defang to deploy to AWS provides ease, flexibility, and quality: it simplifies deployment to a one-liner, supports essentially any tech stack on AWS, and applies Amazon’s cloud best practices for you.
4. How Defang Maps Your App to AWS Resources
Defang employs an optimized mapping of your Compose-defined application to AWS services and resources. In this section, we break down how Defang translates various aspects of your project into AWS infrastructure, and how these may differ according to the deployment mode.
4.1. Building Your Project (Container Builds)
When you trigger a Defang deployment to AWS, one of the first steps is building your application's container images. Defang handles this by running an on-demand build process in your AWS account rather than relying on your local Docker environment. Specifically, the Defang CLI packages your source code (excluding any files listed in your .dockerignore) and uploads it to an S3 bucket that it creates in your account. Then, Defang launches a temporary AWS Fargate task in a dedicated build cluster to compile the Docker images as specified by your Compose file. This build task pushes the resulting images to Amazon Elastic Container Registry (ECR). Defang's use of an isolated, repeatable cloud build process ensures that builds do not depend on the developer's local machine and that environment-specific issues are eliminated. The build is also sized appropriately for the deployment mode – for example, affordable mode allocates fewer CPUs to the build task (trading off speed for cost), whereas high_availability mode uses a more powerful build configuration to finish faster.
4.2. Security: Accounts, Roles, Secrets, and Best Practices
Defang sets up a secure foundation on AWS so that your app runs with the correct identity, minimal privileges, and with secure management of sensitive values. These settings follow AWS security best practices and ensure your cloud resources are isolated to your control.
- Cloud Account Isolation: All resources are deployed into your AWS account – Defang uses your AWS credentials (e.g. an IAM user or role you've authenticated with) to create resources on your behalf. Defang itself does not retain any long-term access to your account. This model is similar to using an IaC tool like Terraform: you temporarily grant rights to perform the deployment, but once complete, the infrastructure is owned and operated by you. You can even revoke Defang's IAM privileges after deployment and your app would continue running unaffected: Defang is not a runtime hosting provider, it's a deployment orchestrator.
- IAM Roles for Services: Defang creates AWS IAM roles to assign to the various components of your application. For example, it will create a build execution role that the ephemeral build task assumes (allowing it access to S3, ECR, and ECS as needed to build and deploy images), and a task role for each service in your Compose file (allowing the running containers to access only the specific AWS services they need). Each ECS task that runs your service will assume its task's IAM role via AWS's IAM integration with ECS. This means, for instance, if one of your containers needs to read from an S3 bucket or talk to a DynamoDB table, it can do so without hardcoding any credentials – the appropriate permission can be granted via the role, and no other AWS resources can be accessed by that task. Defang ensures that these roles are configured with least-privilege policies (each service gets only what it needs). There's no monolithic "Defang admin" role hanging around; permissions are granular and service-specific, reducing blast radius and improving security auditing (AWS CloudTrail logs will show exactly which task assumed which role to perform any AWS actions).
- Secrets Management: Managing secrets (API keys, passwords, tokens) is often a challenge in deployments. Defang integrates with AWS's native secrets solution, Systems Manager Parameter Store, to store and retrieve sensitive configuration. If you specify any secret values in your Compose file or via defang config (for example, environment variables that shouldn't be exposed in plaintext), Defang will store them as encrypted parameters in the Parameter Store. At deployment time, those secrets are injected into your containers as environment variables (retrieved securely via the task's IAM role). You avoid ever putting secrets in source control or Docker images, and you can rotate them by updating the Parameter Store value and re-deploying (see the sketch after this list).
- Secure Defaults: Defang aligns with AWS's secure-by-default services. For compute, it uses AWS Fargate for isolation – each container task runs in its own minimal micro-VM with its own kernel, providing a strong isolation boundary by design. Defang's balanced and high_availability modes deploy services into private subnets (no direct internet access) and only attach a public load balancer to services that you explicitly mark as needing ingress. By default, if a service is not meant to be public-facing (no ports in ingress mode or a domainname in Compose), it will only be accessible within the private network (e.g. other services can call it via internal DNS, but it won't have a public endpoint). When you do expose a service to the internet, traffic goes through an AWS Application Load Balancer that terminates SSL. Defang automatically provisions an AWS Certificate Manager (ACM) certificate for custom domains you define, ensuring HTTPS for all external traffic. All inter-service communication inside the deployment uses internal AWS networking (which is protected and does not traverse the public internet). Additionally, data at rest for managed services is encrypted by default: for example, RDS databases and ElastiCache clusters that Defang creates will have encryption-at-rest enabled (using AWS-managed KMS keys unless specified otherwise). Backup retention and, where applicable, deletion protection are enabled for critical resources in higher availability modes to prevent accidental data loss. In short, Defang's default configurations on AWS mirror what an experienced security-focused cloud engineer would implement for a production-grade environment.
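The following sketch illustrates the secrets workflow described above; the config subcommand shape and the variable name are illustrative, so check the CLI help for the exact interface:

```bash
# Store a sensitive value as an encrypted parameter in Parameter Store
defang config set STRIPE_API_KEY
```

```yaml
services:
  api:
    build: .
    environment:
      - STRIPE_API_KEY   # no value in the file: injected at runtime via the task's IAM role
```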
4.3. Domain Names
Defang allows you to bring your own domain for your applications deployed on AWS. If you have an existing domain (e.g. myapp.com), you can instruct Defang to configure it for your Defang deployment. Under the hood, Defang leverages Amazon Route 53 for DNS management if your domain is managed in Route 53. Once Route 53 is set up, using a custom domain with Defang is as simple as adding a domainname field to the relevant service in your Compose file and deploying. Alternatively, Defang will use ACME (Let's Encrypt) for domains that are not managed by Route 53.
When you deploy, Defang will automatically: (1) provision an ACM SSL/TLS certificate for your domain (using DNS validation to prove domain ownership), (2) create the necessary DNS records in Route 53 to map your domain (and any subdomains like www, if specified in your Compose file) to the load balancer endpoints, and (3) attach the certificate to the Application Load Balancer so it serves your app over HTTPS. This is all done transparently during the defang compose up process. The result is that, after deployment, your application is accessible at your chosen domain with a proper HTTPS URL, and you don't have to manually configure any certificates or DNS entries.
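For example (the domain and port are illustrative):

```yaml
services:
  web:
    build: .
    domainname: app.myapp.com   # triggers ACM certificate + Route 53 records
    ports:
      - mode: ingress           # exposed via the HTTPS load balancer
        target: 8080
```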
In addition to public domain support, Defang also sets up an internal DNS namespace for your services within the AWS deployment. It creates a private Route 53 hosted zone (e.g. tied to the VPC) and registers each service's hostname in that zone. For example, if you have a service named "api", Defang creates an internal DNS name like api.project.internal that resolves to the service's internal load balancer or IP. This internal service discovery means that your containers can reach each other by service name, much like Docker Compose links them via names on local networks. Defang runs a small Route 53 sidecar container alongside each service to update DNS records dynamically (for instance, if a task's IP changes on restart, the DNS record is updated). In the future, Defang will deploy an internal load balancer for high_availability mode. All of this happens behind the scenes, giving your distributed services a reliable way to find each other without exposing those endpoints publicly.
4.4. Networking
Networking in AWS can be complex, involving Virtual Private Clouds (VPCs), subnets, routing tables, internet gateways, NAT gateways, security groups, etc. Defang shields you from most of this complexity by automatically provisioning a secure network architecture for your Compose application, implementing semantics according to the Docker Compose Networks specification. When you deploy to AWS, Defang will create (or reuse, in subsequent deploys to the same project) a VPC in your account to house the deployment. Within this VPC, Defang sets up multiple subnets: typically, private subnets for your application tasks, and public subnets for load balancers or any component that must have internet access. It also creates an Internet Gateway attached to the VPC to enable outbound connectivity, and configures route tables accordingly (private subnets route through a NAT gateway for egress, see below; public subnets route directly via the Internet Gateway).
Each service in your Compose file runs in one of the private subnets by default, which means the containers are not directly reachable from the public internet. Instead, if a service is marked for external exposure (e.g. via a ports entry with mode: ingress, or a domainname), Defang will create an Application Load Balancer (ALB) in a public subnet and configure it to route HTTP/HTTPS traffic to the appropriate service's tasks on the private subnet. The ALB is fully managed and can scale automatically to handle incoming request load. It monitors the health of your tasks via health checks and only sends traffic to healthy instances, improving availability.
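In Compose terms, the public/private split falls out of the port mode; a sketch with illustrative names:

```yaml
services:
  web:
    build: ./web
    ports:
      - mode: ingress   # fronted by a public ALB in a public subnet
        target: 80
  worker:
    build: ./worker     # no ingress ports: stays in the private subnets,
                        # reachable only through internal DNS
```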
Defang uses Security Groups (AWS’s cloud firewall mechanism) to tightly control traffic flow. It assigns a security group to the ALB that allows inbound traffic on ports 80/443 from the internet, and another security group to your ECS tasks that only allows inbound traffic from the ALB’s security group (and allows the tasks to talk to each other or to internal services as needed). This default setup means your containers aren’t exposed to internet traffic – only the ALB can reach them, and the ALB itself only exposes the ports you intended (80/443 for web services, etc.).
For outbound connectivity (e.g. your containers need to call external APIs or download updates), Defang configures a NAT Gateway depending on the deployment mode. In the affordable mode, to minimize cost, Defang skips the NAT Gateway and relies on public IPs instead, avoiding the NAT Gateway's hourly charges where possible. In balanced and high_availability modes, Defang will provision a NAT Gateway in a public subnet to allow full outbound internet access from the private subnets. The NAT Gateway provides a route for tasks to call out to any internet service (useful if your app calls third-party APIs, etc.), while still keeping the tasks themselves from being directly reachable from outside. The use of multiple Availability Zones (AZs) is also configured in higher modes: for example, the VPC will have subnets in multiple AZs, and the ALB will span at least two AZs for resilience. Private subnets will similarly be in multiple AZs so that ECS can place tasks redundantly (ensuring that even if an AZ goes down, tasks in another AZ remain). All of this networking setup – VPC, subnets, routes, gateways, security groups, DNS, endpoints – is handled by Defang in an automated way. The resulting architecture follows AWS best practices: isolated private execution environments, least-privilege connectivity, and high availability for critical components.
4.5. Compute (ECS on Fargate, Auto-Scaling, and GPUs)
For compute, Defang deploys your services to Amazon ECS (Elastic Container Service) using the Fargate launch type by default. In practice, each service defined in your Compose file becomes an ECS service (or ECS task set) running your container image on AWS Fargate. Amazon ECS is a fully managed container orchestration service, and Fargate is its serverless compute engine for containers. This means you do not have to provision EC2 servers for your containers – AWS handles provisioning the right amount of CPU/RAM for each task, and you pay per second of vCPU and memory that your tasks use. ECS on Fargate also abstracts away patching and scaling of the underlying machines, so you can focus on your application.
Defang configures each ECS service with the settings from your Compose file’s deploy section (if provided). For example, if you specify a certain number of replicas for a service, Defang will ensure that many tasks are running (spread across AZs if possible). If you specify resource reservations (CPU/memory), it will translate those into ECS task size definitions. ECS will automatically take care of placing tasks on the Fargate infrastructure and restarting them if they fail health checks.
Auto-scaling: Defang can enable horizontal auto-scaling for your ECS services. If you want a service to scale its replica count based on load, you can add the x-defang-autoscaling: true extension to that service in your Compose file. When deployed to AWS, Defang will create the appropriate CloudWatch Alarms and Application Auto Scaling policies to scale the ECS service in or out based on CPU utilization (by default). This ensures your app can handle variable traffic without over-provisioning.
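A sketch combining the deploy settings and the autoscaling extension described above (values are illustrative):

```yaml
services:
  api:
    build: ./api
    deploy:
      replicas: 2                # Defang keeps 2 ECS tasks running, spread across AZs
      resources:
        reservations:
          cpus: "0.5"            # translated into the Fargate task size
          memory: 512M
    x-defang-autoscaling: true   # adds CloudWatch alarms and scaling policies on AWS
```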
GPU support: AWS Fargate currently doesn't support GPU workloads, but Defang has you covered. If any of your services require a GPU (for instance, an AI model server or a video processing worker), you can indicate this in the Compose file (by simply marking the service with devices: [ "gpu" ]). Defang will detect this and automatically provision an ECS EC2 cluster (instead of Fargate) for that service, with an EC2 instance type that has GPU capabilities (like a p3 or g4 instance family). The service will then run as an EC2-backed ECS service in that cluster, using the GPU as requested. Defang handles the provisioning of the EC2 instance (including the appropriate AMI with NVIDIA drivers, etc.), so from the developer's perspective it's still a one-command deployment. This hybrid approach (mostly Fargate, but EC2 when needed for GPUs or other special requirements) gives you the best of both worlds: serverless simplicity for general services and raw EC2 power when necessary for specialized hardware.
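One way to express this in the standard Compose device-reservation syntax (a sketch; the shorthand above is an abbreviation of this form):

```yaml
services:
  inference:
    build: ./model-server
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: ["gpu"]   # Defang provisions an EC2-backed ECS
              count: 1                # cluster with a GPU instance type
```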
4.6. Managed Storage (Postgres, Redis, and MongoDB)
Many applications need stateful services like databases and caches. Defang offers Managed Storage options that allow you to specify these in your Compose file and have them provisioned as native cloud services rather than containers. On AWS, the following managed storage services are supported:
Managed Postgres: If your Compose file has a service (say db) using the x-defang-postgres: true extension, Defang will provision an Amazon RDS for PostgreSQL instance in your AWS account. Amazon RDS is AWS's managed relational database service, which handles automated backups, updates, failover, and scaling of databases so you don't have to run Postgres in a container. When Defang creates an RDS Postgres, it will automatically use sensible defaults for instance size and storage based on the deployment mode (e.g. a small instance in affordable mode, a multi-AZ larger instance in high_availability mode). It also wires up the connectivity: Defang will put the RDS instance in the same VPC and private subnets as your services, and populate the appropriate environment variables in your app (like POSTGRES_HOST, POSTGRES_USER, etc.) so that your application containers can connect to the database. The database credentials can be generated and stored as secrets (Parameter Store) so that they aren't exposed. Using Amazon RDS means you get a production-grade PostgreSQL with minimal effort – automated backups, point-in-time recovery, encryption at rest, and more, managed by AWS.
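A sketch of the pattern (service names are illustrative; the env wiring follows the variable names mentioned above):

```yaml
services:
  db:
    image: postgres:16        # not run as a container on AWS:
    x-defang-postgres: true   # Defang provisions Amazon RDS for PostgreSQL instead
  app:
    build: .
    environment:
      - POSTGRES_HOST         # populated by Defang to point at the RDS endpoint
      - POSTGRES_USER
```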
Managed Redis: Similarly, if you mark a service with x-defang-redis: true, Defang will provision an Amazon ElastiCache for Redis cluster to satisfy that dependency. ElastiCache is AWS's fully managed Redis service – an in-memory data store useful for caching, real-time data, message brokering, etc. Defang creates a Redis cluster and then provides the connection endpoint to your application (e.g. via a REDIS_URL env var or similar). Your containers then use Redis as a service, rather than running a Redis container themselves. This provides the benefits of higher performance and reliability – ElastiCache Redis can handle large throughput with sub-millisecond latency and is managed by AWS (including patching, snapshots, failover).
Managed MongoDB: Defang is rapidly expanding support for more databases. One recent addition is support for MongoDB. If you include the x-defang-mongodb: true extension for a service that would otherwise run MongoDB, Defang will provision an Amazon DocumentDB cluster for you. This means you don't need to run a MongoDB container or manage a MongoDB cluster yourself; DocumentDB will handle storage scaling, replication, and patching. As with Managed Postgres and Redis, the database runs in your account, and you retain full ownership.
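The cache and document-store extensions follow the same declarative pattern (a sketch):

```yaml
services:
  cache:
    image: redis:7
    x-defang-redis: true     # provisioned as Amazon ElastiCache for Redis
  docs:
    image: mongo:7
    x-defang-mongodb: true   # provisioned as Amazon DocumentDB
```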
Defang’s philosophy with managed storage is to let developers declare what they need (a SQL database, a cache, etc.) and have the “right” cloud resource be provisioned. The benefit is that you get the operational maturity of AWS’s managed services (with automated backups, scaling, monitoring) without extra integration work. It also means, for example, your database is not tied to the lifecycle of your app containers – you can deploy new versions of your app without destroying the data store. Each of these services is created with best-practice configurations (e.g., parameter groups, instance classes appropriate for the deployment mode, security group rules that only allow access from your app’s subnets, etc.). Defang manages their lifecycle: if you tear down a Defang deployment, it takes down these managed services but retains a snapshot of their contents to prevent accidental data loss in higher environments. In summary, by using Defang’s managed storage extensions, you get cloud-native data services integrated into your stack with a Compose declaration, rather than maintaining those services yourself.
4.7. Managed LLMs – Amazon Bedrock Integration
One of Defang's unique features is built-in support for Managed Large Language Models via the x-defang-llm extension. This feature is designed to help deploy AI applications that utilize cloud-hosted AI models. On AWS, this means integration with Amazon Bedrock, AWS's fully managed generative AI service. Amazon Bedrock provides access to a range of high-performing foundation models (from AWS and third parties like AI21, Anthropic, Cohere, Stability AI, etc.) through a simple API.
If your Compose file marks a service with x-defang-llm: true, Defang will provision any supporting infrastructure needed to enable it. Currently on AWS, Defang will set up an OpenAI Access Gateway when using the LLM extension, which translates OpenAI-compatible API requests into Bedrock calls and maps the Bedrock responses back into OpenAI-compatible API responses.
Concretely, suppose you have a chatbot service that uses Anthropic's Claude via Bedrock. You would include x-defang-llm: true for that service and deploy. Defang ensures that the IAM role for that container has permissions to call Amazon Bedrock APIs. The idea is to provide a cloud-native environment for your AI service to run effectively. This Bedrock integration is a powerful example of Defang's value: you can build an AI application that leverages large language models and deploy it with all the necessary cloud plumbing (compute, networking, caching, IAM, etc.) set up for you. This drastically lowers the barrier to entry for deploying generative AI apps on AWS.
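A minimal sketch of that chatbot service; the environment variable name is an assumption for illustration, standing in for however the app locates its OpenAI-compatible endpoint:

```yaml
services:
  chatbot:
    build: ./chatbot     # app code that speaks the OpenAI-compatible API
    x-defang-llm: true   # Defang sets up the OpenAI Access Gateway and grants
                         # this task's IAM role permission to call Bedrock
    environment:
      - OPENAI_BASE_URL  # illustrative: pointed at the gateway rather than api.openai.com
```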
4.8. Logs / Observability
Observability is crucial in any deployment. Defang provides multiple ways to view and analyze your application logs and runtime status:
Defang CLI (Live Log Streaming): Right after you deploy, the Defang CLI automatically attaches to your service logs and streams them to your terminal. You will see the build logs (the output of the image build steps) and then the runtime logs from your application containers as they come up. This immediate feedback is extremely useful for development and debugging – you can verify that your app has started correctly on AWS and see logs in real time, without opening the AWS console. The CLI log tail will continue until you stop it, and you can always re-run defang logs or defang tail to reconnect to log streams later.
AWS CloudWatch Logs: When your app runs on AWS (ECS/Fargate, RDS, etc.), all stdout/stderr from containers and system logs from managed services go to Amazon CloudWatch Logs by default. Defang sets this up automatically by using the AWSLogs driver for ECS tasks and creating the appropriate Log Groups. This means you can always go to the AWS CloudWatch console and view the log streams for each of your services. For example, you will find a log group named after your Defang project and service, containing log streams for each task or container instance. Because Defang uses your AWS account, you have full access to these logs – they never leave your environment. This is important for compliance and integration: you can, for instance, set up CloudWatch Log Insights to run queries on your logs, or configure metric filters and alarms (e.g. trigger an alert on a certain error message). Defang ensures that the standard AWS logging pipelines are utilized, so any AWS-native or third-party tools that work with CloudWatch logs will work out-of-the-box with your Defang deployment.
defang tail: In addition to the immediate streaming at deployment, the Defang CLI offers a defang tail command that you can run on demand to fetch logs from your AWS deployment. This command connects to your cloud's logging backend (CloudWatch Logs in AWS's case) and streams logs to your console continuously. You can specify a particular service to filter logs if you have multiple services. This is analogous to running awslogs or using aws ecs execute-command to tail logs, but it's simplified through the Defang CLI. It's very handy for quickly checking on a service's output without digging through the AWS console.
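For example (the service-filter argument is indicative; see the CLI help for the exact flag):

```bash
# Stream logs for the whole project
defang tail

# Narrow the stream to a single service, e.g. the "api" service
defang tail --service api
```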
AI Debugger: Defang includes an AI Debugger feature to assist with diagnosing deployment issues. After you deploy, the Defang CLI monitors the health status of all your services - it waits until ECS reports that tasks are running and any health checks have passed. If any service fails to become healthy – for example, perhaps a container crashed on startup, or a health check didn’t succeed – Defang will detect this and automatically offer to analyze the situation. With your permission, it will use an LLM to analyze the logs and even your project files to identify possible causes of the failure, then present suggestions for fixes. For instance, if your container failed because of a missing environment variable or a database connection issue, the AI debugger would catch that and recommend adding a config or adjusting a setting. While this doesn’t replace traditional monitoring, it can significantly speed up the debug cycle for deployment issues, especially for developers who are less familiar with AWS nuances.
In summary, Defang ensures you have immediate visibility (via CLI streaming logs) and ongoing visibility (via CloudWatch logs in your account) into your application’s behavior. Because everything runs in your AWS account, you can leverage AWS’s rich ecosystem of observability tools – CloudWatch metrics, alarms, dashboards, X-Ray (for distributed tracing, if you add it), and third-party APM tools – exactly as you would with a hand-crafted AWS deployment. Defang simply automates the wiring of these components so you start off with a sane, observable setup by default.
4.9. Deployment Modes
Defang currently supports three deployment modes on AWS: affordable, balanced, and high_availability. These modes primarily differ in how resources are allocated and how deployments are executed, allowing you to tailor the cost vs. resilience trade-offs. You choose the mode by specifying --mode=affordable|balanced|high_availability when deploying (or configuring it in Defang's project settings).
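For example:

```bash
# Cheap, fast iteration for a dev stack
defang compose up --provider=aws --mode=affordable

# Production-grade rollout of the same Compose file
defang compose up --provider=aws --mode=high_availability
```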
affordable mode is optimized for lower cost and quick iteration, ideal for development or small projects. For example, build processes in this mode use minimal resources (e.g. a 2 vCPU build task) and deployments are not highly redundant. Defang will favor other cost-saving measures: it uses smaller instance sizes for databases, and uses spot instances or spot Fargate capacity when possible. It also forgoes certain components – for instance, affordable mode avoids creating a NAT Gateway (to save the hourly cost) and instead relies on public IPs for ECS to pull images. Updates in affordable mode may incur a brief downtime (Defang tears down and recreates services instead of doing zero-downtime transitions) to optimize costs. Logs in affordable deployments are retained for a shorter period (e.g. 1 day) to reduce storage costs. The affordable mode gives you the cheapest footprint to experiment with.
balanced mode is a middle-ground, often used for staging or low-traffic production environments. It aims to provide increased reliability closer to production standards, but without all the costs of full high availability. In balanced mode, Defang will perform rolling updates for services (launch new task, then stop old task to achieve zero-downtime or minimal downtime deployments). It also enables infrastructure components that mirror production: a single NAT Gateway is provisioned so that private tasks have outgoing internet access just like in prod. However, balanced mode still uses smaller instance sizes and keeps replication factors lower than high_availability to reduce cost. Logs in balanced mode are retained longer (e.g. 7 days) for debugging purposes. Essentially, balanced mode is there to ensure that your staging environment is as close as possible to production in topology (so you can find issues with, say, NAT or DNS or multi-AZ behavior in staging first) while optimizing costs.
high_availability mode maximizes performance, redundancy, and uptime – it’s intended for production workloads. In this mode, Defang pulls out all the stops to ensure a resilient deployment. It will use on-demand instances/capacity and enable more resilience features. Builds run with more CPU (e.g. a 4 vCPU build task) so container images are built quickly and deployments proceed faster. When deploying updates, high_availability mode uses zero-downtime rolling updates: it will spin up new ECS tasks with the updated version behind the load balancer, wait until they are healthy, then deregister and shut down the old tasks. This ensures continuous service availability during deployments. Resources are provisioned in a highly available fashion: for example, databases (RDS) are multi-AZ by default (with automatic failover), and ElastiCache clusters have cluster mode enabled for failover. The capacity chosen is a production-grade memory-optimized instance – e.g. an RDS Postgres would be a db.r5.large instead of a micro. Additional safety measures are in place: termination protection might be enabled on databases (to prevent someone accidentally deleting the prod DB), final snapshot on DB deletion is turned on (so if you intentionally delete, it takes a backup), and so forth. Security is tight: encryption at rest is enforced for all storage; communications are all over TLS; and any potentially destructive operations (like replacing a database instance as part of an update) are done with care to avoid data loss. Logging and monitoring are beefed up: logs are retained for longer periods (e.g. 30 days or more) for audit compliance. High_availability mode, as the name implies, also spreads critical components across multiple AZs – e.g. ECS services run tasks in at least 2 AZs if possible, ALB is multi-AZ, etc., to withstand an AZ outage. All of this ensures that production deployments via Defang adhere to industry best practices for uptime and fault tolerance, while still being managed via the same Compose-centric workflow.
5. Integrating Defang Deployments into CI/CD Workflows
Defang can be seamlessly integrated into your Continuous Integration / Continuous Deployment (CI/CD) pipelines so that deployments to AWS occur automatically on code changes or merges. Two notable integration paths are using GitHub Actions and the Defang Pulumi Provider.
5.1. GitHub Actions
GitHub Actions is a popular CI/CD platform, and Defang provides an official Action to deploy projects as part of your repository's workflow. This allows you to, for example, automatically run defang compose up --provider=aws whenever you push to the main branch or create a new release tag. Integrating Defang with GitHub Actions has several benefits:
- Consistency: The same deployment logic runs in CI as you would run locally, reducing “it works on my machine” issues. The Compose file is the single source of truth.
- Speed: Defang’s rapid deployment means your CI pipeline doesn’t need to handle the nitty-gritty of provisioning; it just calls Defang. And since Defang can do zero-downtime deploys, you can push updates frequently.
- Rollback: If something fails, you can run a GitHub Action to deploy a previous version (since the Compose file can reference a specific image tag or you can keep older images in the container registry).
Overall, with a few lines of YAML in a GitHub Actions workflow, you can set up continuous deployment of your Compose-described app to AWS via Defang. This can significantly accelerate your team’s development pace by automating releases in a safe and repeatable way.
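An illustrative sketch of such a workflow follows; the action reference and its inputs are assumptions here, so consult the official Defang GitHub Action's documentation for the exact interface:

```yaml
name: Deploy to AWS via Defang
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: DefangLabs/defang-github-action@v1   # assumed action reference
        with:
          provider: aws                            # assumed input name
        env:
          # AWS credentials for the target account, stored as repo secrets
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```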
5.2. Pulumi Provider
Pulumi is an Infrastructure-as-Code (IaC) framework that allows cloud infrastructure to be managed using general-purpose programming languages (Python, TypeScript, Go, etc.). Recognizing that some users need the full power of IaC or want to integrate Defang into existing IaC setups, Defang provides a Pulumi Provider. This provider enables Pulumi programs to orchestrate Defang deployments as part of a larger infrastructure definition.
What does that mean? Imagine you are an enterprise already using Pulumi to manage certain AWS resources – for example, you use Pulumi to set up networking (VPCs, subnets) and maybe some data services (say an S3 bucket or a proprietary system your app needs). You can embed a Defang deployment into that Pulumi code. The Defang Pulumi provider exposes a resource (defang.Project) where you can point to your Compose file (and Defang project settings) and have Pulumi kick off a Defang deployment as part of its run. Pulumi will orchestrate it such that, for instance, it can create a bucket, then run Defang to deploy your app which perhaps reads from that bucket.
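A hedged TypeScript sketch of that flow; defang.Project is the resource named above, but the package import and property names shown here are assumptions:

```typescript
import * as aws from "@pulumi/aws";
import * as defang from "@defang-io/pulumi-defang"; // assumed package name

// Infrastructure the app depends on, managed directly by Pulumi
const bucket = new aws.s3.Bucket("uploads");

// Deploy the Compose-described app via Defang as part of the same Pulumi run
const app = new defang.Project("my-app", {
  configPaths: ["./compose.yaml"], // assumed property name
  providerID: "aws",               // assumed property name
});
```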
6. AWS + Defang in Action – the Defang Playground
An elegant demonstration of Defang's power on AWS is Defang deploying itself. As detailed in a series of blog posts (Part 1, Part 2), Defang automates the provisioning and deployment of its own infrastructure and services (such as the Defang.io website, the Defang Portal, the Playground environment, as well as its CD back-end) using a variety of AWS services such as Route 53 for DNS, ECS/Fargate for compute, RDS Postgres databases, ElastiCache for Redis, Bedrock LLMs, and more – a testament to the power of AWS combined with the simplicity and flexibility of Defang deployments.
7. Future Enhancements
Defang is rapidly evolving. The Defang team has an active roadmap to further expand its capabilities on AWS and other clouds. Here are some enhancements in the pipeline (or recently introduced) that will make AWS + Defang even more powerful:
Managed Object Storage integration is a key upcoming addition. This would allow applications that need object storage (e.g. to store user-uploaded images or serve static files) to leverage AWS S3 seamlessly. This feature would let you either create a new S3 bucket or use an existing one, and then provide the app with the necessary access (credentials or IAM role permissions) to that bucket.
Persistent volumes support is also being considered. Some applications need a mounted filesystem or shared volume (beyond what a single container’s ephemeral storage provides). On AWS, this could be provided by services like Amazon EFS (Elastic File System) or FSx.
Cost Estimation and Optimization: Nobody likes a surprise cloud bill. Defang has introduced a defang estimate command to estimate deployment costs ahead of time. You can run defang estimate --provider=aws --mode=<mode> and Defang will analyze your Compose file against AWS pricing data to predict the monthly cost of that deployment. This estimate currently includes the cost of services that have a fixed monthly price, such as ALBs and NAT gateways. Future enhancements include estimates for services that charge based on consumption, such as ECS, EC2, RDS, and Bedrock, based on the estimated workload. Looking further ahead, Defang's roadmap for AWS includes a more intelligent optimizer that can suggest cost-saving changes (e.g. "your Compose requests 4GB RAM but average usage is low, consider reducing the reservation to 2GB or using spot instances").
Other Enhancements: Defang continues to improve its core deployment engine as well. We can expect to see even tighter AWS integration – for example, Defang becoming aware of AWS-specific features like Graviton processors (ARM instances) for cost savings. Another area is monitoring and alerts: Defang may provide out-of-the-box dashboards and notifications by adding common CloudWatch alarms for high CPU, high memory, etc.
8. Conclusion
Defang’s integration with AWS brings together the best of both worlds – the simplicity of Docker Compose and the robustness of AWS’s cloud services. By using Defang, development teams can deploy complex, scalable applications to AWS with a fraction of the effort traditionally required, all while adhering to AWS best practices in security, performance, and cost management. This combination is particularly empowering for startups aiming to iterate quickly without a dedicated DevOps team, and for enterprise teams seeking consistency and compliance across deployments.
With Defang, an engineer can go from a Compose-defined app on their laptop to a production-grade deployment on AWS in minutes. They don’t need to become experts in VPC design, load balancer tuning, or database maintenance – yet the resulting infrastructure is akin to what an AWS Solutions Architect would design for high quality and reliability. Defang effectively turns the cloud into a friendlier place for developers: the focus remains on the application’s architecture and code, rather than the arcane incantations of cloud setup. As such, Defang is poised to become a key accelerator for cloud applications on AWS. Organizations can enjoy the innovation and scalability of AWS without the traditional complexity and potential missteps of configuring everything manually. In effect, Defang + AWS makes “docker compose up” for the cloud a reality – enabling simple, secure, and scalable deployments that are accessible to every development team.