Jaime Elso

AWS Solutions Architect

Design and implementation of a Zero Trust architecture for Elastic Container Service

Zero Trust is a network security philosophy based on the premise of not trusting any network element, whether external or internal. Inspired by this approach, applied not only to the network but to every element of the infrastructure, I set out to design and implement an architecture as restrictive, private, and secure as possible for deploying containers orchestrated by Amazon Elastic Container Service (ECS).

Throughout this article, we will walk through each step I took in creating this architecture, breaking down and explaining each element in sequence: the design and segmentation of the Amazon Virtual Private Cloud (VPC), routing, Security Groups (SG) and Network ACLs, the endpoints used to communicate with Amazon Elastic Container Registry (ECR), the configuration of the necessary roles and policies, and finally the deployment of a load balancer along with the containers in ECS.

Project Overview

To keep things simple while putting the conceptual design developed in this post into practice, I have created a basic web server using Bun. This server has one fundamental function: responding with "Hello World!". The underlying goal is to containerize this application and deploy it redundantly across two private subnets within an ECS cluster. The corresponding Docker image will be stored in ECR, and to track the logs generated by our server, we will use Amazon CloudWatch. The following diagram provides a visual representation of the final architecture.

Zero Trust infrastructure diagram

Infrastructure as Code (IaC) for Zero Trust architecture

In the project's GitHub repository, you will find everything needed to implement the Zero Trust architecture described above: the Infrastructure as Code (IaC) that deploys the infrastructure on AWS, as well as the web server and the files required to build the Docker container image used in this example. The IaC is organized into six AWS CloudFormation Stacks (Networking, Permissions, Security, Endpoints, Balancer, and Containers), along with a master Stack that automates the deployment of the other six.

My Zero Trust VPC concept

A VPC in Amazon Web Services (AWS) is a virtualized network environment that enables users to create a private and isolated network in the cloud. It allows defining and controlling the network topology, configuring security rules, and securely connecting AWS resources. A VPC is a regional service that spans across all existing Availability Zones (AZs) in the region where it has been deployed.

Deciding on Classless Inter-Domain Routing

When creating a new VPC, you must specify the Classless Inter-Domain Routing (CIDR) block for the network it will use. CIDR is a standard used in networking for IP address assignment, represented in the form prefix/prefix length. The prefix is the network's IP address, and the prefix length indicates how many bits of the address identify the network, leaving the remaining bits to identify hosts.

The Request for Comments (RFC) 1918 standard defines a set of IP addresses reserved for private use in local networks. The main goal is to avoid IP address conflicts and allow organizations to implement private networks without the need to request unique public IP addresses from Internet registrars. The ranges defined in this standard are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16.

In AWS, the allowed block size ranges from /16 to /28. While it's possible to choose a range outside of RFC 1918, it is not considered a best practice. It's essential to note that once the VPC is created, its CIDR cannot be modified. For the network design in this article, I will use the range 10.0.0.0/16, which provides ample room for subnet segmentation within the VPC.
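For reference, here is a minimal CloudFormation sketch of how the Networking stack might declare this VPC. This and the following snippets are illustrative sketches that would live under a template's Resources section; the logical names are my own, not necessarily those of the repository. DNS support and hostnames are enabled because the Interface Endpoints used later rely on private DNS:

ZeroTrustVpc:
  Type: AWS::EC2::VPC
  Properties:
    CidrBlock: 10.0.0.0/16
    EnableDnsSupport: true       # required for endpoint private DNS resolution
    EnableDnsHostnames: true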

Network Segmentation

A subnet is a subdivision of a VPC that is given a smaller network range within the VPC's network range. Subnets are located within a specific Availability Zone in the VPC region and allow us to deploy AWS services within them.

To design the subnet segmentation within our VPC, it is a best practice to isolate each layer of our infrastructure in a separate subnet. This approach enables the application of specific network security measures for each layer, provides more precise control over access to services, and facilitates environment management.

Zero Trust infrastructure diagram

Public subnets are those with internet access and are reachable from the internet. In these subnets, we will deploy the necessary network elements to establish an entry point for our application. On the other hand, private subnets are not capable of establishing a connection to the internet on their own, and they are not reachable from the internet without an intermediary in a public subnet. We will use private subnets to deploy the application's containers.

As shown in the diagram, these subnets reside within a single Availability Zone. This means that if that AZ experiences issues, our application is susceptible to downtime. Everything fails, all the time: prepare for failure, and nothing will fail. For this reason, we need to replicate our subnets in a different AZ, laying the foundation for deploying our application with high availability.

Zero Trust infrastructure diagram
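A sketch of how two of the four subnets might be declared, assuming the CIDR assignments used in the tables later in this article (10.0.0.0/24 and 10.0.1.0/24 for the public subnets, 10.0.2.0/24 and 10.0.3.0/24 for the private ones):

PublicSubnetA:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref ZeroTrustVpc
    CidrBlock: 10.0.0.0/24
    AvailabilityZone: !Select [0, !GetAZs '']
PrivateSubnetA:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref ZeroTrustVpc
    CidrBlock: 10.0.2.0/24
    AvailabilityZone: !Select [0, !GetAZs '']
# PublicSubnetB (10.0.1.0/24) and PrivateSubnetB (10.0.3.0/24) mirror these
# resources in the second AZ: !Select [1, !GetAZs '']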

Internet Gateway

Once the network is segmented and defined, we will add the necessary services to make the application accessible from the internet for users. The first thing needed is an Internet Gateway (IG), which is a horizontally scalable, redundant, and highly available component of the VPC that enables communication between the VPC and the internet. The IG allows resources in public subnets to connect to the internet if the resource has a public IPv4 address. Similarly, internet resources can initiate a connection with resources in the subnet using the public IPv4 address.

Zero Trust infrastructure diagram
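In CloudFormation, the Internet Gateway and its attachment to the VPC are two separate resources; a minimal sketch:

InternetGateway:
  Type: AWS::EC2::InternetGateway
VpcGatewayAttachment:
  Type: AWS::EC2::VPCGatewayAttachment
  Properties:
    VpcId: !Ref ZeroTrustVpc
    InternetGatewayId: !Ref InternetGateway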

Application Load Balancer

Since the containers are deployed in private subnets and are therefore not reachable from the internet, we need an intermediate element in our public subnets that does have internet access. We also need to balance incoming requests among the existing containers, especially when they are redundant, or route requests based on the desired destination.

The Application Load Balancer (ALB) is a layer 7 load balancing service that distributes incoming traffic among application targets. Operating at the application layer, the ALB can route requests based on content, such as URL paths or HTTP headers. It provides high availability and scalability and allows efficient traffic distribution among multiple destinations, improving the performance and reliability of applications in the cloud.

Zero Trust infrastructure diagram

When creating an ALB, you must specify two or more public subnets to associate with it. These subnets must have a netmask of /27 or larger and at least 8 available IP addresses each. In these subnets, an Elastic Network Interface (ENI) will be deployed, allowing the ALB to receive traffic from the internet and route it to our resources in the private subnets.
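A sketch of an internet-facing ALB spanning both public subnets; AlbSecurityGroup refers to the Security Group defined in the security section below:

WebAlb:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    Type: application
    Scheme: internet-facing
    Subnets:
      - !Ref PublicSubnetA
      - !Ref PublicSubnetB
    SecurityGroups:
      - !Ref AlbSecurityGroup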

Private traffic with VPC Endpoint

Having an ALB lets us direct incoming traffic to the ECS containers. However, containers deployed in private subnets cannot establish connections to the outside world, not even with AWS services like S3 or ECR, whose APIs live at endpoints outside the VPC. Deploying a NAT Gateway in the public subnets would solve this, but if you only need connectivity to AWS service APIs and require no other internet access, a more efficient alternative is the use of VPC Endpoints.

VPC Endpoints are a highly available and scalable technology that allows for private connectivity between a VPC and AWS services as if they were in your own network. This option is more cost-effective in terms of both implementation and the cost per processed GB compared to a NAT Gateway. Within VPC Endpoints, two types are distinguished: Interface Endpoint and Gateway Endpoint.

Interface Endpoint

Interface Endpoints work by adding an Elastic Network Interface to the specified subnets within your VPC. Once deployed, each of these subnets contains a private IP address within its CIDR to which you can send HTTPS requests, connecting you to the AWS service endpoint.

In the case of this architecture, we need the containers running in both private subnets to access ECR to pull Docker images. ECR has two endpoints, both necessary to use the service correctly: com.amazonaws.{REGION}.ecr.api, which serves calls to the ECR API (such as GetAuthorizationToken), and com.amazonaws.{REGION}.ecr.dkr, which serves the Docker registry operations used to push and pull image layers.

A best practice is to deploy Interface Endpoints in both private subnets instead of just one. This achieves high availability since the Endpoint is independently accessible from each AZ. Additionally, we avoid inter-AZ traffic, resulting in cost savings for data transfer.
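A sketch of the ecr.api Interface Endpoint deployed into both private subnets (the ecr.dkr and logs endpoints are declared the same way); private DNS is enabled so the default ECR hostnames resolve to the endpoint's ENIs:

EcrApiEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    VpcEndpointType: Interface
    ServiceName: !Sub com.amazonaws.${AWS::Region}.ecr.api
    VpcId: !Ref ZeroTrustVpc
    SubnetIds:
      - !Ref PrivateSubnetA
      - !Ref PrivateSubnetB
    PrivateDnsEnabled: true
    SecurityGroupIds:
      - !Ref EndpointSecurityGroup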

To manage access to the endpoints, we can assign them a resource-based policy. This allows us, for example, to limit the roles that can make calls through them or restrict specific actions within the API. To limit access to ECR only to the Task Execution Role, an IAM role used by ECS agents to make AWS API calls on behalf of the user, we will apply the following policy:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowPull",
    "Principal": {
      "AWS": "arn:aws:iam::{ACCOUNT_ID}:role/TaskExecutionRole"
    },
    "Action": [
      "ecr:BatchGetImage",
      "ecr:GetDownloadUrlForLayer",
      "ecr:GetAuthorizationToken"
    ],
    "Effect": "Allow",
    "Resource": "*"
  }]
}

In addition to these, a third endpoint is necessary to establish a connection with Amazon CloudWatch: com.amazonaws.{REGION}.logs. This service will be used by containers to store logs generated by applications. To restrict who has the ability to write logs to CloudWatch through the endpoint and determine in which specific log group, we will apply the following policy to the endpoint:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PutOnly",
    "Principal": {
      "AWS": "arn:aws:iam::{ACCOUNT_ID}:role/TaskExecutionRole"
    },
    "Action": [
      "logs:CreateLogStream",
      "logs:PutLogEvents"
    ],
    "Effect": "Allow",
    "Resource": [
      "arn:aws:logs:{REGION}:{ACCOUNT_ID}:log-group:{GROUP_NAME}:*"
    ]
  }]
}
Zero Trust infrastructure diagram

Gateway Endpoint

Unlike Interface Endpoints, Gateway Endpoints do not add any ENIs to the subnets. Instead, they add a rule to the route table of the specified subnets. Since there is no ENI, communication with the AWS service is not done by targeting a private IP within the subnet's CIDR but by calling the public IPs of the service. The rule added to the route table uses an AWS-managed prefix list as its destination; this list contains all the CIDR ranges from which S3 provides service in the region where it is deployed.

One surprise I encountered when implementing the infrastructure is that ECR uses S3 behind the scenes to store the layers of Docker images. Therefore, for ECS to pull an image from ECR, the private subnets need access to the bucket prod-{REGION}-starport-layer-bucket. This bucket is transparent to us: we have no access to it, nor does it appear in the list of buckets in our account.

To connect to S3, we can use either an Interface or a Gateway Endpoint. There are some subtle differences, but the most important one is that Gateway Endpoints are free. We will configure the endpoint to connect to com.amazonaws.{REGION}.s3 and apply the following endpoint policy to restrict access to the specific bucket used by ECR.

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "Access-to-specific-bucket-only",
    "Principal": "*",
    "Action": [
      "s3:GetObject"
    ],
    "Effect": "Allow",
    "Resource": ["arn:aws:s3:::prod-{REGION}-starport-layer-bucket/*"]
  }]
}

The current policy in place imposes restrictions on accessing the prod-{REGION}-starport-layer-bucket, allowing only the GetObject action. In an effort to further limit access, I sought to specify the Principal to exclusively permit ECS to access the endpoint. I explored options such as specifying TaskExecutionRole or TaskRole as the Principal, but this configuration prevented containers from pulling Docker images.

In my attempt to identify S3 calls, I reviewed AWS CloudTrail logs, but the relevant calls were not recorded. The option to enable S3 Access Logs is not available, as AWS does not grant us access to this specific bucket. Ultimately, I concluded that AWS transparently performs these calls for the user, hindering me from applying additional restrictions to the policy.

Despite the absence of a specific Principal, I verified that other resources in the subnets, excluding ECS containers, cannot access the bucket. This is due to AWS implementing a Bucket Policy that restricts access solely to calls originating from ECR operations.
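Putting this together, a sketch of the Gateway Endpoint; unlike Interface Endpoints, it attaches to route tables rather than subnets (PrivateRouteTable is assumed to be the route table shared by the private subnets):

S3GatewayEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    VpcEndpointType: Gateway
    ServiceName: !Sub com.amazonaws.${AWS::Region}.s3
    VpcId: !Ref ZeroTrustVpc
    RouteTableIds:
      - !Ref PrivateRouteTable
    # PolicyDocument would carry the bucket-scoped policy shown above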

Zero Trust infrastructure diagram

Traffic management with Route Tables

Route tables play a crucial role in controlling how data flows within a VPC. It is essential to establish rules dictating the path network packets should take, specifying destinations and their routing.

For the two public subnets, two routes must be configured. The first route should direct all traffic with the destination 0.0.0.0/0 to the Internet Gateway created in the VPC. This configuration enables internet access for our public subnets. The second rule ensures that all traffic with the destination 10.0.0.0/16 (the VPC range) is internally routed within the VPC.
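A sketch of the public route table; note that AWS creates the local 10.0.0.0/16 route automatically, so only the internet route has to be declared:

PublicRouteTable:
  Type: AWS::EC2::RouteTable
  Properties:
    VpcId: !Ref ZeroTrustVpc
PublicDefaultRoute:
  Type: AWS::EC2::Route
  DependsOn: VpcGatewayAttachment   # the IG must be attached first
  Properties:
    RouteTableId: !Ref PublicRouteTable
    DestinationCidrBlock: 0.0.0.0/0
    GatewayId: !Ref InternetGateway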

Regarding the private subnets where containers are deployed, in addition to the route for local traffic within the VPC, a rule for sending traffic to S3 through the Endpoint is added during Gateway Endpoint creation. The destination for this rule is an AWS Managed Prefix List containing all IP ranges where the S3 service can be accessed privately.

Public Route Table
Destination                   Target
0.0.0.0/0                     internet_gateway_id
10.0.0.0/16                   local

Private Route Table
Destination                   Target
com.amazonaws.region.s3       gateway_endpoint_id
10.0.0.0/16                   local

Elastic Container Service

ECS is a container orchestration service developed by AWS to simplify the deployment, management, and scalability of containerized applications using Docker. While ECS shares the same overarching goal as Kubernetes, it stands out for being considerably simpler and fully managed by AWS.

To deploy the example container I have created, the first step is to create a new cluster, a logical grouping of computing resources where tasks and services can be executed. To do this, we need to specify a capacity provider. In this case, we will choose Fargate.

ECS Fargate provides a serverless deployment option, allowing containers to run without the need to manage underlying EC2 instances. Instead of worrying about infrastructure, developers can focus solely on their containers and resource configurations, as AWS automatically handles capacity, scalability, and infrastructure management.

Additionally, a task definition must be created, specifying how the application runs, including details such as the container image stored in ECR, memory and CPU allocation, environment variables, and other essential parameters. A task is a running instance of a task definition and can consist of one or more containers that share resources and run together.
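A minimal Fargate task definition sketch for the web server; the CPU, memory, and log group values are illustrative, not taken from the repository:

WebServerTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    RequiresCompatibilities: [FARGATE]
    NetworkMode: awsvpc
    Cpu: '256'
    Memory: '512'
    ExecutionRoleArn: !GetAtt TaskExecutionRole.Arn
    TaskRoleArn: !GetAtt TaskRole.Arn
    ContainerDefinitions:
      - Name: webserver
        Image: !Sub '${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/webserver:latest'
        PortMappings:
          - ContainerPort: 3000
        LogConfiguration:
          LogDriver: awslogs
          Options:
            awslogs-group: /ecs/webserver     # illustrative group name
            awslogs-region: !Ref AWS::Region
            awslogs-stream-prefix: webserver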

Finally, a new service is created, an abstraction that allows defining and running tasks in a durable and continuous manner. Services enable maintaining a specific number of tasks in execution, automatically managing scalability, fault recovery, and load distribution.
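And a sketch of the service keeping two tasks running across the private subnets, registered behind the ALB (WebTargetGroup is the target group declared with the listener later in this article):

WebService:
  Type: AWS::ECS::Service
  Properties:
    Cluster: !Ref EcsCluster
    LaunchType: FARGATE
    DesiredCount: 2                  # one task per private subnet
    TaskDefinition: !Ref WebServerTaskDefinition
    NetworkConfiguration:
      AwsvpcConfiguration:
        AssignPublicIp: DISABLED     # tasks stay private
        Subnets: [!Ref PrivateSubnetA, !Ref PrivateSubnetB]
        SecurityGroups: [!Ref TaskSecurityGroup]
    LoadBalancers:
      - ContainerName: webserver
        ContainerPort: 3000
        TargetGroupArn: !Ref WebTargetGroup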

Zero Trust infrastructure diagram

Roles and Permissions

In an ECS environment, assigning roles and permissions is crucial for container management and task execution. There are two main roles that need to be defined: TaskExecutionRole and TaskRole.

The TaskExecutionRole is a role assumed by the ECS agent to launch and manage container tasks on our behalf, in our case pulling the image from ECR and sending logs to CloudWatch. Below is the policy assigned to this role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PullDockerImage",
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:GetAuthorizationToken"
      ],
      "Resource": "*"
    },
    {
      "Sid": "WriteLogs",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:{REGION}:{ACCOUNT_ID}:log-group:{GROUP_NAME}:*"
    }
  ]
}

The TaskRole is intended to be directly used by applications running within the containers. This allows us to define, for each container in our application, the permissions it has to access other resources in AWS. In our example, as our application does not need to access any resources, we will create the role without any associated policies.
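Both roles share the same trust relationship: they must be assumable by the ECS tasks service principal. A sketch of the empty TaskRole:

TaskRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service: ecs-tasks.amazonaws.com
          Action: sts:AssumeRole
    # No policies attached: the application accesses no other AWS resources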

Zero Trust infrastructure diagram

Security and Isolation

The fundamental concept of Zero Trust involves complete distrust of everything. Therefore, when designing the network infrastructure, my approach was to restrict and authorize traffic only when strictly necessary for the proper functioning of the application. To manage the approval or denial of traffic in a VPC, we have two levels where these rules can be established: Security Groups and Network ACL.

Zero Trust infrastructure diagram

Security Groups

A Security Group is a security mechanism at the ENI (Elastic Network Interface) level. It functions as a virtual firewall that controls allowed network traffic to and from the ENIs associated with it. These groups contain rules specifying the types of permitted traffic based on IP addresses, ports, and protocols. It is crucial to understand that they are Stateful, meaning that if we configure a rule to allow outgoing traffic, we automatically authorize the incoming traffic associated with that established connection. Similarly, if we allow incoming traffic, we implicitly permit outgoing traffic that our instance generates in response.

Application Load Balancer Security Group
Ingress:  Protocol tcp | Port 443  | Source 0.0.0.0/0
Egress:   Protocol tcp | Port 3000 | Destination 10.0.2.0/24
          Protocol tcp | Port 3000 | Destination 10.0.3.0/24

Task Security Group
Ingress:  Protocol tcp | Port 3000 | Source alb_sg_id
Egress:   Protocol tcp | Port 443  | Destination 10.0.2.0/24
          Protocol tcp | Port 443  | Destination 10.0.3.0/24

Interface Endpoint Security Group
Ingress:  Protocol tcp | Port 443  | Source 10.0.2.0/24
          Protocol tcp | Port 443  | Source 10.0.3.0/24
Egress:   none

In our architecture, we have identified three distinct Security Groups. The first one will be associated with the ENIs of the public subnets belonging to the Application Load Balancer. This Security Group is tasked with blocking all traffic except incoming traffic on port 443 from any source. This setup allows any user to access our web server. Regarding outgoing traffic, it should permit access to port 3000 on the private subnets where the containers are deployed.

The second security group is intended for ECS tasks. These should allow incoming traffic from the load balancer's security group on the port where the container exposes the web server, in this case, port 3000. Additionally, it should allow outgoing traffic on port 443 to the network range of the private subnets. This configuration is implemented to enable tasks to make calls through the ENIs of the endpoints.

Finally, we have the security group for the endpoints. These should simply allow traffic on port 443 from the private subnets where they have been deployed. This allows tasks to make calls through them, establishing effective communication in the architecture.
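A sketch of the task Security Group described above (the ALB and endpoint groups follow the same pattern):

TaskSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: ECS task rules - ALB in, endpoints out
    VpcId: !Ref ZeroTrustVpc
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 3000
        ToPort: 3000
        SourceSecurityGroupId: !Ref AlbSecurityGroup
    SecurityGroupEgress:
      - IpProtocol: tcp
        FromPort: 443
        ToPort: 443
        CidrIp: 10.0.2.0/24     # endpoint ENIs in private subnet A
      - IpProtocol: tcp
        FromPort: 443
        ToPort: 443
        CidrIp: 10.0.3.0/24     # endpoint ENIs in private subnet B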

Network ACLs

A Network ACL is a firewall operating at the subnet level. It uses rules to specify what type of traffic is allowed or blocked based on IP addresses, port ranges, and specific protocols. Unlike Security Groups, Network ACLs are stateless, meaning they do not maintain information about the state of connections. Each rule in a Network ACL is applied independently, without considering the state of previous connections. It is crucial to understand that the rule configuration in a Network ACL impacts incoming and outgoing traffic separately, without automatically establishing mutual permissions as in the case of Security Groups.

Establishing a Network ACL policy to restrict traffic is not redundant, even if it has already been done through Security Groups. Implementing security measures at different layers of an architecture provides a more robust protection against potential attacks. By applying Network ACLs at the subnet level, unwanted traffic is blocked before reaching specific ENIs, thereby preventing network-level attacks such as port scanning or DDoS before reaching individual elements of the architecture. In situations where Security Group rules fail or are accidentally modified, ACLs prevent unauthorized access and offer centralized traffic management at the subnet level, facilitating consistent enforcement of security policies throughout the network.

Public Subnet Network ACL (Ingress)
#    Protocol  Port         Source            Action
100  TCP       443          0.0.0.0/0         Allow
110  TCP       1024-65535   10.0.2.0/24       Allow
120  TCP       1024-65535   10.0.3.0/24       Allow
*    All       All          0.0.0.0/0         Deny

Public Subnet Network ACL (Egress)
#    Protocol  Port         Destination       Action
100  TCP       1024-65535   0.0.0.0/0         Allow
110  TCP       3000         10.0.2.0/24       Allow
120  TCP       3000         10.0.3.0/24       Allow
*    All       All          0.0.0.0/0         Deny

Private Subnet Network ACL (Ingress)
#    Protocol  Port         Source            Action
100  TCP       3000         10.0.0.0/24       Allow
110  TCP       3000         10.0.1.0/24       Allow
120  TCP       1024-65535   18.34.240.0/22    Allow
130  TCP       1024-65535   18.34.32.0/20     Allow
140  TCP       1024-65535   3.5.64.0/21       Allow
150  TCP       1024-65535   3.5.72.0/23       Allow
160  TCP       1024-65535   52.218.0.0/17     Allow
170  TCP       1024-65535   52.92.0.0/17      Allow
*    All       All          0.0.0.0/0         Deny

Private Subnet Network ACL (Egress)
#    Protocol  Port         Destination       Action
100  TCP       1024-65535   10.0.0.0/24       Allow
110  TCP       1024-65535   10.0.1.0/24       Allow
120  TCP       443          18.34.240.0/22    Allow
130  TCP       443          18.34.32.0/20     Allow
140  TCP       443          3.5.64.0/21       Allow
150  TCP       443          3.5.72.0/23       Allow
160  TCP       443          52.218.0.0/17     Allow
170  TCP       443          52.92.0.0/17      Allow
*    All       All          0.0.0.0/0         Deny

Since Network ACLs are stateless, we know which ports in our subnets will receive traffic, but not the ports from which responses will be sent. Ephemeral ports (1024-65535) are temporary ports dynamically assigned by the operating system for outgoing network connections, so we have no choice but to allow the ephemeral range from the desired sources and to the desired destinations.

As mentioned earlier, when implementing a Gateway Endpoint, a route is added to the route table of the corresponding subnet, with a Managed Prefix List covering all network ranges of the service (S3 in this case) as its destination. However, Network ACLs cannot use a Managed Prefix List directly as a source or destination. Therefore, it is necessary to review the addresses present in the list and establish a rule for each of them.

In public subnets, incoming traffic on port 443 must be allowed from any source, as well as ephemeral ports from private subnets. Additionally, outgoing traffic on ephemeral ports to the internet and from port 3000 to private subnets should be allowed.

On the other hand, private subnets should allow incoming traffic on port 3000 from public subnets and ephemeral ports from all address ranges of the S3 service in the specific region. They should also permit outgoing traffic on ephemeral ports to the public subnet and port 443 to S3 ranges.
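Each row in the tables above maps to one AWS::EC2::NetworkAclEntry resource. As an illustration, the private subnets' rule 100, allowing port 3000 from the first public subnet:

PrivateIngress3000FromPublicA:
  Type: AWS::EC2::NetworkAclEntry
  Properties:
    NetworkAclId: !Ref PrivateNetworkAcl
    RuleNumber: 100
    Egress: false           # ingress rule
    Protocol: 6             # TCP
    RuleAction: allow
    CidrBlock: 10.0.0.0/24
    PortRange:
      From: 3000
      To: 3000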

End-to-End Encryption

Once we have set up the infrastructure on AWS, we will explore how to implement the Zero Trust philosophy in communications both with application users and among the components of the architecture.

An initial option is to maintain communications using the HTTP protocol between users and the ALB, as well as between the ALB and the ECS container. However, this choice does not seem to be the most secure, as HTTP transmits data without encryption, exposing information to potential interceptions and compromising the confidentiality of sensitive data. Moreover, it lacks server authentication, making it susceptible to man-in-the-middle attacks and identity impersonation.

Diagram: users reach the ALB over HTTP on port 80, and the ALB forwards to the container over HTTP on port 80.

To ensure encryption of the connections between the user and the ALB, it is necessary to create or import an SSL certificate in the AWS Certificate Manager (ACM). This certificate will be used by the balancer to establish the SSL tunnel with the users. When using an ALB and HTTPS communications, it is mandatory for the balancer to terminate the connection and decrypt the request, as it operates at layer 7 (application layer) and requires access to the content of the request to perform balancing based on that information.

Delegating the task of terminating SSL connections to the ALB offloads our applications, which are behind it, from the responsibility of carrying out this process. This way, requests from the ALB to the containers are made via HTTP, resulting in a more agile and lightweight communication for our applications.

Diagram: users reach the ALB over HTTPS on port 443, using a certificate from Certificate Manager; the ALB forwards to the container over HTTP on port 80.

Encrypting exclusively the communication between users and the ALB is a secure and appropriate scenario for the vast majority of situations. This is because the communication between the ALB and the containers takes place privately within our VPC, avoiding exposure to the internet. However, there are specific situations, driven by compliance requirements or the need to enhance security, where it is essential to implement end-to-end encryption to secure communications between the ALB and the containers.

To achieve this additional security, the ALB can establish HTTPS connections with the containers using certificates installed directly on the containers. It is relevant to note that the ALB does not validate these certificates, allowing the use of self-signed certificates or even those that have expired. Being within a VPC ensures that traffic between the ALB and the containers is authenticated at the packet level. Consequently, there is no risk of man-in-the-middle attacks or impersonation, even in cases where the certificates at the destinations are not valid.
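A sketch of the HTTPS listener and target group: the listener terminates TLS with the ACM certificate, and the target group forwards to the task ENIs over HTTPS on port 3000 (AcmCertificateArn is an assumed template parameter):

HttpsListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref WebAlb
    Port: 443
    Protocol: HTTPS
    Certificates:
      - CertificateArn: !Ref AcmCertificateArn
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref WebTargetGroup
WebTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    TargetType: ip          # Fargate tasks register by ENI IP
    Port: 3000
    Protocol: HTTPS         # end-to-end encryption to the containers
    VpcId: !Ref ZeroTrustVpc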

Diagram: users reach the ALB over HTTPS on port 443 with a Certificate Manager certificate; the ALB forwards to the container over HTTPS on port 443 against its self-signed certificate.
Zero Trust infrastructure diagram

Docker Image of a Web Server with Bun

As mentioned at the beginning of this article, I have configured a fairly simple web server using Bun, an alternative to Node.js for running JavaScript on the server side. This will allow us to containerize it and use it to test the infrastructure we have implemented on AWS.

const server = Bun.serve({
  port: 3000,
  fetch(req) {
    console.log('METHOD: ' + JSON.stringify(req.method));
    // Headers is not a plain object; convert it so it serializes properly
    console.log('HEADERS: ' + JSON.stringify(Object.fromEntries(req.headers)));
    let res = new Response("Hello World!");
    console.log('STATUS: ' + JSON.stringify(res.status));
    return res;
  },
  tls: {
    key: Bun.file("./key.pem"),
    cert: Bun.file("./cert.pem"),
    passphrase: "my-secret-passphrase",
  }
});

This server will respond with the message "Hello World!" for every request it receives. As mentioned earlier, we are using a self-signed certificate and private key to enable the server to function over HTTPS with the ALB. To generate them, the following OpenSSL commands can be executed:

# Generate an AES-256 encrypted RSA private key
openssl genpkey -algorithm RSA -out key.pem -aes256
# Create a self-signed certificate from the key
openssl req -new -key key.pem -out cert.pem -x509

To generate the Docker image and upload it to a repository in ECR for subsequent deployment on ECS, we will need a Dockerfile. A Dockerfile is a configuration file used in Docker to build container images, specifying the necessary steps to install and configure the application environment.

# Use the official bun image as the base image
FROM oven/bun
# Set the working directory to /webserver
WORKDIR /webserver
# Copy index.js and the SSL certificates to /webserver
COPY ./index.js /webserver
COPY ./key.pem /webserver
COPY ./cert.pem /webserver
# Expose port 3000 for the web server
EXPOSE 3000
# Start the web server
CMD ["bun", "index.js"]

It is necessary to create a new private repository in our AWS account in ECR. Once this step is completed, we can proceed to execute the following commands to build the Docker image and push it to the repository:

# Build the Docker image from the current context
docker build -t webserver .
# Tag the local image with the AWS ECR repository address
docker tag webserver:latest ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com/REPO_NAME:latest
# Get the AWS ECR access token to authenticate with the repository
aws ecr get-login-password --region REGION | docker login --username AWS --password-stdin ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com
# Push the tagged image to the AWS ECR repository
docker push ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com/REPO_NAME:latest

Conclusion

This proposed architecture, following the Zero Trust approach and implemented in AWS, provides a robust framework for the secure deployment of containers. It adheres to the fundamental principles of zero trust, establishing a high standard of security in the infrastructure and ensuring strong protection against potential attacks. This strategy not only focuses on perimeter protection but goes beyond, applying security measures at every layer of the infrastructure.

Network segmentation into public and private subnets, combined with the application of security rules in Security Groups and Network ACLs, allows us to isolate the components of the architecture and restrict access to resources only to authorized users and processes. The use of VPC Endpoints for private connections to AWS services such as ECR and CloudWatch prevents the exposure of these resources to the internet. Implementing end-to-end encryption in communications between users and the ALB, as well as between the ALB and containers, ensures the confidentiality of transmitted data.

Thanks for reading! Catch you in the next one.