
Reference Architecture Guide

for AWS
RELEASE 4 | FEBRUARY 2019
Table of Contents

Purpose of This Guide ........................................................................................................................................................ 3
Audience ................................................................................................................................................................................ 3
Related Documentation ..................................................................................................................................................... 4
Introduction .......................................................................................................................................................................... 5
Amazon Web Services Concepts and Services ............................................................................................................ 7
AWS Top Level Concepts .................................................................................................................................................. 7
Virtual Network ................................................................................................................................................................. 10
Virtual Compute ................................................................................................................................................................ 14
AWS Virtual Network Access Methods ....................................................................................................................... 16
AWS VPC Peering ............................................................................................................................................................. 20
AWS Resiliency .................................................................................................................................................................. 23
Palo Alto Networks Design Details: VM-Series Firewall on AWS ........................................................................ 26
VM-Series on AWS Models ............................................................................................................................................ 26
VM-Series Licensing ......................................................................................................................................................... 28
VM-Series on AWS Integration ..................................................................................................................................... 29
Managing AWS Deployments with Panorama ........................................................................................................... 31
Security Policy Automation with VM Monitoring ..................................................................................................... 34
RedLock Infrastructure Protection for AWS ............................................................................................................... 35
Networking in the VPC .................................................................................................................................................... 37
AWS Traffic Flows ............................................................................................................................................................. 41
Design Models ................................................................................................................................................................... 53
Choosing a Design Model ............................................................................................................................................... 53
Single VPC Model ............................................................................................................................................................. 54
Transit VPC Model ............................................................................................................................................................ 57
Summary ............................................................................................................................................................................. 72
What's New in This Release ........................................................................................................................................... 73

Preface

GUIDE TYPES
Overview guides provide high-level introductions to technologies or concepts.

Reference architecture guides provide an architectural overview for using Palo Alto Networks® technologies to provide
visibility, control, and protection to applications built in a specific environment. These guides are required reading prior
to using their companion deployment guides.

Deployment guides provide decision criteria for deployment scenarios, as well as procedures for combining Palo Alto
Networks technologies with third-party technologies in an integrated design.

DOCUMENT CONVENTIONS

Notes provide additional information.

Cautions warn about possible data loss, hardware damage, or compromise of security.

Blue text indicates a configuration variable for which you need to substitute the correct value for your environment.

In the IP box, enter 10.5.0.4/24, and then click OK.

Bold text denotes:

• Command-line commands;

# show device-group branch-offices

• User-interface elements.

In the Interface Type list, choose Layer 3.

• Navigational paths.

Navigate to Network > Virtual Routers.

• A value to be entered.

Enter the password admin.


Italic text denotes the introduction of important terminology.

An external dynamic list is a file hosted on an external web server so that the firewall can import objects.

Highlighted text denotes emphasis.

Total valid entries: 755

ABOUT PROCEDURES
These guides sometimes describe other companies' products. Although steps and screenshots were up to date at the
time of publication, those companies might have since changed their user interface, processes, or requirements.

GETTING THE LATEST VERSION OF GUIDES


We continually update reference architecture and deployment guides. You can access the latest version of this and all
guides at this location:

https://www.paloaltonetworks.com/referencearchitectures

FEEDBACK
To provide feedback about this guide, use the link on the back cover.

Purpose of This Guide


This guide describes reference architectures for securing network infrastructure using Palo Alto Networks® VM-Series
virtualized next-generation firewalls running PAN-OS® 8.1 within the Amazon Web Services (AWS) public cloud.

This guide:

• Provides an architectural overview for using VM-Series firewalls to provide visibility, control, and protection to
your applications built in an AWS Virtual Private Cloud (VPC) environment.

• Links the technical design aspects of AWS and the Palo Alto Networks solutions and then explores technical
design models. The design models begin with a single VPC for organizations getting started and scale to meet
enterprise-level operational environments.

• Provides a framework for architectural discussions between Palo Alto Networks and your organization.

• Is required reading prior to using the VM-Series on AWS deployment guides. The deployment guides provide
decision criteria for deployment scenarios, as well as procedures for enabling features of AWS and the Palo
Alto Networks VM-Series firewalls in order to achieve an integrated design.

AUDIENCE
This guide is written for technical readers, including system architects and design engineers, who want to deploy the
Palo Alto Networks VM-Series firewalls and Panorama™ within a public cloud datacenter infrastructure. It assumes the
reader is familiar with the basic concepts of applications, networking, virtualization, security, and high availability and
has a basic understanding of network and data center architectures.

A working knowledge of networking and policy in PAN-OS is required in order to be successful.


RELATED DOCUMENTATION
The following documents support this guide:

• Palo Alto Networks Security Operating Platform Overview—Introduces the various components of the Security
Operating Platform and describes the roles they can serve in various designs.

• Deployment Guide for AWS Single VPC—Details deployment scenarios for the Single Virtual Private Cloud design
model, which is well-suited for initial deployments and proof of concepts of Palo Alto Networks VM-Series on
AWS. This guide describes deploying VM-Series firewalls to provide visibility and protection for inbound and
outbound traffic for the VPC.

• Deployment Guide for AWS Transit VPC—Details the deployment of a central Transit VPC design model. This
model provides a hub-and-spoke design for centralized and scalable firewall services for resilient outbound and
east-west traffic flows. This guide describes deploying the VM-Series firewalls to provide protection and
visibility for traffic flowing through the Transit VPC.

• Deployment Guide for Panorama on AWS—Details the deployment of the Palo Alto Networks Panorama management
node in an AWS VPC. The guide includes the Panorama high-availability deployment option and Logging Service
setup.

Introduction
Organizations are adopting AWS to deploy applications and services on a public cloud infrastructure for a variety of
reasons including:

• Business agility—Infrastructure resources are available when and where they are needed, minimizing IT staffing
requirements and providing faster, predictable time-to-market. Virtualization in both public and private cloud
infrastructure has permitted IT to respond to business requirements within minutes instead of days or weeks.

• Better use of resources—Projects are more efficient and there are fewer operational issues, permitting
employees to spend more time adding business value. Employees have the resources they need, when they need
them, to bring value to the organization.

• Operational vs. capital expenditure—Costs are aligned directly with usage, providing a utility model for IT
infrastructure that requires little-to-no capital expense. Gone are the large capital expenditures and time delays
associated with building private data center infrastructure.

Although Infrastructure as a Service (IaaS) providers are responsible for ensuring the security and availability of their
infrastructure, ultimately, organizations are still responsible for the security of their applications and data. This
requirement does not differ from on-premises deployments. What does differ are the specific implementation details
of how to properly architect security technology in a public cloud environment such as AWS.

Figure 1 Security responsibility in the IaaS environment

Public cloud environments use a decentralized administration framework that often suffers from a corresponding lack
of centralized visibility. Additionally, compliance within these environments is complex to manage. Incident
response requires the ability to rapidly detect and respond to threats; however, public cloud capabilities are limited in
these areas.


The Palo Alto Networks VM-Series firewall deployed on AWS has the same features, benefits, and management as the
physical next-generation firewalls you have deployed elsewhere in your organization. The VM-Series can be integrated
with the Palo Alto Networks Security Operating Platform, which prevents successful cyberattacks by harnessing
analytics to automate routine tasks and enforcement.

RedLock® offers comprehensive and consistent cloud infrastructure protection that enables organizations to
effectively transition to the public cloud by managing security and compliance risks within their public cloud
infrastructure. RedLock makes cloud-computing assets harder to exploit through proactive security assessment and
configuration management based on industry best practices. RedLock enables organizations to implement continuous
monitoring of the AWS infrastructure and provides an essential, automated, up-to-date view of security posture that
they can use to make cost-effective, risk-based decisions about service configuration and the vulnerabilities inherent
in cloud deployments.

Organizations can also use RedLock to prevent the AWS infrastructure from falling out of compliance and to provide
visibility into the actual security posture of the cloud to avoid failed audits and subsequent fines associated with data
breaches and non-compliance.

Amazon Web Services Concepts and Services


To understand how to construct secure applications and services using Amazon Web Services, it is necessary to
understand the components of AWS infrastructure services. This section describes the components used by this
reference architecture.

AWS TOP LEVEL CONCEPTS


A cloud compute architecture requires consideration of where computing resources are located, how they can
communicate effectively, and how to provide effective isolation such that an outage in one part of the cloud does not
impact the entire cloud compute environment. The AWS global platform offers the ability to architect a cloud compute
environment with scale, resilience, and flexibility.

Regions
Regions enable you to place services and applications in proximity to your customers, as well as to meet governmental
regulatory requirements for customer data residency. Regions represent AWS physical data center locations distributed
around the globe. A region consists of several physically separate data center buildings within the same geographic
area, which provides fault tolerance and stability. Communications between regions can be facilitated over the AWS
backbone, which provides redundant encrypted transport, or over the public internet. When using the public internet
between regions, encrypted transport, such as IPsec tunnels, is required to ensure confidentiality of communications.

Region names begin with a two-letter geographic code (us—United States, eu—European Union, ap—Asia Pacific,
sa—South America, ca—Canada, cn—China), followed by a regional direction (east, west, central, south, northeast,
southeast), followed by a number to distinguish multiple regions in a similar geography. Examples of region
designations: us-west-2 (Oregon), ap-northeast-1 (Tokyo), and eu-central-1 (Frankfurt). Figure 2 shows approximate
locations of current AWS regions as of February 2019.


Figure 2 AWS global region map

Virtual Private Cloud


A VPC is a virtual network associated with your Amazon account and isolated from other AWS users. You create a VPC
within a region of your choice, and it spans multiple, isolated locations for that region, called Availability Zones. The VPC
closely resembles a traditional data center in that you can provision and launch virtual machines and services into the
VPC.

When creating a new VPC, you specify a Classless Inter-Domain Routing (CIDR) IPv4 address block and, optionally, an
Amazon-provided IPv6 CIDR address block. Your VPC can operate in dual-stack mode: your resources can communicate
over IPv4, IPv6, or both. IPv4 and IPv6 addresses are independent of each other; you must configure routing and
security in your VPC separately for IPv4 and IPv6. All resources provisioned within your VPC use addresses from this
CIDR block. This guide uses IPv4 addressing. The IPv4 address block can be any valid IPv4 address range with a
network prefix in the range of /16 (65,536 addresses) to /28 (16 addresses). The chosen IPv4 address block can then
be broken into subnets for the VPC. The actual number of host addresses available to you on any subnet is less than
the maximum because some addresses are reserved for AWS services. After the VPC is created, the IPv4 CIDR block
cannot be changed; however, secondary address ranges can be added. It's recommended that you choose a CIDR
prefix that exceeds your anticipated address space requirements for the lifetime of the VPC. There are no costs
associated with a VPC CIDR address block, and your VPC is only visible to you. Many other customer VPCs can use
the same CIDR block without issue.
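
The following is a minimal boto3 (Python SDK) sketch of creating a VPC with a primary IPv4 CIDR block and later
adding a secondary range; the region and CIDR values are example placeholders, not requirements of this design.

import boto3

# Create a VPC with an IPv4 CIDR block sized for anticipated growth.
ec2 = boto3.client("ec2", region_name="us-west-2")
vpc = ec2.create_vpc(CidrBlock="10.1.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# The primary CIDR cannot be changed later, but a secondary range can be added.
ec2.associate_vpc_cidr_block(VpcId=vpc_id, CidrBlock="10.2.0.0/16")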

The primary considerations when choosing a VPC CIDR block are the same as with any enterprise network:

• Anticipated number of IP addresses needed within a VPC

• IPv4 connectivity requirements across all VPCs

• IP address overlap in your entire organization—that is, between your AWS environment and your organization's
on-premises IP addressing or other IaaS clouds that you may use


Unlike enterprise networks that are mostly static and where network addressing changes can be difficult, cloud
infrastructure tends to be highly dynamic, which minimizes the need to anticipate growth requirements very far into
the future. Instead of upgrading the resources in a VPC, many cloud deployments build new resources for an upgrade
and then delete the old ones. Regardless of network address size, the general requirement for communications across
the enterprise network is for all network addresses to be unique; the same requirement applies across your VPCs.
When you create new VPCs, consider using unique network address space for each, to ensure maximum
communications compatibility between VPCs and back to your organization.

Most VPC IP address ranges fall within the private (non-publicly routable) IP address ranges specified in RFC 1918;
however, you can use publicly routable CIDR blocks for your VPC. Regardless of the IP address range of your VPC,
AWS does not support direct access to the internet from your VPC's CIDR block, including a publicly routable CIDR
block. You must set up internet access through a gateway service from AWS or a VPN connection to your
organization's network.

Availability Zones
Availability Zones provide a logical data center within a region. They consist of one or more physical data centers, are
interconnected with low-latency network links, and have separate cooling and power. No two Availability Zones share
a common facility. A further abstraction is that the mapping of Availability Zones to physical data centers can differ
between AWS accounts. Availability Zones provide inherent fault tolerance, and well-architected applications are
distributed across multiple Availability Zones within a region. Availability Zones are designated by a letter (a, b, c)
appended to the region name.

Figure 3 Example of Availability Zones within a region


VIRTUAL NETWORK
When you create a VPC, you must assign an address block for all of the resources located in the VPC. You can then
customize the IP address space in your VPC to meet your needs on a per-subnet basis. All networking under your
control within the VPC is at Layer 3 and above. Any Layer 1 or Layer 2 services are for AWS network operation and
cannot be customized by the customer. This means that a machine cluster whose members need to communicate by
using services such as multicast must use IP-based multicast and not a Layer 2–based operation.

Subnets
A subnet identifies a portion of its parent VPC CIDR block as belonging to an Availability Zone. A subnet is unique to
an Availability Zone and cannot span Availability Zones, and an Availability Zone can have many subnets. At least one
subnet is required for each Availability Zone desired within a VPC. The Availability Zone of newly created instances and
network interfaces is designated by the subnet with which they are associated at the time of creation. A subnet can be
as large as the VPC CIDR block (a VPC with one subnet and one Availability Zone) and as small as a /28 prefix length.
Subnets within a single VPC cannot overlap.
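
The following is a minimal boto3 sketch of carving per-Availability Zone subnets out of a VPC CIDR block; the VPC ID,
subnet CIDRs, and zone names are example placeholders loosely matching Figure 4.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
vpc_id = "vpc-0123456789abcdef0"  # placeholder

# One subnet per Availability Zone; a subnet cannot span zones.
for cidr, zone in [("10.1.1.0/24", "us-west-2a"),
                   ("10.1.3.0/24", "us-west-2b"),
                   ("10.1.4.0/24", "us-west-2c")]:
    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=zone)
    print(subnet["Subnet"]["SubnetId"], zone)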

Subnets are either public subnets, which means they are associated with a route table that has internet connectivity via
an internet gateway (IGW), or they are private subnets that have no route to the internet. Newly created subnets are
associated with the main route table of your VPC. In Figure 4, subnets 1 and 2 are public subnets, and subnets 3 and 4
are private subnets.

Route Tables
Route tables provide source-based control of Layer 3 forwarding within a VPC environment, which differs from
traditional networking, where routing information is bidirectional and might lead to asymmetric routing paths. Subnets
are associated with route tables, and subnets receive their Layer 3 forwarding policy from their associated route table.
A route table can have many subnets attached, but a subnet can only be attached to a single route table. All route
tables contain an entry for the entire VPC CIDR in which they reside; the implication is that any host (instance within
your VPC) has direct Layer 3 reachability to any other host within the same VPC. This behavior has implications for
network segmentation because route tables cannot contain more specific routes than the VPC CIDR block. Any
instance within a VPC is able to communicate directly with any other instance, and traditional network segmentation
by subnets is not an option. An instance references the route table associated with its subnet for the default gateway
but only for destinations outside the VPC. Host routing changes on instances are not necessary to direct traffic to a
default gateway, because this is part of route table configuration. Routes external to your VPC can have a destination
that directs traffic to a gateway or another instance.

Route tables can contain dynamic routing information learned from Border Gateway Protocol (BGP). Routes learned
dynamically show in a route table as Propagated=YES.


A cursory review of route tables might give the impression of VRF-like functionality, but this is not the case. All route
tables contain a route to the entire VPC address space and do not permit segmentation of routing information less
than the entire VPC CIDR address space within a VPC. Traditional network devices must be configured with a default
gateway in order to provide a path outside the local network. Route tables provide a similar function without the need
to change instance configuration to redirect traffic. Route tables are used in these architectures to direct traffic to the
VM-Series firewall without modifying the instance configurations. Note in Figure 4 that both route tables 1 and 2
contain the entire VPC CIDR block entry, route table 1 has a default route pointing to an internet gateway (IGW), and
route table 2 has no default route. A route to 172.16.0.0/16 was learned via BGP, which is reachable via its virtual
private gateway (VGW). Subnets 1 and 2 are assigned to Availability Zone 2a, subnet 3 is assigned to Availability Zone
2b, and subnet 4 is assigned to Availability Zone 2c.

Figure 4 Subnets and route tables

VPC 10.1.0.0/16 in us-west-2 (Oregon), with an internet gateway (IGW) and a VPN gateway (VGW)

Route Table 1 (subnets 1 and 2, us-west-2a):
  Destination      Target   Status   Propagated
  10.1.0.0/16      local    Active   No
  0.0.0.0/0        igw      Active   No

Route Table 2 (subnet 3, us-west-2b, and subnet 4, us-west-2c):
  Destination      Target   Status   Propagated
  10.1.0.0/16      local    Active   No
  172.16.0.0/16    vgw      Active   Yes

Subnets: 10.1.1.0/24 (Subnet 1), 10.1.2.0/24 (Subnet 2), 10.1.3.0/24 (Subnet 3), 10.1.4.0/24 (Subnet 4)

There are limits to how many routes can be in a route table. The default limit of non-propagated routes in the table is
50, and this can be increased to a limit of 100; however, this might impact network performance. The limit of
propagated routes (BGP advertised into the VPC) is 100, and this cannot be increased. Use IP address summarization
upstream or a default route to address scenarios where more than 100 propagated routes might occur.
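
The following is a minimal boto3 sketch of creating a route table, adding a default route that points to an internet
gateway, and associating the table with a subnet (making that subnet public); all resource IDs are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
vpc_id, igw_id, subnet_id = "vpc-0123abcd", "igw-0123abcd", "subnet-0123abcd"  # placeholders

rt = ec2.create_route_table(VpcId=vpc_id)
rt_id = rt["RouteTable"]["RouteTableId"]

# The local route for the VPC CIDR is present automatically; add a default route.
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)

# Subnets receive their forwarding policy from their associated route table.
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)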


Network Access Control Lists


Every route table contains a route to the entire VPC, so to restrict traffic between subnets within your VPC, you must
use Network Access Control Lists (ACLs). Network ACLs provide Layer 4 control of source/destination IP addresses and
ports, inbound and outbound from subnets. When a VPC is created, there is a default network ACL associated with it,
which permits all traffic. Network ACLs do not provide control of traffic to Amazon reserved addresses (the first four
addresses of a subnet) nor of link-local networks (169.254.0.0/16), which are used for VPN tunnels.

Network ACLs:

• Are applied at subnet level.

• Have separate inbound and outbound policies.

• Have allow and deny action rules.

• Are stateless—bidirectional traffic must be permitted in both directions.

• Are order dependent—the first match rule (allow/deny) applies.

• Apply to all instances in the subnet.

• Do not filter traffic between instances within the same subnet.
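
The following is a minimal boto3 sketch of adding network ACL entries; because network ACLs are stateless, an
inbound rule needs a matching outbound rule for the return traffic. The ACL ID is a placeholder.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
acl_id = "acl-0123456789abcdef0"  # placeholder

# Allow inbound HTTPS to the subnet (rules are evaluated in rule-number order).
ec2.create_network_acl_entry(NetworkAclId=acl_id, RuleNumber=100, Protocol="6",
                             RuleAction="allow", Egress=False,
                             CidrBlock="0.0.0.0/0", PortRange={"From": 443, "To": 443})

# Allow the outbound return traffic on ephemeral ports.
ec2.create_network_acl_entry(NetworkAclId=acl_id, RuleNumber=100, Protocol="6",
                             RuleAction="allow", Egress=True,
                             CidrBlock="0.0.0.0/0", PortRange={"From": 1024, "To": 65535})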

Security Groups
Security groups (SGs) provide a Layer 4 stateful firewall for control of the source/destination IP addresses and ports that
are permitted. SGs are applied to an instance’s network interface. Up to five SGs can be associated with a network
interface. Amazon Machine Images (AMIs) available in the AWS Marketplace have a default SG associated with them.

Security groups:

• Apply to the instance network interface.

• Allow action rules only—there is no explicit deny action.

• Have separate inbound and outbound policies.

• Are stateful—return traffic associated with permitted traffic is also permitted.

• Are order independent.

• Enforce at Layer 3 and Layer 4 (standard protocol port numbers).

SGs define only network traffic that should be explicitly permitted. Any traffic not explicitly permitted is implicitly
denied. You cannot configure an explicit deny action. SGs have separate rules for inbound and outbound traffic from an
instance network interface. SGs are stateful, meaning that return traffic associated with permitted inbound or outbound
traffic rules is also permitted. SGs can enforce on any protocol that has a standard protocol number; enforcing traffic at
the application layer requires a next-generation firewall in the path. When you create a new SG, the default setting
contains no inbound rules, and the outbound rule permits all traffic. The effect of this default is to permit all network
traffic originating from your instance outbound, along with its associated return traffic, and to permit no external
traffic inbound to an instance. Figure 5 illustrates how network ACLs are applied to traffic between subnets and SGs
are applied to instance network interfaces.

Figure 5 Security groups and network access control lists

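
The following is a minimal boto3 sketch of creating a security group and permitting inbound HTTPS; because SGs are
stateful, no rule is needed for the return traffic, and the default outbound rule already permits all traffic. The group
name and VPC ID are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

sg = ec2.create_security_group(GroupName="web-servers", Description="Web tier",
                               VpcId="vpc-0123456789abcdef0")  # placeholder VPC ID
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                    "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}])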


Source and Destination Check


Source and destination checks are enabled by default on all network interfaces within a VPC. Source/Dest Check
validates whether traffic is destined to, or originates from, an instance and drops any traffic that does not meet this
validation. A network device, like a firewall, that must forward traffic between its network interfaces within a VPC
must have Source/Dest Check disabled on all interfaces that are capable of forwarding traffic.

Figure 6 Source and destination check

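
The following is a minimal boto3 sketch of disabling the source/destination check on the dataplane interfaces of a
forwarding instance such as a VM-Series firewall; the ENI IDs are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Disable the check on every interface that forwards traffic it does not own.
for eni_id in ["eni-0aaa1111bbb2222cc", "eni-0ddd3333eee4444ff"]:  # placeholders
    ec2.modify_network_interface_attribute(NetworkInterfaceId=eni_id,
                                           SourceDestCheck={"Value": False})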

VIRTUAL COMPUTE
Computing services and related resources like storage are virtual resources that can be created or destroyed on
demand. A virtual machine or virtual server is assigned to an Availability Zone in a VPC that is located within a region.
Where the virtual machine is located can be configured by the customer or chosen by AWS based on default
parameters. Most organizations plan where resources will be assigned according to an overall design of an application's
access and availability requirements.

Instance
An instance, also known as an Elastic Compute Cloud (EC2) instance, represents a virtual server that is deployed within
a VPC. Much like their physical counterparts, the virtual server on which your instance resides has various performance
characteristics: number of CPUs, memory, and number of interfaces. You have the option to optimize the instance type
based on your application performance requirements and cost. You can change the instance type for instances that are
in the stopped state. Hourly operating costs are based on instance type and region.

Amazon Machine Image


Amazon Machine Images are virtual machine images available in the AWS Marketplace for deployment as a VPC
instance. AWS publishes many AMIs that contain common software configurations for public use. In addition, members
of the AWS developer community have published their own custom AMIs. You can also create your own custom AMI
or AMIs; doing so enables you to quickly and easily start new instances that have everything you need.


Amazon EC2 Key Pairs


AWS uses public key authentication for access to new instances. A key pair can be used to access many instances, and
key pairs are only significant within a region. You can generate a key pair within a region and download the private key,
or you can import the public key in order to create a key pair. Either way, it's very important that the private key is
protected, because it's the only way to access your instance. When you create an instance, you specify the key pair to
be used, and you must confirm your possession of the matching key at the time of instance creation. New AMIs have
no default passwords, and you use the key pair to access your instance. AMIs have a default security group associated
with them and instructions for how to access the newly created instance.

Note

Ensure you save your key file in a safe place, because it is the only way to access your
instances, and there is no option to download it a second time. If you lose the file, the only
option is to destroy your instances and recreate them. Protect your keys; anyone with
the file can also access your instances.

Elastic IP Address
Elastic IP addresses are AWS-owned, public IP addresses that you allocate from a random pool and associate with an
instance network interface in your VPC. After they are associated, a network address translation is created in the VPC
IGW that provides a 1:1 translation between the public IP address and the network interface's VPC private IP address.
When an instance is in the stopped state, the Elastic IP address remains associated with it unless the address is
intentionally moved or the instance is deleted. When the instance is started again, the same Elastic IP address remains
associated. This permits predictable IP addresses for direct internet access to your instance network interface. An
Elastic IP address is assigned to a region and cannot be associated with an instance outside of that region.
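
The following is a minimal boto3 sketch of allocating an Elastic IP address and associating it with an instance network
interface, which creates the 1:1 NAT mapping in the internet gateway; the ENI ID is a placeholder.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(AllocationId=eip["AllocationId"],
                      NetworkInterfaceId="eni-0aaa1111bbb2222cc")  # placeholder
print("Public IP:", eip["PublicIp"])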

Elastic Network Interface


Elastic network interfaces (ENIs) are virtual network interfaces that are attached to instances and appear as network
interfaces to their instance host. When virtual compute instances are initialized, they always have a single Ethernet
interface (eth0) assigned and active. In situations where an instance needs multiple interfaces, like a firewall running on
the instance, ENIs are used to add more Ethernet interfaces. There is a maximum number of network interfaces (default
+ ENI) that can be assigned per instance type. As an example, the c4.xlarge instance commonly used for the Palo Alto
Networks VM-Series firewall supports four network interfaces.


An elastic network interface can include the following attributes:

• A primary private IPv4 address from the address range of your VPC

• One dynamic or elastic public IPv4 address per private IP address

• One or more IPv6 addresses

• One or more SGs

• Source/destination check

• A MAC address

Within the same Availability Zone, ENIs can be detached and reattached to another instance, up to the maximum
number of interfaces supported by the instance type, and the ENI characteristics are then associated with the newly
attached instance.
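
The following is a minimal boto3 sketch of creating an additional ENI and attaching it to a running instance as eth1
(DeviceIndex=1); the subnet, security group, and instance IDs are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

eni = ec2.create_network_interface(SubnetId="subnet-0123456789abcdef0",   # placeholder
                                   Groups=["sg-0123456789abcdef0"],       # placeholder
                                   Description="additional dataplane interface")
eni_id = eni["NetworkInterface"]["NetworkInterfaceId"]

ec2.attach_network_interface(NetworkInterfaceId=eni_id,
                             InstanceId="i-0123456789abcdef0",            # placeholder
                             DeviceIndex=1)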

AWS VIRTUAL NETWORK ACCESS METHODS


Access to and from the VPC environment in AWS is required for many use cases. Virtual application servers in the VPC
need to download their applications, patches, and data from existing customer data centers or vendor sites on the
internet. AWS-based application servers serving web content to users need inbound access from the internet or remote
private network access. There are many access methods available with AWS; the following sections cover the most
commonly used.

Internet Gateway
An internet gateway (IGW) provides a NAT mapping of an internal VPC IP address to a public IP address owned by AWS.
The IP address is mapped to an instance for inbound and/or outbound network access. The public IP address may be:

• Random and dynamic, assigned to an instance at startup and returned to the pool when the instance is stopped.
Every time the instance is turned up, a new address is assigned from a pool.

• Random and assigned to an instance as part of a process. This address stays with the instance, whether the
instance is stopped or running, unless it is intentionally assigned to another instance or deleted and returned to
the pool. This is known as an Elastic IP address.

This 1:1 private/public IP address mapping is part of the network interface configuration of each instance. After
creating a network interface, you can then associate a dynamic or an Elastic IP address to create the 1:1 NAT
association between the public and VPC private IP addresses. For internet connectivity to your VPC, there must be an
associated IGW. The VPC internet gateway is a horizontally scaled, redundant, and highly available VPC component.
After it is associated, the IGW resides in all Availability Zones of your VPC, available to map to any route table/subnet
where direct internet access is required.


NAT Gateway

In contrast to an IGW, which also does network address translation for connections to the internet, the network
address translation (NAT) gateway enables 1:many public-to-private IP address mapping. The NAT gateway is designed
to allow instances to connect out to the internet but prevents any internet hosts from initiating an inbound connection
to the instance.

Figure 7 IGW and VGW in route table

Destination      Target   Status   Propagated
10.1.0.0/16      local    Active   No
172.16.0.0/16    vgw      Active   Yes
0.0.0.0/0        igw      Active   No

Virtual Private Gateway


The VGW provides a VPN concentrator service attached to a VPC for the creation of IPsec tunnels. The tunnels provide
confidentiality of traffic in transit and can be terminated on almost any device capable of supporting IPsec. A VGW
creates IPsec tunnels to a customer gateway; these constitute the two tunnel endpoints, each with a public IP address
assigned. As with IGWs, the VGW resides in all Availability Zones of your VPC, available to map to any route table/
subnet where VPN network access is required. Remote-site routes can be added to the route table statically, or they
can be dynamically learned.

Customer Gateway

A customer gateway (CGW) identifies the target IP address of a peer device that terminates IPsec tunnels from the
VGW. The customer gateway is typically a firewall or a router; either must be capable of supporting an IPsec tunnel
with required cryptographic algorithms.


VPN Connections

VPN connections are the IPsec tunnels between your VGW and CGW. VPN connections represent two redundant IPsec
tunnels from a single CGW to two public IP addresses of the VGW in your subscriber VPC.

Figure 8 VPN connections


When creating a VPN connection, you have the option of running a dynamic routing protocol (BGP) over the tunnels or
using static routes.
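
The following is a minimal boto3 sketch of building the AWS side of a site-to-site VPN: a VGW attached to the VPC, a
CGW describing the on-premises IPsec peer, and a BGP-based VPN connection between them. The VPC ID, public IP
address, and ASN are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
vpc_id = "vpc-0123456789abcdef0"  # placeholder

vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId=vpc_id)

cgw = ec2.create_customer_gateway(Type="ipsec.1", PublicIp="203.0.113.10",  # placeholder peer
                                  BgpAsn=65000)["CustomerGateway"]

# StaticRoutesOnly=False means BGP runs over the two redundant IPsec tunnels.
vpn = ec2.create_vpn_connection(Type="ipsec.1",
                                CustomerGatewayId=cgw["CustomerGatewayId"],
                                VpnGatewayId=vgw["VpnGatewayId"],
                                Options={"StaticRoutesOnly": False})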

AWS Direct Connect


AWS Direct Connect allows you to connect your network infrastructure directly to your AWS infrastructure using
private, dedicated bandwidth. You can connect from your datacenter or office via a dedicated link from a
telecommunications provider, or you can connect directly in a colocation facility where AWS has a presence. This direct
connection provides some advantages:

• Hardware firewall support

• Higher-bandwidth network connections

• Lower bandwidth costs

• Consistent network-transport performance

• Arbitrary inter-VPC traffic inspection/enforcement

AWS Direct Connect requires a network device, provided by you, to terminate the network connection from the AWS
backbone network. This same device also terminates your carrier connection, completing the path between your
private network and your AWS infrastructure. AWS Direct Connect provides dedicated port options for 1 Gbps and


10 Gbps, and AWS partners provide hosted connections with options for 50-500 Mbps. Your firewalls exchange BGP
routing information with the AWS network infrastructure. There is no static routing available. AWS charges for data
that egresses from your AWS infrastructure as data-out. Charges for data transfers are at a lower rate than using public
internet. Because carrier-provided bandwidth is dedicated to your use, this connection tends to have more consistent
performance than public internet transport.

For organizations requiring the higher firewall performance and high-availability options offered by physical firewall
appliances, AWS Direct Connect might be the ideal solution. It provides a direct connection from your hardware
firewalls to each of your VPCs within the AWS Direct Connect region. The port is an 802.1Q VLAN trunk. Virtual
interfaces are created on the AWS Direct Connect, and these virtual interfaces are then associated with a VPC via its
VGW. Layer 3 subinterfaces on the firewall are tagged with the VLAN of the virtual interface, completing the connection
to your VPC. All VPCs within the region of your AWS Direct Connect can have a connection to your firewall, so that
arbitrary security policy between VPCs is an option.

Virtual interfaces are of two types: public and private. As the name implies, private virtual interfaces provide a private
communication path from the AWS Direct Connect port to a VPC. Public virtual interfaces provide access to other AWS
services, such as S3 storage, via their public IP addresses. Public virtual interfaces require the use of a public IP address,
which may be public address space that you already own or may be provided by AWS. Your BGP peering connection
between hardware firewalls and AWS requires the use of a public virtual interface.

An AWS Direct Connect port has a limit of 50 virtual interfaces. If you require more than 50 VPCs in a region, multiple
connections might be required. As an example, you can use link aggregation to Direct Connect, and each link can carry
50 virtual interfaces, providing up to 200 VPC connections. For redundancy, you can also use multiple connections
within a region.

AWS Direct Connect Gateway


The AWS Direct Connect Gateway complements AWS Direct Connect by allowing you to connect one or more of your
VPCs to your on-premises network, whether those VPCs are in the same or different AWS Regions.

The Direct Connect gateway:

• Can be created in any public Region.

• Can be accessed from most public Regions.

• Is a globally available resource.

• Uses AWS Direct Connect for connectivity to your on-premises network.

A Direct Connect Gateway cannot connect to a VPC in another account. The Direct Connect Gateway is designed to
connect on-premises networks to the VPCs, and VPCs connected to the gateway cannot communicate directly with
each other. You configure your on-premises network to BGP-peer with the Direct Connect Gateway and not with every
VPC; this reduces configuration and monitoring versus per-VPC BGP peering over Direct Connect. An alternative to the
Direct Connect Gateway is the Transit VPC design, covered later in this guide, which connects to multiple VPCs with a
single BGP peering session and enables you to place a firewall inline for the traffic flows.


AWS VPC PEERING


VPC peering is a native capability of AWS for creating direct two-way peer relationships between two VPCs within the
same region. The peer relationship permits only traffic directly between the two peers and does not provide for any
transit capabilities from one peer VPC through another to an external destination. VPC peering is a two-way agreement
between member VPCs. It's initiated by one VPC to another target VPC, and the target VPC must accept the VPC
peering relationship. VPC peering relationships can be created between two VPCs within the same AWS account, or
they can be created between VPCs that belong to different accounts. After a VPC peering relationship is established,
there is two-way network connectivity between the entire CIDR block of both VPCs, and there is no ability to segment
using smaller subnets.
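
The following is a minimal boto3 sketch of the request/accept workflow for a peering connection, followed by the route
each VPC needs toward the peer's CIDR; the VPC, route table, and CIDR values are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

peer = ec2.create_vpc_peering_connection(VpcId="vpc-aaaa1111",      # placeholder requester
                                         PeerVpcId="vpc-bbbb2222")  # placeholder target
pcx_id = peer["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The target VPC owner must accept the peering request.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each side then adds a route to the peer VPC CIDR via the peering connection.
ec2.create_route(RouteTableId="rtb-0123456789abcdef0",  # placeholder
                 DestinationCidrBlock="10.2.0.0/16",
                 VpcPeeringConnectionId=pcx_id)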

At first glance, it might appear possible to create network connectivity that exceeds the native capabilities offered by
VPC peering by manipulating route tables and even daisy-chaining VPC connections. However, VPC peering strictly
restricts the source and destination IP addresses of packets traversing the peering connection to sources and
destinations within the directly peered VPCs. Any packets with a source or destination outside of the two-peer VPC
address space are dropped by the peering connection.

VPC peering architecture uses network policy to permit traffic only between two directly adjacent VPCs. Two example
scenarios:

• Hub and spoke

• Multi-tier applications

In a hub-and-spoke model, Subscriber VPCs (spokes) use VPC peering with the Central VPC (hub) to provide direct
communications between the individual subscriber VPC instances and central VPC instances or services. Peer
subscriber VPCs are unable to communicate with each other, because this would require transit connectivity through
the central VPC, which is not a capability supported by VPC peering. Additional direct VPC peering relationships can be
created to permit communication between VPCs as required.

Figure 9 VPC peering hub and spoke


For multi-tiered applications, VPC peering can use network connectivity to enforce the policy that only directly
adjacent application tiers can communicate. A typical three-tier application (web, app, database) might provide web
servers in a public-facing central VPC, with VPC peering to a subscriber VPC hosting the application tier, and the
application VPC would have another VPC peering relationship with a third subscriber VPC hosting the database tier.
Be aware that the network connectivity of VPC peering would provide no outside connectivity directly to the
application or database VPCs, and that the firewalls have no ability to provide visibility or policy for traffic between the
application and database VPCs. Figure 10 illustrates how a central VPC with multi-tier VPC connections could operate.

Figure 10 VPC peering multi-tier


AWS Transit Gateway


AWS Transit Gateway is a new service that enables you to control communications between your VPCs and to connect
to your on-premises networks via a single gateway. In contrast to VPC peering, which interconnects two VPCs only,
Transit Gateway can act as a hub in a hub-and-spoke model for interconnecting VPCs. In contrast to AWS Direct
Connect Gateway, the Transit Gateway provides both VPC-to-VPC and VPC-to-on-premises communications.

The Transit Gateway provides the ability to centrally manage the connectivity policies between VPCs and from the
VPCs to an on-premises connection. In contrast to the AWS Transit VPC design, which is covered later in this document,
Transit Gateway is an AWS network function that replicates much of the Transit VPC design. Over time, Transit
Gateway will offer more services, such as Direct Connect integration.


The AWS Transit Gateway can connect to an Amazon VPC, on-premises data center, or remote office. Transit Gateway
is a hub-and-spoke design with the ability to control how traffic is routed between all the connected networks. The
spokes peer only to the gateway, which simplifies design and management overhead. You can add new spokes to the
environment incrementally as your network grows.

Figure 11 AWS Transit Gateway


AWS Transit Gateway supports dynamic and static Layer 3 routing between Amazon VPCs and VPNs. Routes determine
the next hop depending on the destination IP address of the packet, and they can point to an Amazon VPC or to a VPN
connection.

You can create connections between your AWS Transit Gateway and on-premises gateways using VPN for security.
Because AWS Transit Gateway supports Equal Cost Multipath (ECMP), you can increase bandwidth to on-premises
networks by:

• Creating multiple VPN connections between the Transit Gateway and the on-premises firewall.

• Using BGP to announce the same prefixes over each path.

• Enabling ECMP on both ends of the connections to load-balance traffic over the multiple paths.

AWS Transit Gateway can interoperate with central firewall deployments to direct VPC-to-VPC flows through firewalls
based on routing in the gateway. AWS Transit Gateway can also direct outbound traffic to a service like a firewall using
gateway route table settings.
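
The following is a minimal boto3 sketch of creating a Transit Gateway and attaching a VPC to it through one subnet per
Availability Zone; the ASN, VPC ID, and subnet IDs are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

tgw = ec2.create_transit_gateway(Description="hub",
                                 Options={"AmazonSideAsn": 64512})["TransitGateway"]

ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw["TransitGatewayId"],
    VpcId="vpc-0123456789abcdef0",                                       # placeholder
    SubnetIds=["subnet-0aaa1111bbb2222cc", "subnet-0ddd3333eee4444ff"])  # placeholders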

AWS PrivateLink
Unlike VPC peering, which extends access to an entire VPC, PrivateLink (an interface VPC endpoint) enables you to
connect to individual services. These services include some AWS services, services hosted by other AWS accounts
(referred to as endpoint services), and supported AWS Marketplace partner services.


The interface endpoints are created directly inside of your VPC, using elastic network interfaces and IP addresses in
your VPC's subnets. The service is then effectively in your VPC, enabling connectivity to AWS services or AWS
PrivateLink-powered services via private IP addresses. VPC security groups can be used to manage access to the
endpoints, and the interface endpoint can be accessed from your premises via AWS Direct Connect.
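
The following is a minimal boto3 sketch of creating an interface endpoint for an endpoint service, with one endpoint
ENI per Availability Zone and a security group controlling access; the service name and resource IDs are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                                       # placeholder
    ServiceName="com.amazonaws.vpce.us-west-2.vpce-svc-1234",            # placeholder service
    SubnetIds=["subnet-0aaa1111bbb2222cc", "subnet-0ddd3333eee4444ff"],  # one per AZ
    SecurityGroupIds=["sg-0123456789abcdef0"])                           # placeholder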

An organization can create services and offer them for sale to other AWS customers, for access via a private
connection. The provider creates a service that accepts TCP traffic, hosts it behind a Network Load Balancer (NLB), and
then makes the service available, either directly or in AWS Marketplace. The organization is notified of new subscription
requests and can choose to accept or reject each one.

In the following diagram, the owner of VPC B is the service provider and has configured a network load balancer with
targets in two different Availability Zones. The service consumer (VPC A) has created interface endpoints in the same
two Availability Zones in their VPC. Requests to the service from instances in VPC A can use either interface endpoint.

Figure 12 AWS PrivateLink


The service provider and the service consumer run in separate VPCs and AWS accounts and communicate solely
through the endpoint, with all traffic flowing across Amazon’s private network. Service consumers don’t have to worry
about overlapping IP addresses, because the addresses are served out of their own VPC and then translated to the
service provider’s IP address ranges by the NLB. The service provided must run behind an NLB and must be accessible
over TCP. The service can be hosted on EC2 instances, EC2 Container Service containers, or on-premises (configured
as an IP target).

AWS RESILIENCY
Traditional data center network resilience was provided by alternate transport paths and high-availability platforms like
switches, routers, and firewalls. The high-availability platforms either had redundancy mechanisms within the chassis or
between chassis, which often introduced cost and complexity in the design. As networks scaled to meet the higher
throughput demands of modern data centers, more chassis had to be included in the high-availability group to provide
more throughput, further complicating designs and increasing costs. The move to web-based, multi-tier applications
allows network designs to scale more gracefully and provide resiliency to the overall application through a stateless
scale-out of resources.

Availability Zones
As discussed earlier in this guide, AWS provides resilience within regions by grouping data center facilities into
Availability Zones with independent power, cooling, and other facilities. When designing applications, the different
tiers of the application should be designed such that each layer of the application is in at least two Availability Zones
for resilience. AWS-provided network resources like the IGW and VGW are designed to be resilient across multiple
Availability Zones as well, providing consistency in overall services. Customer-provided platforms like firewalls should
also be placed in each Availability Zone for a resilient design.

Load Balancers
Load balancers distribute traffic inbound to an application across a set of resources based on criteria such as DNS,
Layer 4, or Layer 7 information. A common capability of load balancers is checking inbound traffic targets for health
and removing unhealthy resources, thereby enhancing the resiliency of the application.

AWS offers three types of load balancers, and like the other AWS resources mentioned above, they can exist in multiple
Availability Zones for resiliency, and they scale horizontally for growth.

Classic Load Balancer

The Classic Load Balancer provides basic load balancing across multiple Amazon EC2 instances and operates at both
the request level and connection level. The Classic Load Balancer is intended for applications that were built within
the EC2-Classic network, not the VPC. AWS recommends using the Application Load Balancer (ALB) for Layer 7 and
Network Load Balancer for Layer 4 when using Virtual Private Cloud (VPC).

Application Load Balancer

The Application Load Balancer (ALB), as the name implies, operates at the application- or request-level (Layer 7), routing
traffic to targets—EC2 instances, containers and IP addresses—based on the content of the request. ALB supports both
HTTP and HTTPS traffic, providing advanced request routing targeted at delivery of modern application architectures.

Traffic destined for the ALB uses DNS names and not a discrete IP address. As the load balancer is created, a fully
qualified domain name (FQDN) is assigned and tied to IP addresses, one for each Availability Zone. The DNS name is
constant and will survive a reload. The IP addresses that the FQDN resolves to can change; therefore, the FQDN is the
best way to resolve the IP address of the front end of the load balancer. If the ALB in one Availability Zone fails or has
no healthy targets, and it is tied to external DNS like the Amazon Route 53 cloud DNS, then traffic is directed to an
alternate ALB in the other Availability Zone.


The ALB can be provisioned with the front end facing the internet and load balance using public IP addressing on the
outside, or it can be internal-only, using IP addresses from the VPC. You can load balance to any HTTP/HTTPS
application with targets that are:

• Inside your AWS region, or

• On-premises, using the IP addresses of the application resources as targets

This allows the ALB to load balance to any IP address, across Availability Zones, and to any interface on an instance.
ALB targets can also be in legacy data centers connected via VPN or Direct Connect, which helps migrate applications
to the cloud, and burst or failover to cloud.

The ALB offers content-based routing of connection requests based on either the host field or the URL of the HTTP
header of the client request. ALB uses a round-robin algorithm to balance the load and supports a slow start when
adding targets to the pool, to avoid overwhelming the application target. ALB directs traffic to healthy targets based on
health probes and HTTP response codes (200-499). Target health metrics and access logs can be stored in AWS S3
storage for analysis.

The ALB supports terminating HTTPS between the client and the load balancer and can manage SSL certificates.

Network Load Balancer

Network Load Balancer operates at the connection level (Layer 4), routing connections to targets—Amazon EC2
instances, containers, and IP addresses—based on IP protocol data. Ideal for load balancing of TCP traffic, Network
Load Balancer is capable of handling millions of requests per second while maintaining ultra-low latencies. Network
Load Balancer is optimized to handle sudden and volatile traffic patterns while using a single static IP address per
Availability Zone.

The NLB accepts incoming traffic from clients and distributes the traffic to targets within the same Availability Zone.
The NLB monitors the health of the targets and ensures that only healthy targets get traffic. If all targets in an
Availability Zone are unhealthy, and you have set up targets in another Availability Zone, then the NLB automatically
fails over to the healthy targets in the alternate Availability Zones.

The NLB supports static IP address assignment, including an Elastic IP address, for the front end of the load balancer,
making it ideal for services that do not use FQDNs and rely on the IP address to route connections. If the NLB has
no healthy targets, and it is tied to Amazon Route 53 cloud DNS, then traffic is directed to an alternate NLB in another
region. You can load balance to any IP address target inside of your AWS region or on-premises. This allows the NLB
to load balance to any IP address and any interface on an instance. NLB targets can also be in legacy data centers
connected via VPN or Direct Connect, which helps migrate applications to the cloud, and burst or failover to cloud.

The Flow Logs feature can record all requests sent to your load balancer, capturing information about the IP traffic
going to and from network interfaces in your VPC. CloudWatch provides metrics such as active flow count, healthy
host count, new flow count, and processed bytes. The NLB is integrated with other AWS services such as Auto Scaling,
Amazon EC2 Container Service, and Amazon CloudFormation.
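
As one example of consuming these metrics, the following boto3 sketch retrieves the active flow count for an NLB from CloudWatch. The load-balancer dimension value is a placeholder, and the metric name follows the AWS/NetworkELB naming that CloudWatch uses for the metrics listed above.

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# The dimension value is the load-balancer portion of the NLB ARN
# (for example "net/my-nlb/1234567890abcdef"); this one is a placeholder.
nlb_dimension = "net/my-nlb/1234567890abcdef"

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/NetworkELB",
    MetricName="ActiveFlowCount",
    Dimensions=[{"Name": "LoadBalancer", "Value": nlb_dimension}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])
```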


Palo Alto Networks Design Details: VM-Series Firewall on AWS

The Palo Alto Networks VM-Series firewall is the virtualized form factor of our next-generation firewall that can be de-
ployed in a range of private and public cloud computing environments. The VM-Series on AWS enables you to securely
implement a cloud-first methodology while transforming your data center into a hybrid architecture that combines the
scalability and agility of AWS with your on-premises resources. This allows you to move your applications and data to
AWS while maintaining a security posture that is consistent with the one you may have established on your physical
network. The VM-Series on AWS natively analyzes all traffic in a single pass to determine application, content and user
identity. The application, content and user are used as core elements of your security policy and for visibility, reporting
and incident investigation.

VM-SERIES ON AWS MODELS


VM-Series firewalls on AWS are available in six primary models: VM-100, VM-200, VM-300, VM-500, VM-700, and
VM-1000-HV. Varying only by capacity, all of the firewall models use the same image. A capacity license configures the
firewall with a model number and associated capacity limits.

Table 1 VM-Series firewall capacities and requirements

                        VM-100/VM-200   VM-300/VM-1000-HV    VM-500      VM-700
Capacities
Max sessions            250,000         800,000              2,000,000   10,000,000
Security rules          1,500           10,000               10,000      20,000
Security zones          40              40                   200         200
IPSec VPN tunnels       1000            2000                 4000        8000
SSL VPN tunnels         500             2000                 6000        12,000
Requirements
CPU cores (min/max)     2               2/4                  2/8         2/16
Min memory              6.5GB           9GB                  16GB        56GB
Min disk capacity       60GB            60GB                 60GB        60GB
Licensing options       BYOL            BYOL/PAYG (VM-300)   BYOL        BYOL


Although the capacity license sets the VM-Series firewall's limits, the size of the virtual machine on which you deploy
the firewall determines the firewall's performance and functional capacity. In Table 2, the mapping of VM-Series firewall
model to AWS virtual machine size is based on the VM-Series model requirements for CPU, memory, disk capacity, and
network interfaces. When deployed on a virtual machine that provides more CPU cores than the model supports, a
VM-Series firewall does not use the additional CPU cores. Conversely, when you deploy a larger VM-Series model on a
virtual machine that meets only the minimum CPU requirements, it effectively performs the same as a lower VM-Series model.

Table 2 VM-Series mapping to AWS virtual machine sizes

Virtual machine size                 VM-100        VM-300        VM-500        VM-700
c3.xlarge (4,7.5,4), 4 interfaces    Supported     —             —             —
c4.xlarge (4,7.5,4), 4 interfaces    Recommended   —             —             —
c3.2xlarge (4,16,4), 4 interfaces    —             —             Supported     —
c4.2xlarge (4,16,4), 4 interfaces    —             —             Supported     —
m3.xlarge (4,16,4), 4 interfaces     —             Supported     —             —
m4.xlarge (4,16,4), 4 interfaces     —             Recommended   —             —
c3.4xlarge (8,32,4), 8 interfaces    —             —             Supported     —
c4.4xlarge (8,32,4), 8 interfaces    —             —             Supported     —
m4.2xlarge (8,32,4), 4 interfaces    —             —             Recommended   —
c4.8xlarge (32,8), 8 interfaces      —             —             —             Supported
m4.4xlarge (16,8), 8 interfaces      —             —             —             Recommended

For the smaller VM-Series firewall models, it may seem that a virtual machine size smaller than those listed in Table 2
would be appropriate; however, smaller virtual machine sizes do not have enough network interfaces. AWS provides virtual
machines with two, three, four, eight, or fifteen network interfaces. Because VM-Series firewalls reserve an interface
for management functionality, two-interface virtual machines are not a viable option. Four-interface virtual machines
meet the minimum requirement of a management, public, and private interface. You can configure the fourth interface
as a security interface for optional services such as VPN or a DMZ.


Although larger models of VM-Series firewalls offer increased capacities (as listed in Table 1), on AWS some throughput
is limited, and a larger number of cores helps more with scale than with throughput. For the latest detailed information,
see the VM-Series on AWS documentation. Many factors affect performance, and Palo Alto Networks recommends that you do
additional testing in your environment to ensure the deployment meets your performance and capacity requirements.
In general, public cloud environments are more efficient when scaling out the number of resources rather than scaling
up to a larger virtual machine size.

VM-SERIES LICENSING
You purchase licenses for VM-Series firewalls on AWS through the AWS Marketplace or through traditional Palo Alto
Networks channels.

Pay As You Go
A pay-as-you-go (PAYG) license is also called a usage-based or pay-per-use license. You purchase this type of license
from the public AWS Marketplace, and it is billed hourly or annually. With a PAYG license, the VM-Series firewall is
licensed and ready for use as soon as you deploy it; you do not receive a license authorization code. When the firewall
is stopped or terminated on AWS, the usage-based licenses are suspended or terminated. PAYG licenses are available in
the following bundles:

• Bundle 1—Includes a VM-300 capacity license, Threat Prevention license (IPS, AV, and Anti-Spyware), and a
premium support entitlement

• Bundle 2—Includes a VM-300 capacity license, Threat Prevention license (IPS, AV, malware prevention), Global-
Protect™, WildFire®, PAN-DB URL Filtering licenses, and a premium support entitlement

An advantage of the Marketplace licensing option is the ability to use only what you need, deploying new VM-Series
instances as needed, and then removing them when they are not in use.

BYOL and VM-Series ELA


Bring your own license (BYOL) and the VM-Series Enterprise License Agreement (ELA) are licenses that you purchase
through traditional Palo Alto Networks channel partners. VM-Series firewalls support all capacity, support, and
subscription licenses with BYOL. When using BYOL, you license VM-Series firewalls like a traditionally deployed
appliance, and you must apply a license authorization code. After you apply the code to the device, the device registers
with the Palo Alto Networks support portal and obtains information about its capacity and subscriptions. Subscription
licenses include Threat Prevention, PAN-DB URL Filtering, AutoFocus™, GlobalProtect, and WildFire.

The VM-Series ELA provides a fixed-price licensing option that allows unlimited deployment of VM-Series firewalls with
BYOL. Palo Alto Networks offers VM-Series ELAs in one- and three-year term agreements. An advantage of BYOL is
the ability to choose different capacity licenses for the VM-Series, compared to the two VM-300 bundle options. As of


PAN-OS version 8.0, Palo Alto Networks supports the use of VM-100/VM-200, VM-300/VM-1000-HV, VM-500, and VM-700
on AWS, using instance types that meet the requirements for each VM-Series model. BYOL provides additional flexibility
in that the license can be used on any supported virtualization platform (VMware, AWS, Azure) and is transferable. For
more information about instance sizing and the capacities of Palo Alto Networks VM firewalls, see VM-Series for Amazon Web
Services.

Note

When creating a new instance from a Marketplace AMI, AWS offers only instance types
supported by the AMI. The instance type chosen determines the CPU, memory, and
maximum number of network interfaces available. For more information, see Amazon
EC2 Instance Types in the AWS documentation.

The VM-Series ELA includes four components:

• A license token pool that allows you to deploy any model of the VM-Series firewall. Depending on the firewall
model and the number of firewalls that you deploy, a specified number of tokens are deducted from your avail-
able license token pool. All of your VM-Series ELA deployments use a single license authorization code, which
allows for easier automation and simplifies the deployment of VM-Series firewalls.

• Threat Prevention, WildFire, GlobalProtect, DNS Security and PAN-DB Subscriptions for every VM-Series fire-
wall deployed as part of the VM-Series ELA.

• Unlimited deployments of Panorama as a virtual appliance.

• Support that covers all the components deployed as part of the VM-Series ELA.

VM-SERIES ON AWS INTEGRATION

Launching a VM-Series on AWS


The AWS Marketplace provides a wide variety of Linux, Windows, and specialized machine images, including the Palo
Alto Networks VM-Series firewall. There, you can find AMIs for the VM-Series firewall with the various licensing
options. After you select one, the AMI launch-instance workflow provides step-by-step guidance for all IP
addressing, network settings, and storage requirements. Customers can also provide their own custom AMIs to suit design
needs. Automation scripts for building out large-scale environments usually include AMI programming.

Bootstrapping
One of the fundamental design differences between traditional and public cloud deployments is the lifetime of resources.
One method of achieving resiliency in public cloud deployments is the quick deployment of new resources and quick
destruction of failed resources. One of the requirements for achieving quick resource build-out and teardown is current
and readily available configuration information for the resource to use during initial deployment. Using a simple
configuration file, for example, you need to provide only enough information to connect to Panorama. The VM-Series
firewall can load a license, connect to Panorama for full configuration and policy settings, and be operational in the
network in minutes. You can manually upgrade the software after deploying the virtual machine, or you can use
bootstrapping to update the firewall software as part of the bootstrap process.

Bootstrapping allows you to create a repeatable process for deploying VM-Series firewalls through a bootstrap package.
The package can contain everything required to make the firewall ready for production or just enough information
to get the firewall operational and connected to Panorama. In AWS, you implement the bootstrap package through an
AWS S3 file share that contains directories for configuration, content, license, and software. On first boot, the
VM-Series firewall mounts the file share and uses the information in the directories to configure and upgrade the
firewall. Bootstrapping happens only on the first boot; after that, the firewall stops looking for a bootstrap package.
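
A minimal sketch of building such a bootstrap package with boto3 follows. The bucket name, Panorama address, VM auth key, and device group and template names are placeholders; a real package would also populate the content, license, and software directories as needed.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-vmseries-bootstrap"   # placeholder bucket name

# The bootstrap package is a set of four top-level directories; empty
# directories are still expected, so create a zero-byte key for each.
for prefix in ("config/", "content/", "license/", "software/"):
    s3.put_object(Bucket=bucket, Key=prefix)

# Minimal init-cfg.txt: just enough for the firewall to register with Panorama.
# The Panorama address, VM auth key, and device group/template names are placeholders.
init_cfg = "\n".join([
    "type=dhcp-client",
    "hostname=vmseries-az-a",
    "panorama-server=10.1.9.10",
    "vm-auth-key=0123456789012345678",
    "dgname=aws-device-group",
    "tplname=aws-template-stack",
]) + "\n"

s3.put_object(Bucket=bucket, Key="config/init-cfg.txt", Body=init_cfg.encode())
```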

Management
At initial boot, the first interface attached to the virtual machine (eth0) is the firewall's management interface. When
building the instance from the template, the firewall must be assigned a private IP address by placing it in a subnet in
the VPC. You can choose the address assigned to the firewall or allow AWS to assign it from the pool of available
addresses in the subnet. The firewall's management interface can then obtain its internal IP address through DHCP. In
the template, you can also assign a public IP address to the eth0 interface; this address is dynamically assigned from the
AWS pool of addresses for that region. Because the public IP address assigned in the template is dynamic and assigned
at boot, if you shut the instance down for maintenance and then restart it, the private IP address remains the same (if
you assigned it statically), but the dynamically assigned public IP address is likely different. If you want a statically
assigned public IP address for the management interface, assign it an Elastic IP address. An Elastic IP address is a public
IP address that remains associated with an instance and interface until it is manually disassociated or the instance is deleted.

Note

If you assign a dynamic public IP address on the instance eth0 interface and later assign
an Elastic IP address to any interface on the same instance, upon reboot, the dynamic
IP address on eth0 is replaced with the Elastic IP address, even if that Elastic IP address
is associated to eth1 or eth2. The best way to predictably assign a public IP address
to your eth0 management interface is to assign an Elastic IP address associated to the
eth0 interface.
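
A short boto3 sketch of the Elastic IP approach for the management interface follows; the ENI ID is a placeholder for the firewall's eth0 interface.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder ENI ID for the firewall's eth0 (management) interface.
mgmt_eni_id = "eni-0123456789abcdef0"

# Allocate an Elastic IP and associate it with the management ENI so the
# public address survives instance stop/start cycles.
allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=allocation["AllocationId"],
    NetworkInterfaceId=mgmt_eni_id,
)
print("Management public IP:", allocation["PublicIp"])
```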

If you are managing the firewalls with Panorama, you can place an Elastic IP address on Panorama and use only dynamic
private IP addresses on the firewall management interfaces, as long as Panorama is in the AWS environment and has
reachability to all private management subnets. If Panorama is on-premises, it must have VPN access to the VPC
environment in order to manage firewalls that use only private IP addresses from the AWS virtual network.


MANAGING AWS DEPLOYMENTS WITH PANORAMA


The best method for ensuring up-to-date firewall configuration is to use Panorama for central management of firewall
policies. Panorama simplifies consistent policy configuration across multiple independent firewalls through its device
group and template stack capabilities. When multiple firewalls are part of the same device group, they receive a com-
mon ruleset. Because Panorama enables you to control all of your firewalls—whether they are on-premises or in the
public cloud, a physical appliance or virtual—device groups also provide configuration hierarchy. With device group
hierarchy, lower-level groups include the policies of the higher-level groups. This allows you to configure consistent
rulesets that apply to all firewalls, as well as consistent rulesets that apply to specific firewall deployment locations such
as the public cloud.

As bootstrapped firewalls deploy, they can also automatically pull configuration information from Panorama. VM-Series
firewalls use a VM authorization key and Panorama IP address in the bootstrap package to authenticate and register to
Panorama on its initial boot. You must generate the VM authorization key in Panorama before creating the bootstrap
package. If you provide a device group and template in the bootstrap package’s basic configuration file, Panorama as-
signs the firewall to the appropriate device group and template so that the relevant rulesets are applied, and you can
manage the device in Panorama going forward.

You can deploy Panorama in your on-site data center or in a public cloud provider such as AWS. When deployed in your
on-site data center, Panorama can manage all the physical appliances and VM-Series firewalls in your organization. If
you want a dedicated instance of Panorama for the VM-Series firewalls inside of AWS, deploy Panorama on AWS.
Panorama can be deployed in management-only mode, Log Collector-only mode, or full management and log collection mode.

When you have an existing Panorama deployment on-site for firewalls in your data center and internet edge, you can
use it to manage the VM-Series firewalls in AWS. Beyond management, your firewall log collection and retention need
to be considered. Log collection, storage, and analysis is an important cybersecurity best practice that organizations
perform to correlate potential threats and prevent successful cyber breaches.

On-Premises Panorama with Dedicated Log Collectors in the Cloud


Sending logging data back to the on-premises Panorama can be inefficient, costly, and can pose data privacy and
residency issues in some regions. An alternative to sending the logging data back to your on-premises Panorama is to
deploy Panorama dedicated Log Collectors on AWS and use the on-premises Panorama for management. Deploying a
dedicated Log Collector on AWS reduces the amount of logging data that leaves the cloud but still allows your on-site
Panorama to manage the VM-Series firewalls in AWS and to have full visibility into the logs as needed.

Figure 13 Panorama Log Collector mode in AWS

[Figure: a Panorama virtual appliance in Log Collector mode inside the management VPC connects through a VPN gateway to the on-premises Panorama management server.]

Panorama Management in AWS with Logging Service


There are two design options when deploying Panorama management on AWS. First, you can use Panorama for
management only and use the Palo Alto Networks Logging Service to store the logs generated by the VM-Series
firewalls. The Logging Service is a cloud-based log collection service that provides resilient storage and fast search
capabilities for large amounts of logging data. To the firewalls, the Logging Service behaves like a traditional log
collector: logs are encrypted and sent by the VM-Series firewalls to the Logging Service over TLS/SSL connections.
The Logging Service allows you to scale your logging storage as your AWS deployment scales, because licensing is based
on storage capacity rather than the number of devices sending log data.

The benefit of using the Logging Service goes well beyond scale and convenience when tied into the Palo Alto Networks
Application Framework. The Application Framework is a scalable ecosystem of security applications that can apply
advanced analytics in concert with Palo Alto Networks enforcement points to prevent the most advanced attacks. Palo
Alto Networks analytics applications such as Magnifier™ and AutoFocus, as well as third-party analytics applications
that you choose, use the Logging Service as the primary data repository across Palo Alto Networks offerings.

Figure 14 Panorama management and the Logging Service

[Figure: Panorama in Management Only mode in the virtual network manages the firewalls, which send logs to the Logging Service; the Application Framework and third-party applications consume data from the Logging Service.]

Panorama Management and Log Collection in AWS


Second, you can use Panorama for both management and log collection. Panorama on AWS supports a high-availability
deployment if both virtual appliances are in the same VPC and region. You can deploy the management and log
collection functionality on a shared virtual appliance or on dedicated virtual appliances. For smaller deployments, you
can deploy Panorama and the Log Collector as a single virtual appliance. For larger deployments, a dedicated Log
Collector per region allows traffic to stay within the region and reduces outbound data transfers.

Figure 15 Panorama management and log collection in AWS

[Figure: Panorama deployed in the management VPC provides both management and log collection for the VM-Series firewalls.]
Panorama is available as a virtual appliance for deployment on AWS and supports Management Only mode, Panorama
mode, and Log Collector mode with the system requirements defined in Table 3. Panorama on AWS is only available
with a BYOL licensing model.

Table 3 Panorama virtual appliance on AWS

                              Management only      Panorama                           Log Collector
Minimum system requirements   4 CPUs               8 CPUs                             16 CPUs
                              8GB memory           32GB memory                        32GB memory
                              81GB system disk     2TB to 24TB log storage capacity   2TB to 24TB log storage capacity
AWS instance sizing           t2.xlarge            m4.2xlarge                         m4.4xlarge
                              m4.2xlarge           m4.4xlarge                         c4.8xlarge


SECURITY POLICY AUTOMATION WITH VM MONITORING


Public cloud application environments are typically built around an agile application development process. In an agile
environment, applications are deployed quickly, and new environments are often built out to accommodate a revised
application rather than upgrading the existing operational environment. When the new, updated application environment
goes online, the older, now-unused application environment is deleted. This amount of change presents a challenge to
enforcing security policy unless your security environment is compatible with an agile development environment.

Palo Alto Networks firewalls, including the VM-Series, support dynamic address groups. A dynamic address group can
track IP addresses as they are assigned to or removed from a group of resources and is then referenced in security
policy. You can combine the flexibility of dynamic address groups with the VM Monitoring capabilities on the firewall
and Panorama to dynamically apply security policy as VM workloads and their associated IP addresses spin up or down.

The VM Monitoring capability allows the firewall to be automatically updated with IP addresses of a source or destina-
tion address object referenced in a security policy rule. Dynamic address groups use the metadata about the cloud
infrastructure, mainly IP address-to-tag mappings for your VMs. Using this metadata (that is, tags) instead of static IP
addresses in the security policy makes them dynamic. So, as you add, change, or delete tags on your VMs, security
policy is always enforced consistently.
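
For example, tagging the instances is all that is required on the AWS side; VM Monitoring learns the IP-to-tag mapping, and the matching dynamic address group picks up the addresses automatically. A hedged boto3 sketch with placeholder instance IDs and example tag names follows.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder instance IDs for the web tier.
web_instances = ["i-0123456789abcdef0", "i-0fedcba9876543210"]

# VM Monitoring learns the IP-to-tag mapping from metadata such as these tags,
# so a dynamic address group that matches on "tier: web" picks up the
# instances' addresses automatically as they are created or terminated.
ec2.create_tags(
    Resources=web_instances,
    Tags=[{"Key": "tier", "Value": "web"}, {"Key": "app", "Value": "sharepoint"}],
)
```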

Figure 16 VM Monitoring AWS tag to dynamic address group mapping

[Figure: the firewall uses VM Monitoring over the XML API to learn tags from AWS compute resources (instances such as 10.1.1.2, 10.4.2.2, and 10.8.7.3) and maps them into dynamic address groups, for example Public Web Servers, DB Servers, Admin Servers, and Sharepoint Servers, whose membership is learned automatically.]

Depending on your scale and the cloud environments you want to monitor, you can choose to enable VM Monitoring
for either the firewall or Panorama to poll the evolving VM landscape. If you enable VM Monitoring on the firewall, you
can poll up to 10 sources (VPCs); however, each firewall will act and poll independently, limiting the flexibility and scale.
The Panorama plugin for AWS allows you to monitor up to 100 VM sources (VPCs) on AWS and over 6000 hosts.
You then program Panorama to automatically relay the dynamic address group mapping to an entire group of firewalls,
providing scale and flexibility.


REDLOCK INFRASTRUCTURE PROTECTION FOR AWS


RedLock provides cloud infrastructure protection across the following areas:

• Multi-cloud security—Provides a consistent implementation of security best practices across your AWS and
other public cloud environments. RedLock requires no agents, proxies, software, or hardware for deployment
and integrates with a variety of threat intelligence feeds. RedLock includes pre-packaged policies to secure
multiple public cloud environments.

• Continuous compliance—Maintain continuous compliance across CIS, NIST, PCI, FedRAMP, GDPR, ISO, and
SOC 2 by monitoring API-connected cloud resources across multiple cloud environments in real time. Compliance
documentation can be generated with one click as exportable, fully prepared reports.

• Cloud forensics—Go back in time to the moment a resource was first created and see chronologically every
change and by whom. RedLock provides forensic investigation and auditing capabilities for potentially compromised
resources across your AWS environment, as well as other public cloud environments. Historical information
extends back to the initial creation of each resource, and the detailed change records include who made each
change.

• DevOps and automation—Enable secure DevOps without adding friction by setting architecture standards that
provide prescribed policy guardrails. This methodology permits agile development teams to maintain their focus
on developing and deploying apps to support business requirements.

RedLock connects to your cloud via APIs and aggregates raw configuration data, user activities, and network traffic to
analyze and produce concise actionable insights.

AWS integration requires that you create an external ID in AWS to establish trust for the RedLock account to pull data.
An associated account role sets what the RedLock account can see, with read-only access plus a few write permissions.
AWS VPCs must be configured to send flow logs to CloudWatch Logs, not to S3 buckets. A RedLock onboarding script
automates account setup, and CloudFormation templates are available to assist with permissions setup.
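
A minimal boto3 sketch of enabling VPC flow logs to CloudWatch Logs for RedLock follows; the VPC ID, log group name, and IAM role ARN are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder identifiers: the VPC to monitor, a CloudWatch Logs group, and an
# IAM role that allows the flow logs service to publish to that group.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-to-cloudwatch",
)
```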


RedLock performs a five-stage assessment of your cloud workloads. Contributions from each stage progressively im-
prove the overall security posture for your organization:

• Discovery—RedLock continuously aggregates configuration, user activity, and network traffic data from disparate
cloud APIs. It automatically discovers new workloads as soon as they are created.

• Contextualization—RedLock correlates the data and applies machine learning to understand the role and behav-
ior of each cloud workload.

• Enrichment—The correlated data is further enriched with external data sources—such as vulnerability scanners,
threat intelligence tools, and SIEMs—to deliver critical insights.

• Risk assessment—The RedLock Cloud 360 platform scores each cloud workload for risk based on the severity
of business risks, policy violations, and anomalous behavior. Risk scores are then aggregated, enabling you to
benchmark and compare risk postures across different departments and across the entire environment.

• Visualization—The entire cloud infrastructure environment is visualized with an interactive dependency map that
moves beyond raw data to provide context.

RedLock Cloud Threat Defense

RedLock enables you to visualize your entire AWS environment, down to every component within the environment.
The platform dynamically discovers cloud resources and applications by continuously correlating configuration, user
activity, and network traffic data. Combining this deep understanding of the AWS environment with data from external
sources, such as threat intelligence feeds and vulnerability scanners, enables RedLock to produce context around risks.

The RedLock platform is prepackaged with policies that adhere to industry-standard best practices. You can also create
custom policies based on your organization's specific needs. The platform continuously monitors for violations of these
policies by existing resources, as well as any new resources that are dynamically created. You can easily report on the
compliance posture of your AWS environment to auditors.

RedLock automatically detects user and entity behavior within the AWS infrastructure and management plane. The
platform establishes behavior baselines, and it flags any deviations. The platform computes risk scores—similar to credit
scores—for every resource, based on the severity of business risks, violations, and anomalies. This quickly identifies the
riskiest resources and enables you to quantify your overall security posture.

RedLock reduces investigation time from weeks or months to seconds. You can use the platform's graph analytics
to quickly pinpoint issues and perform upstream and downstream impact analysis. The platform provides you with a
DVR-like capability to view time-serialized activity for any given resource. You can review the history of changes for a
resource and better understand the root cause of an incident, past or present.

RedLock enables you to quickly respond to an issue based on contextual alerts. Alerts are triggered based on risk-
scoring methodology and provide context on all risk factors associated with a resource. You can send alerts, orchestrate
policy, or perform auto-remediation. The alerts can also be sent to third-party tools such as Slack, Demisto, and Splunk
to remediate the issue.


RedLock provides the following visibility, detection, and response capabilities:

• Host and container security—Configuration monitoring and vulnerable image detection.

• Network security—Real-time network visibility and incident investigations. Suspicious/malicious traffic detection.

• User and credential protection—Account and access key compromise detection. Anomalous insider activity
detection. Privileged activity monitoring.

• Configurations and control plane security—Compliance scanning. Storage, snapshots, and image configuration
monitoring. Security group and firewall configuration monitoring. IP address management configuration
monitoring.

NETWORKING IN THE VPC

IP Addressing
When a VPC is created, it is assigned a network range, as described earlier. It is common (but not required) to use a
private IP address range with an appropriate subnet scope for your organization’s needs.

The largest CIDR block you can assign to a VPC is a /16 (the minimum prefix length). RFC 1918 defines much larger
spaces for 10.0.0.0/8 and 172.16.0.0/12, which must be further segmented into smaller address blocks for VPC use.
Choose a range large enough for the anticipated growth in this VPC; you can also assign up to four secondary CIDR
blocks after VPC creation. Avoid IP address overlap within the AWS VPC environment and with the rest of your
organization; otherwise, you will require address translation at the borders.

Consider breaking your subnet ranges into functional blocks that can be summarized for ACLs and route advertisements.
In this design, the addressing is broken up into management, public, and compute subnets. All addressing for the first
VPC in this reference architecture is done inside the 10.1.0.0/16 range. By default, all endpoints within the VPC private
address range have direct connectivity via the AWS VPC infrastructure.
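
A minimal boto3 sketch of creating the VPC and a few of the AZ-a and AZ-b subnets from this addressing plan follows; the region and Availability Zone names are assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Primary CIDR block for the first VPC in this reference architecture.
vpc_id = ec2.create_vpc(CidrBlock="10.1.0.0/16")["Vpc"]["VpcId"]

# One functional block per subnet, per Availability Zone (values from Figure 17).
subnets = {
    ("us-east-1a", "management"): "10.1.9.0/24",
    ("us-east-1a", "public"): "10.1.10.0/24",
    ("us-east-1a", "web"): "10.1.1.0/24",
    ("us-east-1b", "management"): "10.1.109.0/24",
    ("us-east-1b", "public"): "10.1.110.0/24",
    ("us-east-1b", "web"): "10.1.101.0/24",
}
for (zone, name), cidr in subnets.items():
    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=zone)
    ec2.create_tags(Resources=[subnet["Subnet"]["SubnetId"]],
                    Tags=[{"Key": "Name", "Value": f"{name}-{zone}"}])
```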


Multiple availability zones are used for resiliency: Zone-a and Zone-b. Each zone is assigned unique IP addresses from
within the VPC CIDR block. The ranges for your network should be chosen based on the ability to summarize for ACLs
or routing advertisements outside of the VPC.

Figure 17 VPC IP addressing

Single VPC 10.1.0.0/16

AZ-a:  Management 10.1.9.0/24    Public 10.1.10.0/24    Web 10.1.1.0/24     Business 10.1.2.0/24     DB 10.1.3.0/24
AZ-b:  Management 10.1.109.0/24  Public 10.1.110.0/24   Web 10.1.101.0/24   Business 10.1.102.0/24   DB 10.1.103.0/24


Firewall Interfaces

All networking within the VPC is Layer 3; therefore, the firewall interfaces are configured for Layer 3 operation. As
discussed earlier, even if you use DHCP on the firewall for interface IP address assignment, it is important to statically
assign the addresses for routing next-hop assignments and other firewall functions. You can do this when creating the
instance for the VM-Series firewall. The VM-Series firewall requires three Ethernet interfaces: a management interface
(eth0), a public-facing interface (eth1), and a compute or private-facing interface (eth2). When the VM-Series instance
is created, by default it has a single interface (eth0). You therefore have to create Elastic Network Interfaces (ENIs) for
eth1 and eth2 and associate them with the VM-Series instance. You also need to create two Elastic IP addresses: associate
one with the management interface, and associate one with the firewall's public IP address for internet traffic (eth1).
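
A boto3 sketch of creating and attaching the dataplane ENIs for one firewall, and associating an Elastic IP with eth1, follows; the instance and subnet IDs are placeholders, and the addresses match the AZ-a example used throughout this guide.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder IDs: the firewall instance and the public and web subnets in AZ-a.
fw_instance_id = "i-0123456789abcdef0"
public_subnet_id = "subnet-0aaa"
web_subnet_id = "subnet-0bbb"

# Create the dataplane ENIs with the statically chosen addresses.
eth1 = ec2.create_network_interface(SubnetId=public_subnet_id,
                                    PrivateIpAddress="10.1.10.10")["NetworkInterface"]
eth2 = ec2.create_network_interface(SubnetId=web_subnet_id,
                                    PrivateIpAddress="10.1.1.10")["NetworkInterface"]

# Attach them to the firewall instance as device index 1 (eth1) and 2 (eth2).
ec2.attach_network_interface(NetworkInterfaceId=eth1["NetworkInterfaceId"],
                             InstanceId=fw_instance_id, DeviceIndex=1)
ec2.attach_network_interface(NetworkInterfaceId=eth2["NetworkInterfaceId"],
                             InstanceId=fw_instance_id, DeviceIndex=2)

# Elastic IP for the firewall's public-facing interface (eth1).
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(AllocationId=eip["AllocationId"],
                      NetworkInterfaceId=eth1["NetworkInterfaceId"])
```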

Figure 18 Firewall interfaces

Management:  instance eth0  =  firewall management interface
Public:      instance eth1  =  firewall ethernet1/1
Compute:     instance eth2  =  firewall ethernet1/2

Caution

As mentioned earlier, if you want a public IP address for management access on eth0
and you assigned eth0 a dynamic public IP address when creating the instance, any
Elastic IP assigned to the instance will result in the dynamic address being returned
to the pool. eth0 will not have a public IP address assigned unless you create another
Elastic IP address and assign it to eth0.

Source and Destination Check

Source and destination checks are enabled by default on all network interfaces within your VPC. Source/Dest Check
validates that traffic is destined to, or originates from, an instance and drops any traffic that does not meet this
validation. A network device that forwards traffic between its network interfaces within a VPC must have Source/Dest
Check disabled on all interfaces that forward traffic. To pass traffic through the VM-Series firewall, you must disable
Source/Dest Check on all dataplane interfaces (ethernet1/1, ethernet1/2, and so on). Figure 19 illustrates how the
default setting (enabled) of Source/Dest Check prevents traffic from transiting between interfaces of an instance.
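
Disabling the check is a per-interface attribute change, sketched below with boto3; the ENI IDs are placeholders for the firewall's dataplane interfaces.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder ENI IDs for the firewall's dataplane interfaces (eth1 and eth2).
dataplane_enis = ["eni-0aaa1111bbbb2222c", "eni-0ddd3333eeee4444f"]

# Disable the check per network interface so the firewall can forward traffic
# that is neither sourced from nor destined to its own addresses.
for eni_id in dataplane_enis:
    ec2.modify_network_interface_attribute(
        NetworkInterfaceId=eni_id,
        SourceDestCheck={"Value": False},
    )
```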


Note

Security groups (SGs) and Source/Dest Check can be changed at the instance level; however, these
changes apply only to the first interface (eth0) of the instance. For a VM-Series firewall with
multiple Ethernet interfaces, the first interface is the management interface. To avoid
ambiguity, apply SGs and Source/Dest Check settings to the individual network interfaces
(eth0, eth1, and eth2).

Figure 19 Source and destination check

[Figure: with Source/Dest Check enabled (the default), traffic from 10.0.2.15 to 10.0.1.10 that arrives on an intermediate instance's interface is dropped because it is not addressed to that instance; with Source/Dest Check disabled, the instance can forward the traffic between the two hosts in both directions.]

Internet Gateway

Create an internet gateway (IGW) in your VPC to provide inbound and outbound internet access. The internet gateway
performs network address translation between the private IP address space and the assigned public dynamic or Elastic IP
addresses. The public IP addresses come from AWS address pools for the region that contains your VPC, or you may use
your organization's public IP addresses assigned to operate in the AWS environment.

Route Tables

Although all devices in the VPC can reach each other's private IP addresses across the native AWS fabric for the VPC,
route tables determine which external resources an endpoint can reach, based on the routes or services advertised
within that route table. After a route table is created, you choose which subnets are assigned to it. You can create
separate route tables for Availability Zone-a (AZ-a) and Availability Zone-b (AZ-b), and you will want to do so for
the compute subnets.


Consider creating route tables for management, public, and compute; a minimal scripted sketch of creating them follows this list:

• The management route table has the management subnets assigned to it and an IGW for the default route for
Internet access.

• The public route table has the public subnets assigned to it and the IGW for the Internet access.

• The compute route table for AZ-a has the compute private subnets for AZ-a assigned to it and no IGW. After
the firewall is configured and operational, you assign a default route in the table pointed to the ENI of the VM
firewall for AZ-a.

• The compute route table for AZ-b has the compute private subnets for AZ-b assigned to it and no IGW. After
the firewall is configured and operational, you assign a default route in the table pointed to the ENI of the VM
firewall for AZ-b.
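
The following boto3 sketch creates a public route table with a default route to the IGW and a compute route table for AZ-a with a default route to the firewall's ENI; all resource IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder IDs; substitute your own VPC, IGW, subnet, and firewall ENI IDs.
vpc_id = "vpc-0123456789abcdef0"
igw_id = "igw-0123456789abcdef0"
public_subnet_az_a = "subnet-0aaa"
compute_subnet_az_a = "subnet-0bbb"
fw_eth2_eni_az_a = "eni-0ccc"

# Public route table: default route to the internet gateway.
public_rt = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=public_rt, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=public_rt, SubnetId=public_subnet_az_a)

# Compute route table for AZ-a: no IGW; default route to the firewall's private ENI.
compute_rt = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=compute_rt, DestinationCidrBlock="0.0.0.0/0",
                 NetworkInterfaceId=fw_eth2_eni_az_a)
ec2.associate_route_table(RouteTableId=compute_rt, SubnetId=compute_subnet_az_a)
```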

Figure 20 Example route tables

Management route table (subnets 10.1.9.0, 10.1.109.0):
  10.1.0.0/16  ->  Local
  0.0.0.0/0    ->  igw-991233991ab

Public route table (subnets 10.1.10.0, 10.1.110.0):
  10.1.0.0/16  ->  Local
  0.0.0.0/0    ->  igw-991233991ab

Compute AZ-a route table (subnet 10.1.1.0):
  10.1.0.0/16  ->  Local
  0.0.0.0/0    ->  eni-1182230012a

Compute AZ-b route table (subnet 10.1.101.0):
  10.1.0.0/16  ->  Local
  0.0.0.0/0    ->  eni-1182230012b

AWS TRAFFIC FLOWS


There are three traffic-flow types that you might wish to inspect and secure:

• Inbound—Traffic originating outside and destined to your VPC hosts

• Outbound—Traffic originating from your VPC hosts and destined to flow outside

• East/West—Traffic between hosts within your VPC infrastructure


Inbound Traffic from the Internet


Inbound traffic originates outside your VPC, destined to services hosted within your VPC, such as web servers. Enforcing
firewall inspection of inbound traffic requires destination NAT on the VPC firewall. AWS provides a 1:1 NAT relationship
between public Elastic IP addresses and VPC private IP addresses and does not modify the source or destination ports.
You can assign public IP addresses directly to internal servers; however, this would bypass the firewall and expose the
host. There are two options for providing inbound inspection for multiple applications through the firewall:

• Destination IP address and port-mapping by using a single Elastic IP address

• Network interface secondary IP addresses and multiple Elastic IP addresses

Elastic IP addresses have a cost associated with their use. The first option minimizes Elastic IP cost at the expense of
increased destination NAT complexity on the firewall, as well as potential end-user confusion when users access multiple
inbound services by using the same DNS name or IP address with a different port representing each server. A single
service port (that is, SSL) on the external IP address can be mapped to the internal IP address of a single server that
provides the service. Additional servers that provide the same service are represented externally as offering their service
on the same external IP address, and the service port is used to differentiate between servers sharing that external IP
address.
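
For the second option, each additional inbound application gets a secondary private IP address on the firewall's public ENI and its own Elastic IP address. A hedged boto3 sketch with placeholder IDs and an example secondary address follows.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder ID for the firewall's public-facing ENI (eth1).
fw_eth1_eni = "eni-0123456789abcdef0"

# Add a secondary private address for a second inbound application, then map a
# dedicated Elastic IP to it; the firewall's NAT policy translates each
# secondary address to the corresponding internal server.
ec2.assign_private_ip_addresses(
    NetworkInterfaceId=fw_eth1_eni,
    PrivateIpAddresses=["10.1.10.11"],
)
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=eip["AllocationId"],
    NetworkInterfaceId=fw_eth1_eni,
    PrivateIpAddress="10.1.10.11",
)
```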

Figure 21 illustrates the inbound and return address translation:

• Packet address view 1 to 2—The IGW translates the packet’s destination address.

• Packet address view 3—The firewall translates the packet’s destination IP address (and optionally port).

• Packet address view 4 to 5—The firewall translates the source IP address to the firewall internet interface to
match the IGW NAT.

• Packet address view 6—The IGW translates source IP address to the public EIP address.

Figure 21 Inbound traffic inspection using Destination NAT address and port

[Figure: a client at 15.0.201.45 connects to a web server (10.1.1.100) in AZ-a through the IGW and the firewall (eth1 10.1.10.10, eth2 10.1.1.10).]

Packet address views:
  1  Source: 15.0.201.45   Destination: 52.1.7.85:80
  2  Source: 15.0.201.45   Destination: 10.1.10.10:80
  3  Source: 15.0.201.45   Destination: 10.1.1.100:80
  4  Source: 10.1.1.100    Destination: 15.0.201.45
  5  Source: 10.1.10.10    Destination: 15.0.201.45
  6  Source: 52.1.7.85     Destination: 15.0.201.45

IGW NAT table: public 52.1.7.85 <-> private 10.1.10.10
Compute subnet route table: 10.1.0.0/16 -> Local (Active); 0.0.0.0/0 -> fw-e2 (Active)


Outbound Traffic Inspection


Outbound traffic originates from your VPC instances and is destined to external destinations, typically the internet.
Outbound inspection is useful for ensuring that instances connect only to permitted services (such as Windows Update)
and permitted URL categories, as well as for preventing data exfiltration of sensitive information. Enterprise networks
often use Source NAT at their internet perimeter for outbound traffic, in order to conserve limited public IPv4 address
space and provide a secure one-way valve for traffic. Similarly, outbound traffic from your VPCs that requires firewall
inspection also uses Source NAT on the firewall, ensuring symmetric traffic and firewall inspection of all traffic.

The application server learns its default gateway from DHCP and forwards traffic to the default gateway address, and
the route table controls the default gateway destination. Directing outbound traffic to the firewall therefore does not
require any changes on the application server. Instead, you program the route table with a default route that points to
the firewall's eth2 Elastic Network Interface, using the ENI ID and not the IP address of the firewall's interface.

Figure 22 illustrates outbound traffic flow. Packet address view 1 has its source IP address modified to that of the fire-
wall interface in packet address view 2. Note that the Compute subnet route table has a default route pointing to the
firewall.

Figure 22 Outbound traffic inspection using Source NAT

[Figure: a web server at 10.1.1.100 in AZ-a reaches winupdate.ms through the firewall (eth2 10.1.1.10, eth1 10.1.10.10) and the IGW.]

Packet address views:
  1  Source: 10.1.1.100    Destination: winupdate.ms
  2  Source: 10.1.10.10    Destination: winupdate.ms
  3  Source: 52.1.7.85     Destination: winupdate.ms
  4  Source: winupdate.ms  Destination: 52.1.7.85
  5  Source: winupdate.ms  Destination: 10.1.10.10
  6  Source: winupdate.ms  Destination: 10.1.1.100

IGW NAT table: public 52.1.7.85 <-> private 10.1.10.10
Compute subnet route table: 10.1.0.0/16 -> Local (Active); 0.0.0.0/0 -> fw-e2 (Active)
Public subnet route table: 10.1.0.0/16 -> Local (Active); 0.0.0.0/0 -> igw (Active)

East/West Traffic Inspection


East/West traffic is a traffic flow between instances within the same VPC and can be a flow within the same subnet.
Network segmentation using route policy between subnets is a well-understood principle in network design. You group
services of a similar security policy within a subnet and use the default gateway as a policy-enforcement point for


traffic entering and exiting the subnet. AWS VPC networking provides direct reachability between all instances within
the VPC, regardless of subnet association; there are no native AWS capabilities for using route-policy segmentation
between subnets within your VPC. All instances within a VPC can reach any other instance within the same VPC,
regardless of route tables. You can use network ACLs to permit or deny traffic based on IP address and port between
subnets, but this does not provide the visibility and policy enforcement that the VM-Series provides. For those wishing
to use the rich visibility and control of firewalls for East/West traffic within a single VPC, there are two options:

• For every instance within a subnet, configure routing information on the instance to forward traffic to the fire-
wall for inspection.

• Use a floating NAT configuration on the firewall to direct intra-VPC subnets through the firewall, and use net-
work ACLs to prevent direct connectivity.

You can use instance routing policy to forward traffic for firewall inspection by configuring the instance default gateway
to point to the firewall. An instance default gateway configuration can be automated at the time of deployment via the
User Data field or set manually after deployment. Although this approach accomplishes the desired East/West traffic
inspection, the ability of the instance administrator to change the default gateway at any time and bypass firewall
inspection might be considered a weakness of this approach. For inspection of traffic between two instances within the
same VPC, both instances must use the firewall as their default gateway in order to ensure complete visibility; otherwise,
traffic through the firewall is asymmetric.
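
A hedged boto3 sketch of the user-data approach follows; the AMI, subnet ID, and firewall address are placeholders, and depending on the operating system image the route change may also need to be made persistent across DHCP lease renewals.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Shell script delivered via User Data at launch; points the instance default
# route at the firewall's eth2 address (placeholder) instead of the VPC router.
user_data = """#!/bin/bash
ip route replace default via 10.1.1.10
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder application AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0bbb",            # placeholder compute subnet in AZ-a
    UserData=user_data,
)
```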

VPC route tables always provide direct network reachability to all subnets within the VPC; it is therefore not possible
to use VPC routing to direct intra-VPC traffic to the firewall for inspection. You can only modify route table behavior
for networks outside the local VPC CIDR block.

Using intra-VPC routing policy to forward traffic for firewall inspection requires a floating NAT configuration. For both
source and destination servers, floating NAT creates a virtualized IP address subnet that resides outside the local CIDR
block, thus permitting route table policy to forward all floating NAT IP addresses to the firewall for inspection. The
design pattern for floating NAT is identical to the scenario where you want to interconnect two networks that have
overlapping IP address space; for overlapping subnets, you must use Source and Destination NAT. Network ACLs are
used between the two local subnets in order to prevent direct instance connectivity. For each desired destination virtual
server, floating NAT requires a NAT rule for both source and destination on the firewall, so it is a suitable design pattern
only for smaller deployments.

Note

To provide a less complex way to inspect East/West traffic, consider using multiple
VPCs, grouping instances with similar security policy in a VPC, and inspecting inter-VPC
traffic in a Transit VPC. The Transit VPC design model presented later in this guide pro-
vides a scalable design for East/West and outbound traffic inspection and control.


Connections to On-Premises Networks


When migrating workloads to your cloud resources, it is convenient to have direct connectivity between the privately
addressed servers in your data center and the private IP addresses of your VPC-based servers. Direct connectivity also
helps network and system administrators reach resources that do not have public IP access. You should still take care
to control access between the on-premises networks and the servers and databases in the cloud; at least one end of the
connection should be protected with firewalls.

To provide VPN access to your VPC, you can terminate the VPN on the firewall in the VPC or on an AWS virtual private
gateway (VGW). When creating a VPN connection, you have the option of running a dynamic routing protocol (BGP)
over the tunnels or using static routes.

On-Premises Firewall to VGW

When using an on-premises firewall (customer gateway) to connect to the VGW, after creating the VPN connection in
the AWS guided workflow, you can download a template configuration in CLI format from the same AWS VPN
configuration page; the template contains the IP addresses, IPsec tunnel security parameters, and BGP configuration for
the customer gateway (CGW) device.
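
A boto3 sketch of creating the customer gateway, VGW, and VPN connection follows; the public IP, ASN, and VPC ID are placeholders. The response includes the downloadable tunnel configuration referenced above.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder values: the public IP of the on-premises firewall and its BGP ASN.
cgw = ec2.create_customer_gateway(Type="ipsec.1", PublicIp="203.0.113.10", BgpAsn=65010)
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
ec2.attach_vpn_gateway(VpcId="vpc-0123456789abcdef0",
                       VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"])

vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
)
# The CustomerGatewayConfiguration field of the response contains the tunnel
# parameters (addresses, IPsec settings, BGP details) for the CGW device.
print(vpn["VpnConnection"]["VpnConnectionId"])
```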

Figure 23 On-premises VPN customer gateway

[Figure: an on-premises firewall (customer gateway) and router connect over VPN to the VGW of the AWS VPC, which contains hosts in subnets across AZ-a and AZ-b.]

The baseline AWS VPN connection configuration template for a Palo Alto Networks VM-Series firewall provides moderate
security for the IPsec tunnel and IKE gateway crypto profiles. The template uses a SHA-1 hash for authentication of both
the IKE gateway and IPsec tunnels; this practice is now considered insecure and has been deprecated for use in SSL
certificates. Table 4 shows the template configuration provided by AWS for both IKE and IPsec crypto on the first line,
and compatible alternatives as a top-down, ordered list from higher to lower security for IKE and IPsec crypto profiles on
a VM-Series firewall running PAN-OS 8.1. For the IPsec Diffie-Hellman group, only a single compatible option can be
selected within the firewall configuration. The VGW accepts any of the options in Table 4 for IKE crypto


settings and any of the options in Table 5 for IPsec crypto settings. Negotiation of crypto profile settings is done at the
time of tunnel establishment, and there is no explicit configuration of VGW crypto settings. You can change your firewall
crypto settings at any time, and the new settings re-establish the IPsec tunnels with the compatible new security settings.
The firewall prefers options in top-down order, so it uses the more secure options first (as shown in Table 4 and Table 5),
or you can choose any single option you prefer.

Caution

Use of Galois/Counter Mode options for IPsec crypto profile (aes-128-gcm or aes-256-
gcm) in your firewall configuration prevents VPN tunnels from establishing with your
VPN gateway.

Table 4 Firewall and VGW compatible IKE crypto profile

IKE crypto profile    Diffie-Hellman   Hash      Encryption    Protocol
AWS template          Group2           SHA-1     AES-128-CBC   IKE-V1
Firewall compatible   Group14          SHA-256   AES-256-CBC   IKE-V1
Firewall compatible   Group2           SHA-1     AES-128-CBC   IKE-V1

Table 5 Firewall and VGW compatible IPsec crypto profile

IPsec crypto profile   Diffie-Hellman   Hash      Encryption
AWS template           Group2           SHA-1     AES-128-CBC
Firewall compatible    Group14          SHA-256   AES-256-CBC
Firewall compatible    Group5           —         AES-128-CBC
Firewall compatible    Group2           —         —

AWS allows you to manually assign IP subnets to VPN connection tunnels, or you can have AWS automatically assign
them. Manual assignment is recommended in order to prevent subnet collisions on the firewall interfaces, as described
below. When using manual assignment, you must use addressing in the link-local reserved range 169.254.0.0/16
(RFC 3927), typically with a /30 mask. When using manual assignment, you should keep track of allocated addresses and
avoid duplicate address ranges on the same firewall.
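
A small Python sketch of tracking manual /30 tunnel-subnet assignments from the link-local range follows; the already-allocated subnets are examples, and AWS reserves a few ranges within 169.254.0.0/16 that you should also exclude in practice.

```python
import ipaddress

# Track manually assigned tunnel subnets so new VPN connections do not collide.
link_local = ipaddress.ip_network("169.254.0.0/16")
allocated = {
    ipaddress.ip_network("169.254.10.0/30"),   # existing tunnel 1 (example)
    ipaddress.ip_network("169.254.10.4/30"),   # existing tunnel 2 (example)
}

def next_free_tunnel_subnet():
    """Return the first /30 in the link-local range not already allocated."""
    for candidate in link_local.subnets(new_prefix=30):
        if candidate not in allocated:
            allocated.add(candidate)
            return candidate
    raise RuntimeError("no free /30 tunnel subnets remain")

subnet = next_free_tunnel_subnet()
hosts = list(subnet.hosts())
print("VGW side:", hosts[0], "firewall (CGW) side:", hosts[1])
```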

Take care if you choose automatic assignment, because AWS randomly assigns IP subnets to VPN connection tunnels
from the 256 available /30 subnets of the link-local reserved range 169.254.0.0/16 (RFC 3927). For each tunnel subnet,
the first available address is assigned to the VGW side, and the second available address is assigned to the CGW. Because
the subnet assignments are random, an increasing number of VPN connections from subscriber VGWs results in a greater
likelihood of a tunnel-subnet collision on the terminating firewall (CGW). AWS guidance indicates that at 15 VPN
connections (two tunnels each), the probability of any two tunnel subnets colliding is 50%, and at 25 tunnels


the probability of a subnet collision increases to 85%. The VM-Series firewall does not support the assignment of the
same IP address on multiple interfaces, so overlapping tunnel subnets must be terminated on different firewalls. Because
tunnel subnet assignment is random, if you experience a tunnel-subnet collision during creation of a new VPN connection,
you can delete the VPN connection and create a new one. The likelihood of subnet collisions continues to increase as the
number of VPN connections increases.

VPN to VGW Traffic Flow

When you use a single VPC, the challenge with using the VGW is that there is no easy way to force traffic coming into
the VPC through the firewall in the VPC for protection and visibility of traffic destined for the servers. Inbound packets
entering through the VGW and destined to 10.1.0.0/16 addresses in the VPC match a route table entry that sends them
directly to the end system, thus bypassing the firewall. If the VPN gateway at the on-premises location is not a firewall,
then you have uncontrolled access from the on-premises network to the hosts in the VPC, as shown in Figure 24.

Figure 24 VPN connections with VGW in VPC bypass firewall

[Figure: an on-premises host at 172.16.20.101 reaches the web server at 10.1.1.100 in AZ-a through the VGW, bypassing the firewall (eth1 10.1.10.10, eth2 10.1.1.10).]

Compute subnet route table:
  10.1.0.0/16    ->  Local   (Active, not propagated)
  172.16.0.0/16  ->  vgw     (Active, propagated)
  0.0.0.0/0      ->  fw-e2   (Active, propagated)


On-Premises VPN to VPC-Based Firewall Traffic Flow

The preferred design uses VPN connections between your VPC-based firewall(s) and on-premises networks. With this
design, all inbound VPN traffic can be inspected and controlled for correct behavior, as shown in Figure 25. This design
removes the risk that the on-premises VPN peer is not a firewall-protected connection. You can use static or dynamic
routing to populate the route tables on the firewalls. The application and other endpoint servers in the VPC can
remain pointed to the firewalls for their default gateway.

Figure 25 VPN connections to the VM-Series firewalls in the VPC

[Figure: the on-premises network (172.16.20.101) connects over VPN directly to the VM-Series firewall (eth1 10.1.10.10, eth2 10.1.1.10) in AZ-a, which inspects traffic to the web server at 10.1.1.100.]

Compute subnet route table:
  10.1.0.0/16  ->  Local   (Active, not propagated)
  0.0.0.0/0    ->  fw-e2   (Active, propagated)

For single VPC and proof of concept designs, this peering method suffices. As environments scale to multiple VPCs,
you should consider a central pair of firewalls to provide access in and out of the VPCs to reduce complexity. This is
discussed further in the “Design Models” section of this document.

Resiliency
Traditional resilient firewall design uses two firewalls in a high-availability configuration. In a high-availability
configuration, a pair of firewalls shares configuration and state information, which allows the second firewall to take
over for the first when a failure occurs. Although you can configure high availability so that both firewalls pass traffic,
in the majority of deployments the firewalls operate as an active/passive pair where only one firewall passes traffic at a
time. This design ensures that traffic going through the firewalls is symmetric, thereby enabling the firewalls to analyze
both directions of a connection. The VM-Series on AWS does support stateful high availability in active/passive mode
for traditional data center-style deployments done in the cloud; however, both VM-Series firewalls must exist in the same
Availability Zone, and it can take 60 seconds or longer for the failover to take place, due to infrastructure interactions
beyond the control of the firewall.

Unlike traditional data center implementations, VM-Series resiliency for cloud-based applications is primarily achieved
through the use of native cloud services. The benefits of configuring resiliency through native public cloud services in-
stead of firewall high availability are faster failover and the ability to scale out the firewalls as needed. In a public cloud
resiliency model, configuration and state information is not shared between firewalls. Applications typically deployed in


public cloud infrastructure, such as web- and service-oriented architectures, do not rely on the network infrastructure
to track session state. Instead, they track session data within the application infrastructure, which allows the application
to scale out and be resilient independent of the network infrastructure.

The AWS resources and services used to achieve resiliency for the application and firewall include:

• Availability Zones—Ensure that a failure or maintenance event in an AWS VPC does not affect all VM-Series
firewalls at the same time.

• Load balancers—Distribute traffic across two or more independent firewalls that are members of a common
availability set. Every firewall in the load balancer's pool of resources actively passes traffic, allowing firewall
capacity to scale out as required. The load balancer monitors the availability of the firewalls through TCP or HTTP
probes and updates the pool of resources as necessary.

AWS load balancers use Availability Zones for resilient operation as follows:

• The load-balancer function front end connection can live in multiple zones. This prevents an outage of one zone
from taking complete web server access down.

• The load-balancer function can address targets in multiple Availability Zones. This allows upgrades and migra-
tions to happen without shutting down complete sections.

Resiliency for Inbound Application Traffic

You can implement resiliency in AWS for firewall-protected inbound web server applications from the internet through
the use of an AWS load-balancer sandwich design. This design uses a resilient, public-facing load balancer and a second
resilient load balancer on the private side of the firewalls.


Traffic routing functions, such as domain name servers and firewall next-hop configurations, use FQDNs to resolve the load-balancer IP addresses rather than hard-coding them. Using FQDNs allows the load balancers to scale up or down and to remain resilient; if one load balancer goes down, the other can continue feeding sessions to the web server farm.
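
The effect of using an FQDN instead of a fixed address is easy to see with a quick resolution check. The snippet below is only a sketch; the FQDN is the example name used in this guide and will not resolve outside an environment where it is defined, so substitute the DNS name assigned to your own load balancer.

```python
import socket

LB_FQDN = "refarch.lb.aws.com"  # example FQDN from this guide; substitute your load balancer's DNS name

# Resolving the FQDN can return one address per Availability Zone; anything that
# hard-codes a single IP address loses this resilience when a load balancer is
# replaced or scales.
addresses = sorted({info[4][0] for info in socket.getaddrinfo(LB_FQDN, 443, proto=socket.IPPROTO_TCP)})
print(f"{LB_FQDN} currently resolves to: {addresses}")
```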

To analyze the pieces of the load-balancer design, you can walk through the steps in Figure 26; a configuration sketch for the DNS step follows the list. This scenario illustrates a user accessing an application, www.refarch.lb.aws.com, located on the web servers:

• #1 in the figure shows the URL request from the end user being directed toward www.refarch.lb.aws.com. This
request is sent to a DNS. In this case, the AWS Route 53 cloud-based DNS resolves to an A record for the Pub-
lic load balancers. The DNS resolves to one of two public IP addresses for the Public load balancers, #2 in the
figure. There are two IP addresses, one for the load balancer in each Availability Zone that provides resilience for
the Public load balancer.

• The Public load balancer is programmed with two targets for the next hop. The two targets, #3 in the figure, are the private IP addresses of eth1 (ethernet1/1), the public-facing interface on each of the VM-Series firewalls. IP addresses 10.1.10.10 and 10.1.110.10 provide two paths for the incoming connection. The Public load balancer translates the packet's destination address to one of the two VM-Series target addresses and translates the source IP address to the private IP address of the Public load balancer so that traffic returns to it on the return flow.

• Each firewall is programmed to translate incoming HTTP requests destined for its eth1 IP address with a new source IP address and a new destination IP address. The source IP address is changed to the eth2 (ethernet1/2) IP address of that firewall, #4 in the figure, so that the return response traffic from the web server travels back through the same firewall to maintain state and translation tables. The destination IP address is changed to the next hop, which is the IP address of the Internal load balancer, #5 in the figure. The firewall learns the IP addresses of the redundant Internal load balancers by sending a DNS request for the FQDN assigned to the load balancers. The request returns one of two IP addresses, one for each of the Internal load balancers, one in each Availability Zone.

• The Internal load balancers are programmed with web servers in both Availability Zones and perform round-robin load balancing across the active servers in the target list, #6 in the figure.
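
Step #1 relies on Route 53 resolving the application name to the Public load balancer rather than to a fixed address. One way to express this is an alias record, sketched below with boto3; the hosted zone ID, the load balancer's canonical zone ID, and its DNS name are placeholders for illustration, not values from this design.

```python
import boto3

route53 = boto3.client("route53")

# Alias the application FQDN to the Public load balancer so that DNS answers track
# the load balancer's own resilient, multi-AZ addresses. IDs and names are placeholders.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000000EXAMPLE",  # hosted zone for lb.aws.com (placeholder)
    ChangeBatch={
        "Comment": "Point the application name at the Public load balancer",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "refarch.lb.aws.com",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": "Z00000000000LBEXAMPLE",  # load balancer's canonical zone ID (placeholder)
                        "DNSName": "public-lb-123456789.us-west-2.elb.amazonaws.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            }
        ],
    },
)
```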

Figure 26 Load-balancer sandwich of firewall

The redundant load-balancer design uses probes to verify that each path and its end systems are operational. This liveness check also confirms that the return path is operational, because the Public load balancer probes through the firewall to the Internal load balancer. The Internal load balancers probe the endpoint web servers in their target group to make sure they are operational.

Figure 27 shows the network address translations as the packet moves from the outside to the inside and then returns.

Figure 27 Load balancer design with address translation

The figure traces the source and destination address translations at each hop (internet client, Public load balancer, firewall, Internal load balancer, and web server) in both the request and return directions.

Resiliency for Outbound and On-premises Bound Traffic

The web server subnet route table is programmed with a default route that points to the firewall eth2 (ethernet1/2) interface; thus, traffic following the instance's default route transits the firewall. The firewall can be programmed to protect the outbound traffic flows and the threats in the associated return traffic. The web servers in AZ-a are programmed to exit via the firewall in AZ-a, and the web servers in AZ-b via the firewall in AZ-b. This effectively maintains resilience on an Availability Zone basis.
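
Programming the web server subnets this way comes down to two pieces of VPC plumbing: the firewall's eth2 network interface must have its source/destination check disabled so it can forward traffic it does not own, and each web server subnet's route table needs a default route pointing at that interface. A hedged boto3 sketch, with the interface and route table IDs as placeholders, looks like this:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

FW_ETH2_ENI = "eni-0aaaaaaaaaaaaaaaa"      # firewall eth2 (10.1.1.10) network interface, placeholder ID
WEB_ROUTE_TABLE = "rtb-0bbbbbbbbbbbbbbbb"  # route table of the AZ-a web server subnet, placeholder ID

# The firewall forwards traffic for addresses it does not own, so the ENI's
# source/destination check must be disabled.
ec2.modify_network_interface_attribute(
    NetworkInterfaceId=FW_ETH2_ENI,
    SourceDestCheck={"Value": False},
)

# Default route for the web server subnet pointing at the firewall's eth2 interface.
ec2.create_route(
    RouteTableId=WEB_ROUTE_TABLE,
    DestinationCidrBlock="0.0.0.0/0",
    NetworkInterfaceId=FW_ETH2_ENI,
)
```

Repeat the same pattern in AZ-b with that zone's firewall interface and route table to keep the per-zone resilience described above.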

To get on-premises traffic to and from the web servers, the VPN links terminate on the firewalls in each Availability Zone. This design has the same resilience as outbound traffic, because the web servers use the default route to reach anything other than devices within the VPC. The Transit VPC design, covered later in this guide, offers a fully resilient outbound and East/West design.

Figure 28 Outbound and on-premises flow


Design Models
Now that you understand the basic AWS components and how you can use your VM-Series firewalls, you next use them to build your secure infrastructure. The design models in this section offer example architectures that you can use to secure your applications in AWS. The design models build upon the initial Single VPC design, as you would likely do in any organization: build the first VPC as a proof of concept, and as your environment grows, move to a more modular design where VPCs are purpose-built for the application tiers they house.

The design models presented here differ in how they provide resiliency, scale, and services for the design. The designs
can be combined, as well, offering services like load balancing for the web front-end VPCs and common outbound
services in a separate module. The design models in this reference design are:

• Single VPC—Proof of Concept or small-scale multipurpose design

• Transit VPC—General purpose scalability with resilience and natural support for inbound, outbound, and East/
West traffic

CHOOSING A DESIGN MODEL


When choosing a design model, consider the following factors:

• Scale—Is this deployment an initial, smaller-scale move into the cloud, with many services in a proof of concept? Will the application load need to grow quickly and in a modular fashion? Are there requirements for inbound, outbound, and East/West flows? The Single VPC provides inbound and outbound traffic control and scale on a per-Availability-Zone basis; however, East/West control is more limited. The Transit VPC offers the benefits of a modular and scalable design, where inbound loads can scale in the spokes, and East/West and outbound traffic can be controlled and scaled in a common transit block.

• Resilience and availability—What are the application requirements for availability? The Single VPC provides a
robust design with load balancers to spread the load, detect outages, and route traffic to operational firewalls
and hosts. The Transit VPC complements the Single VPC design by providing a fully resilient outbound and East/West design, and the Single VPC can be a spoke of the Transit VPC. The Transit VPC is also able to take advantage of the
AWS VPN Gateway resilience in the spoke VPCs without sacrificing visibility and control of the traffic entering
and leaving the VPC.

• Complexity—Understanding application flows and how to scale and troubleshoot is important to the design.
Complex designs that force developers to use non-standard patterns can result in errors. Placing all services in
a single compute domain (VPC) might seem efficient but could be costly in design complexity. Beyond the initial
implementation, consider the Transit VPC design with inbound scalability in the spoke VPCs for a more intuitive
and scalable design.

SINGLE VPC MODEL


A single standalone VPC might be appropriate for small AWS deployments that:

• Provide the initial move to the cloud for an organization.

• Require a starting deployment that they can build on for a multi-VPC design.

• Do not require geographic diversity provided by multiple regions.

For application high availability, the architecture consists of a pair of VM-Series firewalls, one in each of two Availability Zones within your VPC region. The firewalls are sandwiched between AWS load balancers for resilient inbound web application traffic and the return traffic. The firewalls are capable of inbound and outbound traffic inspection in a way that is easy to support and transparent to DevOps teams.

You can use security groups and network access control lists to further restrict traffic to individual instances and between subnets. This design pattern provides the foundation for other architectures in this guide.

Figure 29 Single VPC design model


Inbound Traffic
For inbound web application traffic, a public-facing load balancer is programmed in the path ahead of the two firewalls. The load balancers are programmed to live in both Availability Zones and have reachability to the internet gateway service. The cloud or on-premises DNS responsible for the applications housed inside the VPC uses FQDNs to resolve the load balancers' public IP addresses, rather than hard-coding them, in order to provide failover capabilities. If one public load balancer fails, it can be taken out of service, and if a load balancer changes IP address, the change is discovered.

An internal load balancer is programmed in the path between the firewall’s inside interface and the pool of front-end
web servers in the application server farm. The internal load balancers are also programmed in both availability zones,
use the IP addresses in the web server address space, and can have targets (application web servers) in multiple Avail-
ability Zones.

The public load balancers are programmed with HTTP and/or HTTPS targets, which are the private IP addresses of each VM-Series firewall's public interface, eth1 (ethernet1/1). The firewalls are subsequently programmed to perform destination NAT translation of the incoming HTTP/HTTPS sessions to an IP address on the internal load balancers. With Application load balancers as the inside load balancers, the firewalls can use FQDN to dynamically learn the next hop for the address translation. The firewall is also programmed to perform a source NAT translation to the firewall's eth2 (ethernet1/2) IP address so that return traffic is destined to the same firewall.

Health probes from the public load balancer pass through the firewall to the internal load balancer, ensuring that the path through the firewall is operational. The internal load balancer performs health probes of the servers in the web server farm, and if it has no operational servers, it does not respond to the probes from the public load balancer. If the internal load balancer does not respond, that path is taken out of service.

The firewall security policy for inbound traffic should allow only those applications required (whitelisting). Firewall
security profiles should be programmed to inspect for malware and protect from vulnerabilities for traffic entering the
network allowed in the security policies.

Outbound Traffic
Program the web server route table so that the default route points to the firewall eth2 (ethernet1/2) interface. Program the firewall to protect the outbound traffic flows and the threats in the associated return traffic. The web servers in Availability Zone-a (AZ-a) are programmed to exit via the firewall in AZ-a, and the web servers in AZ-b via the firewall in AZ-b. This effectively maintains resilience for outbound and return traffic on an Availability Zone basis.

The firewalls apply source NAT to the outbound traffic, replacing the web server's source IP address with the private IP address of the firewall's public interface, eth1 (ethernet1/1). As the traffic crosses the IGW, the private source IP address is translated to the assigned AWS-owned public IP address. This ensures that the return traffic for the outbound session returns to the correct firewall.
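
The AWS-owned public IP address used by the IGW for that translation is the Elastic IP associated with the firewall's eth1 interface. A minimal boto3 sketch of that association, with the interface ID as a placeholder, is shown below:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

FW_ETH1_ENI = "eni-0ccccccccccccccccc"  # firewall eth1 (10.1.10.10) network interface, placeholder ID

# Allocate an Elastic IP and bind it to the firewall's public-facing interface so
# the IGW has a stable public address to translate the firewall's private IP to.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=eip["AllocationId"],
    NetworkInterfaceId=FW_ETH1_ENI,
    PrivateIpAddress="10.1.10.10",
)
print(f"Outbound traffic through this firewall now appears as {eip['PublicIp']}")
```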

The firewall outbound-traffic policy typically includes UDP/123 (NTP), TCP/80 (HTTP), and TCP/443 (HTTPS). Set the policy up with application-based rules using default ports for increased protection. DNS is not needed, because virtual machines communicate with AWS name services directly through the AWS network fabric. You should use positive
security policies (whitelisting) to enable only the applications required. Security profiles prevent known malware and
vulnerabilities from entering the network in return traffic allowed in the security policy. URL filtering, file blocking, and
data filtering protect against data exfiltration.

East/West Traffic
The firewalls can be set up to inspect East/West traffic by using double NAT and programming applications to use NAT addresses inside the VPC. However, complexity increases as subnets are added, and this approach can cause friction with the DevOps teams, who must customize IP next-hop addresses for applications instead of using the assigned VPC addressing. Network ACLs can be used to restrict traffic between subnets, but they are limited to standard Layer 4 ports for controlling the applications flowing between tiers. The Transit design model, covered later in this guide, offers a more elegant and scalable way to direct East/West traffic through the firewall for visibility and control.

Backhaul or Management Traffic


To get traffic from on-premises connections to the internal servers, the VPN connections from on-premises gateways connect to the firewalls in AZ-a and AZ-b. Depending on the number of on-premises gateways, a single gateway could connect to both VPC-based firewalls, an HA pair of firewalls could connect to both VPC-based firewalls, or you could program a-to-a and b-to-b VPN connections. The Single VPC deployment guide uses a single on-premises HA pair terminating on each VM-Series firewall in the VPC. The VPN peers are programmed to the eth1 public IP addresses, but the VPN connection tunnel interfaces are configured to terminate in a VPN security zone so that policy for VPN connectivity can be configured differently than for outbound public network traffic. Security policies on the firewalls allow only the required applications through the dedicated connection between on-premises resources and the VPN security zone.

Figure 30 VPN connection to Single VPC

Traffic from the private-IP-addressed web servers to the firewalls has the same resilience characteristics as the out-
bound traffic flows; servers in AZ-a use the firewall in AZ-a via the default route on the server pointing to the firewall,
and servers in AZ-b use the firewall in AZ-b as their default gateway. Static or dynamic route tables on the firewalls
point to the on-premises traffic destinations.

TRANSIT VPC MODEL


When your AWS infrastructure grows in the number of VPCs, whether for scalability or to segment East/West traffic between VPCs, the question of how to secure that infrastructure arises. One option is to continue using the Single VPC design with VM-Series firewalls in each VPC; however, the cost and management complexity of the required firewalls grow linearly with the number of VPCs. A popular alternative is the Transit VPC architecture. This design uses a hub-and-spoke architecture consisting of a central Transit VPC hub (which contains firewalls) and Subscriber VPCs as spokes. The Transit VPC
provides:

• Common security policy for outbound traffic of Subscriber VPCs.

• An easy way to insert East/West segmentation between Subscriber VPCs.

• A fully resilient traffic path for outbound and East/West traffic with dynamic routing.

The architecture provides VM-Series security capabilities on AWS while minimizing the cost and complexity of deploying firewalls in every VPC. You can place common infrastructure services such as DNS, identity, and authentication in the Transit VPC; however, most organizations choose to keep the Transit VPC less complicated and locate these services in a Services VPC, which is connected to the Transit VPC for ubiquitous access.

A Transit VPC easily accommodates organizations that want to distribute VPC ownership to different business functions (engineering, sales, marketing, etc.) while maintaining common infrastructure and security policies, as well as organizations that wish to consolidate ad hoc VPC infrastructure within a common infrastructure.
Transit VPC architecture provides a scalable infrastructure for East/West inspection of traffic between VPCs that mini-
mizes friction with DevOps. Within their VPC, the business or DevOps can move rapidly with changes, and when trans-
actions need to go between VPCs or to the internet, common or per-VPC policies can be applied in the Transit VPC.

You can automate Transit VPC security policy assignment to endpoint IP addresses by assigning tags to AWS applica-
tion instances (VMs) and using Panorama to monitor the VMs in a VPC. Panorama maps the assigned tags to dynamic
address groups that are tied to the security policy, providing a secure, scalable, and automated architecture.
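
The only AWS-side requirement for this automation is that instances carry the tags Panorama is watching. A hedged boto3 sketch is below; the instance ID and the tag key/value convention are illustrative, and the matching dynamic address group criteria must be configured in Panorama to correspond.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Tag an application instance so Panorama's VM Monitoring can map it into a
# dynamic address group (for example, one matching tier=web). The instance ID
# and tag names are placeholders.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[
        {"Key": "tier", "Value": "web"},
        {"Key": "app", "Value": "refarch-frontend"},
    ],
)
```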

Figure 31 Panorama VM Monitoring for dynamic address group mapping

The figure shows Panorama's VM Monitoring reading AWS compute resources over the XML API, mapping instance tags into dynamic address group definitions (address group assignment and security policy settings), and pushing the learned group membership to the firewalls for policy enforcement. The example security policy allows Public Web Servers to DB Servers on MSSQL-DB, Admin Servers to Sharepoint Servers on MS-RDP, and Sharepoint Servers to DB Servers on SSH.

The Transit VPC builds upon the Single VPC, using the same design of two VM-Series firewalls, one in each of two Availability Zones. Subscriber VPCs are connected via an IPsec overlay network consisting of a redundant pair of VGW connections (in each VPC) that connect to each VM-Series firewall within the Transit VPC. Subscriber VPN tunnels terminate in the Subscr security zone. Dynamic routing using BGP provides fault-tolerant connectivity between the Transit VPC and the Subscriber VPCs.

Outbound traffic uses Source NAT as in the Single VPC design, and East/West traffic between VPCs passes through
the Transit VPC firewalls for policy enforcement.

Figure 32 Transit VPC design model

Differences from the Single VPC design model:

• VGW and BGP routing—The Subscriber VPCs communicate over IPsec VPN tunnels to the Transit VPC firewalls. The VGW in each VPC originates the VPN tunnels and connects to each Transit firewall. BGP is used to announce VPC routes to the Transit VPC and to learn the default route from the Transit VPC firewalls. Each VGW has a BGP-over-IPsec peering session to one firewall in each Availability Zone.

• Inbound—Transit VPC does not use load balancers for inbound traffic in this design. Inbound traffic on the
Transit VPC can be designed to reach any VPC behind the Transit VPC; however, it is preferred that large scale
public-facing inbound web application firewalls and front ends be deployed in the spoke VPC with the applica-
tion front end and use the load-balancer sandwich design. In this way, the Single VPC can remain the template
design for VPCs hosting web front-ends, and the Transit VPC is used for outbound and East/West traffic for all
VPCs. In this design, the firewall load can be distributed; Transit firewalls can scale to handle more East/West
and outbound flows.

• Outbound—Outbound traffic from all VPCs uses the VPN links to the Transit VPC for common and resilient outbound access.

• East/West—This traffic follows routing to the Transit VPC, where security policy can control VPC-to-VPC traffic
for East/West.

• Backhaul—Backhaul connections to on-premises networks are delivered to the Transit VPC for shared service access to the associated VPCs.

VGW and BGP Routing


The Transit VPC design relies on BGP routing to provide a resilient and predictable traffic flow between the Subscriber VPCs and the Transit VPC firewalls. The Subscriber VGWs' implementation of BGP evaluates several criteria for best-path selection. After a path is chosen, all traffic for a given network prefix uses this single path; the VGW's BGP does not support equal-cost multipath (ECMP) routing, so only one destination route is installed, and the other link is in standby. This behavior is most evident for outbound internet traffic from a Subscriber VPC, because all internet traffic takes a single path regardless of connectivity. You can influence the path selected to better distribute Subscriber VPC traffic across each VM-Series firewall within the Transit VPC; however, most designs favor one route for all traffic over one Transit firewall, with the ability to fail over to the other path and firewall in the event of an outage or scheduled maintenance.

Note

The BGP timers that the AWS VGW uses for the keepalive interval and hold time are set to 10 and 30 seconds, respectively. This provides approximately 30-second failover for the transit network versus the default BGP hold time of 180 seconds. The BGP timers on the firewalls are set to match the AWS VGW settings.

Intentionally biasing all traffic to follow the same route to the Transit VPC provides a measurable way to predict when the overall load of the existing Transit firewall will reach its limit and require you to build another for scale; scaling is discussed later in this guide. Another benefit of deterministic traffic flow through the Transit VPC firewalls is that it eliminates the need to use NAT on East/West traffic to ensure that return traffic comes through the same firewall(s). The Transit VPC design follows an active/standby model in which, through BGP path selection, all firewalls in Availability Zone-a are the preferred path and all firewalls in Availability Zone-b are the backup path.

BGP Path Selection is a deterministic process based on an ordered list of criteria. The list is evaluated top-down un-
til only one “best” path is chosen and installed into the VGW routing table. The VGW uses the following list for path
selection criteria:

• Longest prefix length

• Shortest AS Path length

• Path origin

• Lowest multi-exit discriminator (MED)

• Lowest peer ID (IP address)

The VPN connection templates provided by AWS leave the first four criteria identical for every configuration, so the default behavior is for path selection to choose the BGP peer (firewall) with the lowest IP address. The peer IP addresses are those assigned to the firewall tunnel interfaces and specified by AWS. The BGP peer IP address is the least significant decision criterion and one you cannot change to influence path selection. If the default peer IP address is left to determine path selection, you end up with a random distribution of traffic paths, which is undesirable. There are ways to influence the path selection and achieve a predictable topology.

Prefix Length

Network efficiency and loop avoidance require that hosts and routers always choose the longest network prefix for destination path selection. Prefix length can provide information regarding desired routing policy but is most often related to actual network topology. This architecture uses prefix length and BGP AS Path to ensure symmetric path selection, in the absence of multipath support on the VGW, for connectivity between the Transit VPC and the Subscriber VPCs.

The easiest way to consider this is to think of the two Availability Zones in your Transit VPC as separate BGP routing domains within a single autonomous system. VM-Series firewalls within the Transit VPC should announce only directly connected routes and a default route. You should avoid BGP peering between firewalls within your Transit VPC, because this creates an asymmetric path for traffic arriving on the tunnel that the VGW is not using for outbound traffic.

AS Path Length

Autonomous system (AS) path length is the number of autonomous systems that a route must traverse from its origin. Because your BGP AS is directly adjacent to that of the VGW, the AS path length is always 1 unless the firewall modifies it when advertising the route. If you examine the AS path, you see your private AS associated with all routes you announce to the VGW, for example {65000}. Prepending the AS path adds the local AS as many times as the configuration indicates; a prepend of 3 results in {65000, 65000, 65000, 65000}. AS path length is modified as part of an export rule under the BGP router configuration in the firewall. There are three configuration tabs for an export rule:

• General tab—Used by: select the BGP peer(s) for which you want to modify exported routes.

• Match tab—AS Path Regular Expression: type the private AS used in your BGP configuration (65000).

• Action tab—AS Path: set Type to Prepend, and type the number of AS entries you wish to prepend (1, 2, 3, etc.).

The Transit VPC reference design uses a unique BGP autonomous system per firewall in the Transit VPC and then applies an AS prepend of 2 on the firewalls in the backup path (Availability Zone-b) and no prepend on the firewalls in the active path (Availability Zone-a).

Path Origin

BGP path origin refers to how a route was originally learned and can take one of three values: incomplete, egp, or igp. An interior gateway protocol (igp) learned route is always preferred over other origins, followed by exterior gateway protocol (egp), and lastly incomplete. By default, AWS VPN connection templates announce routes with incomplete origin. Under the BGP router configuration in the firewall, you can modify path origin in either an export rule or a redistribution rule. The default route (0.0.0.0/0) is advertised to the VGW via a redistribution rule in the firewall, which is the easiest place to modify path selection for the default route. Changing the origin to either igp or egp prefers this path over the one through the other firewall. AWS does not document the use of path origin as part of route selection, so it is left at the default in this design.

Multi-Exit Discriminator

A multi-exit discriminator (MED) is a way for an autonomous system to influence external entities' path selection into a multi-homed autonomous system. MED is a 32-bit number, with a lower number indicating the preferred path. MED is an optional configuration parameter, and a MED of <none> and a MED of 0 are equivalent. Under the BGP router configuration in the firewall, you can modify MED with export rules and redistribution rules. AWS does not document the use of MED as part of route selection; however, the VGW responds to MED values set by the firewalls when announcing routes. Because this design uses a unique BGP AS per firewall in the Transit VPC and a unique BGP AS per VGW, MED is not used for path selection.

Lowest Peer-ID (IP address)

Given the selection criteria discussed above, if there are still multiple candidate paths remaining, the route selection
process of AWS BGP chooses the BGP peer with the lowest IP address. For IPsec tunnels, these are the IP addresses
assigned by AWS. The default behavior for the AWS-provided VPN connection configuration templates uses lowest
peer-ID for route selection.
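
The complete evaluation order can be modeled in a few lines of Python. The sketch below is an illustration only, not AWS code; it simply mirrors the criteria listed above and uses the candidate routes from the worked example in Figure 33, which follows.

```python
from ipaddress import ip_address, ip_network

ORIGIN_RANK = {"igp": 0, "egp": 1, "incomplete": 2}

# Candidate routes as (prefix, AS path, origin, MED, peer ID), taken from the
# worked example in Figure 33.
candidates = [
    ("10.8.0.0/24", ["65000", "7224"], "incomplete", 0, "169.254.49.1"),
    ("10.8.0.0/24", ["65000", "65000", "7224"], "incomplete", 0, "169.254.49.145"),
    ("10.8.0.0/24", ["65000", "7224"], "incomplete", 0, "169.254.58.30"),
    ("10.8.0.0/16", ["65000", "7224"], "incomplete", 0, "169.254.59.10"),
]

def best_path(routes):
    # Longest prefix wins first, then shortest AS path, best origin, lowest MED,
    # and finally the lowest peer ID breaks any remaining tie.
    return min(
        routes,
        key=lambda r: (
            -ip_network(r[0]).prefixlen,
            len(r[1]),
            ORIGIN_RANK[r[2]],
            r[3],
            int(ip_address(r[4])),
        ),
    )

print(best_path(candidates))  # selects the /24 route via peer 169.254.49.1
```

Prepending the AS path on the backup-path firewalls, as described in the preceding sections, lengthens their entries in this comparison and keeps the Availability Zone-a firewalls as the preferred path.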

Figure 33 BGP path selection process

The worked example begins with four candidate routes: 10.8.0.0/24 via peer 169.254.49.1, 10.8.0.0/24 via peer 169.254.49.145 with a prepended AS path (65000, 65000, 7224), 10.8.0.0/24 via peer 169.254.58.30, and 10.8.0.0/16 via peer 169.254.59.10, all with incomplete origin and a MED of 0. Longest prefix eliminates the /16, AS path length eliminates the prepended path via 169.254.49.145, path origin and MED do not break the remaining tie, and lowest peer-ID selects the route via 169.254.49.1.

Inbound Traffic
The Transit VPC design model prefers to host inbound web application firewalls and load balancers in the Subscriber VPC with the application front end, using the load-balancer sandwich design. In this way, the Single VPC can remain the template design for VPCs hosting web front ends, with the associated firewall pair in that VPC providing inbound traffic protection, while the Transit VPC is used for outbound and East/West traffic for those VPCs and all others.

Figure 34 Transit VPC with inbound in Subscriber VPC

If required, inbound traffic on the Transit VPC can be designed to reach any VPC behind the Transit VPC. For inbound traffic via the Transit VPC, the front-end private IP address on the Transit firewall is mapped to a public IP address via the IGW. The IGW provides destination NAT as usual to a firewall secondary IP address. When the traffic is received by the firewall, both source and destination NAT are applied. Source NAT is required in this scenario in order to ensure that the return traffic remains symmetric through the receiving firewall. If you did not apply source NAT, the VGW routing table in the VPC might choose the other firewall for its default route back to the internet, and the session would fail due to asymmetry. The return traffic from the Subscriber instance is sent back to the translated source address of the firewall to match the existing session, and the reciprocal source and destination NAT translations are applied. Lastly, the IGW modifies the source address to the public Elastic IP address. Note that for inbound traffic, the originating host IP address is opaque to the destination Subscriber VPC instance, because it was source-translated by the firewall. All inbound traffic appears to originate from the same source host.

Security policies on the firewalls control which applications can flow inbound through the transit firewalls.

Outbound Traffic
Outbound traffic requires only source NAT on the firewall in order to ensure traffic symmetry. The VGW BGP default route determines the outbound direction of all internet traffic and, in so doing, the Transit VPC firewall and the associated source NAT IP address used for outbound traffic. Outbound traffic from an endpoint is initiated to the desired internet IP address and then:

• The VGW forwards it to the firewall.

• At the firewall, the outbound packet’s source IP address is translated to a private IP address on the
egress interface.

• At the AWS IGW, the outbound packet’s source IP address is translated to a public IP address and forwarded to
the internet.

The traffic policy required for outbound traffic includes UDP/123 (NTP), TCP/80 (HTTP), and TCP/443 (HTTPS). Set the policy up with application-based rules using default ports for increased protection. DNS is not needed, because virtual machines communicate with AWS name services directly through the AWS network fabric. You should use positive
security policies (whitelisting) to enable only the applications required. Security profiles prevent known malware and
vulnerabilities from entering the network in return traffic allowed in the security policy. URL filtering, file blocking, and
data filtering protect against data exfiltration.

You can statically map endpoint IP address assignment in a security policy to a VPC subnet or the entire VPC IP address range. To reduce administration and adapt to a more dynamic, continuous-development environment, you can automate endpoint IP address assignment in security policy by assigning tags to AWS application instances (VMs) and using Panorama to monitor the VMs in a VPC. When an instance is initialized, Panorama maps the assigned tags to dynamic address groups that are tied to the security policy, and then Panorama pushes the mappings to the firewalls in the Transit VPC. In this way, the members of a security policy can exist in any number of VPCs or IP address ranges.

East/West Traffic
Functional segmentation by application tier (such as web, business, or database) or by business unit is done on a per-VPC basis. In this way, East/West traffic, or traffic between VPCs, flows through the Transit VPC firewalls. The East/West traffic follows the default route, learned by the VGW in the VPC, to the Transit firewalls, where all internal routes are known as well as the path to internet access. It is possible to advertise each VPC's routes to all other VPCs; however, in large environments you can overload the route tables in the VPCs, because a VPC route table is limited to 100 BGP-advertised routes. Using aggregate routes or the default route may therefore be required in larger environments.

As with outbound traffic, all East/West traffic flows over the same links and across the same Transit VPC firewalls for
predictable load measurement and troubleshooting. For security policy grouping, each VPC link to the Transit firewall
is placed into a common Subscr security zone on the firewall. This design uses a single security zone with multiple
dynamic address groups and VM Monitoring to automate address group assignment. You can base security policy on a
combination of security zone and dynamic address group.

A positive control security policy should allow only appropriate application traffic between private resources. You should place resources with different security policies in separate VPCs, thereby forcing traffic between security groups through the Transit VPC. You should also enable security profiles in order to prevent known malware and vulnerabilities from moving laterally in the private network through traffic allowed in the security policy.

Backhaul to On-Premises Traffic


To get traffic from on-premises connections to the internal servers, the VPN connections from on-premises gateways connect to the firewalls in AZ-a and AZ-b. Depending on the number of on-premises gateways, a single gateway could connect to both VPC firewalls, an on-premises HA pair could connect to both VPC-based firewalls, or you could program a-to-a and b-to-b VPN connections. This design uses an on-premises HA pair with VPN connections to both VPC firewalls, and the VPN connection tunnel interfaces are configured to terminate on the Public interface.

Connections from the Subscriber VPCs' private-IP-addressed servers now have the same improved resilience characteristics as the outbound traffic flows. The default route in the route table for the servers points to a resilient VGW in that VPC. The VGW peers with both firewalls in the Transit VPC, and dynamic routes on the Transit firewalls point to the remote on-premises traffic destinations.

Figure 35 VPN to Transit VPC

Security policies on the firewalls restrict the traffic allowed through the dedicated connection to on-premises resources to only the required applications. In this way, Subscriber-to-public outbound traffic can be assigned different rules than Subscriber-to-on-premises traffic.

Management Traffic
Management traffic from on-premises to the firewalls can pass through the VPN connection; however, this requires at least one of the firewalls to be active. This design instead uses Panorama located in AWS to manage the firewalls. A Panorama HA pair is housed in a separate VPC to simplify the Transit VPC and uses VPC peering to the Transit VPC for connectivity. The use of VPC peering for the connection to the Transit VPC means that a Transit firewall does not have to be fully operational with VPN connectivity for Panorama to communicate with it; this is important when initially setting up the firewalls or troubleshooting VPN connectivity.

Figure 36 Management connectivity

You should use security groups and network ACLs to restrict Panorama management communications to the needed ports and IP address ranges. Panorama should have dedicated public IP access so that you can always reach the firewalls in the Transit VPC using private IP addressing within the VPC, and security groups should restrict outside access to Panorama to only those address ranges your organization uses to manage the firewalls. If you are not using Panorama, you need public IP addresses on the Transit firewall management interfaces or a jump host in the Transit VPC. An example of such a security group restriction is sketched below.
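
The sketch below uses boto3 to add such ingress rules. The security group ID and the corporate management range are placeholders, and the assumptions that administrators reach Panorama on TCP/443 and that the firewalls reach Panorama on TCP/3978 (the usual PAN-OS device-management channel port) should be verified against your environment.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

PANORAMA_SG = "sg-0123456789abcdef0"  # security group attached to the Panorama instances, placeholder ID

ec2.authorize_security_group_ingress(
    GroupId=PANORAMA_SG,
    IpPermissions=[
        {   # administrator web access from the corporate management range only
            "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
            "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "corporate management range"}],
        },
        {   # firewall-to-Panorama management connections from the Transit VPC management subnets
            "IpProtocol": "tcp", "FromPort": 3978, "ToPort": 3978,
            "IpRanges": [
                {"CidrIp": "10.101.1.0/24", "Description": "Transit VPC management AZ-a"},
                {"CidrIp": "10.101.3.0/24", "Description": "Transit VPC management AZ-b"},
            ],
        },
    ],
)
```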

Scaling the Transit VPC


As organizations grow the number of Subscriber VPCs connected to their Transit VPC, they will likely run into scaling challenges; firewall performance and IPsec tunnel performance are the most common. The Large Transit VPC architecture offers two variations on the Transit VPC to address these challenges.

To address performance scale, the Large Transit VPC architecture augments the Transit VPC architecture by placing additional firewalls in the Transit VPC to distribute Subscriber VPC tunnels, eliminating the performance issues that arise when all active traffic flows through a single firewall pair. Overall outbound traffic performance from a VPC is constrained by the VGW IPsec tunnel performance, for which AWS indicates multi-Gbps throughput can be expected.

Recall from the Transit VPC design that inbound web applications should have their firewalls in the same VPC as the application front-end web servers and use the load-balancer sandwich design from the Single VPC design. In this way, the Single VPC can remain the template design for VPCs hosting web front ends, and the Transit VPC is used for outbound and East/West traffic for those VPCs and all others. The firewall load is thereby distributed, and the Transit firewalls can scale to handle more East/West and outbound flows.

Figure 37 illustrates how the Large Transit VPC design mitigates firewall performance issues by distributing Subscriber
VPC connections across multiple VM-Series firewalls within the Transit VPC. The design pattern is identical to Transit
VPC for each pair of firewalls across your Transit VPC Availability Zones. In the example diagram, Subscriber VPC-1 and
Subscriber VPC-2 VPN tunnels are terminated on fw1-2a and fw1-2b, and Subscriber VPC-3 VPN tunnel subnets are
connected to a new pair of firewalls (fw2-2a and fw2-2b) in the Transit VPC to terminate the new tunnel subnets. In
this scenario, the Transit firewall pair for AZ-a (fw1-2a and fw2-2a) peer with each other, and the Transit firewall pair for AZ-b (fw1-2b and fw2-2b) peer with each other, exchanging only Subscriber VPC routes so that all VPCs have East/West connectivity. The Transit firewalls in AZ-a do not have eBGP peering sessions with the firewalls in AZ-b.

Figure 37 Large Transit VPC—firewall scale

Figure 38 illustrates using the Transit VPC for outbound and East/West traffic loads while keeping inbound traffic in the individual VPCs where larger-scale, inbound web server front-end applications exist. In this way, the Transit VPC remains less complicated because it does not require load balancers, although inbound traffic via the Transit VPC is still supported. In the VPC where the web server front-end application lives, the load-balancer sandwich design with firewalls in the VPC provides resilient operation for inbound and return traffic, and the VGW provides resilient transport to the Transit VPC for any outbound or East/West traffic.

Figure 38 Large Transit VPC with inbound web server in VPC

AWS Direct Connect Model


As bandwidth to on-premises grows, AWS Direct Connect provides the ability to extend your private VPC infra-
structure and publicly available AWS services directly to your data center infrastructure by using dedicated network
bandwidth. You have the option of placing your network equipment directly in an AWS regional location, with a direct
cross-connect to their backbone network, or using a network provider service, such as LAN extension or MPLS, to
extend your AWS infrastructure to your network. In some AWS campus locations, AWS Direct Connect is accessible
via a standard cross connect at a carrier facility or a Carrier-Neutral Facility. An AWS Direct Connect campus intercon-
nect is provided over 1-gigabit Ethernet or 10-gigabit Ethernet fiber links. If an organization does not have equipment
at an AWS Direct Connect Facility or requires less than 1-gigabit interconnect speeds, they can use an AWS partner to
access Direct Connect.

AWS Direct Connect virtual interfaces exchange BGP routing information with your firewall. Because AWS Direct Connect paths are usually preferred over VPN connections, the route selection criteria reflect this preference: all available path options are considered in order until a single "best" path remains, and AWS Direct Connect dynamic routes of the same prefix length are always preferred over VPN connection routes. In the event there are multiple equivalent paths for AWS Direct Connect routes, the traffic is load-balanced using equal-cost multipath.

Private virtual interfaces programmed on the Direct Connect port are extended over 802.1Q trunk links to the customer on-premises firewall, one virtual interface (VLAN) for every VPC to which you are mapping, as shown in Figure 39. You specify the name of the virtual interface, the AWS account with which the virtual interface should be associated, the associated VGW, the VLAN ID, your BGP peer IP address, the AWS BGP peer IP address, the BGP autonomous system, and the MD5 BGP authentication key. Note that there is a maximum limit of 50 virtual interfaces per Direct Connect port. You can increase your interconnect scale by adding multiple links, up to 4, to your peering point to form a link aggregation group (LAG) and accommodate up to 200 virtual interfaces per LAG. There are alternative Direct Connect options if you need to scale beyond 200 virtual interfaces. A provisioning sketch follows.
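
Provisioning one of these private virtual interfaces can also be scripted. The sketch below assumes the boto3 Direct Connect API and uses placeholder connection, VGW, VLAN, and BGP values patterned on Figure 39; verify the parameter names against the current SDK documentation before relying on it.

```python
import boto3

dx = boto3.client("directconnect", region_name="us-west-2")

# Private virtual interface toward VPC-10 (all values are placeholders patterned on Figure 39).
dx.create_private_virtual_interface(
    connectionId="dxcon-ffffffff",
    newPrivateVirtualInterface={
        "virtualInterfaceName": "vpc-10-vif",
        "vlan": 10,
        "asn": 65000,                      # your on-premises BGP AS
        "authKey": "example-md5-key",      # MD5 BGP authentication
        "amazonAddress": "169.254.1.1/30",
        "customerAddress": "169.254.1.2/30",
        "virtualGatewayId": "vgw-0123456789abcdef0",
    },
)
```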

Figure 39 Direct Connect private virtual interfaces

Direct Connect provides many options for device and location redundancy. Two of these options are illustrated in
Figure 40. The first option provides redundant connections from a single firewall to your VPC. When multiple Direct
Connects are requested, they are automatically distributed across redundant AWS backbone infrastructure. The BGP
peering IP addresses must be unique across all virtual interfaces connected to a VGW. The VLAN ID is only locally significant to each AWS Direct Connect port, so virtual interfaces on different ports can share a common VLAN ID.

The second option illustrates redundant firewalls distributed across two separate AWS locations servicing the same region. This option also provides device, geographic, and (optionally) service provider redundancy.

Figure 40 Direct Connect redundancy options

Summary
Moving applications to the cloud requires the same enterprise-class security as your private network. The shared security model in cloud deployments places the burden of protecting your applications and data on the customer. Deploying Palo Alto Networks VM-Series firewalls in your AWS infrastructure provides a scalable infrastructure with the same protections from known and unknown threats, complete application visibility, common security policy, and native cloud automation support. Your ability to move applications to the cloud securely helps you to meet challenging business requirements.

The design models presented in this guide build upon the initial Single VPC design, as you would likely do in any organization: build the first VPC as a proof of concept, and as your environment grows, move to a more modular design where VPCs are purpose-built for the application tiers they house. Reuse the Single VPC design for a resilient inbound design, and use the Transit VPC design to scale your environment with more visibility and less complexity.

What’s New in This Release


Palo Alto Networks made the following changes since the last version of this guide:

• The Transit VPC design model has more BGP peering information, for clarity.

• Many Transit VPC design model diagrams were updated with more information, for clarity.

• Added “Security Policy Automation with VM Monitoring” to the “Palo Alto Networks Design Details” section.

• Added VM Monitoring feed to dynamic address groups for policy automation to the Transit VPC design.

• Added AWS Direct Connect Gateway and AWS Transit Gateway to the “Amazon Web Services Concepts and
Services” section for more comprehensive coverage.

• Updated the design to reflect new branding.



You can use the feedback form to send comments
about this guide.

HEADQUARTERS
Palo Alto Networks Phone: +1 (408) 753-4000
3000 Tannery Way Sales: +1 (866) 320-4788
Santa Clara, CA 95054, USA Fax: +1 (408) 753-4001
http://www.paloaltonetworks.com info@paloaltonetworks.com

© 2019 Palo Alto Networks, Inc. Palo Alto Networks is a registered trademark of Palo Alto Networks. A list of our trade-
marks can be found at http://www.paloaltonetworks.com/company/trademarks.html. All other marks mentioned herein may
be trademarks of their respective companies. Palo Alto Networks reserves the right to change, modify, transfer, or other-
wise revise this publication without notice.

B-000110P-19A-1 04/19
