
An Analysis of QUIC Use on Cloud

Mário Victor Gomes de Matos
- QUIC is a transport-layer protocol
- Created by Google
- Much existing research on QUIC
- Focus on user-facing applications using HTTP/3
- Analysis of QUIC use in cloud environments
- This analysis includes HTTP/3
- Simulate production configuration
- High availability through Kubernetes
- QUIC has high CPU utilization
Background in Protocols
- User Datagram Protocol
- Message-oriented transport-layer protocol
- Unreliable
- Connectionless
- Used for real-time multiplayer games, Voice over Internet Protocol (VoIP), and live streaming
- Transmission Control Protocol
- Connection-oriented protocol
- Reliable
- Assigns a sequence number to each segment
- Requires an ACK from the receiver
- The receiver also returns a “window” size with every ACK
- Establishes a connection by means of a three-way handshake
- UDP has less per-packet overhead, so it is faster than TCP
- Transport Layer Security
- Provides privacy and data integrity
- TLS can encrypt TCP traffic
- Requires a TLS Handshake
- HyperText Transfer Protocol
- Application-layer protocol
- Generic and stateless protocol
- Defines a set of patterns and rules for exchange of data
- HTTP/0.9
- Simple protocol
- No headers; messages are limited to text
- HTTP/1.0
- Extended message types and added headers
- HTTP/1.1
- Connections can be reused
- Pipelined requests
- HTTP/2
- Optimizes HTTP/1 while keeping support for HTTP semantics
- Avoids HTTP/1’s head-of-line blocking through multiplexing with streams
- Uses a compressed binary representation of headers
- HyperText Transfer Protocol Secure
- Extension of HTTP
- The HTTP client also acts as a TLS client
- QUIC
- Secure, general-purpose transport-layer protocol
- Designed to improve HTTPS traffic
- Enables rapid deployment and rapid evolution of transport mechanisms
- Uses UDP as its underlying transport protocol
- QUIC is a user-space protocol, while UDP and TCP are kernel-space protocols
Some QUIC improvements
- HTTP/2 solves HOL blocking problem in the application layer with streams
- TCP still suffers from HOL blocking problem
- Many HTTP/2 streams share one TCP connection
- Slow segments may block the TCP “window”
- QUIC solves it in the transport layer
- QUIC implements stream multiplexing
Some QUIC improvements
- HTTPS requires both TCP and TLS
- QUIC improves handshake delays by minimizing the steps required
- QUIC uses more compute resources in exchange for performance
- HTTP over QUIC
- Provides HTTP semantics with QUIC as its transport-layer protocol
- Does not have an official RFC, only a draft
- Supported by 74% of web browsers
Background in Distributed Systems on Cloud
- Data Center
- Cloud Providers
- Kubernetes is an open source container orchestration engine
- Automates deployment, scaling, and management of containerized applications
- AWS (Amazon Web Services) offers EKS (Elastic Kubernetes Service)
- High Availability
Experiments: Objectives
- Compare QUIC with other transport-layer protocols

- Compare HTTP/3 with other application-layer protocols

- HTTP/1
- HTTP/2

- Collect latency, throughput, and usage of CPU and memory

Experiments Preparation
Benchmark Service
- Developed in the most efficient way possible, to avoid external noise
- Three types of client
- Ephemeral Client
- Represents a job
- Sequential Persistent Client
- Represents a service that needs to communicate with a database
- Parallel Persistent Client
- Represents a client similar to the above, but able to perform concurrent requests
Benchmark Service: Metrics Collection
- time command
- CPU time, memory usage, and wall-clock time

- Benchmark service requests time

- Latency and throughput
K8s Cluster
- Common production environment
- Easy networking with Multi-AZ
- High demand for high availability
- Three client-server scenarios: Local, Same-AZ, and Multi-AZ
- Taints and tolerations
- Ensure only benchmark-service pods run on each node
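The taint-and-toleration setup can be sketched as a manifest fragment; the taint key, value, and image name here are hypothetical, not the thesis's actual manifests:

```yaml
# Taint the node so ordinary pods are not scheduled on it:
#   kubectl taint nodes <node-name> dedicated=benchmark:NoSchedule
# Then give only the benchmark-service pod a matching toleration:
apiVersion: v1
kind: Pod
metadata:
  name: benchmark-service
spec:
  containers:
    - name: benchmark-service
      image: benchmark-service:latest   # hypothetical image name
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "benchmark"
      effect: "NoSchedule"
```

With the taint in place, no other workload can land on the node, so the collected CPU and memory metrics reflect the benchmark service alone.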
AWS Pricing
- Pricing
- All Multi-AZ experiments transfer 9.77 GiB of data
- Data transfer alone costs $5.28
- Payload
- Predetermined at compile time
- 2KiB, 8KiB, 32KiB, 128KiB, and 512KiB
- Reasonable size amongst services running on cloud
- Kafka’s default maximum message size is 1 MB
- gRPC limits incoming messages to 4 MB to help prevent excessive memory consumption
AWS EC2 Instance Type
- Initially chosen instance type was m6i.large
- Most recent and smallest instance possible
- QUIC was throttled in the initial experiments
- Instance type was changed to m6i.xlarge
- 2 CPUs instead of 1
Linux UDP Restrictions
- QUIC’s Go implementation issued a warning about the UDP receive buffer size
- Increased from 128 KiB to 26 MiB
Experiments Execution
- A new group of nodes is created
- A set of K8s manifests is applied
- Pod allocation depends on which scenario is being tested
- Local scenario requires one node
- Same-AZ requires two nodes in the same AZ
- Multi-AZ requires two nodes in different AZs
- Node groups are deleted
- Repeat until all scenarios finish
Experiments Results
Ephemeral and Persistent Client Experiments
- Ephemeral and Sequential Persistent clients are compared
- Referred to here as Ephemeral and Persistent clients
- Ephemeral clients: one connection per request
- Persistent clients: one connection for all requests
- Observe connection establishment impact
Transport-Layer Clients
E&P: Transport-Layer Clients: Latency

- Connection establishment impact

- Local scenarios have lower latency than Same-AZ and Multi-AZ
- UDP limitations
- Payload size impact
E&P: Transport-Layer Clients: Throughput

- Local TCP reached 37 Gb/s throughput

- Above the EC2 instance’s 12.5 Gb/s networking capability, since loopback traffic never leaves the instance
- QUIC’s poor performance
E&P: Transport-Layer Clients: CPU Usage

- QUIC’s high CPU usage

- TCP is more optimized than UDP
- TLS Cost: TCP vs TCP+TLS
E&P: Transport-Layer Clients: Memory Usage

- QUIC’s unusual memory usage

Application-Layer Clients
E&P: Application-Layer Clients: Latency

- Similar results to transport-layer protocols’

E&P: Application-Layer Clients: Throughput

- HTTP/1 outperformed HTTP/2

- TLS impact
- HTTP/3 poor performance
E&P: Application-Layer Clients: CPU Usage

- HTTP/3 high CPU usage

E&P: Application-Layer Clients: Memory Usage

- HTTP/3’s unusual memory usage

Parallel and Sequential Clients Experiments
- Parallel and Sequential Persistent clients are compared
- Referred to here as Parallel and Sequential clients
- Sequential clients: single connection and sequential requests
- Parallel clients: single connection and parallel requests
- Observe each protocol’s optimizations when performing concurrent requests
Transport-Layer Clients
P&S Transport-Layer Clients: Latency

- High parallel client latency

P&S Transport-Layer Clients: Throughput

- Parallel scenarios show similar results

- Parallel requests queue
- EC2 instance networking capabilities of 12.5Gb/s
- UDP Linux Restrictions
P&S Transport-Layer Clients: CPU Usage

- Parallel usually requires more CPU with large payloads

P&S Transport-Layer Clients: Memory Usage
Application-Layer Clients
P&S Application-Layer Clients: Latency

- High parallel client latency

P&S Application-Layer Clients: Throughput

- HTTP/1 vs HTTP/2: pipelining vs multiplexing
P&S Application-Layer Clients: Throughput

- HTTP/3 UDP Receive Buffer

P&S Application-Layer Clients: CPU Usage

- HTTP/3’s and QUIC’s CPU usage is almost the same

P&S Application-Layer Clients: Memory Usage

- HTTP/2 buffering
Conclusion
- QUIC is a better alternative for unreliable networks
- Forces all connections to be encrypted
- Compatible with existing network equipment
- QUIC performed worse than TCP on a cloud environment
- Tuning kernel parameters to improve UDP traffic did not bring a significant advantage to QUIC
- HTTP/3 suffers similar problems
- QUIC and HTTP/3 are still a great solution for user-facing applications
- Bad fit with internal networks
- TCP+TLS has a better fit for cloud environments
- QUIC is relatively new
