Ed Austin 12-09-09
(according to ed)
- Jedis build their own lightsabres (the MS "eat your own dog food")
- Parallelize everything
- Distribute everything (to the atomic level if possible)
- Compress everything (CPU is cheaper than bandwidth)
- Secure everything (you can never be too paranoid)
- Cache (almost) everything
- Redundantize everything (usually in triplicate)
- Latency is VERY evil
Ed Austin
{ed, edik} @i-dot.com
THE PERIMETER
How does your data enter the Google empire?
[Diagram: the perimeter: DNS, firewalls (ports 80/443), DMZ]
DNS is load balanced (.COM = 3 A records; the UK has only one):

[ed@d800 ~]$ dig google.com
.....
;; ANSWER SECTION:
google.com.  223  IN  A  74.125.45.100
google.com.  223  IN  A  74.125.53.100
google.com.  223  IN  A  74.125.67.100
[ed@d800 ~]$
[Diagram continues: NetScaler load balancers and Squid (HTTP multiplexing, reverse proxy) → GWS web server farm → cell: interior network with GFS/GFS II and BigTable]
The request flow (sketched in code below):
- DNS load balancing splits traffic (by country; .com has multiple DNS entries, others X1) to the firewall
- The firewall filters traffic (http/s, smtp, pop, etc.)
- NetScaler load balancers take the request from the firewall: they block DoS attacks and ping floods, drop non-IPv4/6 traffic and ports other than 80/443, and HTTP-multiplex (limited caching capability)
- The user request is forwarded to Squid (reverse proxy), with a probably HUGE cache (petabytes?)
- If not in cache, it is forwarded to GWS (a custom C++ web server); apparently no longer custom Apache?
- GWS sends the request to the appropriate internal (cell) servers, where it is processed
- Exterior HTTPS uses Thawte certificates
- A dedicated crawler architecture is kept separate from the rest of the infrastructure
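A toy Python sketch of this flow (all names here are hypothetical stand-ins, not Google code; the real GWS is a custom C++ server):

ALLOWED_PORTS = {80, 443}     # the NetScaler drops everything else
squid_cache = {}              # stands in for the petabyte-scale Squid cache

def gws_render(url):
    # GWS would fan the request out to index/doc servers in the cell;
    # here we just fabricate a response.
    return "<html>results for %s</html>" % url

def handle_request(url, port):
    if port not in ALLOWED_PORTS:    # firewall / NetScaler filtering
        return None
    if url in squid_cache:           # perimeter cache hit: served here
        return squid_cache[url]
    response = gws_render(url)       # cache miss: forwarded to GWS
    squid_cache[url] = response
    return response

print(handle_request("google.com/search?q=obama", 443))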
[Recurring slide diagram: Google apps (Search, Index, Crawl, Gmail, ...) written in Python, Java, C++, Sawzall, etc., plus GWQ, layered over BigTable, over GFS/GFS II, over the IPv6 interior network]
- Uses Squid as a reverse proxy
- Perimeter cache hit rates of 30-60% = huge!
- Dependent on search complexity, user preferences, and traffic type
- All image thumbnails are cached, as is much multimedia
- Expensive common queries (common words, e.g. "obama", "edinburgh") are cached, as they require significant back-end processing (a toy cache is sketched below)
- On a cache flush/update there is a big latency spike and a capacity drop: the index servers need to do significant work to rebuild the cache
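A minimal sketch of such a query cache in Python, assuming a simple LRU policy (Google's actual eviction policy is not public; the function name is hypothetical):

from functools import lru_cache

@lru_cache(maxsize=100_000)
def search(query):
    # stands in for the significant back-end index work
    return "results for %r" % query

search("obama"); search("obama")   # one miss hits the back end, then one hit
print(search.cache_info())         # CacheInfo(hits=1, misses=1, ...)
search.cache_clear()               # the flush that causes the latency spike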
Where is Google located? The last estimates were 36 data centres, 300+ GFS II clusters, and upwards of 800K machines. US (#1), Europe (#2), Asia (#3), South America/Russia (#4); Australia on hold. Future: Taiwan, Malaysia, Lithuania, and Blythewood, South Carolina.
The standard Google modular DC (cell) holds 1,160 servers at 250kW power consumption in 30 racks (40U), i.e. roughly 39 servers per rack. This is the atomic data-centre building block of Google; a data centre consists of hundreds of modular cells.
The DC architecture is then the aggregation of smaller cell-level infrastructures, each in its own container: some pure GFS, others BT, others MapReduce, some mixed, etc.
THE RACK
How is a server stored in the Data Centre?
Why interesting?
The rack implementation!
EVERYTHING custom!
Server Hardware
Architecture
OPERATING SYSTEM
Architecture
- RHEL (why not CentOS?)
- 2.6.x kernel with PAE
- Custom glibc, rpc, ipvs
- Custom FS (GFS II)
- Custom Kerberos
- Custom NFS
- Custom CUPS
- Custom gPXE bootloader
- Custom EVERYTHING...
Kernel/Subsystem Modifications
tcmalloc replaces the glibc 2.3 malloc: much faster, and it works very well with threads (being a drop-in malloc, it can be preloaded into any binary via LD_PRELOAD). The RPC layer is extensively modified to provide a performance increase and a latency reduction (52%/40%). The kernel and subsystems are significantly modified, and everything is IPv6 enabled.
Python is used as the primary scripting language. Ubuntu is deployed internally (likely for the desktop); it is also the Chrome OS base.
THE INTERIOR NETWORK
How does your data travel around the Google empire?
INTERIOR NETWORK
Architecture
ROUTING PROTOCOL
- The internal network is IPv6 (exterior machines can be reached using IPv6)
- A heavily modified version of OSPF is the IRP
- The intra-rack network is 100baseT
- The inter-rack network is 1000baseT
- The inter-DC network pipes are unknown, but very fast
THE MAJOR GLUE
The three foundation blocks of Google's secret sauce
GOOGLE FILE SYSTEM
Manages the underlying data on behalf of the upper layers and ultimately the applications
The GFS II cell is Google's fundamental building block: everything can be layered on top of it.
It consists of (highly distributed, Linux-based) master servers and chunk servers.
Chunk servers serve the data in 64MB chunks directly to the client, via master arbitration.

DATA REDUNDANCY / FAULT TOLERANCE
- Triplicate copies of chunks are kept, often in other clusters/DCs
- Chunks can be pulled from outside the DC! Expensive... so try not to! However, apps built on top of GFS/BT do this on an ad-hoc basis (e.g. Gmail)
- On chunk loss, the master handles the recovery by sourcing a chunk copy
- Data is compressed using BMDiff/Zippy
- Chunk-server fault tolerance is achieved by a heartbeat to the master ("I am alive...")
- Master failure was problematic for Google (recovery is finally down from 2 minutes to 10 seconds)
(A toy sketch of the master/chunk-server interaction follows below.)
In GFS II the chunk size is now 1MB, likely to improve latency when serving data other than indexing; for example, Gmail was the rationale behind the change. The master can store more chunk metadata (therefore more chunks are addressable, up to 100 million), which also means more chunk servers.
However, according to a Google engineer, they have only ever lost one 64MB chunk (in GFS I) during its entire production deployment (2004-2008?), so it is assumed to be extremely reliable.
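A toy Python model of the master/chunk-server interaction described above (all names are hypothetical; the real GFS is C++ and far more involved):

import time

CHUNK_SIZE = 64 * 2**20   # 64MB chunks in GFS I (1MB in GFS II)
REPLICATION = 3           # triplicate copies, often across clusters/DCs

class Master:
    # Holds only chunk metadata; chunk data lives on the chunk servers.
    def __init__(self):
        self.locations = {}    # chunk_id -> list of chunk-server ids
        self.heartbeats = {}   # server_id -> last "I am alive" time

    def heartbeat(self, server_id):
        self.heartbeats[server_id] = time.time()

    def register_chunk(self, chunk_id, servers):
        assert len(servers) == REPLICATION
        self.locations[chunk_id] = list(servers)

    def lookup(self, chunk_id):
        # Clients ask the master only for locations, then read the
        # chunk directly from a chunk server.
        now = time.time()
        live = [s for s in self.locations[chunk_id]
                if now - self.heartbeats.get(s, 0) < 10]
        if len(live) < REPLICATION:   # a replica was lost
            self.re_replicate(chunk_id, live)
        return self.locations[chunk_id][0]

    def re_replicate(self, chunk_id, live):
        # Source fresh copies from surviving replicas (possibly from
        # another cluster/DC, which is expensive and avoided if possible).
        missing = REPLICATION - len(live)
        self.locations[chunk_id] = live + ["cs-new-%d" % i for i in range(missing)]

master = Master()
for s in ("cs-1", "cs-2", "cs-3"):
    master.heartbeat(s)
master.register_chunk("chunk-0001", ["cs-1", "cs-2", "cs-3"])
print(master.lookup("chunk-0001"))   # -> cs-1: read the chunk from there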
GOOGLE DATABASE
Accesses the underlying data on behalf of the upper layers and ultimately the applications
Bigtable I - Introduction
Architecture
Used internally for all large-scale applications (Search, Indexing, Gmail, etc.).
Similar to a sharded database implementation.

GOALS
- Huge scalability to many PBs (the web database is currently around 40 billion URLs)
- Tight latency
- Highly efficient scans over textual data
Bigtable II
Architecture
ROWS
- Rows have arbitrary length (usually 10-100 bytes, max <=64KB)
- Rows are stored lexicographically
- Example row key: a URL
COLUMNS
- Example columns: contents:, PR, anchor1:, ...

TIMESTAMP (OPTIONAL?)
- Timestamps (various API function args, e.g. ALL, LATEST); a toy model of this data layout is sketched below
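A toy Python model of this layout, i.e. a sorted map of (row, column, timestamp) -> value with n=3 retained versions (hypothetical code; the real system is layered over SSTables on GFS):

import time
from collections import defaultdict

class TinyBigtable:
    def __init__(self):
        # row -> column -> list of (timestamp, value), newest first
        self.rows = defaultdict(lambda: defaultdict(list))

    def put(self, row, column, value, ts=None):
        cells = self.rows[row][column]
        cells.insert(0, (ts or time.time(), value))
        del cells[3:]    # keep n=3 versions; the rest are garbage-collected

    def get(self, row, column, ts="LATEST"):
        cells = self.rows[row][column]
        return cells if ts == "ALL" else cells[0][1]

    def scan(self, start, end):
        # rows are kept lexicographically sorted, so range scans are cheap
        for row in sorted(self.rows):
            if start <= row < end:
                yield row, dict(self.rows[row])

t = TinyBigtable()
t.put("com.example.www/index.html", "contents:", "<html>...</html>")
t.put("com.example.www/index.html", "anchor:referrer.com", "link text")
print(t.get("com.example.www/index.html", "contents:"))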
[Diagram: Bigtable tablets: the table's row space (row keys 10-100 bytes, <=64KB), from au.aaa through html.test/za.zzzzz, split into tablet 1, tablet ..., each 100-200MB in size]
Studying the contents: column shows three versions of the contents of a page (current, cached, and ?). Presumably all other columns are timestamped too, so they could be used comparatively (such as anchor increase/decrease) on the fly in the SERPs algorithm, which must use a combination of the timestamp diffs between the n(=3; the rest garbage-collected) page versions and the crawled anchors. What else does the table hold? Possibly PR (or OTF) and other search-related weightings; Google keeps much more info for ranking purposes than it did in 1999 (Webtable is hinted at 100+ columns!). How do page units affect the URL reversal of the URL Bigtable? Does a table's tablets cross a cluster's namespace (yes if unified, else no?)
Bigtable IV
Architecture
How are tables broken down in storage? For example, Webtable is billions of pages!
- Large tables are broken (split) into tablets at row boundaries (a toy split function is sketched below)
- Tablets are discontiguous (this assists fault tolerance), spread over the DC, but one copy is kept in the same rack where possible
- Tablet size is approximately 100-200MB of compressed data
- Load balanced: tablets migrate from heavily loaded machines to lightly loaded ones
- Heavily used tablets probably stay in the working set (cached)
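A toy split function in Python, assuming the ~200MB compressed-size threshold above (names and sizes are illustrative only):

TABLET_MAX = 200 * 2**20   # ~200MB of compressed data per tablet

def split_into_tablets(sorted_rows, row_size):
    # sorted_rows: lexicographically sorted row keys
    # row_size: function from row key to compressed size in bytes
    tablets, current, size = [], [], 0
    for row in sorted_rows:
        if size + row_size(row) > TABLET_MAX and current:
            tablets.append((current[0], current[-1]))  # start/end row keys
            current, size = [], 0
        current.append(row)
        size += row_size(row)
    if current:
        tablets.append((current[0], current[-1]))
    return tablets

rows = sorted("com.example/page%06d" % i for i in range(1000))
print(split_into_tablets(rows, lambda r: 512 * 2**10))  # ~512KB per row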
Mapreduce I
Architecture
Map reduction can be seen as a way to exploit massive parallelism by breaking a task down into constituent parts and executing them on multiple processors.

The major functions are MAP and REDUCE (with a number of intermediate steps):
MAP: break the task down into parallel steps
REDUCE: combine the results into the final output
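The canonical illustration is a word count; here is a single-process Python sketch of the MAP and REDUCE phases (the real system shards the map tasks across thousands of machines):

from itertools import groupby
from operator import itemgetter

def map_phase(documents):
    # MAP: break the task into per-word intermediate (key, value) pairs
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    # intermediate shuffle: group pairs by key, then REDUCE each group
    for word, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield (word, sum(count for _, count in group))

docs = ["the cat sat", "the cat ran"]
print(dict(reduce_phase(map_phase(docs))))  # {'cat': 2, 'ran': 1, 'sat': 1, 'the': 2}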
Mapreduce II
Architecture
STATISTICS
- In September 2009 Google ran 3,467,000 MR jobs with an average completion time of 475 seconds, averaging 488 machines per MR job and utilising 25.5K machine-years (3.467M jobs × 475 s × 488 machines ≈ 8.0×10^11 machine-seconds ≈ 25.5K machine-years)
- The technique is used extensively by Yahoo with Hadoop (similar architecture to Google's) and by Facebook (multiple Hadoop clusters since '06, one being 2500 CPUs/1PB with HBase)
Chubby Lock
Architecture
- Consists of a master and slaves (designated by election)
- Failover consists of a slave replacing the functionality of a master (a toy election sketch follows)
- Also serves as an ultra-fast, high-availability file server for small files (100s of bytes)
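A minimal sketch of lock-service-based election in Python, assuming the usual pattern that whoever holds the lock is master (all names hypothetical; the real Chubby is a replicated service, not a single in-process lock):

import threading

class TinyLockService:
    def __init__(self):
        self._lock = threading.Lock()
        self.master = None
        self.files = {}   # the tiny high-availability file store

    def try_acquire(self, node_id):
        # non-blocking: the first node to grab the lock is elected master
        if self._lock.acquire(blocking=False):
            self.master = node_id
            self.files["/ls/cell/master"] = node_id   # a small "file"
            return True
        return False

    def release(self, node_id):
        if self.master == node_id:   # master stepped down (or died)
            self.master = None
            self._lock.release()     # slaves now race to take over

svc = TinyLockService()
print(svc.try_acquire("node-a"))   # True : node-a elected master
print(svc.try_acquire("node-b"))   # False: node-b remains a slave
svc.release("node-a")              # failover: the lock is free again
print(svc.try_acquire("node-b"))   # True : node-b replaces the master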
- Workqueue can manage many tens of thousands of machines; launched via the API or the command line (Sawzall example shown below)
saw --program code.szl --workqueue testing \
    --input_files /gfs/cluster1/2005-02-0[1-7]/submits.* \
    --destination /gfs/cluster2/$USER/output@100
7. DIY Google
DEVELOPMENT LANGUAGES
Architecture
Usual Suspects
Early versions (of Sawzall?) could not write into Bigtable. Now implemented? Output is sometimes pipelined into MySQL for further analysis.
- Your code is exposed to Google
- No support for subprocess spawning; more importantly, none of the Google MapReduce library is made available
- Isolates computational aspects to single servers, but the I/O is probably the standard Google implementation underneath; therefore computationally intensive tasks are more problematic = keeping your resource usage under control
Security
Architecture
02/09: a Google engineer didn't dispute this and seemed to concur, adding that in-core encryption might be a possibility (real-time decryption might not be that expensive). This possibly means cryptography is used throughout the lifetime of the image, including components outside the working set but also sensitive parts of the in-core OS (decrypted on the fly).
Enterprise
- Kerberos is used throughout the enterprise
- They have an automated issuance system for SSL certificates, used by internal (secure) infrastructure to validate HTTPS/TLS and generic SSL connections
- Complete internal network encryption is unlikely, due to the latency it would introduce?
- Likely one of the reasons failover between DCs is problematic is the latency introduced by the expense of wide-area encryption (essential)
- 99%ile latency of <50ms for all data is a key speed metric
- Single global namespace
- Spanning multiple data centres is still an unsolved problem: most websites are in one, at most two, data centres; how to fully distribute a website across a set of data centres?
- Spanner: automatic, dynamic, world-wide placement of data & computation to minimise latency or cost
Allegedly used to reduce heat issues at DCs by moving load away when heat becomes a problem at the new chillerless DCs (e.g. the Belgium DC); not using chillers introduces significant savings.
- GDrive Servers
Odds n Sods
Architecture
- borg: Google technology/architecture (is a cluster...). Cf. "Borg: a hybrid protocol for scalable application-level multicast in peer-to-peer networks" (WAN multimedia streaming)
- data cube: Google technology
- They have a global load balancer; assume it balances load across a unified namespace, probably worldwide
- The Gmail designers implemented application-level failover to move your session to an alternate DC, seamlessly to the end user; probably all Google apps will become able to migrate to an alternate DC cell (the application, and its GFS data if need be)
- MySQL is used for back-end sysadmin stuff (high-availability master-slave implementations) and post-Bigtable processing
- Remote employee access is via VPN
APP ENGINE
END
(Thank you)
Open Source
(Yahooish) Architecture
DIY GOOGLE
What you require:
- Preferably 2 machines + 100baseT
- CentOS/RHEL (Squid)
- Apache Hadoop (HDFS, MapReduce, Pig, HBase)
- HDFS bmdiff/zippy compression library
- Google glibc/tcmalloc perftools
- Supporting stuff: JRE etc.
- Browser with a search box
- A Pig MR call to scan a few files and print the results
[Diagram: the open-source equivalent stack: client application (Python, PHP, Java... anything) → Job Tracker (Workqueue equiv.) → Hadoop framework: MapReduce and HBase (Bigtable equiv.) → HDFS → interior network (IPv6)]
- Install Hadoop and Pig on the cluster
- Install Eclipse and dependencies
- Install PigPen for Eclipse and configure it to the cluster (NFS)
(A minimal streaming word count is sketched below.)
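For the "scan a few files and print results" step, a minimal word count for Hadoop streaming in Python (a hypothetical stand-in for the Pig job; Pig Latin would express the same scan more compactly):

# wordcount.py: run with "map" as the argument for the map phase,
# with no argument for the reduce phase; both are plain stdin filters.
import sys
from collections import Counter

def mapper():
    for line in sys.stdin:
        for word in line.split():
            print("%s\t1" % word)

def reducer():
    counts = Counter()
    for line in sys.stdin:
        word, n = line.rsplit("\t", 1)
        counts[word] += int(n)
    for word, n in counts.most_common():
        print("%s\t%d" % (word, n))

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()

This would be submitted to the cluster with the standard Hadoop streaming jar (-input, -output, -mapper, -reducer flags), with HDFS paths in place of the GFS ones in the Sawzall example earlier.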
A rare shot of some concrete Google internal material (this one of a GFS master server code execution, found as a perftools profiling example).
Agile methodologies are used (development iterations, teamwork, collaboration, and process adaptability throughout the life-cycle of the project). Libraries are the predominant way of building programs.
An infrastructure handles versioning of applications, so they can be released without fear of breaking things = roll out with minimal QA.
- Internal code uses replacement libraries
- Google, as you'd expect, rewrites everything!
- Hungarian notation?
- Work is in small teams of 3-5 people; likely few know the big picture
Internal Linux development and deployment: served as technical lead of the team responsible for customizing and deploying Linux to internal systems and workstations. Fixed bugs and added enterprise features to several Linux components, including NFS, Kerberos, CUPS. All relevant patches were pushed to upstream maintainers, and most are in current released distributions. Developed and maintained systems to automate installation, updates, and upgrades of Linux systems. Developed IPv6 support for Linux load balancing (ipvs).
Load-balancing user accounts within a datacenter, and coordinating with the global load balancer, which uses linear programming to optimally allocate users; in particular, this avoids "shared fate" risks and reduces the latency and costs incurred by excessive transatlantic data traffic. Learned SketchUp so as to document the four-dimensional data structures effectively.
The testing, evaluation, deployment, operations, and maintenance of NetScaler load balancers.
- Automated Apache configuration reloader
- gPXE open-source network booting software
- GWS custom C++ web server = not Apache?
A Google 02/09 talk example was that a cluster is 30 racks (I believe this refers to Google). With 40U racks, 40U × 30 racks = 1200, i.e. approximately an MDC, so we can assume each MDC is a cluster/cell at the architectural level.
A Google engineer stated that a DC is a collection of modular units (MDCs?); the picture illustrated (not shown above) suggested this.
Numbers are approximate but certainly ball-park. Google often delivers contradictory figures and uses many terms for the same items: cell/cluster, scheduler/workqueue (obfuscation?).
Google's philosophy/paranoia of telling as little as possible (pausing presenters, sideways answers) makes it hard to fill in some (significant) gaps; inferences are sometimes drawn (in red). Google seems to design absolutely EVERYTHING themselves, from the HW MB build, racks, and switches(?) to software... so it is hard to find sources of information beyond broad concepts.
Bigtable VI
Architecture
Current Status
- Many hundreds, maybe thousands, of Bigtable cells
- Late 2009: 500 Bigtable clusters stated
- Scaled to at minimum many thousands of machines per cell in production
- Cells manage 3-figure TB of data (0.X PB)