IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 60, NO. 5, MAY 2014
Abstract—Caching is a technique to reduce peak traffic rates by prefetching popular content into memories at the end users. Conventionally, these memories are used to deliver requested content in part from a locally cached copy rather than through the network. The gain offered by this approach, which we term local caching gain, depends on the local cache size (i.e., the memory available at each individual user). In this paper, we introduce and exploit a second, global, caching gain not utilized by conventional caching schemes. This gain depends on the aggregate global cache size (i.e., the cumulative memory available at all users), even though there is no cooperation among the users. To evaluate and isolate these two gains, we introduce an information-theoretic formulation of the caching problem focusing on its basic structure. For this setting, we propose a novel coded caching scheme that exploits both local and global caching gains, leading to a multiplicative improvement in the peak rate compared with previously known schemes. In particular, the improvement can be on the order of the number of users in the network. In addition, we argue that the performance of the proposed scheme is within a constant factor of the information-theoretic optimum for all values of the problem parameters.

Index Terms—Caching, coded caching, content distribution, prefetching.

Manuscript received September 25, 2012; accepted November 19, 2013. Date of publication March 11, 2014; date of current version April 17, 2014. This paper was presented at the 2013 IEEE International Symposium on Information Theory. The authors are with Bell Labs, Alcatel-Lucent, Holmdel, NJ 08854 USA (e-mail: mohammadali.maddah-ali@alcatel-lucent.com; urs.niesen@alcatel-lucent.com). Communicated by Y. Kim, Associate Editor for Shannon Theory. Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TIT.2014.2306938

I. INTRODUCTION

THE high temporal variability of network traffic results in communication systems that are congested during peak-traffic times and underutilized during off-peak times. One approach to reduce peak traffic is to take advantage of memories distributed across the network (at end users, servers, routers, …) to duplicate content. This duplication of content, called content placement or caching, is performed during off-peak hours when network resources are abundant. During peak hours, when network resources are scarce, user requests can then be served from these caches, reducing network congestion. In this manner, caching effectively allows traffic to be shifted from peak to off-peak hours, thereby smoothing out traffic variability and reducing congestion.

From the above discussion, we see that the caching problem consists of two distinct phases. The first phase is the placement phase, which is based solely on the statistics of the user demands. In this phase, the network is not congested, and the main limitation is the size of the cache memories. The second phase is the delivery phase, which is performed once the actual demands of the users have been revealed. In this phase, the network is congested, and the main limitation is the rate required to serve the requested content.

Various versions of this problem have been studied, with the focus being mainly on exploiting the history or statistics of the user demands [1]–[7]. In these papers, the operation of the delivery phase is fixed to consist of simple orthogonal unicast or multicast transmissions. Assuming this method of delivery, the content placement is then optimized. The gain of caching in this approach results from making popular content available locally. In another line of research, the objective is to optimize the delivery phase for fixed, known cache contents and for specific demands [8], [9] (see also the discussion in Section VIII-A).

As pointed out above, the gain from traditional uncoded caching approaches derives from making content available locally: if a user requests some content that is stored in its cache, this request can be served from its local memory. We hence call this the local caching gain. This gain is relevant if the local cache memory is large enough that a sizable fraction of the total (popular) content can be stored locally. On the other hand, if the size of the local caches is small compared to the total amount of content, then this gain is insignificant.

In this paper, we propose a novel coded caching approach that, in addition to the local caching gain, is able to achieve a global caching gain. This gain derives from jointly optimizing both the placement and delivery phases, ensuring that in the delivery phase several different demands can be satisfied with a single coded multicast transmission. Since the content placement is performed without knowledge of the actual demands, in order to achieve this gain the placement phase must be carefully designed so that these multicasting opportunities are created simultaneously for all possible requests in the delivery phase. We show that this global caching gain is relevant if the aggregate global cache size is large enough compared to the total amount of content. Thus, even though the caches cannot cooperate, the sum of the cache sizes becomes an important system parameter.

To formally analyze the performance of the proposed coded caching approach, and in order to evaluate and isolate these two gains, we introduce a new, information-theoretic formulation of the caching problem focusing on its basic structure. In our setting, depicted in Fig. 1, K users are connected to a server through a shared, error-free link. The server has a database of N files of equal size. Each of the users has access to a cache memory big enough to store M of the files. During the placement phase, the caches are filled as a function of the database.
0018-9448 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
MADDAH-ALI AND NIESEN: FUNDAMENTAL LIMITS OF CACHING 2857
During the delivery phase, each user may ask for any one of the N possible files. The objective is to design the placement and delivery phases such that the load of the shared link in the delivery phase is minimized. For simplicity, we restrict the discussion in this introduction to the most relevant case, in which the number of files N is larger than or equal to the number of users K. The main results are presented later for the general case.

In this setting, the rate, i.e., the load of the shared link normalized by the file size, in the delivery phase of conventional uncoded caching schemes is

K · (1 − M/N).

Here K is the rate without caching, and 1 − M/N is the local caching gain. In contrast, the coded caching scheme proposed in this paper attains a rate of

K · (1 − M/N) · 1/(1 + KM/N).

Thus, in addition to the local caching gain of 1 − M/N, coded caching also achieves a global caching gain of 1/(1 + KM/N). Both of these gains indicate the multiplicative reduction in rate over the shared link, so that a smaller factor means a larger rate reduction.

Observe that the local caching gain depends on the normalized local cache size M/N and is relevant only if the cache size M is on the order of the number of files N. On the other hand, the global caching gain depends on the normalized cumulative cache size KM/N and is relevant whenever the cumulative cache size KM is on the order of (or larger than) the number of files N.

By deriving fundamental lower bounds on the rate required in the delivery phase, we show that the rate of the proposed coded caching scheme is within a factor 12 of the information-theoretic optimum for all values of N, K, and M.

To obtain some intuition for the effect of these two gains, let us compare the rates of the conventional uncoded scheme (achieving only the local gain) and the proposed coded scheme (achieving both the local and global gains) for a system with N = 30 files and K = 30 users, as shown in Fig. 2. When each user has a cache memory large enough to store M = 10 files, the rate of the uncoded caching scheme corresponds to sending 20 files over the shared link, while the proposed coded caching scheme achieves a rate corresponding to sending only 1.8 files: a reduction in rate by a factor of 11.

Fig. 2. Rate R required in the delivery phase as a function of memory size M for N = 30 files and K = 30 users. The figure compares the performance of the proposed coded caching scheme to that of conventional uncoded caching.

The remainder of this paper is organized as follows. Section II formally introduces our information-theoretic formulation of the caching problem. Section III presents the main results, which are illustrated with examples in Section IV. Sections V–VII contain proofs. Section VIII discusses some follow-up results and directions for future research.

II. PROBLEM SETTING

Before formally introducing the problem in Section II-B, we start with an informal description in Section II-A.

A. Informal Problem Description

We consider a system with one server connected through a shared, error-free link to K users, as shown in Fig. 1. The server has access to a database of N files W_1, …, W_N, each of size F bits. Each user k has an isolated cache memory Z_k of size MF bits for some real number M ∈ [0, N].

The system operates in two phases: a placement phase and a delivery phase. In the placement phase, the users are given access to the entire database W_1, …, W_N of files. Each user k is then able to fill the content of its cache Z_k using the database. In the delivery phase, only the server has access to the database of files. Each user k requests one of the files W_{d_k} in the database. The server is informed of these requests and proceeds by transmitting a signal X_{(d_1,…,d_K)} of size RF bits over the shared link for some fixed real number R. The quantities RF and R are referred to as the load and the rate of the shared link, respectively. Using the content Z_k of its cache and the signal X_{(d_1,…,d_K)} received over the shared link, each user k aims to reconstruct its requested file W_{d_k}.
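The rate comparison from the Introduction (N = K = 30, M = 10) can be checked numerically. The following is a small sketch of ours, not from the paper; the function names are our own:

```python
def uncoded_rate(N, K, M):
    # R_U(M) = K * (1 - M/N) * min{1, N/K}: rate of conventional uncoded caching
    return K * (1 - M / N) * min(1, N / K)

def coded_rate(N, K, M):
    # R_C(M) = K * (1 - M/N) / (1 + K*M/N): rate of the proposed coded scheme
    # (valid at the corner points for N >= K)
    return K * (1 - M / N) / (1 + K * M / N)

N = K = 30
M = 10
print(uncoded_rate(N, K, M))   # 20 files' worth of bits
print(coded_rate(N, K, M))     # about 1.82 files
print(uncoded_rate(N, K, M) / coded_rate(N, K, M))  # factor-11 reduction
```

The ratio of the two rates is exactly 1 + KM/N = 11, independent of the common local-gain factor.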
2858 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 60, NO. 5, MAY 2014
A memory-rate pair (M, R) is achievable for requests d_1, …, d_K if every user k is able to recover its desired file W_{d_k} (with high probability for F large enough). A memory-rate pair (M, R) is said to be achievable if this pair is achievable for every possible request d_1, …, d_K in the delivery phase. Finally, we denote by R*(M) the smallest rate R such that (M, R) is achievable. The function R*(M) describes the memory-rate tradeoff for the caching problem. The aim of this paper is to characterize this memory-rate tradeoff. In other words, we aim to find the minimum rate of communication over the shared link at which all possible demand tuples can be satisfied.

We illustrate these definitions with the example of the caching strategy employed by conventional uncoded caching systems, which will serve as a baseline scheme throughout the remainder of this paper.

Example 1 (Uncoded Caching). For a memory size of MF bits, one possible strategy is for each user to cache the same M/N fraction of each file in the placement phase. In the delivery phase, the server simply transmits the remaining 1 − M/N fraction of any requested file over the shared link. Clearly, each user can recover its requested file from the content of its local cache and the signal sent over the shared link. In the worst case, in which the users request different files, the delivery rate for this caching scheme is thus

R_U(M) ≜ K · (1 − M/N) · min{1, N/K}.   (1)

We refer to this caching strategy as uncoded caching, since both content placement and delivery are uncoded. The first factor K in (1) is the rate without caching. The second factor in (1) is 1 − M/N. We call this the local caching gain, since it arises from having a fraction M/N of each file available locally at the user. If N < K, the system enjoys an additional gain, reflected in the third factor in (1). In this case, some users will necessarily request the same file, resulting in a natural multicasting gain of N/K. ♦

B. Formal Problem Statement

We now provide the formal definition of the information-theoretic caching problem. Let (W_n)_{n=1}^N be N independent random variables, each uniformly distributed over

[2^F] ≜ {1, 2, …, 2^F}

for some F ∈ N. Each W_n represents a file of size F bits. An (M, R) caching scheme consists of K caching functions, N^K encoding functions, and KN^K decoding functions. The K caching functions

φ_k : [2^F]^N → [2^{FM}]

map the files W_1, …, W_N into the cache content

Z_k ≜ φ_k(W_1, …, W_N)

for each user k ∈ [K] during the placement phase. The N^K encoding functions

ψ_{(d_1,…,d_K)} : [2^F]^N → [2^{FR}]

map the files W_1, …, W_N to the input

X_{(d_1,…,d_K)} ≜ ψ_{(d_1,…,d_K)}(W_1, …, W_N)

of the shared link responding to the requests (d_1, …, d_K) ∈ [N]^K during the delivery phase. Finally, the KN^K decoding functions

μ_{(d_1,…,d_K),k} : [2^{RF}] × [2^{FM}] → [2^F]

map the signal X_{(d_1,…,d_K)} received over the shared link and the cache content Z_k to the estimate

Ŵ_{(d_1,…,d_K),k} ≜ μ_{(d_1,…,d_K),k}(X_{(d_1,…,d_K)}, Z_k)

of the requested file W_{d_k} of user k ∈ [K]. The probability of error is defined as

max_{(d_1,…,d_K) ∈ [N]^K} max_{k ∈ [K]} P(Ŵ_{(d_1,…,d_K),k} ≠ W_{d_k}).

Definition. The pair (M, R) is achievable if for every ε > 0 and every large enough file size F there exists an (M, R) caching scheme with probability of error less than ε. We define the memory-rate tradeoff

R*(M) ≜ inf{R : (M, R) is achievable}.

III. MAIN RESULTS

The first theorem presents an achievable rate R_C(M), yielding an upper bound on the memory-rate tradeoff R*(M).

Theorem 1. For N ∈ N files and K ∈ N users, each with a cache of size M ∈ {0, N/K, 2N/K, …, N},

R*(M) ≤ R_C(M) ≜ K · (1 − M/N) · min{ 1/(1 + KM/N), N/K }

is achievable. For general 0 ≤ M ≤ N, the lower convex envelope of these points is achievable.

The rate R_C(M) is achieved by a coded caching scheme that is described and analyzed in detail in Section V. For ease of exposition, we first focus on the case N ≥ K, in which

R_C(M) = K · (1 − M/N) · 1/(1 + KM/N).   (2)

The achievable rate R_C(M) consists of three distinct factors. The first factor in R_C(M) is K. This is the worst-case rate without caches at the users (i.e., M = 0).

The second factor in R_C(M) is 1 − M/N. Referring to (1) in Example 1, we see that this term, capturing the local caching gain, appears also in the rate expression of the uncoded caching scheme. Observe that this local gain is a function of the normalized local memory size M/N, and it is relevant whenever M is on the order of N.

Finally, the third factor in R_C(M) is 1/(1 + KM/N), which we call the global caching gain. This gain is a function of the normalized global or cumulative memory size KM/N, and it is relevant whenever KM is on the order of (or larger than) N. This global gain is to be interpreted as a multicasting gain available simultaneously for all possible demands. Note that, since the number of users is smaller than the number of files, in the worst case all users request different files. Hence, there are no natural multicasting opportunities. The scheme proposed in Theorem 1 carefully designs the content placement in order to create coded multicasting opportunities in the delivery phase even among users that request different files.
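The rate of Theorem 1 can be evaluated directly. The following is a sketch of ours (function names are our own): since for N ≥ K the corner points lie on a convex curve, linear interpolation between adjacent corners yields the lower convex envelope; for N < K only the corner-point formula is used here.

```python
def rc_corner(N, K, M):
    """R_C at a corner point M in {0, N/K, 2N/K, ..., N} (Theorem 1)."""
    return K * (1 - M / N) * min(1 / (1 + K * M / N), N / K)

def rc(N, K, M):
    """Lower convex envelope for general 0 <= M <= N, assuming N >= K, by
    interpolating between the neighboring corners t and t + 1 (t = floor(MK/N))."""
    t = int(M * K / N)  # corner index at or just below M
    if t == K:
        return 0.0
    m0, m1 = t * N / K, (t + 1) * N / K
    a = (M - m0) / (m1 - m0)
    return (1 - a) * rc_corner(N, K, m0) + a * rc_corner(N, K, m1)
```

For example, with N = K = 3, the corner points are R_C(0) = 3, R_C(1) = 1, R_C(2) = 1/3, and R_C(3) = 0, and rc(3, 3, 1.5) returns the midpoint 2/3.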
Together, these two cut-set bounds result in the dashed red curve in Fig. 5. This coincides with the lower bound in Theorem 2. ♦

V. CODED CACHING SCHEME (PROOF OF THEOREM 1)

We now present the general achievable scheme. We first describe the algorithm in words, focusing on the corner points of R_C(M). Consider a cache size M ∈ {0, N/K, 2N/K, …, N}, and set

t ≜ MK/N.

Observe that t is an integer between 0 and K.

If M = 0, then in the delivery phase the server can simply transmit the union of all requested files over the shared link, resulting in F · min{N, K} bits being sent. Hence

R*(0) ≤ min{N, K}.

If M = N, then all files in the database can be cached at every user in the placement phase. Hence

R*(N) = 0.

Assume in the following that M is strictly larger than zero and strictly less than N, so that t ∈ {1, 2, …, K − 1}.

In the placement phase, each file is split into (K choose t) nonoverlapping subfiles of equal size. It will be convenient to label the subfiles of file W_n as

W_n = (W_{n,T} : T ⊂ [K], |T| = t),

where we recall the notation

[K] ≜ {1, …, K}.

For each n ∈ [N], subfile W_{n,T} is placed in the cache of user k if k ∈ T. Thus, each user caches a total of N · (K−1 choose t−1) subfiles. Since each of these subfiles has size F/(K choose t), this requires

N · (K−1 choose t−1) · F/(K choose t) = F · Nt/K = FM

bits of cache memory at each user, satisfying the memory constraint.

Example 6. Let N = K = 3 and M = 2. Then t = 2 and the content placement is

Z_1 = (W_{n,{1,2}}, W_{n,{1,3}})_{n=1}^3,
Z_2 = (W_{n,{1,2}}, W_{n,{2,3}})_{n=1}^3,
Z_3 = (W_{n,{1,3}}, W_{n,{2,3}})_{n=1}^3,

as we have already seen in Example 5 in Section IV. ♦

We next describe the delivery phase. Consider the request vector (d_1, …, d_K) ∈ [N]^K, i.e., user k requests file W_{d_k}. We focus on a subset S ⊂ [K] of |S| = t + 1 users. Observe that every t users in S share a subfile in their caches that is needed at the remaining user in S. More precisely, fix a user s ∈ S, and note that |S \ {s}| = t. The subfile W_{d_s, S\{s}} is requested by user s, since it is a subfile of W_{d_s}. At the same time, it is missing at user s, since s ∉ S \ {s}. Finally, it is present in the cache of any user k ∈ S \ {s}.

For each such subset S ⊂ [K] of cardinality |S| = t + 1, the server transmits

⊕_{s∈S} W_{d_s, S\{s}},

where ⊕ denotes bitwise XOR. Each of these sums results in F/(K choose t) bits being sent over the shared link. Since the number of subsets S is (K choose t+1), the total number of bits sent over the shared link is

(K choose t+1) · F/(K choose t) = F · (K − t)/(t + 1) = F · K(1 − M/N)/(1 + KM/N),

where we have used that t = MK/N.

Example 7. Let N = K = 3 and M = 1. Then t = 1 and the content placement is Z_k = (W_{n,{k}})_{n=1}^3. For the request (d_1, d_2, d_3) = (1, 2, 3), the signal transmitted in the delivery phase is

X_{(1,2,3)} = (W_{1,{2}} ⊕ W_{2,{1}}, W_{1,{3}} ⊕ W_{3,{1}}, W_{2,{3}} ⊕ W_{3,{2}}),

as we have already seen in Example 5 in Section IV. Here, the three sums correspond to S = {1, 2}, S = {1, 3}, and S = {2, 3}, respectively. ♦

We now argue that each user can successfully recover its requested message. Consider again a subset S ⊂ [K] with |S| = t + 1. Since each user k ∈ S already has access to the subfiles W_{d_s, S\{s}} for all s ∈ S \ {k}, it can solve for W_{d_k, S\{k}} from the signal

⊕_{s∈S} W_{d_s, S\{s}}

sent over the shared link. Since this is true for every such subset S, each receiver k is able to recover all subfiles of the form

(W_{d_k, T} : T ⊂ [K] \ {k}, |T| = t)

of the requested file W_{d_k}. The remaining subfiles are of the form W_{d_k, T} for T such that k ∈ T. But these subfiles are already available in the cache of user k. Hence each user k can recover all subfiles of its requested file W_{d_k}.

This shows that, for M ∈ {N/K, 2N/K, …, (K − 1)N/K},

R*(M) ≤ K · (1 − M/N) · 1/(1 + KM/N).

As was pointed out earlier (see Example 4 in Section IV), the points on the line connecting any two achievable points are also achievable. Finally, taking the minimum between the rate derived here and the rate

K · (1 − M/N) · min{1, N/K}

achieved by the conventional uncoded scheme described in Example 1 in Section II-A proves Theorem 1.

For completeness, in Algorithm 1, we formally describe the placement and the delivery procedures of the coded caching scheme for N files and K users, focusing on the corner points of R_C(M).
Algorithm 1 (delivery procedure):

procedure DELIVERY(W_1, …, W_N, d_1, …, d_K)
    t ← MK/N
    𝒮 ← {S ⊂ [K] : |S| = t + 1}
    X_{(d_1,…,d_K)} ← (⊕_{k∈S} W_{d_k, S\{k}} : S ∈ 𝒮)
end procedure

Fig. 6. Cut corresponding to parameter s = 2 in the proof of the converse. In the figure, N = K = 4, and X_1 = X_{(1,2,3,4)}, X_2 = X_{(3,4,2,1)}.
Combining (3), (6), (9), and (12) yields

R_C(M) / R*(M) ≤ 12

for all N, K and all 0 ≤ M ≤ N.

[…] caches a randomly selected subset of the bits. We show that the rate of this scheme is within a constant factor of optimal universally for any number of users K. Using this universality property allows us to address the problem of asynchronous user requests and of differing networks during the two phases.

F. Sharper Approximations

The upper and lower bounds on the memory-rate tradeoff derived in this paper are shown to be within a factor 12 of each other. Sharpening this approximation guarantee is of interest. As mentioned, numerical simulations suggest that a more careful analysis should be able to decrease the gap to within a factor of 5. To go beyond that, better lower and upper bounds are needed, both of which we know from Example 4 can be improved (see also the Appendix). Some further questions in this context are as follows.

• Linear versus Nonlinear Coding: As pointed out above, nonlinear coding schemes can offer unbounded gain in network and index coding [14], whereas for the caching problem linear coding schemes are sufficient to achieve optimality within a constant factor. This raises the questions of whether we can improve the gap using nonlinear schemes, or whether there exist caching networks for which nonlinear schemes can offer unbounded gain.

• Larger Field Size: All operations in this paper are over the binary field. In contrast, in network and index coding larger field sizes are useful. The question is thus whether larger field sizes can improve the performance of coded caching.

• Coded Content Placement: Example 4 presents a scenario with two users, two files, and cache size 1/2, where coded content placement can improve the achievable rate. While this type of coded content placement is not needed for a constant-factor approximation of the memory-rate tradeoff, it might lead to a smaller constant factor.

• Zero Error versus Vanishing Error: The problem setting as described in Section II allows for a vanishing error probability. However, the proposed achievable scheme has zero error probability, while still being approximately optimal. Schemes with vanishing error probability, or lower bounds making explicit use of the zero-error requirement, might be used to find sharper approximations.

G. Implementation Complexity

Compared to uncoded caching, the coded caching approach suggested in this paper imposes an additional computational burden on the server and the users, especially for large values of K. One approach to deal with this burden is to use coded caching only among smaller subgroups of users. This results in a lower computational load at the expense of higher rates over the shared link. Deriving the fundamental tradeoff between rate and memory subject to complexity constraints is of great interest.

APPENDIX

Example 4 in Section IV derives upper and lower bounds on the memory-rate tradeoff R*(M) claimed in Theorems 1 and 2 for K = N = 2. While these two bounds are sufficient to characterize R*(M) to within a constant multiplicative gap, as we show in what follows, neither of the two bounds is tight in general.

We start by arguing that the achievable scheme can be improved by focusing on the case M = 1/2. As before, we split each file into two subfiles, i.e., A = (A_1, A_2) and B = (B_1, B_2). In the placement phase, we choose the cache contents as Z_1 = A_1 ⊕ B_1 and Z_2 = A_2 ⊕ B_2. Assume that user one requests file A and user two requests file B. The server can satisfy these requests by transmitting (B_1, A_2) at rate R = 1. The other three possible requests can be satisfied in a similar manner. This shows that the (M, R) pair (1/2, 1) is achievable. This new point improves the boundary of the achievable region from the solid blue to the dotted black curve in Fig. 3 in Section IV.

We now argue that the lower bound on R*(M) can also be improved. We have, for any achievable memory-rate pair (M, R),

2RF + 2MF ≥ H(X_{(1,2)}, Z_1) + H(X_{(2,1)}, Z_2)
          = H(X_{(1,2)}, Z_1 | W_1) + H(X_{(2,1)}, Z_2 | W_1) + I(W_1; X_{(1,2)}, Z_1) + I(W_1; X_{(2,1)}, Z_2)
          ≥ H(X_{(1,2)}, Z_1, X_{(2,1)}, Z_2 | W_1) + I(W_1; X_{(1,2)}, Z_1) + I(W_1; X_{(2,1)}, Z_2)
          ≥ I(W_2; X_{(1,2)}, Z_1, X_{(2,1)}, Z_2 | W_1) + I(W_1; X_{(1,2)}, Z_1) + I(W_1; X_{(2,1)}, Z_2).

Now, since W_1 can be decoded from (X_{(1,2)}, Z_1) and also from (X_{(2,1)}, Z_2), and since W_2 can be decoded from (X_{(1,2)}, Z_1, X_{(2,1)}, Z_2), we obtain using Fano's inequality that

I(W_1; X_{(1,2)}, Z_1) ≥ F − εF,
I(W_1; X_{(2,1)}, Z_2) ≥ F − εF,
I(W_2; X_{(1,2)}, Z_1, X_{(2,1)}, Z_2 | W_1) ≥ F − εF

for any ε > 0 as long as F is large enough. Hence,

2RF + 2MF ≥ 3F − 3εF.

Since ε > 0 is arbitrary, this shows that

R ≥ 3/2 − M.

Since this is true for any achievable (M, R) pair, this implies that

R*(M) ≥ 3/2 − M.

This bound, together with the two cut-set bounds derived earlier, proves that the dotted black curve depicted in Fig. 3 in Section IV is indeed equal to R*(M).

ACKNOWLEDGMENT

The authors would like to thank S. Borst and S. Kennedy for their comments on an earlier draft of this paper, as well as the reviewers for their careful reading of the manuscript.
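The Appendix's M = 1/2 coded-placement example (Z_1 = A_1 ⊕ B_1, Z_2 = A_2 ⊕ B_2, server sends (B_1, A_2)) can be checked mechanically. A small sketch of ours, with file halves modeled as two-byte strings:

```python
def xor(a, b):
    # bitwise XOR of two equal-length byte strings
    return bytes(x ^ y for x, y in zip(a, b))

A1, A2 = b"aa", b"bb"   # halves of file A
B1, B2 = b"cc", b"dd"   # halves of file B

# Coded placement for M = 1/2: each cache holds half a file's worth of bits.
Z1, Z2 = xor(A1, B1), xor(A2, B2)

# Demands (A, B): the server transmits (B1, A2), i.e., one file's worth (R = 1).
sent = (B1, A2)

# User 1 wants A: recovers A1 = Z1 XOR B1, and receives A2 directly.
user1 = xor(Z1, sent[0]) + sent[1]
# User 2 wants B: receives B1 directly, and recovers B2 = Z2 XOR A2.
user2 = sent[0] + xor(Z2, sent[1])

assert user1 == A1 + A2 and user2 == B1 + B2
```

The other three demand pairs are handled symmetrically, confirming that the (M, R) pair (1/2, 1) is achievable.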
REFERENCES

[1] L. W. Dowdy and D. V. Foster, "Comparative models of the file assignment problem," ACM Comput. Surv., vol. 14, pp. 287–313, Jun. 1982.
[2] K. C. Almeroth and M. H. Ammar, "The use of multicast delivery to provide a scalable and interactive video-on-demand service," IEEE J. Sel. Areas Commun., vol. 14, no. 6, pp. 1110–1122, Aug. 1996.
[3] A. Dan, D. Sitaram, and P. Shahabuddin, "Dynamic batching policies for an on-demand video server," Multimedia Syst., vol. 4, pp. 112–121, Jun. 1996.
[4] M. R. Korupolu, C. G. Plaxton, and R. Rajaraman, "Placement algorithms for hierarchical cooperative caching," in Proc. ACM-SIAM SODA, Jan. 1999, pp. 586–595.
[5] A. Meyerson, K. Munagala, and S. Plotkin, "Web caching using access statistics," in Proc. ACM-SIAM SODA, 2001, pp. 354–363.
[6] I. Baev, R. Rajaraman, and C. Swamy, "Approximation algorithms for data placement problems," SIAM J. Comput., vol. 38, pp. 1411–1429, Jul. 2008.
[7] S. Borst, V. Gupta, and A. Walid, "Distributed caching algorithms for content distribution networks," in Proc. IEEE INFOCOM, Mar. 2010, pp. 1478–1486.
[8] Y. Birk and T. Kol, "Coding on demand by an informed source (ISCOD) for efficient broadcast of different supplemental data to caching clients," IEEE Trans. Inf. Theory, vol. 52, no. 6, pp. 2825–2830, Jun. 2006.
[9] Z. Bar-Yossef, Y. Birk, T. S. Jayram, and T. Kol, "Index coding with side information," IEEE Trans. Inf. Theory, vol. 57, no. 3, pp. 1479–1494, Mar. 2011.
[10] T. M. Cover and J. A. Thomas, Elements of Information Theory. New York, NY, USA: Wiley, 1991.
[11] M. Effros, S. El Rouayheb, and M. Langberg, "An equivalence between network coding and index coding," arXiv:1211.6660 [cs.IT], Nov. 2012.
[12] R. Ahlswede, N. Cai, S.-Y. R. Li, and R. W. Yeung, "Network information flow," IEEE Trans. Inf. Theory, vol. 46, no. 4, pp. 1204–1216, Apr. 2000.
[13] M. Langberg and A. Sprintson, "On the hardness of approximating the network coding capacity," IEEE Trans. Inf. Theory, vol. 57, no. 2, pp. 1008–1014, Feb. 2011.
[14] A. Blasiak, R. Kleinberg, and E. Lubetzky, "Lexicographic products and the power of non-linear network coding," in Proc. IEEE 52nd FOCS, Oct. 2011, pp. 609–618.
[15] M. A. Maddah-Ali and U. Niesen, "Decentralized coded caching attains order-optimal memory-rate tradeoff," arXiv:1301.5848 [cs.IT], Jan. 2013.
[16] D. D. Sleator and R. E. Tarjan, "Amortized efficiency of list update and paging rules," Commun. ACM, vol. 28, pp. 202–208, Feb. 1985.
[17] R. Pedarsani, M. A. Maddah-Ali, and U. Niesen, "Online coded caching," arXiv:1311.3646 [cs.IT], Nov. 2013.
[18] U. Niesen and M. A. Maddah-Ali, "Coded caching with nonuniform demands," arXiv:1308.0178 [cs.IT], Aug. 2013.
[19] M. Ji, G. Caire, and A. Molisch, "Fundamental limits of distributed caching in D2D wireless networks," in Proc. IEEE ITW, Sep. 2013, pp. 1–5.

Mohammad Ali Maddah-Ali (S'02–M'08) received the B.Sc. degree from Isfahan University of Technology, Isfahan, Iran, in 1997 and the M.A.Sc. degree from the University of Tehran, Tehran, Iran, in 2000, both in electrical engineering with highest rank in classes. From 2002 to 2007, he was with the Coding and Signal Transmission Laboratory (CST Lab), Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, Canada, working toward the Ph.D. degree. From March 2007 to December 2007, he was a Post-Doctoral Fellow at the Wireless Technology Laboratories, Nortel Networks, Ottawa, ON, Canada, in a joint program between CST Lab and Nortel Networks. From January 2008 to August 2010, he was a Post-Doctoral Fellow at the Wireless Foundations Center, Department of Electrical Engineering and Computer Sciences, University of California at Berkeley. Since September 2010, he has been at Bell Laboratories, Alcatel-Lucent, Holmdel, NJ, as a communication network research scientist. His research interests include wireless communications and multiuser information theory. Dr. Maddah-Ali received several awards, including the Natural Sciences and Engineering Research Council of Canada (NSERC) Post-Doctoral Fellowship.

Urs Niesen (S'03–M'09) received the M.S. degree from the School of Computer and Communication Sciences at the École Polytechnique Fédérale de Lausanne (EPFL) in 2005 and the Ph.D. degree from the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology (MIT) in 2009. Currently, he is a member of technical staff in the Mathematics of Networks and Communications Research Department at Bell Labs, Alcatel-Lucent. His research interests are in the general area of information theory with applications in networking and communications.