CMSC 39600: PCPs, codes and inapproximability Oct 2, 2007

Lecture 3: Linearity Testing
Lecturer: Prahladh Harsha Scribe: Joshua A. Grochow
In today’s lecture we’ll go over some preliminaries to the coding theory we will need
throughout the course. In particular, we will go over the Walsh-Hadamard code and the
Blum-Luby-Rubinfeld linearity test. In the following lecture we will use these tools to show
that NP ⊆ PCP[poly, O(1)] (recall that our goal is ultimately to bring the polynomial
amount of randomness down to a logarithmic amount). A common feature throughout the
course is that we will use codes to construct PCPs and, vice versa, use PCPs to construct
useful codes.
3.1 Coding Theory – Preliminaries
Coding theory is the study of how to communicate reliably over a noisy channel. The most
common setting is as follows: A message m is put through an encoder E, yielding a value
E(m), also called the codeword (typically much longer than m). The codeword E(m) is
then sent through the noisy channel and arrives at the other end with some noise introduced
η by the channel as E(m) + η. The decoder D then takes E(m) + η as input, and ideally
outputs m as long as the noise is not too much. (Note that it is expected of the decoder
that when applied to E(m) it outputs m.)
Formally, a code is specified by an encoding function C : {0,1}^k → {0,1}^n; the outputs of C are called codewords. The rate of the code C is k/n, i.e., the ratio of input bits to codeword bits. Heuristically, this is the amount of information of the input message per bit of the codeword.
An important notion in coding theory is that of the distance between two codewords; here (and typically) we use either the Hamming distance ∆(x, y) = #{i | x_i ≠ y_i} or the normalized (Hamming) distance δ(x, y) = ∆(x, y)/n (where n = |x| = |y|).
The distance d of a code C is the minimum distance between two distinct codewords, i.e., min_{x ≠ y} ∆(C(x), C(y)). For a code of distance d, if the number of bits flipped is strictly less than d/2 (i.e., weight(η) < d/2), then E(m) + η can be uniquely decoded to m. A code C : {0,1}^k → {0,1}^n with distance d is called an (n, k, d)_2-code, where n is the size of the codewords, k the size of the input, d the distance of the code, and 2 the size of the alphabet {0,1}.
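To make these definitions concrete, here is a minimal Python sketch (ours, not from the lecture); the 3-fold repetition code used as a running example is a standard toy code, not one discussed above:

```python
from itertools import combinations, product

def hamming(x, y):
    """Hamming distance Delta(x, y): number of positions where x and y differ."""
    assert len(x) == len(y)
    return sum(xi != yi for xi, yi in zip(x, y))

def encode_rep3(m):
    """Toy (3k, k, 3)_2 repetition code: repeat each message bit three times."""
    return tuple(b for bit in m for b in (bit, bit, bit))

k = 3
codewords = {m: encode_rep3(m) for m in product((0, 1), repeat=k)}

# Distance of the code: min over distinct messages of Delta(C(x), C(y)).
d = min(hamming(c1, c2) for c1, c2 in combinations(codewords.values(), 2))
print("d =", d)  # 3

# Unique decoding: with weight(eta) < d/2, nearest-codeword decoding recovers m.
def decode(r):
    return min(codewords, key=lambda m: hamming(r, codewords[m]))

m = (1, 0, 1)
r = list(encode_rep3(m))
r[0] ^= 1                      # one flipped bit: 1 < d/2 = 1.5
assert decode(tuple(r)) == m
```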
A code C is called linear if whenever x and y are codewords, so is x + y (where addition is performed bitwise modulo 2, i.e., XOR). (If we work over an alphabet larger than binary, e.g., a larger finite field, we additionally require that if x is a codeword then so is every scalar multiple αx.) To indicate that an (n, k, d)_2-code is also linear, we use the notation [n, k, d]_2-code (with square brackets). The word "code" is commonly used to refer both to the encoding function C (as stated above) and to the set of all codewords Im(C).
Typically, the most significant trade-off in coding theory is that between the rate and the distance of a code. For example, given a particular rate, we might ask what is the best distance that can be achieved.
3.1.1 Algorithmic Questions and Sublinear Time Algorithms
There are three main algorithmic questions that arise in coding theory:
1. Complexity of encoding;
2. Error detection: given r ∈ {0,1}^n, decide if r is a codeword; and
3. Error correction: given r ∈ {0,1}^n, find x ∈ {0,1}^k minimizing ∆(r, C(x)).
In this course, we will be interested in these questions in the context of sublinear time algorithms. We need to be specific about what we mean by sublinear time. Note that a sublinear time algorithm A computing a function f : {0,1}^k → {0,1}^n doesn't have enough time to read its input or write its output. We get around this by accessing the input and writing the output via oracle access. That is, an oracle machine A is a sublinear time algorithm for f if A^x(j) = f(x)_j, where f(x)_j is the j-th bit of f(x). Note that j is only log n bits long, so it can be read entirely in sublinear time. More formally,
• The input is represented implicitly by an oracle. Whenever the sublinear time algorithm wants to access the j-th bit of the input string x (for some j ∈ [k]), it queries the input x-oracle for the j-th bit and obtains x_j.
• The output is not explicitly written by the algorithm; instead it is only implicitly given by the algorithm. Formally, on being queried for index i of the output string f(x) (for some i ∈ [n]), the algorithm outputs the bit f(x)_i. Thus, the algorithm itself behaves as an oracle for the string f(x), which in turn has oracle access to the input oracle x.
• Since the algorithm does not read the entire input x, we cannot expect it to compute the output f(x) exactly. We instead relax our guarantee on the output as follows: on input x ∈ {0,1}^k, the algorithm must compute f(x′) exactly for some x′ ∈ {0,1}^k that is ε-close to the actual input x. In other words, the algorithm computes the function on some approximation to the input instead of the input itself.
[Figure 1: Sublinear time algorithms]
Property testing, for those familiar with it, is typically done in this framework. Figure 1 gives a pictorial description of a sublinear time algorithm with the above-mentioned relaxations.
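To illustrate the oracle-access convention, here is a toy Python sketch (the function f and all names are ours, purely hypothetical): for f(x)_j = x_j ⊕ x_{(j+1) mod k}, the machine answers a query for one output bit with only two queries to the input oracle, regardless of |x|.

```python
import random

def A(x_oracle, k, j):
    """A^x(j) = f(x)_j for the hypothetical f(x)_j = x_j XOR x_{(j+1) mod k}.
    Two input queries per output bit, independent of k."""
    return x_oracle(j) ^ x_oracle((j + 1) % k)

# The input sits behind an oracle; the algorithm never reads all of it.
k = 10**6
x = [random.getrandbits(1) for _ in range(k)]
queries = []

def x_oracle(i):
    queries.append(i)          # record queries, for illustration
    return x[i]

print(A(x_oracle, k, 42), "using", len(queries), "queries")  # ... using 2 queries
```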
Now we’ll consider the algorithmic questions of coding theory in the sublinear time
framework. Let’s consider the questions above one by one:
1. For a code to have good error-correcting properties, most bits of the codeword need to depend on most message bits. Taking this into account, it does not seem reasonable to expect a "good" code to have sublinear time encoding. (However, there has been some recent work in the area of locally encodable codes, obtained by relaxing some of the error-correction properties of the code.)
2. Error detection: in this context, error detection is known as local testing. That is, given r ∈ {0,1}^n, test whether r is a codeword or far from every codeword (similar to the gap problems we saw earlier, and also to property testing).
3. Error correction is known as local decoding in this context. Given j, the decoder queries some bits of a noisy codeword C(x) + η and outputs the j-th bit of x. More formally, we say a code is locally decodable if there exists a local decoder Dec such that for all x and r with ∆(r, C(x)) < εn, the decoder satisfies
∀i ∈ {1, . . . , k}, Pr[Dec^r(i) = x_i] ≥ 3/4.
If there is such a decoder for a given code C and it makes fewer than q queries, then we say C is (q, ε)-locally decodable.
Obviously, sublinear time algorithms in general, and local decoding and local testing
algorithms in particular, will be randomized algorithms.
Note that local decoding is only required to work when the input is sufficiently close to a
codeword, whereas local testability determines whether a given string is close to a codeword
or far from any codeword. Thus the local decodability of a code says nothing about its local
testability.
3.2 The Walsh-Hadamard Code and Linearity Testing
Now onto our first code, the Walsh-Hadamard code. This will be the main tool in proving
that NP has exponential size PCPs, i.e. NP ⊆ PCP[poly, O(1)]. There are two dual views
of the Walsh-Hadamard code: on the one hand, WH(x) is the evaluation of every linear
function at x; on the other hand, it consists of the dot products of x with every a ∈ {0,1}^k.
A function f : {0,1}^k → {0,1} is linear if ∀x, y, f(x + y) = f(x) + f(y).
For example, for any a ∈ {0,1}^k, the function ℓ_a(x) = Σ_i a_i x_i mod 2 (i.e., the dot product of a and x as vectors over GF(2)) is a linear function. In fact, L_k = {ℓ_a | a ∈ {0,1}^k} is the set of all linear boolean functions. (Proof: observe that the space of linear functions is a vector space of dimension k, so it contains exactly 2^k = |L_k| functions.)
The Walsh-Hadamard code WH : {0,1}^k → {0,1}^{2^k} is then defined as WH(x) = ℓ_x, i.e., WH(x) is the truth table of ℓ_x. More formally, for any a ∈ {0,1}^k, the a-th bit of the Walsh-Hadamard codeword WH(x) is WH(x)_a = ℓ_x(a).
Since any two distinct linear functions disagree on exactly half of the inputs (i.e., Pr_a[ℓ_x(a) ≠ ℓ_y(a)] = 1/2 for x ≠ y), the fractional distance of the Walsh-Hadamard code is 1/2. Thus the Walsh-Hadamard code is a [2^k, k, 2^{k−1}]_2-code. It has very good distance, but poor rate.
It is useful to note that there are two dual views of the Walsh-Hadamard code based on the fact that WH(x)_a = ℓ_x(a) = ℓ_a(x). Thus, WH(x) is both the evaluation of the linear function ℓ_x at every point and the evaluation of every linear function at the point x.
Note that the WH code has the special property that the input bits x_i are in fact a subset of the codeword bits, since x_i = ℓ_x(e_i) (where e_i is the i-th standard basis vector). Codes with this property are called systematic codes.
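A brute-force Python sketch (ours; feasible only for small k, since codewords have length 2^k) of the WH encoding, checking the distance and systematic properties stated above:

```python
from itertools import combinations, product

def dot(a, x):
    """ell_a(x) = sum_i a_i * x_i mod 2, the dot product over GF(2)."""
    return sum(ai & xi for ai, xi in zip(a, x)) % 2

def WH(x):
    """Walsh-Hadamard encoding: the truth table of ell_x, one bit per a in {0,1}^k."""
    k = len(x)
    return tuple(dot(x, a) for a in product((0, 1), repeat=k))

k = 4
msgs = list(product((0, 1), repeat=k))

# Any two distinct codewords differ in exactly 2^(k-1) of the 2^k positions.
assert all(sum(u != v for u, v in zip(WH(m1), WH(m2))) == 2**(k - 1)
           for m1, m2 in combinations(msgs, 2))

# Systematic: x_i = ell_x(e_i), so the message bits appear among the codeword bits.
x = (1, 0, 1, 1)
for i in range(k):
    e_i = tuple(int(j == i) for j in range(k))
    assert dot(x, e_i) == x[i]
```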
3.2.1 Local Decodability of the Walsh-Hadamard Code
Decoding the Walsh-Hadamard code is very simple. Given a garbled codeword f : {0,1}^k → {0,1} which is δ-close to some Walsh-Hadamard codeword, the decoder Dec works as follows (the decoder Dec has oracle access to the function f):
Dec^f: "On input z,
1. Choose r ∈_R {0,1}^k
2. Query f(z + r) and f(r)
3. Output f(z + r) − f(r)"
Claim 3.1. If f : {0,1}^k → {0,1} is δ-close to WH(x), then
Pr[Dec^f(z) = ℓ_x(z)] ≥ 1 − 2δ, ∀z ∈ {0,1}^k.
Proof. Since f is δ-close to ℓ_x, we have that for a random r, the probability that f(z + r) = ℓ_x(z + r) is at least 1 − δ, and so is the probability that f(r) = ℓ_x(r). If both of these conditions hold, then Dec^f(z) = ℓ_x(z), by the linearity of ℓ_x. Thus, by a union bound, Pr_r[Dec^f(z) = ℓ_x(z)] ≥ 1 − 2δ.
Since the decoder only makes two queries, the Walsh-Hadamard code is 2-locally decod-
able.
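A Python sketch of the two-query decoder (the simulation setup, corrupting a random δ fraction of the truth table, is ours):

```python
import random
from itertools import product

def dot(a, x):
    return sum(ai & xi for ai, xi in zip(a, x)) % 2

def dec(f, k, z):
    """Dec^f(z): choose random r, query f(z + r) and f(r), output their difference.
    Over GF(2), subtraction is XOR; correct w.p. >= 1 - 2*delta (Claim 3.1)."""
    r = tuple(random.getrandbits(1) for _ in range(k))
    z_plus_r = tuple(zi ^ ri for zi, ri in zip(z, r))
    return f(z_plus_r) ^ f(r)

# Simulate f: the truth table of ell_x with a delta fraction of positions flipped.
k, delta = 10, 0.05
x = tuple(random.getrandbits(1) for _ in range(k))
table = {a: dot(x, a) for a in product((0, 1), repeat=k)}
for a in random.sample(list(table), int(delta * 2**k)):
    table[a] ^= 1
f = table.__getitem__

z = tuple(random.getrandbits(1) for _ in range(k))
votes = sum(dec(f, k, z) for _ in range(101))
print("decoded:", int(votes > 50), " true bit:", dot(x, z))
```

Repeating the two-query decoder and taking a majority vote, as above, boosts the per-call success probability of 1 − 2δ arbitrarily close to 1.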
3.2.2 Local Testability of the Walsh-Hadamard Code: Linearity Testing
To locally test the WH code, we wish to test whether a given truth table is the truth table
of a linear function. The problem of local testing of the WH code is more commonly called
linearity testing. Formally, given f : {0,1}^k → {0,1}, we want to test whether it is a linear function (a WH codeword) or far from linear.
The test is, as in the WH decoder, quite simple: pick y and z uniformly at random from {0,1}^k and check that f(z) + f(y) = f(z + y). This test was proposed and first analyzed by Blum, Luby and Rubinfeld [BLR93].
BLR-Test^f: "1. Choose y, z ∈_R {0,1}^k independently
2. Query f(y), f(z), and f(y + z)
3. Accept if f(y) + f(z) = f(y + z)."
Obviously, if f is linear, this test passes with probability 1. The question is: can there be functions that are far from linear but still pass this test with high probability? No, as shown in the following theorem.
Theorem 3.2. If f is δ-far from linear, then
Pr[BLR-Test^f accepts] ≤ 1 − δ.
The above theorem is tight since a random function is 1/2-far from linear, and passes
the BLR-Test with probability exactly 1/2.
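A Python sketch of the test (ours), estimating the acceptance probability empirically; it also illustrates the tightness remark, since the random function below passes roughly half the time:

```python
import random

def blr_accept_rate(f, k, trials=20000):
    """Estimate Pr_{y,z}[f(y) + f(z) = f(y + z)] over independent uniform y, z."""
    ok = 0
    for _ in range(trials):
        y = tuple(random.getrandbits(1) for _ in range(k))
        z = tuple(random.getrandbits(1) for _ in range(k))
        y_plus_z = tuple(a ^ b for a, b in zip(y, z))
        ok += (f(y) ^ f(z)) == f(y_plus_z)
    return ok / trials

k = 8
a = tuple(random.getrandbits(1) for _ in range(k))
linear = lambda x: sum(ai & xi for ai, xi in zip(a, x)) % 2
print(blr_accept_rate(linear, k))       # 1.0: linear functions always pass

rand_table = {}
rand_f = lambda x: rand_table.setdefault(x, random.getrandbits(1))
print(blr_accept_rate(rand_f, k))       # ~0.5: a random function is 1/2-far
```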
The original proof of Blum, Luby and Rubinfeld [BLR93] is a combinatorial proof of a weaker version of the above theorem, but we will give an algebraic proof, as similar techniques will arise later in the course. This algebraic proof is due to Bellare, Coppersmith, Håstad, Kiwi and Sudan [BCH+96]. Before proceeding to the proof, we will first need to equip ourselves with some basics of Boolean Fourier analysis.
3.2.3 Fourier Analysis
First, rather than working over {0,1} as the output of a linear function, it will be convenient to treat the output space as {+1, −1} (the square roots of unity) under multiplication. Thus the Boolean 0 corresponds to +1 in this setting, and the Boolean 1 corresponds to −1. Linearity now takes the form: f : {0,1}^k → {+1, −1} is linear if f(x + y) = f(x)f(y).
Consider the family of functions F = {f : {0,1}^k → R} and equip F with the addition (f + g)(x) = f(x) + g(x). It is clear that F is a vector space over the reals. Furthermore, the characteristic functions {δ_a | a ∈ {0,1}^k} form a basis, where δ_a(x) = 1 if x = a and 0 otherwise. Thus F has dimension 2^k.
We will now show that the linear functions of the form χ_a(x) = (−1)^{ℓ_a(x)} = (−1)^{Σ_i a_i x_i mod 2} also form a basis for F. For this, it is fruitful to define an inner product on F as follows:
⟨f, g⟩ = Exp_{x ∈ {0,1}^k}[f(x)g(x)] = (1/2^k) Σ_{x ∈ {0,1}^k} f(x)g(x).
(It is an easy exercise to check that this is in fact an inner product.) Note that there are already 2^k = dim F functions of this form, so all we need to do is show that they are linearly independent.
We begin by examining a few basic properties of the functions χ_a. First, note that
Property 1. χ_a(x + y) = χ_a(x)χ_a(y),
i.e., χ_a is linear, as mentioned previously. Second,
Property 2. χ_{a+b}(x) = χ_{a−b}(x) = χ_a(x)χ_b(x),
i.e., χ_a(x) is also linear in a; this should come as no surprise, because of the duality of ℓ_a(x) as both a linear function of a and of x.
The first property we will need that is not entirely obvious is that
Property 3. Exp_x[χ_a(x)] = 1 if a = 0, and 0 otherwise.
Proof. If a = 0, this clearly holds. If a ≠ 0, then by permuting the indices we may assume that a_1 ≠ 0 without loss of generality. Then we have
2^k · Exp_x[χ_a(x)] = Σ_x (−1)^{Σ_i a_i x_i} = Σ_{x : x_1 = 1} (−1)^{Σ_i a_i x_i} + Σ_{x : x_1 = 0} (−1)^{Σ_i a_i x_i}.
Then, since (−1)^{ℓ_a(0y)} = −(−1)^{ℓ_a(1y)} (writing x = x_1 y with y ∈ {0,1}^{k−1}), these two sums exactly cancel out.
We then have
Property 4. ⟨χ_a, χ_b⟩ = 1 if a = b, and 0 otherwise.
This follows from the above via ⟨χ_a, χ_b⟩ = Exp_x[χ_a(x)χ_b(x)] = Exp_x[χ_{a−b}(x)], and then applying the previous property.
Thus, the χ_a form not only a basis for F, but an orthonormal basis for F. Since the χ_a form an orthonormal basis, for any f ∈ F we may write f = Σ_a f̂_a χ_a, where f̂_a = ⟨f, χ_a⟩ ∈ R. These f̂_a are called the Fourier coefficients of f.
Observe that if the normalized distance δ(f, χ_a) = ε, then f̂_a = 1·(1−ε) + (−1)·ε = 1 − 2ε, so the Fourier coefficients capture the normalized distance from linear functions.
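A brute-force Python sketch (ours) computing the Fourier coefficients of a ±1-valued function and checking the relation f̂_a = 1 − 2ε just derived (it also verifies Parseval's identity, stated next):

```python
from itertools import product

def chi(a, x):
    """chi_a(x) = (-1)^{ell_a(x)}: the +/-1 version of the linear function ell_a."""
    return (-1) ** (sum(ai & xi for ai, xi in zip(a, x)) % 2)

def fourier(f, k):
    """f_hat[a] = <f, chi_a> = (1/2^k) sum_x f(x) chi_a(x), by brute force."""
    pts = list(product((0, 1), repeat=k))
    return {a: sum(f(x) * chi(a, x) for x in pts) / 2**k for a in pts}

k = 4
f = lambda x: (-1) ** ((x[0] & x[1]) ^ x[2])   # some +/-1-valued function
fh = fourier(f, k)

pts = list(product((0, 1), repeat=k))
for a in pts:
    eps = sum(f(x) != chi(a, x) for x in pts) / 2**k   # delta(f, chi_a)
    assert abs(fh[a] - (1 - 2 * eps)) < 1e-9           # f_hat_a = 1 - 2*eps

# Parseval (Property 5 below): sum_a f_hat_a^2 = <f, f> = 1 for +/-1-valued f.
assert abs(sum(c**2 for c in fh.values()) - 1) < 1e-9
```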
Now we come to one of the most basic useful facts in Fourier analysis:
Property 5 (Parseval's identity). ⟨f, f⟩ = Σ_a f̂_a².
Proof. Writing f in terms of the basis χ_a, we get:
⟨f, f⟩ = ⟨Σ_a f̂_a χ_a, Σ_b f̂_b χ_b⟩
= Σ_{a,b} f̂_a f̂_b ⟨χ_a, χ_b⟩ (by bilinearity of ⟨·, ·⟩)
= Σ_a f̂_a²,
where the last line follows from the previous because the χ_a form an orthonormal basis.
Corollary 3.3. In particular, if f is a Boolean function, i.e., ±1-valued, then Σ_a f̂_a² = 1 (since then ⟨f, f⟩ = Exp_x[f(x)²] = 1).
3.2.4 Proof of Soundness of BLR-Test
Finally, we come to the proof of the soundness of the Blum-Luby-Rubinfeld linearity test:
Proof of Theorem 3.2. Suppose f is δ-far from any linear function. Note that we can rewrite the linearity condition f(x)f(y) = f(x + y) as f(x)f(y)f(x + y) = 1, since f is ±1-valued. Then
Pr_{x,y}[BLR-Test accepts f] = Pr_{x,y}[f(x)f(y)f(x + y) = 1].
Note that for any random variable Z with values in {+1, −1}, Pr[Z = 1] = Exp[(1 + Z)/2], since if Z = 1 then (1 + Z)/2 = 1, and if Z = −1 then (1 + Z)/2 = 0; so (1 + Z)/2 is an indicator variable for the event Z = 1. Thus we have
Pr_{x,y}[BLR-Test accepts f] = Exp_{x,y}[(1 + f(x)f(y)f(x + y))/2]
= 1/2 + (1/2)·Exp_{x,y}[f(x)f(y)f(x + y)].
Now, writing out f in terms of its Fourier coefficients, we get
Pr_{x,y}[BLR-Test accepts f] = 1/2 + (1/2)·Exp_{x,y}[Σ_{a,b,c} f̂_a f̂_b f̂_c χ_a(x) χ_b(y) χ_c(x + y)]
= 1/2 + (1/2)·Exp_{x,y}[Σ_{a,b,c} f̂_a f̂_b f̂_c χ_a(x) χ_b(y) χ_c(x) χ_c(y)] (using Property 1 on χ_c(x + y)).
Then, we apply linearity of expectation and the fact that x and y are independent to get:
Pr_{x,y}[BLR accepts f] = 1/2 + (1/2)·Σ_{a,b,c} f̂_a f̂_b f̂_c Exp_x[χ_a(x)χ_c(x)] Exp_y[χ_b(y)χ_c(y)]
= 1/2 + (1/2)·Σ_a f̂_a³ (by Property 4, only the terms with a = c and b = c survive)
≤ 1/2 + (1/2)·max_a(f̂_a)·Σ_a f̂_a²
= 1/2 + (1/2)·max_a(f̂_a) (by Parseval's identity)
≤ 1 − δ,
where the last line follows from the fact that f is δ-far from linear, so its largest Fourier coefficient can be at most 1 − 2δ, as noted previously.
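To see the bound concretely, here is a brute-force Python sketch (ours, not from the notes) comparing the exact acceptance probability with 1 − δ for a function far from linear:

```python
from itertools import product

def chi(a, x):
    return (-1) ** (sum(ai & xi for ai, xi in zip(a, x)) % 2)

def blr_accept_exact(f, k):
    """Exact Pr_{x,y}[f(x) f(y) f(x+y) = 1], in the +/-1 representation."""
    pts = list(product((0, 1), repeat=k))
    hits = sum(f(x) * f(y) * f(tuple(a ^ b for a, b in zip(x, y))) == 1
               for x in pts for y in pts)
    return hits / len(pts) ** 2

def dist_to_linear(f, k):
    """delta: the minimum normalized distance from f to any chi_a."""
    pts = list(product((0, 1), repeat=k))
    return min(sum(f(x) != chi(a, x) for x in pts) / 2**k for a in pts)

k = 4
f = lambda x: (-1) ** ((x[0] & x[1]) ^ (x[2] & x[3]))  # "bent": far from linear
delta = dist_to_linear(f, k)
print(blr_accept_exact(f, k), "<=", 1 - delta)  # 0.53125 <= 0.625
```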
References
[BCH+96] Mihir Bellare, Don Coppersmith, Johan Håstad, Marcos A. Kiwi, and Madhu Sudan. Linearity testing in characteristic two. IEEE Transactions on Information Theory, 42(6):1781–1795, November 1996. (Preliminary version in 36th FOCS, 1995). doi:10.1109/18.556674.
[BLR93] Manuel Blum, Michael Luby, and Ronitt Rubinfeld. Self-testing/correcting with applications to numerical problems. J. Computer and System Sciences, 47(3):549–595, December 1993. (Preliminary version in 22nd STOC, 1990). doi:10.1016/0022-0000(93)90044-W.