
Julia Kempe
School of Computer Science, Tel Aviv University
Fundamental Ideas, Spring 2009
Homework 2, due 27 April 2009, 3:10pm

If you discuss this homework with others, you must say so in the homework. Please write
a sentence at the beginning of the homework indicating whether you did it alone or with whom
you discussed it. If you get help from a published source (book, paper, internet, etc.), cite it.
No late homework will be accepted.

1. Warm-up

(a) Algebra: Construct the field GF (8) explicitly, starting with GF (2). Describe all your
steps in detail, just as they were outlined in the course. Show that your construction is
a field.
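As a sanity check (not part of the problem), the construction can be carried out by computer. The sketch below builds GF(8) as GF(2)[x] modulo the irreducible polynomial x^3 + x + 1; this is one standard choice of irreducible cubic, and the course may have used a different one.

```python
# Sketch: GF(8) as GF(2)[x] / (x^3 + x + 1), elements stored as 3-bit ints.
# The irreducible polynomial x^3 + x + 1 is one standard choice (assumption).

IRRED = 0b1011  # x^3 + x + 1

def gf8_add(a, b):
    # Addition of polynomials over GF(2) is coefficient-wise XOR.
    return a ^ b

def gf8_mul(a, b):
    # Multiply as polynomials, then reduce mod x^3 + x + 1.
    prod = 0
    for i in range(3):
        if (b >> i) & 1:
            prod ^= a << i
    for deg in (4, 3):          # clear degrees 4 and 3 using the relation
        if (prod >> deg) & 1:   # x^3 = x + 1
            prod ^= IRRED << (deg - 3)
    return prod

# Field check: every nonzero element has a multiplicative inverse.
for a in range(1, 8):
    assert any(gf8_mul(a, b) == 1 for b in range(1, 8))
```

For instance, x · x^2 = x^3 reduces to x + 1, i.e. `gf8_mul(0b010, 0b100) == 0b011`.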
(b) Reed-Solomon: Recall that for a Reed-Solomon code we pick field elements α1, . . . , αm ∈
GF(q) and encode the message (c0, . . . , ck−1) ∈ GF(q)^k into (p(α1), . . . , p(αm)) ∈
GF(q)^m, where p(x) = Σ_{i=0}^{k−1} c_i x^i. (Note that here we do not add the αi to the encoding,
as we can assume that they are fixed and known.) Show that these Reed-Solomon
codes are linear codes over GF(q). Give the generator matrix of this code.
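As an illustration (not part of the problem), the encoding can be written as a matrix-vector product over a small prime field. The field GF(7), the value k = 3, and the evaluation points below are illustrative choices only.

```python
# Sketch: Reed-Solomon encoding over GF(7) as msg @ G for a
# Vandermonde-style generator matrix. All parameters are illustrative.

q, k = 7, 3
alphas = [1, 2, 3, 4, 5]           # m = 5 distinct points in GF(7)

# G[i][j] = alphas[j] ** i (mod q): row i holds evaluations of x^i.
G = [[pow(a, i, q) for a in alphas] for i in range(k)]

def encode(msg):
    # (p(a_1), ..., p(a_m)) where p(x) = sum_i msg[i] * x^i.
    return [sum(msg[i] * G[i][j] for i in range(k)) % q
            for j in range(len(alphas))]

msg = [2, 0, 1]                    # p(x) = 2 + x^2
print(encode(msg))                 # [3, 6, 4, 4, 6]
```

Since encoding is multiplication by a fixed matrix, linearity over GF(q) is immediate; this is the shape of the generator matrix the problem asks for.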
(c) Hadamard: Recall the Sylvester construction of a Hadamard matrix H of size 2^n × 2^n.
We can label each row by x ∈ {0, 1}^n and similarly each column by y ∈ {0, 1}^n.
Show (for instance by induction) that the entry in row x and column y is given by
H_{x,y} = (−1)^{x·y}, where the inner product is taken mod 2. Conclude that when we
replace 1 by 0 and −1 by 1 and view the rows of the Hadamard matrix as codewords,
then the encoding C(x) of x ∈ {0, 1}^n (the row labelled x) is given by C(x)_y = x · y.
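The inductive claim in (c) is easy to check numerically for small n. The sketch below builds the Sylvester matrix by the doubling rule H ↦ [[H, H], [H, −H]] and compares each entry against (−1)^{x·y}.

```python
# Sketch: build the Sylvester Hadamard matrix of size 2^n x 2^n and check
# the claimed formula H[x][y] = (-1)^(x . y), inner product taken mod 2.

def sylvester(n):
    H = [[1]]
    for _ in range(n):  # doubling step: [[H, H], [H, -H]]
        H = ([row + row for row in H] +
             [row + [-v for v in row] for row in H])
    return H

def dot_mod2(x, y):
    # Inner product mod 2 of the bit labels x and y.
    return bin(x & y).count("1") % 2

n = 3
H = sylvester(n)
assert all(H[x][y] == (-1) ** dot_mod2(x, y)
           for x in range(2 ** n) for y in range(2 ** n))
```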

2. Improving the Hamming code: Let C be an (n, k, d)_2 code for some odd d. Construct the
code C′ by adding a 'parity bit' to each codeword: i.e., a bit whose value is set to the XOR
of all n other bits.

(a) Find the parameters of C′ (message length, block length, distance). What are its error
detection/correction abilities?
(b) What happens if we add another parity bit?
(c) What happens over non-binary alphabets?
(d) Show that for any ℓ ≥ 1 there exists a [2^ℓ, 2^ℓ − ℓ − 1, 4]_2 code. Briefly describe the
generating/parity check matrix of this code.
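As a numeric sanity check (not a proof), one can extend the [7, 4, 3] Hamming code by an overall parity bit and confirm that the minimum distance grows to 4. The generator matrix below is one standard systematic form, used here only for illustration.

```python
from itertools import product

# Sketch: append an overall parity bit to the [7,4,3] Hamming code and
# check that the minimum distance becomes 4. The generator matrix is one
# standard systematic form (an illustrative choice).

G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def codewords(G):
    # All 2^k codewords msg @ G over GF(2).
    for msg in product([0, 1], repeat=len(G)):
        yield tuple(sum(m * g for m, g in zip(msg, col)) % 2
                    for col in zip(*G))

plain = list(codewords(G))
extended = [c + (sum(c) % 2,) for c in plain]     # XOR of all 7 bits

# For a linear code, minimum distance = minimum nonzero weight.
min_d = min(sum(w) for w in extended if any(w))
print(min_d)  # 4
```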

3. Extending the Hamming bound and Hamming code: Let q be a prime power and work
over GF (q).

(a) In the course we saw the Hamming bound for codes over GF (2). Prove a generalization
of the Hamming bound to codes over GF (q). As in the binary case, perfect codes are
codes that meet the Hamming bound.
(b) Find a family of perfect q-ary codes of minimum distance 3.
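For reference, the binary Hamming bound from class can be checked numerically; the q-ary generalization asked for in (a) has the same sphere-packing shape.

```python
from math import comb

# Sketch: the binary Hamming bound from class, checked numerically.
# A code with 2^k codewords and distance d = 2t + 1 packs disjoint
# Hamming balls of radius t, so 2^k * sum_{i<=t} C(n, i) <= 2^n.

def binary_ball(n, t):
    # Number of binary words within Hamming distance t of a fixed word.
    return sum(comb(n, i) for i in range(t + 1))

n, k, d = 7, 4, 3
t = (d - 1) // 2
print(2 ** k * binary_ball(n, t), 2 ** n)  # 128 128: [7,4,3] is perfect
```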


4. The hat problem: n players enter a room and a red or yellow hat is placed on each player’s
head. The color of each hat is determined by an independent coin toss. Each person can see
the other players’ hats but not his own. No communication of any sort is allowed, except
for an initial strategy session before the game begins. Once they have had a chance to look
at the other hats, the players must simultaneously guess the color of their own hats or pass.
The group shares a $10 million prize if at least one player guesses correctly and no player
guesses incorrectly. Your goal is to find a strategy for the group that maximizes their chances
of winning the prize.
Before you go on, try to obtain probability 3/4 for n = 3.
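(One strategy achieving 3/4 for n = 3, verified by exhaustive enumeration, is the folklore rule: guess the opposite color if the two visible hats match, otherwise pass. The sketch below is a check of that particular strategy, not a claim about optimality.)

```python
from itertools import product

# Sketch: exact check that for n = 3 the rule "if the two hats you see
# match, guess the opposite color; otherwise pass" wins with probability
# 3/4, enumerating all 8 equally likely hat assignments (0 = red, 1 = yellow).

def play(hats):
    guesses = []
    for i in range(3):
        others = [h for j, h in enumerate(hats) if j != i]
        if others[0] == others[1]:
            guesses.append((i, 1 - others[0]))  # guess the opposite color
    # Win: at least one guess, and every guess made is correct.
    return bool(guesses) and all(hats[i] == g for i, g in guesses)

wins = sum(play(h) for h in product([0, 1], repeat=3))
print(wins / 8)  # 0.75
```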

(a) Let G be the family of all directed graphs G on vertex set {0, 1}^n satisfying that for any
edge (u, v) in G, u and v differ in at most one coordinate. For G ∈ G, let K(G) be the
number of vertices of G with in-degree at least one, and out-degree zero. Show that the
maximum probability of winning the hat problem is given by max_{G∈G} K(G)/2^n.
(b) Show that K(G)/2^n is at most n/(n+1) for any G ∈ G. Hint: Use the fact that the out-
degree of every vertex is at most n.
(c) Show that if n = 2^ℓ − 1, then there exists G ∈ G with K(G)/2^n = n/(n+1). Hint: Hamming
code!

5. Noiseless and noisy coding theorems:

(a) Noiseless: We showed in class that n-bit binary messages in which each bit is indepen-
dently 0 with probability p can be compressed to length ≤ (H(p) + ε)n such that the
decoder errs only with probability less than δ. Show that we can achieve error δ = 0
(zero-error) of the decoder if we allow encodings of different length for different mes-
sages. Show that for any ε > 0 there is a sufficiently large n such that the expected
message length is ≤ (H(p) + ε)n. Hint: You need to deal with messages outside A.
Do it in the most straightforward way.
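The counting fact behind the bound from class is easy to verify numerically: for p < 1/2, the number of n-bit strings with at most pn ones is at most 2^(H(p)n).

```python
from math import comb, log2

# Sketch: numeric check that for p < 1/2 the number of n-bit strings
# with at most p*n ones is at most 2^(H(p)*n). Values of n, p illustrative.

def H(p):
    # Binary entropy function, in bits.
    return -p * log2(p) - (1 - p) * log2(1 - p)

n, p = 100, 0.2
count = sum(comb(n, i) for i in range(int(p * n) + 1))
assert log2(count) <= H(p) * n
```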
(b) Noisy: In class we saw the noisy coding theorem for a binary symmetric channel. Now
assume that instead we are dealing with a non-symmetric channel: If 0 is sent along
the channel, the channel doesn’t corrupt it, and 0 arrives at the output. If 1 is sent, it
gets flipped to 0 with probability p. Show the noisy coding theorem for such channels
for any p < 1. Hint: There are several ways to solve this problem. But if you do not
manage after some thinking, one way is to do some preprocessing, where you encode
each bit by repeating it several times. Study what the channel does to this repetition
code and prove the noisy coding theorem from there by using the version we showed
in class.
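The hint's preprocessing can be illustrated by simulation: repeating each bit r times and decoding the block as 1 iff any received bit is 1 (legitimate because a 0 is never corrupted) loses a 1 only with probability p^r. The parameters below are illustrative.

```python
import random

# Sketch of the hint's preprocessing for the asymmetric channel: a 0 always
# arrives intact, a 1 is flipped to 0 with probability p. Repeat each bit
# r times and decode the block as 1 iff any received bit is 1; a sent 1 is
# then lost only with probability p^r.

def z_channel(bit, p, rng):
    return 0 if bit == 1 and rng.random() < p else bit

def send_repeated(bit, r, p, rng):
    received = [z_channel(bit, p, rng) for _ in range(r)]
    return int(any(received))

rng = random.Random(0)
p, r, trials = 0.5, 10, 100_000
errors = sum(send_repeated(1, r, p, rng) != 1 for _ in range(trials))
print(errors / trials)  # empirically close to p**r, about 0.001
```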
