
DIPLOMA THESIS — Communication Systems Group, Prof. Dr. Burkhard Stiller

Bachelor Thesis
Live P2P Video Streaming Framework

Stefan Zehnder
Zürich, Switzerland
Student ID: 02-918-563

Supervisor: Fabio Hecht, Cristian Morariu


Date of Submission: June 12, 2008

University of Zurich
Department of Informatics (IFI)
Binzmuehlestrasse 14, CH-8050 Zurich, Switzerland
Diploma Thesis
Communication Systems Group
Department of Informatics (IFI)
University of Zürich
Binzmuehlestrasse 14, CH-8050 Zurich, Switzerland
URL: http://www.csg.unizh.ch/
Live P2P Video Streaming Framework

Table of Contents
1 Introduction ............................................................................................................4
1.1 Introduction and Motivation ................................................................................................... 4
1.2 Description of Work and Goals ............................................................................................. 5
2 Related Work ..........................................................................................................6
2.1 Peer-to-peer Networks .......................................................................................................... 6
2.1.1 Pastry ...................................................................................................................... 6
2.2 Multiple Description Coding .................................................................................................. 7
2.2.1 Sub-Sampling Multiple Description Coding (SMDC) ............................................... 8
2.2.2 Multiple Description Motion Coding (MDMC) ........................................................... 8
2.2.3 Multiple State Video Coder (MSVC) ........................................................................ 9
2.2.4 Motion-Compensation Multiple Description Video Coding (MCMD) ........................ 9
2.2.5 Comparison of the MDC schemes ........................................................................... 9
2.2.6 Hierarchically Layer Encoded Video (HLEV) ......................................................... 10
2.3 Video Handling in Java ....................................................................................................... 10
2.3.1 Java Media Framework API (JMF) ........................................................................ 10
2.3.2 FOBS4JMF and JFFMPEG ................................................................................... 10
2.3.3 Freedom for Media in Java (FMJ) .......................................................................... 11
2.3.4 VideoLAN - VLC media player ............................................................................... 12
2.3.5 Summary ............................................................................................................... 12
2.4 P2P Video Streaming .......................................................................................................... 12
2.4.1 Resilient Peer-to-Peer Streaming .......................................................................... 13
2.4.2 Distributed Video Streaming with Forward Error Correction .................................. 13
2.4.3 SplitStream ............................................................................................................ 14
2.4.4 CoolStreaming/DONet ........................................................................................... 15
2.4.5 PULSE ................................................................................................................... 15
3 Live P2P Video Streaming Framework ..............................................................17
3.1 Envisioned Scenario ........................................................................................................... 17
3.2 System Components Overview ........................................................................................... 17
3.3 Design of the Components ................................................................................................. 19
3.3.1 Subcomponents of the P2P Manager .................................................................... 19
3.3.2 Subcomponents of the MDC Manager .................................................................. 21
3.4 Implementation of the Components .................................................................................... 24
3.4.1 Subcomponents of the P2P Manager .................................................................... 24
3.4.2 Subcomponents of the MDC Manager .................................................................. 26
3.5 Implemented MDC Scheme ................................................................................................ 28
3.6 Algorithm to Search for a possible Source .......................................................................... 29
3.7 Network Structures and Communication Aspects ............................................................... 30
3.7.1 Monitor the Status of Connected Peers ................................................................. 30
3.7.2 Peer disconnects the Stream ................................................................................. 31
3.7.3 Protocols ................................................................................................................ 32
4 Performance Evaluation .....................................................................................34
4.1 Framework Evaluation ........................................................................................................ 34
4.2 Problems ............................................................................................................................. 34
4.2.1 DHT - PAST ........................................................................................................... 34
4.2.2 MDC ....................................................................................................................... 35
4.2.3 Media Handling in Java ......................................................................................... 36
4.2.4 VLC - JVLC ............................................................................................................ 36
5 Future Work .........................................................................................................37
6 Acknowledgements .............................................................................................38


1 Introduction
This first chapter gives a short introduction to this work and describes its goals.

1.1 Introduction and Motivation


Since high-bandwidth broadband connections have become available to everyone, the
demand for online video is constantly increasing. Video-on-demand systems like
YouTube [1] and BitTorrent [2] are highly successful; there, the user can choose when
to play a video.
In contrast to video on demand, live video streams are only available at one particular time.
The most important task here is to deliver a continuous video stream. The systems
currently in use are mainly built as server-to-client solutions, where one single source
distributes the video content to its receivers. This approach only scales up to a certain
point, until the server's full capacity is reached.
A unicast system, where each client has a direct connection to the server, has the
advantage that it is easy to implement, and both live and on-demand streaming pose no
problem. The disadvantage is that the server has to sustain the entire video distribution
workload. This means the server needs a high-bandwidth connection, and the operator has
to carry all the costs of keeping the system functional.
Peer-to-Peer networks now offer the possibility to distribute video content to a virtually
unlimited number of users while reducing the bandwidth bottleneck at the source. A
centralized server architecture is no longer needed, since video distribution throughout the
network is handled by the clients themselves. P2P video content delivery thus achieves
bandwidth savings on the server side: the server's workload drops enormously and the
system becomes more scalable, because a new peer joining the network no longer has an
impact on the server's resources. A direct server-client solution forces the server to provide
sufficient bandwidth to stream the video to each client, whereas in the P2P approach the
server (the source of the video stream) needs only enough bandwidth to stream the video
to a certain initial set of peers. The rest of the video distribution is then handled by the
peers in the network.
The questions to solve in a P2P system for video streaming are: How does a peer choose
the partner from which it receives the video stream? Does a peer have a single source, or
are there multiple sources for the stream? What is the structure of the P2P network,
tree-based or mesh-based? In a P2P system there is always the risk that a source peer
may suddenly fail or disconnect. For example, in a tree-based system where every peer has
a single source, the disconnection of a single peer can damage the video distribution
profoundly. The higher a peer sits in the tree hierarchy, the more peers depend on its
well-being and its continuous streaming capacity; if one such peer disconnects, an entire
branch of the tree can lose the video stream. To create a stable video distribution system,
every peer should therefore have multiple sources to circumvent this single point of failure.
When designing a P2P system for video streaming, an important task is to implement
error-recovery methods for the case of source-node failures. A simple solution would be to
gather redundant video data from several peers: if one peer suddenly disconnects, other
sources remain active. A much better approach is a technique called multiple description
coding (MDC). It makes a live video stream more reliable and uses bandwidth more
efficiently, without relying on redundant data. MDC schemes divide a single video stream
into several substreams (descriptions), which can be routed over several paths to the
target. The more substreams a peer receives, the better the quality of the video stream; to
play the stream, a single substream is sufficient. MDC thus allows better failure resistance
than a single video stream, and a peer can always choose the number of substreams it
wants to receive according to its available bandwidth.

1.2 Description of Work and Goals


The goal of this thesis is to design and implement a P2P framework for streaming live
video using multiple description coding. The design of this framework should be modular,
so that in the future components may easily be replaced and new features may be added.
The Peer-to-Peer part of this framework is built on an existing overlay network system. This
allows the work to focus on the video distribution part rather than on the initial creation of
the overlay network and the inter-connection of the peers. The current implementation is
based on Pastry [3] for connecting the peers, sending messages, and serving as a
distributed hash table (DHT) to store framework-related data in the network. The MDC
component creates the different descriptions (substreams) from a media source and
provides the necessary operations to combine and play the descriptions. After these two
basic components have been created, an application will be implemented on top of them to
evaluate the proposed solution.


2 Related Work
This chapter gives an overview of related work in the area of P2P video streaming and
multiple description coding, as well as of the available libraries for media handling in Java.

2.1 Peer-to-peer Networks


Today, the traditional client-server approach requires a tremendous amount of effort and
resources to keep up with the continuous growth of the Internet in terms of users and
bandwidth. The Peer-to-Peer communication paradigm provides a solution for the growing
requirements. A definition given in [23] describes a P2P system as follows: “A peer-to-peer
system is a self-organizing system of equal, autonomous entities (peers) which aims for the
shared usage of distributed resources in a networked environment avoiding central
services.” P2P networks should be able to satisfy the vast demand for resources such as
bandwidth, storage capacity, or processing power and scale accordingly. The distributed
nature of P2P systems increases reliability in case of failures by replicating data over
multiple peers. Over 50% of Internet traffic is due to Peer-to-Peer applications, at times
even more than 75%.
The overlay network consists of all peers participating in the system. Every node in the
network is linked to a number of known nodes. Based on how the nodes in the overlay
network are linked to each other, a P2P network can be classified as structured or
unstructured. Centralized unstructured systems use a central server which handles the
data-item lookups and possesses the location knowledge of all data items stored in the
network. The server is only used for locating a data item in the network; the actual data
exchange is handled directly by the peers. This concept reduces the server's work, but the
server remains a bottleneck with regard to resources. Most of the popular P2P networks,
such as Gnutella and FastTrack, are unstructured. The structured approach does not need
a central server: the system uses a Distributed Hash Table (DHT) to store data in the
network. The DHT provides distributed indexing as well as scalability, reliability, and fault
tolerance. Some well-known DHTs are Chord, Pastry, Tapestry, and CAN [23].

2.1.1 Pastry
Pastry is a Peer-to-Peer overlay network. The nodes in a Pastry network form a
decentralized, self-organizing network that can be used to route messages from one node
to another. FreePastry [20] is an open-source implementation of Pastry intended to serve
as a platform for research and for the development of P2P applications. Several
applications have been built on top of FreePastry, for example the storage utility Past [4]
and the scalable publish/subscribe system Scribe [5]; many others are under development.
Each Pastry node has a unique node-ID, a 128-bit numeric identifier. All nodes are
arranged in a circular identifier space in which the node-ID determines a node's position.
Before joining a Pastry system, a node chooses a node-ID; Pastry allows arbitrary
node-IDs, but commonly a node-ID is formed as the hash value of the node's IP address.
Each node maintains a routing table, a neighborhood set, and a leaf set.
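
As an illustration of such an ID assignment (a minimal sketch, not FreePastry's actual code), an MD5 hash is used here purely because it conveniently yields exactly 128 bits:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class NodeId {
    // Derive a 128-bit node-ID from a node's address string.
    static BigInteger fromAddress(String address) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest(address.getBytes(StandardCharsets.UTF_8)); // 16 bytes
            return new BigInteger(1, digest); // non-negative position in the circular space
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always present in the JDK
        }
    }

    public static void main(String[] args) {
        BigInteger id = fromAddress("192.168.1.10");
        System.out.println(id.toString(16)); // node-ID as at most 32 hex digits
    }
}
```

Because the hash is deterministic, a node rejoining from the same address obtains the same position in the identifier space.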
The routing table is made up of several rows with multiple entries per row. On a node n, the
entries in row i hold the identities of Pastry nodes whose node-IDs share an i-digit prefix
with n. For example, entries in the first row (i = 0) have no common prefix with n. This gives
each node a rough knowledge of other nodes that are distant in the identifier space.
The neighborhood set contains the node-IDs closest to the local node with regard to a
network proximity metric and is used to maintain locality properties. It can, for example, be
used to store replicated information on the neighbor nodes; replication is a fault-tolerant
way of storing data on other nodes by keeping a copy of the original data.
The routing table sorts node-IDs by prefix. The leaf set of a node n holds the nodes with
the numerically closest node-IDs to n. The leaf set and the routing table provide the
information relevant for routing.
When a node n receives a message to be routed, it first checks its leaf set. If the key of the
message falls within the range covered by the leaf set, the message is directly forwarded to
the node whose ID is closest to the key. If there is no match in the leaf set, the routing table
is used to forward the message over a longer distance. This is done by trying to pass the
message on to a node whose node-ID shares a longer common prefix with the message
key than the node-ID of n does. If there is no such entry in the routing table, the message is
forwarded to a node which shares a prefix of the same length with the message key as n,
but which is numerically closer to the message key than n.
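
The three-step decision above can be sketched as follows. This is an illustrative simplification over short hexadecimal IDs, not the FreePastry implementation; the method and structure names are invented:

```java
import java.util.Map;
import java.util.NavigableSet;

public class PastryRouting {

    static int sharedPrefix(String a, String b) {
        int i = 0;
        while (i < a.length() && i < b.length() && a.charAt(i) == b.charAt(i)) i++;
        return i;
    }

    static long dist(String a, String b) { // numeric distance between two hex IDs
        return Math.abs(Long.parseLong(a, 16) - Long.parseLong(b, 16));
    }

    /** leafSet: numerically close node-IDs; routingTable.get(i): nodes that
     *  share an i-digit prefix with us, keyed by their next ID digit. */
    static String nextHop(String self, String key,
                          NavigableSet<String> leafSet,
                          Map<Integer, Map<Character, String>> routingTable) {
        // 1) Key within the leaf-set range: deliver to the numerically closest node.
        if (!leafSet.isEmpty()
                && key.compareTo(leafSet.first()) >= 0
                && key.compareTo(leafSet.last()) <= 0) {
            String best = self;
            for (String n : leafSet) if (dist(n, key) < dist(best, key)) best = n;
            return best;
        }
        // 2) Routing table: forward to a node sharing a longer prefix with the key.
        int p = sharedPrefix(self, key);
        Map<Character, String> row = routingTable.get(p);
        if (row != null && row.containsKey(key.charAt(p))) return row.get(key.charAt(p));
        // 3) Fallback: any known node with an equally long shared prefix that is
        //    numerically closer to the key than we are.
        for (String n : leafSet)
            if (sharedPrefix(n, key) >= p && dist(n, key) < dist(self, key)) return n;
        return self; // no better node known: we are responsible for the key
    }
}
```

For instance, a node "50" with leaf set {"4f", "52", "55"} delivers a message keyed "52" straight to "52", while a node "a1" without a leaf-set match uses routing-table row 1 to jump towards "a9".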
The Pastry network is self-organizing: Pastry has to deal with node arrivals, departures,
and failures while maintaining good routing performance. A new node n joins the network
via a well-known Pastry node k used for bootstrapping. Node n copies its initial
neighborhood set from k, since k is considered to be close. To build its leaf set, node n
routes a special “join” message via k to the node c with the numerically closest node-ID to
n and adopts the leaf set of c. All nodes which forward the join message towards c provide
n with their routing information and allow n to construct its routing table. Once the table is
constructed, the node sends its node state to all nodes in its routing table so they can
update their own routing information.
A node failure is detected when a communication attempt with another node fails. The
liveness of the nodes in the routing table and the leaf set is checked implicitly, since the
routing procedure requires contacting these nodes. The neighborhood set is not involved in
routing, and its nodes therefore need to be tested periodically. Failed nodes are deleted
and replaced; the replacement procedure contacts another node with characteristics
similar to the failed node's and retrieves replacement information from it [3][23].

2.2 Multiple Description Coding


MDC (Multiple Description Coding) was invented at Bell Laboratories. The idea was to
transmit data over multiple (telephone) lines to achieve higher reliability: if one line failed,
the signal could still reach the target through the other lines, albeit with reduced quality.
In video streaming, the idea of MDC is to split a single stream into several descriptions
(bitstreams) and transmit them over several channels to the target. Each description
contains a part of the original stream, and to restore the original picture all descriptions are
needed. The advantage is that a single description is already sufficient to play the video
stream: if one description is lost, the video can still be played, but at lower quality.


MDC is especially helpful given unreliable transport channels and the growing interest in
voice, image, and video communications over the Internet. Without MDC, the loss of one
packet can lead to the loss of a large number of source samples and hence to an
interruption of the stream. With MDC there is no interruption, only a variation in stream
quality [9][10].
The next few sub-sections introduce four different multiple description coding schemes. All
presented schemes are based on the video codec standard H.264/AVC, which was
developed by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC
Moving Picture Experts Group (MPEG). It is a block-based motion-estimation codec
standard that provides good video quality at substantially lower bit rates than previous
standards, and it is flexible enough to be applied to a wide variety of applications, networks,
and systems, including low and high bit rates, low- and high-resolution video, broadcast,
DVD storage, RTP/IP packet networks, and ITU-T multimedia telephony systems. This
makes it the most suitable video codec for video streaming over the Internet. More details
about the codec can be found in [11].
According to the definition in [12], MDC schemes can be grouped into two sets: The first
group exploits the spatial correlation within each frame of the sequence when creating the
descriptions. The second group takes advantage of the temporal correlation between the
subsequences obtained by the temporal sampling of the original video sequence.

2.2.1 Sub-Sampling Multiple Description Coding (SMDC)


The first MDC scheme [12] presented here is based on spatial correlation. It takes
advantage of the fact that within a single video frame, neighboring pixels have the same or
similar values.
Within a single video frame there is high spatial correlation, which makes it easy to
estimate a single pixel value from its neighbors. This leads to an MDC scheme in which
adjacent pixels are assigned to separate descriptions. To create the descriptions, each
input frame is sampled along its rows and columns with a certain sampling factor. With a
sampling factor of 2, for example, each input frame yields four sub-frames with halved
resolution, a quarter of the original frame size. Each sub-sequence is then sent to a
separate encoder, which outputs a single description as its bitstream.
If the end-user receives only one description, he can reconstruct the video at a lower
resolution without any artifacts or channel distortion. The more descriptions arrive, the
better the decoder can estimate the lost information.
A drawback of this approach is a decrease of the inter- and intra-coding gains, because the
spatial sampling reduces the correlation between adjacent pixels.
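
As a rough illustration (not taken from any actual SMDC implementation), the polyphase sub-sampling with factor 2 can be sketched in Java; pixels are plain ints and frame dimensions are assumed to be even:

```java
public class SpatialSplit {
    // Split one frame into four sub-frames, each taking one pixel of every
    // 2x2 block; each sub-frame would feed a separate encoder/description.
    static int[][][] subsample(int[][] frame) {
        int h = frame.length / 2, w = frame[0].length / 2;
        int[][][] subs = new int[4][h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                subs[0][y][x] = frame[2 * y][2 * x];         // top-left phase
                subs[1][y][x] = frame[2 * y][2 * x + 1];     // top-right phase
                subs[2][y][x] = frame[2 * y + 1][2 * x];     // bottom-left phase
                subs[3][y][x] = frame[2 * y + 1][2 * x + 1]; // bottom-right phase
            }
        return subs;
    }
}
```

Merging the four sub-frames back simply inverts the loop; if a description is missing, a decoder would instead estimate each absent pixel from its received neighbors.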

2.2.2 Multiple Description Motion Coding (MDMC)


The motion vector field is one of the most important pieces of information in many video
coding standards; its loss can have disastrous effects on the decoded picture. This coding
scheme proposes an algorithm to enhance the robustness of the motion vector field.
In motion coding, each frame is decomposed into pixel blocks of equal size. A motion
vector stores the changes of each block from the current frame to the next one.


In this MDC scheme, the block-based motion vector fields are split into two parts using an
enhanced version of the quincunx sampler. The quincunx sampler generates the two parts
by splitting the motion vectors of adjacent blocks into two disjoint sets: in quincunx
subsampling, every other block is removed from each line of the frame. The two sets are
transmitted individually to the decoder, each as a single description. Each description
includes some unique information, but also some partial information that has to be
duplicated in both descriptions in order to reconstruct the original signal.
Simulation results show that this MDC scheme saves a large amount of data without
causing serious quality loss in the reconstructed video, compared to simply transmitting
duplicated bitstreams [12][13].
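
A simplified checkerboard split of a block motion-vector field might look as follows in Java (illustrative only; the real MDMC sampler additionally duplicates some partial information in both descriptions):

```java
public class Quincunx {
    // Quincunx (checkerboard) partition of a block motion-vector field.
    // Blocks where (row + col) is even go to description 0, the rest to
    // description 1; blocks missing from a set are left as null.
    static Integer[][][] split(int[][] field) {
        int rows = field.length, cols = field[0].length;
        Integer[][][] desc = new Integer[2][rows][cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                desc[(r + c) % 2][r][c] = field[r][c];
        return desc;
    }
}
```

A decoder holding only one description can estimate each null entry from the four surrounding blocks, which is exactly why adjacent blocks are sent down different descriptions.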

2.2.3 Multiple State Video Coder (MSVC)


In a video input sequence, frames normally have high temporal redundancy, depending on
the sequence's frame rate: each frame differs only slightly from the previous ones. This can
be used to create two or more subsequences from the original video by temporal sampling.
In the simplest case, an input sequence is split into odd and even frames and
demultiplexed into two subsequences. Each subsequence is then processed by the
encoder, resulting in two bitstreams (descriptions).
This MD scheme is called Multiple State Video Coding because decoding the whole video
requires the state of more than one frame. The receiver can decode the stream at full
frame rate if it receives both bitstreams. If one description is lost, the decoder can
reproduce the original sequence at half the frame rate, and the missing information can be
estimated by temporal interpolation of the received subsequence. The quality of the
decoded video also depends on the error-concealment algorithm used.
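
A minimal Java sketch of this temporal demultiplexing (illustrative only; frames are represented as strings rather than encoded pictures):

```java
import java.util.ArrayList;
import java.util.List;

public class TemporalSplit {
    // Demultiplex a frame sequence into even- and odd-indexed sub-sequences,
    // each of which would be encoded separately into one description.
    static List<List<String>> split(List<String> frames) {
        List<String> even = new ArrayList<>(), odd = new ArrayList<>();
        for (int i = 0; i < frames.size(); i++)
            (i % 2 == 0 ? even : odd).add(frames.get(i));
        return List.of(even, odd);
    }

    // Lossless re-interleaving when both descriptions arrive; with one
    // description missing, a decoder would instead interpolate the gaps.
    static List<String> merge(List<String> even, List<String> odd) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < even.size(); i++) {
            out.add(even.get(i));
            if (i < odd.size()) out.add(odd.get(i));
        }
        return out;
    }
}
```

Playing only one of the two sub-sequences yields exactly the half-frame-rate behaviour described above.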

2.2.4 Motion-Compensation Multiple Description Video Coding (MCMD)


This coding scheme works similarly to the previous one: the input sequence is subsampled
into even and odd frames. A central encoder works on the full-rate sequence, taking both
even and odd frames and using a block-based motion-compensated predictor to compute
the current frame from the previous two frames. Two additional side encoders take only the
even or only the odd frames, but instead of also computing a prediction error, they encode
the difference between the estimate of the central encoder and the prediction at the side
encoders. The output generated by the side encoders is therefore needed by the decoder
to recover the real prediction error. The outputs of the three encoders are merged into two
descriptions.
In contrast to the previously mentioned MDC schemes, the redundancy can be varied by
adjusting the side encoders' rate, at the cost of extra computational load due to the double
motion estimation.

2.2.5 Comparison of the MDC schemes


A comparison of the MDC schemes can be found in Table 1. Spatial MDC coding schemes
usually demand lower computational complexity than temporal ones, since temporal MDC
algorithms require the storage of many reference frames. The schemes SMDC, MDMC,
and MSVC provide fully H.264/AVC-compliant bitstreams, which allows the streams to be
decoded by any standard H.264 decoder. The MCMD scheme has the advantage that it is
external to the H.264 core unit and can therefore be optimized independently [12].


Table 1: Qualitative comparison of MDC coding techniques (table taken from [12])

                        SMDC      MDMC      MSVC        MCMD
Exploited correlation   spatial   spatial   temporal    temporal
Efficiency              good      good      quite good  variable
Computational cost      medium    low       medium      high
Syntax compliance       yes       yes       yes         no
Tunability              no        no        no          yes

2.2.6 Hierarchically Layer Encoded Video (HLEV)


Another, slightly different approach related to MDC is Hierarchically Layer Encoded Video
(HLEV). The video is split into one base layer and one or more enhancement layers. The
base layer contains the fundamental information about the stream; to play the video, only
the base layer is needed. The enhancement layers can be used to increase the quality of
the video and contain additional information to reconstruct it. In contrast to MDC, where
every description is independent of the others, each layer needs the information from the
layers below it: without its subjacent layers an enhancement layer is useless, and without
the base layer the video cannot be reconstructed at all. This concept has been used in the
video coding standards MPEG-2, H.263, and H.263+.
As in multiple description coding, a peer can choose the number of incoming
layers/descriptions according to its bandwidth and thereby define its video stream
quality [14].
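
The dependency difference to MDC can be illustrated with a small hypothetical Java helper: with HLEV, the number of usable layers is the length of the gap-free run starting at the base layer, whereas with MDC any received subset of descriptions would be usable:

```java
import java.util.Set;

public class LayeredDecode {
    // Count how many HLEV layers are decodable: an enhancement layer is
    // usable only if the base layer and every layer below it were received.
    static int usableLayers(Set<Integer> received, int totalLayers) {
        int usable = 0;
        for (int layer = 0; layer < totalLayers; layer++) {
            if (!received.contains(layer)) break; // gap: everything above is useless
            usable++;
        }
        return usable;
    }
}
```

For example, receiving layers {0, 1, 3} of four leaves only two usable layers, and receiving {1, 2} without the base layer leaves none at all.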

2.3 Video Handling in Java


Java supports handling multimedia files with the Java Media Framework API (JMF) [15].
Additional libraries are available, such as Fobs4JMF [16], JFFMPEG [17], VLC/JVLC [19],
and Freedom for Media in Java (FMJ) [18], to make use of video data in Java.

2.3.1 Java Media Framework API (JMF)


The Java Media Framework API (JMF) enables audio, video, and other time-based media
to be added to Java applications. The API can capture, play back, stream, and transcode
several media formats. The API is complex and takes weeks to master. Although it offers a
lot of functionality, its video codec support is small (cf. Table 2). The last improvement to
this API was made in 2003 (current version: JMF 2.1.1e). Nothing has changed since,
although new and better video codecs have been developed and the need for streaming
media files over the Internet has increased greatly.

2.3.2 FOBS4JMF and JFFMPEG


Fobs4JMF is an open-source project developed by Omnividea Multimedia and the
computer vision group at UPM-DISAM (FOBS: Ffmpeg OBjectS). FOBS is a set of
object-oriented APIs for dealing with media. FOBS is an addition to JMF and relies on the
FFMPEG library [12]. It is implemented as a wrapper around the FFMPEG C++ library, but


Table 2: Video formats supported by JMF

Media Type              Video Codec             Functionality  RTP Support
AVI (.avi)              -                       read/write     -
                        Cinepak                 D              -
                        MJPEG                   D, E           -
                        RGB                     D, E           -
                        YUV                     D, E           -
                        VCM                     D, E           -
HotMedia (.mvr)         -                       read only      -
                        IBM HotMedia            D              -
MPEG-1 Video (.mpg)     -                       read only      R, T
                        Multiplexed System      D              R, T
                        stream
                        Video-only stream       D              R, T
QuickTime (.mov)        -                       read/write     -
                        Cinepak                 D              -
                        H.261                   D              R
                        H.263                   D, E           R, T
                        JPEG (420, 422, 444)    D, E           R, T
                        RGB                     D, E           -

Legend: read = the media type can be used as input (read from a file); write = the media
type can be generated as output (written to a file); D = the format can be decoded and
presented; E = the media stream can be encoded in the format; R = media streams in the
format can be received via RTP and presented; T = media streams can be encoded and
transmitted in the format via RTP.

provides developers with a much simpler programming interface. Thanks to the FFMPEG
library, it can play the most common formats and codecs (ogg, mp3, m4a, divx, xvid, h264,
mov, avi, etc.). The downside is that it provides no encoding mechanism.
JFFMPEG is another open-source project built on FFMPEG. JFFMPEG is a plug-in that
allows the playback of a number of common audio and video formats; a JNI wrapper allows
calls directly into the full FFMPEG library. But again there is only support for decoding
video streams, not for encoding them.

2.3.3 Freedom for Media in Java (FMJ)


Freedom for Media in Java (FMJ) is another open-source project with the goal of providing
an alternative to JMF. Unlike Fobs4JMF and JFFMPEG, this framework does not depend
on JMF. It has its own API, but is still compatible with JMF; the idea is that FMJ can be
used as a drop-in replacement library for JMF. It therefore supports all the functionality of
JMF, but provides additional features such as support for modern codecs, wrapping of
various native libraries, and other improvements. The API provides extensive support for
playing all kinds of media codecs and types, again by wrapping native libraries (like the
FFMPEG library), but encoding of media with new codecs like H.264/AVC is still under
development in this framework.

2.3.4 VideoLAN - VLC media player


The VideoLAN project provides an advanced cross-platform media player which handles
various audio and video formats (MPEG-1, MPEG-2, MPEG-4, DivX, mp3, ogg, etc.) as
well as DVDs, VCDs, and various streaming protocols. Another very important feature is
that the player can also be used as a server to stream in unicast or multicast, over IPv4 or
IPv6, on a high-bandwidth network, without the need for any external codec or program.
Additionally, the player supports the H.264/AVC codec through its own x264 library.
For Java, there is the JVLC Java Multimedia Library. This sub-project aims to give Java
developers a fully functional multimedia library. JVLC is built around the VideoLAN media
player and accesses the player's native C libraries using the Java Native Access (JNA)
library.

2.3.5 Summary
Most of the discussed projects, such as Fobs4JMF, JFFMPEG, FMJ, and VLC, support
decoding of today's available video codecs, but only a few of them have the encoding
capabilities to support a large variety of codecs. Sun's JMF offers a lot of functionality and
is included in other projects like Fobs4JMF and JFFMPEG, but has limited codec support.
Table 3 gives an overview of the available projects for video handling in Java.

Table 3: Video Supporting Projects for Java

Project      Decoding                       Encoding                       H.264 Playback   H.264 Encoding
JMF          yes (limited to some codecs)   yes (limited to some codecs)   no               no
Fobs4JMF     yes                            no                             yes              no
JFFMPEG      yes                            no                             yes              no
FMJ          yes                            yes (limited to some codecs)   yes              no
VLC / JVLC   yes                            yes                            yes              yes

2.4 P2P Video Streaming


Peer-to-Peer systems have the potential to overcome the limitations of today's server-based media distribution systems, with better scalability and a clear price advantage: all connected peers in the system help to forward the data stream to other peers in the network. This section gives some examples of related work in the area of P2P video streaming. A comparison of the different solutions is given in Table 4.

2.4.1 Resilient Peer-to-Peer Streaming


Padmanabhan et al. [6] present a P2P video streaming approach named CoopNet. The idea is not to replace a server-based streaming solution with a pure P2P system but to alleviate the overload at the server in the case of a special event.
The distribution of the video stream is done using MDC (cf. Section 2.2) and multiple distribution trees. The original stream is encoded into several different bitstreams (descriptions) and then sent over different trees; each tree may carry one or more descriptions. The audio and video signal is first partitioned into groups of frames (GOFs), all having the same duration. Each GOF is then encoded and packetized into M packets, which represent the M descriptions. These descriptions are sent over multiple CoopNet trees to their target nodes. The client depacketizes the received packets and sorts their individual data units by media decode time. The quality of the stream depends on the number of received descriptions.
The clients periodically send reception reports to the server, which allows it to adapt to dynamic network conditions and the client population. This is done using the reverse path of each tree; the reports are aggregated to avoid server overload.
If a peer leaves, all its descendant nodes no longer receive the description propagated by the leaving peer until the next update interval. A peer that misses a description sends a report for the missing description to its parent node in the tree hierarchy during the current report interval and receives the description from a different parent in the next interval.
A problem in this approach is the control overhead at the source, which is required to maintain full knowledge of all distribution trees in the network and hence allows the server to adapt to network changes. CoopNet supports not only live video streaming but also video-on-demand. In the case of video-on-demand, a large buffer space is needed at each peer. If a peer does not receive all video parts from its serving peers, the requesting peer has to locate the missing parts, which results in additional delay.

2.4.2 Distributed Video Streaming with Forward Error Correction


In this paper [7], a rate allocation scheme is proposed to be used with Forward Error Correction (FEC) to stream video. FEC is a system of error control for data transmission: the sender adds redundant data to its transmission, which allows the receiver to detect and correct errors. In this case the video is encoded in such a way that it can be reconstructed at the destination from the received packets alone, even if some parts are missing; lost packets are never retransmitted.
The architecture proposes a receiver-driven transport protocol employing two algorithms: rate allocation and packet partition. All transmission coordination is done by the receiver, not the sender. Each sender only estimates its round-trip time to the receiver and the loss rate and transmits this information to the receiver, where the rate allocation algorithm calculates the optimal sending rate from the information received. If the receiver decides to change a sender's sending rate, it sends a control packet informing the sender about the new rate.
The job of the packet partition algorithm is to ensure that no two senders send the same packets. Therefore the receiver constantly sends control packets to the senders, so that each sender is able to determine the next packet to be sent; the sender chooses that packet using the packet partition algorithm.
A drawback of FEC is that it results in bandwidth expansion: the redundant data induces overhead and hence reduces the bandwidth available for the actual video stream. The system has to trade off redundancy of the data against efficient use of the available bandwidth.
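The FEC idea above can be illustrated with the simplest possible code, a single XOR parity packet over k data packets. This is a toy sketch of the principle only, not the scheme actually used in [7]; the class and method names are illustrative:

```java
import java.util.Arrays;

// Toy FEC: k equal-length data packets plus one XOR parity packet let the
// receiver reconstruct any single lost packet without retransmission.
public class XorFec {
    // Build the parity packet as the bytewise XOR of all data packets.
    static byte[] parity(byte[][] packets) {
        byte[] p = new byte[packets[0].length];
        for (byte[] packet : packets)
            for (int i = 0; i < p.length; i++)
                p[i] ^= packet[i];
        return p;
    }

    // Recover the single missing packet (marked null) by XOR-ing the
    // parity packet with all surviving packets.
    static byte[] recover(byte[][] received, byte[] parity) {
        byte[] missing = parity.clone();
        for (byte[] packet : received)
            if (packet != null)
                for (int i = 0; i < missing.length; i++)
                    missing[i] ^= packet[i];
        return missing;
    }

    public static void main(String[] args) {
        byte[][] data = { {1, 2}, {3, 4}, {5, 6} };
        byte[] p = parity(data);
        byte[][] received = { data[0], null, data[2] }; // packet 1 was lost
        System.out.println(Arrays.equals(recover(received, p), data[1])); // true
    }
}
```

A real deployment would use stronger codes (e.g. Reed-Solomon) that tolerate more than one loss, at the cost of the bandwidth expansion discussed above.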

2.4.3 SplitStream
SplitStream [8] is a high-bandwidth content distribution system based on end-system multicast, which also covers video streaming. Some P2P streaming architectures use a single tree for video distribution, in which a node forwards its incoming video stream to all its child nodes. The idea behind SplitStream is to split the content to be distributed into several substreams, called stripes. Every stripe has its own tree structure and uses multicast for stream distribution, so SplitStream maintains several trees instead of one. A peer that wishes to receive a certain stripe must join the corresponding tree. To create the different stripes, a technique called multiple description coding can be used, as described in Section 2.2.
Scribe [5] is a scalable application-level multicast infrastructure built upon Pastry. To create a multicast group, Scribe generates a random Pastry key known as the group ID. The multicast tree associated with the group is formed by the union of the Pastry routes from each group member to the root of the group ID. Using reverse path forwarding, messages are multicast from the root to the individual members. SplitStream is a further development of Scribe with a focus on video distribution.
A simple example of how SplitStream works is given in Figure 1. The original content from the source is split into two stripes. Each stripe then builds its own multicast tree such that a peer is an interior node in one tree and a leaf in the other.

Figure 1: Basic approach of SplitStream

In other P2P tree constructions there is the problem of leaf nodes, which have no child nodes and do not need to forward any resources; the burden of forwarding multicast traffic is carried by the interior tree nodes alone. The challenge in SplitStream is to construct a forest of multicast trees such that each peer is an interior node in at most one tree and a leaf in all the others, so that every peer shares the forwarding load. A peer can choose how many stripes it wants to receive according to its bandwidth, but it also has to contribute the same amount of data it receives. A drawback of this design is that there is no guarantee of finding an optimal forest, even if the network has sufficient capacity.

2.4.4 CoolStreaming/DONet
DONet [21] is a Data-driven Overlay Network for live media streaming. Every peer in the DONet network periodically exchanges data availability information with a set of partners. Missing or unavailable data elements can be retrieved from one or more partner peers, and every peer in turn functions as a data supplier to its partners.
The idea behind DONet's data-driven design is that it should be easy to implement (no complex global structure to maintain), efficient (data forwarding is dynamically determined according to data availability), and robust and resilient (the partnerships enable adaptive and quick switching among multiple suppliers).
The IP address is used as DONet's unique node identifier. A new node first contacts the origin node, which randomly selects a deputy node and redirects the new node to it. From the deputy the new node obtains a list of possible partner nodes and tries to establish partnerships with them in the overlay. The origin node of the video stream is persistent during the lifetime of the stream.
The system consists of three key modules: the membership manager, the partnership manager and the transmission scheduler. The membership manager maintains a partial view of the other overlay nodes; every node periodically sends a membership message to its partners to announce its existence. The partnership manager establishes and maintains the partnerships with other nodes by continuously exchanging its Buffer Map with the partners. The Buffer Map represents the node's buffer, i.e. the currently available video stream segments. The last component, the transmission scheduler, schedules the transmission of video data. Thanks to the partnership manager, a node is always aware of its partners' video segments; the transmission scheduler has to decide from which partner to fetch which data segment. Each node can be a receiver or a supplier of a video stream segment, or both.
The DONet system has no parent-child relationship between the nodes: every node that needs a data segment can request it from any of its partners that claims to possess it. A problem is the large overhead this system induces, since every peer has to inform its partners periodically about its current buffer content.
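The Buffer Map exchange and the scheduler's supplier choice can be sketched as follows. The class and method names are illustrative assumptions; the real DONet scheduler additionally weighs playback deadlines and partner bandwidth:

```java
import java.util.BitSet;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of DONet's Buffer Map idea: each partner advertises which segments
// of a sliding window it holds; the transmission scheduler then picks, for
// every segment this node misses, a partner that claims to have it.
public class BufferMapScheduler {
    // For each missing segment index, pick the first partner whose map has it.
    static Map<Integer, String> schedule(BitSet own, Map<String, BitSet> partners, int window) {
        Map<Integer, String> plan = new LinkedHashMap<>();
        for (int seg = 0; seg < window; seg++) {
            if (own.get(seg)) continue; // segment already buffered locally
            for (Map.Entry<String, BitSet> p : partners.entrySet()) {
                if (p.getValue().get(seg)) { plan.put(seg, p.getKey()); break; }
            }
        }
        return plan;
    }

    public static void main(String[] args) {
        BitSet own = new BitSet();
        own.set(0); own.set(1);                      // we hold segments 0 and 1
        Map<String, BitSet> partners = new LinkedHashMap<>();
        BitSet a = new BitSet(); a.set(2);           // partner A offers segment 2
        BitSet b = new BitSet(); b.set(2); b.set(3); // partner B offers 2 and 3
        partners.put("A", a);
        partners.put("B", b);
        System.out.println(schedule(own, partners, 4)); // {2=A, 3=B}
    }
}
```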
CoolStreaming, the Internet-based implementation of DONet, was first released in 2004 and has been used to broadcast sports programs.

2.4.5 PULSE
PULSE [22] is a P2P system for live video streaming, designed to operate in scenarios where nodes have heterogeneous and variable bandwidth resources. The PULSE system places nodes in the network according to their current trading performance: nodes with rich resources are located near the source, where they can serve a large number of neighbors with more recent data, whereas nodes with scarce resources would only slow down the system if placed near the source. Nodes are able to roam freely in the system and are allowed to react to global membership changes and local bandwidth capacity variations over time. PULSE uses a data-driven and receiver-based approach.
PULSE forms a mesh-based network structure with trading and control links between the peers. Every node is free to exchange control information and stream data with peers of its choosing. The main criteria for choosing partners are the topological characteristics of the underlying network, the resources available at each peer, and the chunk distribution/retrieval algorithms executed by the nodes.
The video stream encoded at the source is split into a series of numbered chunks, which allows the peers to reconstruct the initial stream. To make the system more resilient to chunk loss, the source may apply fixed-rate forward error correction (FEC) codes or other forms of encoding, such as MDC. A PULSE peer has three main components: the data buffer, which stores chunks before playback; the knowledge record, which keeps information about remote peers' presence, data content, past relationships and current local node associations; and the trading logic, whose role is to request chunks from neighbors and to schedule the packet sending.
Table 4: Comparison of Video Streaming Solutions

Feature                      CoopNet         Distributed Video    SplitStream      DONet   PULSE
                                             Streaming with FEC
MDC Support                  yes             no                   yes              no      possible but not included
Network Structure            multiple trees  mesh                 multiple trees   mesh    mesh
Live Streaming               yes             yes                  yes              yes     yes
Video on Demand (VoD)        yes             -                    no               no      no
Forward Error Correction     no              yes                  no               no      possible but not included
Retransmission of            only for VoD    no                   no               yes     yes
lost/missing data


3 Live P2P Video Streaming Framework


This chapter describes the design and the implementation of the P2P Live Video Streaming
Framework. It starts with a short introduction of the system and the idea behind this
framework. Afterwards the design and the actual implementation will be discussed in detail.

3.1 Envisioned Scenario


When a new peer connects to the overlay network, it first receives a list of available video channels from the network. The list is the result of a lookup call into the underlying overlay network, where all information is stored in a Distributed Hash Table (DHT). When the peer then chooses to play a certain channel, an additional lookup call is made, this time searching for available sources instead of channels. The lookup result is a list (source list) of peers willing to share the video stream. Since the idea is to use MDC, the source list contains the IP addresses of the sources and the IDs of the descriptions they offer. The DHT has the advantage that no central server is needed for the information lookup. Every peer takes part in the storage process and maintains an entire list, which can be the channel list or one of the source lists. It is important that the peers constantly update their stored lists with new information, for example when a new source enters or leaves the network. How the information distribution is handled in the network depends on the overlay network implementation used. A problem with DHTs is the sudden failure of a peer: a peer that leaves the network could cause data loss in the DHT, so the system needs fail-safe mechanisms to prevent this kind of problem.
From the retrieved source list, the peer picks some possible sources using a specific algorithm. If a suitable source peer has been found, the peer tries to establish a direct connection to that peer. These connections are no longer made over the P2P overlay network and its message routing system but are direct TCP connections between the peers. This makes the communication between two peers more efficient and, with a separate communication component, the system more modular, for example when replacing the overlay network. If the connection succeeds, a protocol is used to negotiate the description request. Otherwise, if the connection setup fails or the request is declined, the peer chooses the next suitable source from the list. As soon as a request is accepted, the source peer starts sending the video stream to the receiving peer.
Every peer that receives a description has to register itself in the DHT so that other peers can use it as a source. When a peer acts as a streaming server, it first has to register its newly created channel in the network and add itself as the first available source for the new channel. Other peers receiving the channel then extend the channel's source list by serving as sources themselves.
If a peer acting as a source for another peer suddenly fails, the receiving peer loses its source. A peer therefore constantly has to monitor all its sources and verify that they are still alive and active, and in case of a failure it needs error-recovery mechanisms so that the stream does not suffer an interruption.

3.2 System Components Overview


This section gives a short overview of the main components of this architecture. Figure 2 shows the hierarchical structure of the components needed to create a complete application using this framework. The following sub-sections give a short introduction to the components.


Figure 2: Framework Overview

P2P Overlay Network


The goal of this work is to design and implement a live video streaming system built as a Peer-to-Peer system. The first task is therefore to choose a peer-to-peer overlay network on which the framework can be built. The P2P overlay network used is FreePastry, a Java implementation of Pastry. FreePastry provides the necessary functionality to create and connect peers and to send messages between them, and it also integrates a DHT implementation (Past), which is used to store framework-related data.

Live P2P Video Stream Framework


The framework can be divided into two separate main components (cf. Figure 2). The first is the P2P Manager. Its primary tasks are maintaining the connections to other peers in the system, storing and retrieving information from the distributed hash table, and managing the communication between the peers.
The second component is the MDC (Multiple Description Coding) Manager. Its main task is to handle the entire video process: reading a video stream from a source, encoding the input stream, creating multiple descriptions, sending the video stream into the network, and receiving and playing a video stream.
Some subcomponents of these two main components access the network directly through TCP or UDP sockets, while other subcomponents make use of the overlay network. The design and implementation of the P2P Manager and the MDC Manager are discussed in more detail later on.

Application
The application is built on top of the framework. It basically has to initialize the framework components (P2P and MDC Manager) with the right parameters and to shut them down when the application closes. The GUI accesses the framework components and data through the application interface, since the application manages the two active instances of the P2P and MDC Manager.

Graphical User Interface (GUI)


The GUI offers the basic functionality of the framework to the user. It allows the user to select a video channel from a list of channels and afterwards displays the selected channel on the screen as part of the GUI.

3.3 Design of the Components


The framework consists of two main components: the P2P Manager and the MDC Manager. The P2P Manager handles the aspects of the Peer-to-Peer system and the communication between the peers, whereas the MDC Manager controls the entire video stream handling. The separation into these two basic components is made in order to achieve a modular design of the framework; the P2P part is independent of the MDC part. This section describes these two components and their subcomponents.

3.3.1 Subcomponents of the P2P Manager


The P2P Manager is composed of four subcomponents (cf. Figure 3). The first is the P2P Controller, which acts as the P2P Manager's access interface and initializes and controls the remaining components. The Connection Manager manages the currently connected peers. The Communication Module manages direct connections between the peers, and the Resource Manager implements this peer's part of the DHT.

Figure 3: P2P Manager


Communication Module
The Communication Module handles all direct communication between the peers over TCP or UDP. This component has no access to the peer-to-peer overlay network; the reasons are the modular design of the framework and the fact that direct connections are more efficient and reliable than sending messages over a peer-to-peer network.
The main functionality of the component is to request a certain video substream (description) from other peers and to negotiate the terms of the stream sharing. The requesting peer asks whether the source peer is willing to upload the requested substream.
A peer that leaves the network or stops participating in the current video stream distribution needs to send a disconnect signal to its neighbor peers so they can immediately start looking for a replacement, especially if the disconnecting peer was a source. This means the module needs to be capable of handling different communication protocols.
Another task is to check whether the connected peers that are currently uploading or downloading a stream are still alive. A peer has to verify this periodically by exchanging messages; if a peer does not answer, the connection to it is terminated. This is necessary in case one of the currently connected peers suddenly fails and needs to be replaced with another source peer.
The Communication Module is closely coupled to the Connection Manager, since the Connection Manager maintains the lists with the current incoming and outgoing connections.

Connection Manager
The Connection Manager maintains four different lists (cf. Figure 4). The first is the Channel List, which contains all currently available video channels in the network. Each entry consists of the channel name and the channel key; the key is needed to look up possible channel sources in the DHT. The list is filled by the P2P Controller component.
The second list, the Source List, holds all available sources for the currently playing channel. As soon as the peer chooses to watch a certain channel, the P2P Controller looks up the channel sources and fills in the Source List. If the peer switches the channel, the list content is deleted and replaced. Each entry holds the IP address and port number of a source peer and the identifier of the description it offers.
The third list, the Download List, contains the peers from which this peer is currently receiving a part of the video stream (a description). This list is managed by the Communication Module: as soon as the Communication Module has successfully negotiated a description, the sender's connection information is added to the list.
The last list, the Upload List, keeps track of the currently active stream upload connections, i.e. the connections to which the peer is forwarding the video data. This list is also updated by the Communication Module, since it handles all incoming requests.
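A minimal sketch of this data model is shown below. The record names and fields are assumptions derived from the description above, not the framework's actual classes; thread-safe lists are used because several components access them concurrently:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative data model for the four lists of the Connection Manager.
public class ConnectionManager {
    record Channel(String name, String key) {}                // name + DHT lookup key
    record Source(String ip, int port, int descriptionId) {}  // peer address + description

    final List<Channel> channelList  = new CopyOnWriteArrayList<>(); // all channels
    final List<Source>  sourceList   = new CopyOnWriteArrayList<>(); // sources of current channel
    final List<Source>  downloadList = new CopyOnWriteArrayList<>(); // peers we receive from
    final List<Source>  uploadList   = new CopyOnWriteArrayList<>(); // peers we forward to

    // On a channel switch the old source list is discarded and re-filled.
    void switchChannel(List<Source> freshSources) {
        sourceList.clear();
        sourceList.addAll(freshSources);
    }

    public static void main(String[] args) {
        ConnectionManager cm = new ConnectionManager();
        cm.channelList.add(new Channel("news-hd", "a1b2"));   // hypothetical entry
        cm.switchChannel(List.of(new Source("10.0.0.1", 14100, 0)));
        System.out.println(cm.sourceList.size()); // 1
    }
}
```

`CopyOnWriteArrayList` favors the read-heavy access pattern described here (the Out-Buffer reads the Upload List far more often than the Communication Module writes to it).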


Figure 4: Connection Manager

Resource Manager - DHT


This component is this peer's part of the distributed hash table (DHT) of the overlay network. Its task is to store a share of the framework-related data, such as channel or source information. It also handles lookups for data stored in the network and adds new information to the DHT.

P2P Controller
The P2P Controller is the main component of the P2P Manager. Its task is to initialize all the other components and to provide the necessary control mechanisms. The connection to the underlying overlay network is also set up by the P2P Controller.
When a peer has to make a lookup call or add new data to the DHT, the P2P Controller initiates the calls through the Resource Manager. The results of the lookup calls are also handled by the P2P Controller (e.g. updating the lists in the Connection Manager).
An application built on top of this framework needs an instance of this component to set up and control the subcomponents of the P2P Manager.

3.3.2 Subcomponents of the MDC Manager


The MDC Manager component processes the entire video stream. Its tasks are capturing the video from any kind of source (TV card, media file, etc.), decoding/encoding the video, sending and receiving streams over the network and displaying the video on the screen. This section describes the design of its subcomponents (cf. Figure 5).


A standard peer that only wants to receive and play a video stream needs only the Player, Decoder, Joiner, In-Buffer and Out-Buffer components. If the peer acts as the persistent source of a video channel, the Encoder and Splitter components are also used.

Figure 5: MDC Manager

Encoder
The main function of the Encoder is to encode an incoming video stream. The Encoder should be able to take different kinds of video inputs, for example an installed TV card, a media file or any other video source. The incoming video stream first needs to be encoded, because sending the raw video stream over the network would consume unnecessarily high bandwidth; with a suitable compression algorithm the raw bitstream can be encoded and hence uses much less bandwidth. The encoded video stream is then sent to the Splitter for further processing.

Decoder
The Decoder has the opposite task to the Encoder: it decompresses the stream into a format that can be played and displayed on the screen, using the same video codec as the Encoder. The Decoder receives an encoded video stream from the Joiner, decodes it and sends the video stream to the Player.


Splitter
The Splitter performs the actual video stream splitting. Since multiple description coding is used, the original stream has to be split into several substreams (descriptions); this is the task of the Splitter. The Encoder sends the encoded original video stream, and the Splitter creates the independent substreams according to a specified MDC scheme. Depending on the MDC scheme used, the Encoder and Splitter may be combined into a single component, given that the splitting and encoding tasks cannot always be done independently. Another task of the Splitter is to create the UDP packets that carry the video stream over the network to the target.
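One simple scheme the Splitter could use is to deal the encoded packets out round-robin over the M descriptions, so that losing one description removes every M-th packet instead of a contiguous block. This is a sketch of the idea under that assumption, not the MDC scheme the framework finally adopts:

```java
import java.util.ArrayList;
import java.util.List;

// Round-robin splitting of an encoded packet sequence into M descriptions.
public class Splitter {
    static List<List<byte[]>> split(List<byte[]> packets, int m) {
        List<List<byte[]>> descriptions = new ArrayList<>();
        for (int d = 0; d < m; d++) descriptions.add(new ArrayList<>());
        for (int i = 0; i < packets.size(); i++)
            descriptions.get(i % m).add(packets.get(i)); // packet i -> description i mod M
        return descriptions;
    }

    public static void main(String[] args) {
        List<byte[]> packets = List.of(new byte[]{0}, new byte[]{1},
                                       new byte[]{2}, new byte[]{3});
        List<List<byte[]>> desc = split(packets, 2);
        System.out.println(desc.get(0).size() + " " + desc.get(1).size()); // 2 2
    }
}
```

The Joiner then simply interleaves the received descriptions back in index order; missing descriptions leave periodic gaps rather than long outages.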

Joiner
The purpose of the Joiner is to take several incoming substreams and combine them into the original stream according to a specific MDC scheme. If the Joiner receives all substreams, the original video can be reconstructed; otherwise only a part of the original stream is available. Like the Encoder and Splitter, the Decoder and Joiner may be combined into one component, depending on the MDC scheme used.
The Joiner takes the incoming packets from the In-Buffer, retrieves the video data from the packets and creates the video stream.

In-Buffer
The In-Buffer listens for incoming packets from the network. When a packet arrives, it is added to the buffer. This is needed because packets may arrive at irregular intervals while the system needs a steady stream to play the video, especially in the case of live streaming. Before the video starts to play, the stream is pre-buffered; then a continuous video stream can be created, and packet bursts can easily be absorbed by the buffer.
Each incoming packet from the network is also copied to the Out-Buffer, because each peer has to contribute to the video distribution process and stream the video channel to other peers.
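The pre-buffering behavior can be sketched with a bounded queue and a start threshold; the capacity and threshold values below are illustrative assumptions, not the framework's configuration:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of the In-Buffer: playback starts only once a threshold of packets
// has arrived, smoothing irregular arrival times and absorbing short bursts.
public class InBuffer {
    private final BlockingQueue<byte[]> queue;
    private final int prebuffer;

    InBuffer(int capacity, int prebuffer) {
        this.queue = new ArrayBlockingQueue<>(capacity);
        this.prebuffer = prebuffer;
    }

    // Called when a packet arrives from the network; false if the buffer is full.
    boolean offer(byte[] packet) { return queue.offer(packet); }

    // Playback may begin only after the pre-buffer threshold is reached.
    boolean readyToPlay() { return queue.size() >= prebuffer; }

    // Hands the oldest packet to the Joiner; null if the buffer is empty.
    byte[] poll() { return queue.poll(); }

    public static void main(String[] args) {
        InBuffer buf = new InBuffer(1024, 2);
        buf.offer(new byte[]{1});
        System.out.println(buf.readyToPlay()); // false: still pre-buffering
        buf.offer(new byte[]{2});
        System.out.println(buf.readyToPlay()); // true: playback may start
    }
}
```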

Out-Buffer
The Out-Buffer manages the sending of the UDP packets containing video data. Depending on the peer's main purpose, the buffer receives UDP packets from the Splitter or the In-Buffer: if the peer acts as a streaming server, the Out-Buffer receives new packets from the Splitter, otherwise from the In-Buffer. The Out-Buffer has access to the Connection Manager to determine each packet's correct destination, since the Connection Manager maintains the lists with all incoming and outgoing connections. A successfully sent packet is deleted from the buffer.

Player
The Player receives the decoded video stream from the Decoder and displays it on the screen. It also provides user interface capabilities, so that a user may interact with the video stream, for example resizing the video window or muting the audio stream.


MDC Controller
The MDC Controller is the main component of the MDC Manager. Its task is to initialize all the other components and to provide the necessary control mechanisms. It also offers the interface for accessing all other modules in the MDC Manager.

3.4 Implementation of the Components


This chapter focuses on the actual implementation and how it differs in some aspects from the design. Some subcomponents have been replaced with an external application, the VLC media player, because the player offers functionality that is not directly available in Java.

3.4.1 Subcomponents of the P2P Manager


This subsection describes the implementation of the P2P Manager that is built on
FreePastry.

Communication Module
As discussed in Subsection 3.3.1, this component handles the direct communication between the peers. The current implementation can handle both connection types, TCP and UDP: there is an open port for incoming TCP connections and another port for incoming UDP packets.
The port used for incoming TCP socket connections is defined as a parameter in the framework configuration file; the default value is 14100. Every incoming TCP socket connection is handled in a separate thread, since it is more efficient to handle several connections at the same time than to process them one after the other. The maximum number of allowed incoming connections is also defined in the configuration file. If a peer has too many ongoing connections at a time, it sends a "Too-Many-Connections" message to each incoming connection request; this helps the requesting peer to decide whether it should try to establish a connection later on or mark the peer as dead. The first message sent by the connection-requesting peer determines the protocol to use in this connection (e.g. the protocol for a stream request, the protocol for a peer disconnect, etc.). A connection remains active as long as the end of the protocol has not been reached. The component is implemented in such a way that additional protocols can be added to the system in the future.
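The accept loop and protocol selection described above can be sketched as follows. The message strings, the connection limit and the method names are illustrative assumptions rather than the framework's actual wire format:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch: one thread per incoming TCP connection, a "Too-Many-Connections"
// reply once the configured limit is reached, and the peer's first message
// selecting the protocol to run.
public class CommunicationModule {
    static final int DEFAULT_PORT = 14100;   // default from the configuration file
    static final int MAX_CONNECTIONS = 8;    // illustrative limit
    private static int active = 0;

    // Map the first message of a connection to the protocol that handles it.
    static String selectProtocol(String firstMessage) {
        if ("STREAM-REQUEST".equals(firstMessage)) return "stream-request";
        if ("DISCONNECT".equals(firstMessage))     return "disconnect";
        return "unknown";
    }

    static void acceptLoop(ServerSocket server) throws Exception {
        while (!server.isClosed()) {
            Socket peer = server.accept();
            synchronized (CommunicationModule.class) {
                if (active >= MAX_CONNECTIONS) {      // refuse when overloaded
                    new PrintWriter(peer.getOutputStream(), true)
                            .println("Too-Many-Connections");
                    peer.close();
                    continue;
                }
                active++;
            }
            new Thread(() -> handle(peer)).start();   // one thread per connection
        }
    }

    private static void handle(Socket peer) {
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(peer.getInputStream()))) {
            String protocol = selectProtocol(in.readLine());
            // ... run the chosen protocol until its end is reached ...
        } catch (Exception ignored) {
        } finally {
            synchronized (CommunicationModule.class) { active--; }
            try { peer.close(); } catch (Exception ignored) { }
        }
    }
}
```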
In addition to TCP, the component can also send and receive UDP packets through an open UDP port. The UDP socket is likewise used to send messages between the peers, but with less connection overhead than TCP and no need for a continuously open connection. In the current implementation, UDP messages are used to verify that the source peers from which a peer is receiving a video stream are still alive.
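A minimal sketch of this alive check records when each source last answered over UDP and drops sources that exceed a timeout; the timeout value and the class name are assumptions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the UDP keep-alive: each source peer's last reply time is
// recorded, and a source that has not answered within the timeout is
// reported dead so that a replacement can be negotiated.
public class AliveMonitor {
    private final long timeoutMs;
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

    AliveMonitor(long timeoutMs) { this.timeoutMs = timeoutMs; }

    // Called whenever a UDP reply arrives from a source peer.
    void pongReceived(String peer, long now) { lastSeen.put(peer, now); }

    // Called periodically; true means the source must be replaced.
    boolean isDead(String peer, long now) {
        Long seen = lastSeen.get(peer);
        return seen == null || now - seen > timeoutMs;
    }

    public static void main(String[] args) {
        AliveMonitor monitor = new AliveMonitor(1000); // 1 s timeout (example)
        monitor.pongReceived("peer-a", 0);
        System.out.println(monitor.isDead("peer-a", 500));  // false
        System.out.println(monitor.isDead("peer-a", 2000)); // true
    }
}
```

Passing the clock value in explicitly keeps the logic testable; the real module would use `System.currentTimeMillis()` and a periodic timer thread.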
The Communication Module constantly monitors the Download List in the Connection Manager to check whether there are enough active sources for the currently active video stream. If this is not the case, the module requests the address of a new possible source from the Connection Manager, which determines the next source address using a specific algorithm (discussed in Section 3.6) and returns the result to the Communication Module. A new connection is then established over TCP to the possible source peer and the protocol for a video stream request is initialized. If the negotiation of the stream request is successful, the new source is added to the Download List; otherwise the system looks for the next possible source.

Connection Manager
The Connection Manager maintains the four lists (Channel List, Source List, Download List and Upload List). The implementation of this component needs to be thread-safe, since different components may access the integrated lists simultaneously: the Out-Buffer accesses the Upload List to determine the destinations of the UDP video packets; the Communication Module updates the Download List, Upload List and Source List by adding or removing connection information of other peers; and the P2P Controller updates the Channel List and the Source List when necessary.
The Channel List holds all currently active video channels. Each entry consists of the channel name and the channel ID; the channel ID is the Pastry key used to look up the channel's source list in the DHT. The other three lists contain connection information for other peers; an entry comprises the IP address and the port numbers for TCP and UDP connections.

Resource Manager - DHT


Since Pastry is the peer-to-peer overlay network used, it is natural to use the Past implementation as the distributed hash table to store information in the network. The Resource Manager provides the mechanisms to store and retrieve framework-related data in the Pastry network. The DHT contains the list of all available channels and, for each channel, a list of possible sources.
When a new peer joins the existing network, it first has to retrieve the channel list from the
DHT. The Pastry key used to look up the channels is common knowledge to all peers and is
not allowed to change. If a video streaming server enters the network, the server needs to
register its channel name and channel ID in the DHT. If a server is the first in the network, a
channel list is initialized automatically; otherwise the information is added to the existing
channel list. A server leaving the network has to unregister itself from the channel list.
Besides the channel list there is also a source list in the DHT for every channel. A peer
receiving a description has to register itself with the description id in the corresponding
channel source list, so that other peers have the chance to use it as a source. A peer that
leaves the network or stops playing the current channel needs to unregister all its entries in
the DHT source list of the active channel. When a streaming server leaves the network, it is
also necessary to destroy the entire source list for the server's channel.
The current framework uses four different types of Past content messages to update the
DHT:
• Update-Content-Message: It is used to store new information in the DHT. This can be a
new channel or a new source.
• Remove-Content-Message: Removes a single channel entry in the network or removes
all entries of a single peer in a source list.
• Clear-Entire-List-Content-Message: It is used to destroy an entire list. This is only used
by the server to clean up the DHT after it shuts down its video stream.
• Remove-Single-Description-Source-Message: Every description a peer receives has
to be registered in the DHT. If a peer loses a description (an incoming video substream),
the peer needs to unregister itself in the source list for the lost description, thereby
preventing other peers from unnecessarily trying to establish a connection to this peer
and asking for a lost description.
The routing of the messages is automatically handled by Pastry, and the storing of the
content objects is done by Past. A list is always stored completely at a single peer. A peer
can maintain one list or multiple lists, but every list contains all of its entries. This makes it
easy to retrieve an entire list with one single lookup call.
It is possible to switch between two storage modes: persistent storage and memory
storage. Persistent storage mode writes the DHT data to the file system whereas memory
storage stores the data in the memory as long as the peer is alive and running.
Another parameter that needs to be set in the framework's configuration file is the number
of replicas to use. To make the system more robust against failures, each data element
stored at a Past peer is replicated to all its neighboring peers (cf. Figure 6). There is
currently no method implemented to clean up the DHT; a failed peer leaves all its entries in
the DHT.

Figure 6: Past Content Replication

P2P Controller
Besides the tasks already mentioned in the Design section, the P2P Controller also creates
a new Pastry node and tries to connect to an existing Pastry ring when the address and port
number of a bootstrap node are given. If the connection to the ring fails, the node starts its
own ring.

3.4.2 Subcomponents of the MDC Manager


The actual implementation of the MDC Manager differs from the design in several points
(cf. Figure 7). The video encoding and decoding is not directly included in this framework.
Since Java does not offer native video encoding in a suitable way, an external application
is used to handle the encoding and decoding of the video stream. The application used is
the VLC media player, serving as a video stream server as well as a video player. A peer
sends video data to and receives it from the VLC media player through an open UDP socket.

Figure 7: Actual implementation of the P2P Manager

Splitter
The Splitter has an open UDP socket and waits for incoming UDP packets from the
VLC media player, since the source is an external application. The Splitter is only needed
if the peer decides to act as a streaming server. From the VLC media player the Splitter
receives the UDP packets in the correct order and adds a continuously increasing
sequence number to each packet. The sequence number is needed to reconstruct the
correct order of the packets at the receiver side in order to play the video stream. The
Splitter also adds a description id number to every UDP packet. The packets with the
same id form a single description. The modified packets are then added to the Out-Buffer
and sent to their target peers.
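The Splitter's tagging step can be sketched as follows. This is an illustrative sketch only: the class and method names are hypothetical, the header layout (8-byte sequence number followed by a 4-byte description id, prepended to the payload) is an assumption about the wire format, and the round-robin assignment of description ids is assumed from the example in Section 3.5.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of the Splitter's packet tagging. Assumes the sequence
// number and description id are prepended to the original VLC payload and
// that description ids are assigned round-robin; the real framework's wire
// format may differ.
public class PacketTagger {
    private long nextSeq = 0;          // continuously increasing sequence number
    private final int descriptions;    // number of descriptions per channel (global parameter)

    public PacketTagger(int descriptions) {
        this.descriptions = descriptions;
    }

    // Wraps one VLC payload: header = sequence number (8 bytes) + description id (4 bytes).
    public byte[] tag(byte[] payload) {
        long seq = nextSeq++;
        int descId = (int) (seq % descriptions); // round-robin assignment (assumed)
        ByteBuffer buf = ByteBuffer.allocate(12 + payload.length);
        buf.putLong(seq).putInt(descId).put(payload);
        return buf.array();
    }
}
```

The Joiner then only has to strip the first twelve bytes again to recover the original packet.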

Joiner
The Joiner is the opposite of the Splitter. Instead of adding new data to the packets, the
Joiner removes the description id and the sequence number. The result is the original-sized
UDP packet created by the VLC media player. The Joiner sends the cleaned UDP packets
through a UDP socket to the VLC media player that is going to play the video stream.
The Joiner grabs new packets from the In-Buffer. The buffer sorts the packets
according to their sequence numbers. Each packet removed from the buffer has to be
examined. If the sequence number of the packet is greater than the number of the previous
packet, the packet is sent to the VLC media player, since the packets are in the correct
order. If the sequence number is smaller than the previous one, the packet needs to be
discarded because the time frame the packet represents has already passed. A live video
stream is time-sensitive, and packets that arrive too late are no longer of use.

In-Buffer
The In-Buffer listens on an open socket for incoming UDP packets. The buffer is
implemented as a priority queue, sorting its packets according to their sequence numbers.
The packet with the smallest sequence number is always at the head of the queue and the
one with the biggest number at the tail. This makes it easy for the Joiner to retrieve the
packets in the correct order, since the next suitable packet is always at the head.
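The interaction between the In-Buffer and the Joiner described above can be sketched as follows. All names here are hypothetical, and the sketch returns null both for an empty buffer and for a discarded late packet, which is a simplification of the real component.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Sketch of the In-Buffer / Joiner interaction (hypothetical names): the
// buffer orders packets by sequence number, and the Joiner forwards a packet
// only if its sequence number is greater than the last one sent, discarding
// packets that arrive too late for the live stream.
public class InBuffer {
    public record Packet(long seq, byte[] payload) {}

    private final PriorityQueue<Packet> queue =
            new PriorityQueue<>(Comparator.comparingLong(Packet::seq));
    private long lastSent = -1;

    public void add(Packet p) {
        queue.add(p);
    }

    // Returns the payload to forward to VLC, or null if the buffer is empty
    // or the head packet arrived too late and was dropped.
    public byte[] next() {
        Packet p = queue.poll();
        if (p == null) return null;
        if (p.seq() > lastSent) {       // in order: forward it
            lastSent = p.seq();
            return p.payload();
        }
        return null;                    // too late: discard
    }
}
```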

Out-Buffer
The Out-Buffer is implemented as a simple first-in-first-out (FIFO) buffer. The oldest packet
in the buffer is sent first to its destination target.
The Upload List (part of the Connection Manager) lists all peers to which the peer has to
send the packets that are currently in its Out-Buffer. The list does not only contain
the connection information but also the id of the currently subscribed description. Together
with the description ID stored in every UDP packet, every packet can be mapped to its
correct receiver. The Out-Buffer creates a copy of the packet, adds the correct address and
then sends it over the network to the target.
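The mapping from a packet's description ID to its receivers can be sketched as below. The types and names are illustrative assumptions; the real Upload List also carries full connection information (IP address and ports), which is omitted here.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the Out-Buffer dispatch (hypothetical types): every outgoing
// packet is matched against the Upload List entries by description id, and a
// copy is addressed to each subscribed peer.
public class OutDispatch {
    // One Upload List entry: which peer has subscribed to which description.
    public record Subscription(String peerAddress, int descriptionId) {}

    // Returns the addresses that must receive a packet carrying the given id.
    public static List<String> targetsFor(int packetDescriptionId,
                                          List<Subscription> uploadList) {
        List<String> targets = new ArrayList<>();
        for (Subscription s : uploadList)
            if (s.descriptionId() == packetDescriptionId)
                targets.add(s.peerAddress());
        return targets;
    }
}
```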

MDC Controller
The MDC Controller launches the external VLC media player as soon as one is needed.
The input parameters for the media player are set automatically by the MDC Controller.

3.5 Implemented MDC Scheme


The meaning of multiple description coding has been discussed in Section 2.2, where
some coding schemes have also been introduced. This framework uses a different approach.
The VLC media player is used as the source of the video stream because of its large
support of different media codecs and its already implemented server functionality. With
simple parameter commands the player can be configured to stream a video over UDP
unicast. Each UDP packet created by the player contains a segment of the original video
stream (video and audio data).
The player does not send the UDP packets directly into the network but sends the packets
over an open socket to the application (to the Splitter component). The Splitter adds a
sequence number to each UDP packet. The sequence number is later used to
determine the correct order of the UDP packets on the video receiver side, since UDP does
not guarantee that the packets arrive in the correct order.
Besides the sequence number, each UDP packet also obtains a description ID. The packets
with the same description ID belong together and form a single description. An example
with three descriptions is shown in Figure 8.

Figure 8: Multiple Description Coding Example Scenario

If the peer receives all description streams (packets), it can watch the video stream in full
quality. Otherwise, if the peer receives only a part of the streams, the video is played
with some small artifacts.

3.6 Algorithm to Search for a possible Source


Before a peer can receive a video stream, it has to look for available sources and try to
connect to as many of them as needed in order to obtain all substreams (descriptions). This
section describes the algorithm by which a peer chooses the sources from which it wants to
receive one or multiple descriptions.
As soon as the peer has chosen a channel, the P2P Controller initializes an automatic
lookup in the P2P network, retrieving all possible sources for the desired video channel. All
these sources are stored in a list (the Source List) maintained by the Connection Manager.
The Source List holds all available sources with their IP addresses and port numbers
needed to create a connection. In addition to the connection information, the description ID
is also stored in the list. Thereby the peer knows which other peer offers which descriptions.
Since the number of available descriptions per channel is defined by a global parameter (all
video channels have the same number of descriptions), the peer knows how many
descriptions it has to receive for the complete video stream. The algorithm tries to gather all
descriptions and starts by choosing at random a possible source for the first description
(the description with the lowest ID) in the source list. Next, the peer sets up a connection to
the source and initializes the negotiation protocol for the specific description. If the connection
fails or the source peer declines the request, the respective peer's information is
removed from the source list, since the algorithm chooses the sources randomly and
does not want to connect to an unusable peer twice. If the connection is successful and the
source peer accepts the request, then the source peer is added to the Download List
(which contains all the peers serving as active sources).
The algorithm runs and tries to find additional sources as long as there are possible
source peers in the Source List and the current peer has not yet received the complete set
of necessary descriptions.
Algorithm in detail:
1 The algorithm first determines how many descriptions need to be gathered to
receive the full video stream of the desired channel. In the current implementation a
global parameter provides this information. A list containing all description IDs is
created.
2 The next step is to compare the description ID list created in step 1 with the description
IDs already active in the Download List, since only one source per description is
needed.
3 After the first two steps the algorithm knows which descriptions the peer is already
receiving and which ones are still missing. For every missing description the algorithm
chooses at random a possible source from the Source List.
4 The peer now tries to establish a connection to the possible source peer and, on
connection success, sends a description request message.
5 There are two possible next steps:
5.1 The source peer is unreachable or denies the request, so it is removed from
the Source List.
5.2 The source peer accepts the request and becomes an active source. The peer is
added to the Download List as an active source.
6 Stop the algorithm if there are no more available sources or the peer is receiving all
descriptions. Otherwise start again with step one.
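Steps 1 to 3 of the algorithm above can be sketched as follows. All class, method and parameter names here are hypothetical; the Source and Download Lists are simplified to plain maps and sets keyed by description id, whereas the real Connection Manager also holds full connection information.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.Set;

// Hypothetical sketch of steps 1-3 of the source-selection algorithm: build
// the set of missing descriptions and pick one random candidate source for
// each of them. Connection setup and the request protocol (steps 4-6) are
// left out of this sketch.
public class SourceSelector {
    private final Random random = new Random();

    // Returns one randomly chosen candidate per missing description.
    public Map<Integer, String> pickCandidates(
            int totalDescriptions,                  // step 1: global parameter
            Set<Integer> activeDescriptions,        // step 2: already in the Download List
            Map<Integer, List<String>> sourceList)  // Source List: description id -> candidates
    {
        Map<Integer, String> picks = new HashMap<>();
        for (int d = 0; d < totalDescriptions; d++) {
            if (activeDescriptions.contains(d)) continue;  // already receiving this one
            List<String> candidates = sourceList.get(d);
            if (candidates == null || candidates.isEmpty()) continue; // no source left
            // step 3: choose a candidate at random
            picks.put(d, candidates.get(random.nextInt(candidates.size())));
        }
        return picks;
    }
}
```

A caller would then attempt a connection to each picked candidate and, on failure, remove it from the Source List before running the selection again.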
The current algorithm chooses the sources randomly. This approach works to test the
framework, but it is not an efficient solution. An improved algorithm would have to
consider the underlying network structure, such as the geographical location of the peers,
the round-trip time, the reliability of the connection, packet loss, and also the available
bandwidth capacities of the peers.

3.7 Network Structures and Communication Aspects


This section gives an overview of the currently used communication protocols and the
interaction between the peers. It also explains how a peer monitors the status of all its
connected peers in order to detect node failures.

3.7.1 Monitor the Status of Connected Peers


In a peer-to-peer network the main problem is the unreliability of the connected peers. A
peer may suddenly disconnect without warning, and as a result all its connected partner
peers would lose a part of the video stream. Thus a mechanism is needed to detect and
correct the error.
A peer needs to be aware of its source peers and whether they are still alive. If a source peer
disconnects, a replacement is needed immediately. To monitor whether all currently
connected source peers are still alive, the peer sends messages to every source peer at a
constant interval, asking if the peer is still alive. A peer receiving such a message immediately
sends a response message, confirming that it is still alive. If there is no answer, the peer whose
response message is missing is considered disconnected and needs to be replaced
with a new source peer. The messages are sent by iterating over the Download List
(which contains all active sources for the current channel) and sending a message to every
element in the list. Each source element in the list has a counter: when a message has been
sent to one of the peers in the list, the corresponding peer's counter is increased. If the
counter reaches a certain threshold, the element is removed from the list and the source
peer is considered disconnected. If an answer arrives, the counter is reset to zero,
indicating that the peer is still alive.
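The counter bookkeeping described above can be sketched as follows. The class and method names are hypothetical, and the actual sending of the UDP keep-alive messages is left out; the sketch only shows the counter logic around each probe interval.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Sketch of the keep-alive bookkeeping (hypothetical names): each source in
// the Download List carries a counter that is incremented on every probe and
// reset to zero when a response arrives; once the counter reaches the
// threshold, the source is considered disconnected and removed.
public class KeepAliveMonitor {
    private final int threshold;
    private final Map<String, Integer> counters = new HashMap<>(); // peer -> missed probes

    public KeepAliveMonitor(int threshold) {
        this.threshold = threshold;
    }

    public void addSource(String peer) {
        counters.put(peer, 0);
    }

    public void onResponse(String peer) {
        counters.replace(peer, 0);      // answer arrived: peer is alive
    }

    // Called once per interval after sending an alive-message to every source;
    // removes and returns the peers whose counter reached the threshold.
    public List<String> probeAll() {
        List<String> dead = new ArrayList<>();
        for (Iterator<Map.Entry<String, Integer>> it = counters.entrySet().iterator();
             it.hasNext();) {
            Map.Entry<String, Integer> e = it.next();
            int missed = e.getValue() + 1;
            if (missed >= threshold) {  // too many missed probes: consider disconnected
                dead.add(e.getKey());
                it.remove();
            } else {
                e.setValue(missed);
            }
        }
        return dead;
    }
}
```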
The messages are sent within UDP packets. UDP is used because it induces less
overhead than TCP for the connection setup. Also, keeping a socket stream constantly open
is more troublesome than just sending simple UDP packets. The lower reliability of UDP,
i.e. that packets may not arrive at the target, does not matter, since verifying that a peer is
still alive is a continuous process. A single missing packet does not disturb the system; only
several packets missing over a longer period of time trigger the error recovery
mechanism.
A dead source peer is removed from the Download List immediately. As soon as the
algorithm described in Section 3.6 notices a missing description, the system starts
looking for a replacement.
So far the receiver side has been discussed and how the monitoring of its active sources
is handled. A similar mechanism is needed for the stream-sending peer. If one of its
connected peers disconnects, the peer would disturb the network with unnecessary UDP
packets. To prevent this, the same mechanism as on the receiver side is implemented, but
without sending additional messages. Here, on the stream-sending side, the peer waits for
incoming messages from the receiving peers. As soon as a message arrives, the stream-sending
peer updates the Upload List by setting the counter for the specific peer to zero
and sends a response message.
For additional safety every peer sends the name of the currently active channel in the
message, because it is necessary to examine whether both peers (sender and receiver)
still use the same channel. A simple test to see if the peer is still alive is not enough, since
the sending peer may switch the channel without informing the receiving peer. As a result
the receiver would obtain a wrong video substream, or a stream would go missing without
the peer realizing it.
A single peer may also be the source for more than one description and stream multiple
descriptions to the same peer. In this case each description needs to be examined
individually, since it may happen that the source peer loses one description but can still
upload the remaining ones. For this reason the description ID numbers are added to the
message. Hence a peer does not only examine the channel name but also monitors each
description independently and adapts automatically to description losses.
A mechanism to retrieve a single lost packet is not implemented in the framework, since in
live video streaming a lost packet is not as important as in video-on-demand systems. The
focus here is mainly on the continuity of the stream rather than on the completeness of the
media stream. A requested lost packet may also not arrive in time and be useless because
of the live nature of the stream.

3.7.2 Peer disconnects the Stream


A peer leaving the network or switching the currently active channel needs to inform its
connected peers, so that they can immediately start looking for a replacement. The peer,
leaving the network, uses a protocol to send a disconnect message to all its current
connections, informing them that they are going to lose the currently subscribed substream.
Every peer normally monitors its active connections and would automatically notice a
change, but this would take some time. It is more efficient to directly send a message to the
connected peers so that they can instantly adapt to the change and look for a new source
before it has a noticeable impact on the video stream.

3.7.3 Protocols
The current implementation requires two protocols for the direct communication between
the peers. The first protocol is used to negotiate a substream (description) request (cf.
Figure 9). Every incoming description request is first evaluated against the following criteria:
• The currently active channel name matches the requested channel name.
• The requested description is available.
• The total number of active connections has not been reached.
• A connection to the peer does not already exist.
After the evaluation of the request, the asked peer sends an Accept-Message and starts
sending the UDP packets. If the peer declines, a Decline-Message is sent.
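The four evaluation criteria can be sketched as a single check, as below. The class, constructor and field names are hypothetical; in the real framework the checks are part of the request-negotiation protocol rather than a standalone class.

```java
import java.util.Set;

// Sketch of the request-evaluation criteria listed above (hypothetical names
// and fields): a description request is accepted only if all four checks pass.
public class RequestEvaluator {
    private final String activeChannel;                 // channel this peer currently plays
    private final Set<Integer> availableDescriptions;   // descriptions this peer can upload
    private final Set<String> activeConnections;        // peers currently being served
    private final int maxConnections;                   // upload connection limit

    public RequestEvaluator(String activeChannel, Set<Integer> availableDescriptions,
                            Set<String> activeConnections, int maxConnections) {
        this.activeChannel = activeChannel;
        this.availableDescriptions = availableDescriptions;
        this.activeConnections = activeConnections;
        this.maxConnections = maxConnections;
    }

    // true -> send an Accept-Message; false -> send a Decline-Message.
    public boolean accept(String channel, int descriptionId, String peer) {
        return activeChannel.equals(channel)                 // same channel?
            && availableDescriptions.contains(descriptionId) // description available?
            && activeConnections.size() < maxConnections     // upload slot free?
            && !activeConnections.contains(peer);            // not already connected?
    }
}
```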

Figure 9: Protocol for description request

The second protocol is used to send a Stream-Disconnect-Message (cf. Figure 10) to all
currently connected peers to inform them that this source peer is about to shut down or
switch the channel.

Figure 10: Disconnect stream protocol

4 Performance Evaluation
This chapter describes the evaluation of the implemented application. It shows which parts
are working and where open problems remain in the implementation.

4.1 Framework Evaluation


The implementation of this work has been tested in the lab of the Communication Systems
Group of the University of Zurich. The test environment consisted of one peer that acted
as the origin of a video stream and also as the bootstrap node, and several other peers
that were used to test the playback functionality and to upload the stream to other clients.
The system was tested with 20 descriptions, so the initial video stream was divided into 20
substreams.
The maximum number of active upload connections for every peer was set to 30. This means
every peer was able to upload a maximum of 30 descriptions to other peers. The first peer
that joined the network received the descriptions directly from the bootstrap peer. Every
other new peer that joined used multiple peers as sources.
The video stream distribution in the network was successful. Every peer joining the network
was able to receive the video stream and play it on the screen.
With the implemented Past DHT, every peer was capable of storing and retrieving
information in the DHT. One issue in the Past implementation was discovered during the
tests and is discussed in the next section.
The VLC media player was used to create the video stream and for video playback. The
video was encoded with the H.264 video codec. VLC offers only MPEG-TS for the transport
of the video stream in UDP packets. MPEG-TS is a communication protocol for the
transport of MPEG-2 video and audio data. It allows multiplexing of digital audio and video,
and offers error correction. The video was encoded at a bitrate of 384 kb/s, which offered a
suitable streaming quality over a 100 Mbit network and could be used for streaming in the
Internet as well.

4.2 Problems
This section gives an overview of open problems and of problems whose solutions led to
design changes.

4.2.1 DHT - PAST


Although Past handles the DHT lookups and the storage of data very reliably, there is a
problem in how Past reacts to churn. Peers joining the network may have an impact on the
information retrieval of other peers in the network. The information storage in the DHT is
managed automatically by Past. The data to store is routed to the corresponding peer and
stored there. Past automatically takes care of the information replication to ensure data
redundancy. As partners for replication, Past uses the peers in Pastry's neighborhood set.
As soon as a new peer joins the neighborhood set of an existing peer which is currently
storing some framework information, the new peer may become a partner in the replication
process. This means that from this moment on the new peer also receives copies of the data
inserted at one of the peers in the neighborhood set. The problem is that Past does not pass
all the already existing data on to the new peer. The new peer only receives the new data that
is inserted in the DHT. If any other peer in the network now initializes a lookup for some
information and the lookup message arrives at the new peer, then the resulting return
message contains only a part of the stored information, since the new peer does not have all
the data.
To solve this problem, every new peer that joins a neighborhood set storing information
needs to receive all the already inserted data from its neighbors. The attempted solution for
this problem was to use the reinsert functionality of Past. The reinsert method takes as
argument the key of an already stored content object, creates a local lookup for that object
and reinserts the content object into the Past network by updating all peers which are storing
the specific object. This would update all replicas and hence also supply a new peer with
the data already stored in the DHT. As soon as a peer finds a new neighbor in its
neighborhood set, the reinsert method would be initialized. But the attempted solution failed
because Past was unable to reinsert the data. The Past API was always able to first retrieve
the stored data but subsequently unable to reinsert it. For unknown reasons, Past always
lost the reinsert messages somewhere in the Past implementation. Another attempt, using
the same methods that are currently used to register new channels or sources in the
network, suffered from the same effect. Past seems to have a problem when a peer p tries
to reinsert an object that p is managing.

4.2.2 MDC
The MDC scheme proposed in Section 3.5 is not suitable for making the video stream
more error-resistant. Small losses in the video stream transmission already have a high
impact on the stream quality. If only 5% of the UDP packets are missing, the video shows
artifacts or a considerable amount of blur. During the tests a standard number of twenty
descriptions was used. If only one description went missing, the effect on the video stream
was severe. An example is given in Figure 11 and Figure 12. The video was encoded with the
H.264 video codec at a bitrate of 384 kb/s. The first image (cf. Figure 11) shows the video
in full quality, where the player received all descriptions. The second image (cf.
Figure 12) shows the video with 95% of all descriptions, where 5% were missing.
The implemented MDC scheme is not able to make the video stream more resistant to
sudden peer failures. A peer that lost one of its sources and was not able to find a
replacement in time suffered from noticeable video errors.

Figure 11: Video playback with all descriptions

Figure 12: Video playback with one missing description

4.2.3 Media Handling in Java


The first idea was to integrate the video decoding/encoding part directly into Java instead
of using the external VLC player. Early tests with the Java Media Framework (JMF) and other
multimedia APIs (Jffmpeg, Fobs4Java and FMJ) exposed their missing capabilities. JMF is
very complex and supports capturing and encoding, but has no support for new video
codecs. Although the other projects offer video decoding for many of today's available video
codecs, they lack the encoding functionality.

4.2.4 VLC - JVLC


For the VLC media player there is also a Java implementation (JVLC). The first
idea was to use the Java player directly, since it provides the same functionality as the
external VLC player and would offer the chance to include the player directly in the
application. The library used was JVLC 0.9.0-2007-02-25. But early tests showed some
deficiencies in the Java implementation. Two peers both using JVLC (one acting as the source,
the other as the receiver) were unable to establish a video stream between each other. The
receiver always had problems reconstructing and displaying the video. If one of the peers used
JVLC and the other the VLC media player, video streaming was possible, regardless of which
one of the peers acted as the source or the receiver. Because of this issue, the decision
was made to use the external VLC media player. The external player also performed more
efficiently in displaying the video.
A problem with using MPEG-TS multiplexing is that VLC does not include any stream-specific
information that would help the decoder decode the video stream. The receiver has to
figure out by itself which codec to use. Sometimes the VLC player was not able to decode its
own generated video stream. In some cases it took a long time until the video appeared
on the screen although the audio was already audible; in other cases VLC was not able to
play the video at all, and the only way to solve this was to stop and restart the playback.

5 Future Work
The implemented Past DHT allows storing and retrieving framework-related information in
the network, but has some open issues, as pointed out in the previous chapter. New peers
in the network do not receive the data stored in the DHT from their neighbors. One solution
would be to transmit the data to the new peer over the Communication Module instead of
using message routing in Past. Another solution: instead of storing the data in lists, Past
could store each DHT entry separately. The lookup mechanism would no longer search for a
single entry (a list) but would look for the data on several peers instead of just returning the
first found entry. Because of the modular design it would also be possible to replace Past with
another DHT implementation that hopefully provides better data consistency.
Another missing functionality in the DHT is a cleanup mechanism. Peers that correctly
leave the network already clean up the DHT by removing their own additions. But peers
that get disconnected because of a network failure leave their entries behind in the DHT. Over
time the DHT gets filled with inaccurate information. A possible approach to solve this
problem would be: as soon as a peer detects a DHT entry that points to a stream source
that no longer exists, it sends a message into the Past network that marks the element as
faulty. If other peers also mark the same element, it is removed.
As explained in Subsection 4.2.2, the implemented MDC scheme does not make the
system more error-tolerant in case some of the descriptions go missing when a peer
suddenly disconnects. Another MDC scheme should be used that offers better results in
the case of a failure. Some examples have already been given in Section 2.2, and a helpful
overview of different schemes can be found in [9].
At the moment no solution is implemented to overcome Network Address Translation (NAT)
issues. Peers that are behind a NAT firewall or router cannot connect to peers in the
Internet. Proposals for the solution of this problem can be found in [24].
Currently an external application is used to encode and decode, as well as to play and create
the video stream. In a future implementation the functionality of the VLC media player could
be included in Java. The application would become more user-friendly if all components
could be combined into a single application.
The implemented algorithm for choosing a description source could be improved in a future
version. At the moment a new source is picked randomly from the source list. An improved
algorithm would take other network properties into consideration when choosing a source:
What is the available bandwidth of the source peer? What are the round-trip time and the
packet loss rate of the path? What is the peer's geographical location? All these points
should be taken into consideration.

6 Acknowledgements
I would like to specially thank Prof. Dr. Burkhard Stiller and my supervisors Fabio Hecht and
Cristian Morariu for all their support and help during this work.

References

[1] YouTube; http://www.youtube.com


[2] BitTorrent; http://www.bittorrent.com
[3] A. Rowstron and P. Druschel: Pastry: Scalable, distributed object location and routing
for large-scale peer-to-peer systems; IFIP/ACM International Conference on Distributed
Systems Platforms (Middleware), Heidelberg, Germany, pages 329-350, November 2001.
[4] P. Druschel and A. Rowstron: Past: A large-scale, persistent peer-to-peer storage
utility; HotOS VIII, Schloss Elmau, Germany, May 2001.
[5] M. Castro, P. Druschel, A-M. Kermarrec and A. Rowstron: SCRIBE: A large-scale and
decentralized application-level multicast infrastructure; IEEE Journal on Selected
Areas in Communication (JSAC), Vol. 20, No. 8, October 2002.
[6] V. N. Padmanabhan, H. J. Wang and P. A. Chou: Resilient Peer-to-Peer Streaming;
Microsoft Research, Proceedings of the 11th IEEE International Conference on
Network Protocols, 2003.
[7] T. Nguyen and A. Zakhor: Distributed Video Streaming with Forward Error Correction;
In Proc. Packet Video Workshop, Pittsburgh, USA, 2002.
[8] M. Castro, P. Druschel, A-M. Kermarrec, A. Nandi, A. Rowstron and A. Singh:
SplitStream: High-bandwidth multicast in a cooperative environment; SOSP'03,
Bolton Landing, New York, October 2003.
[9] V. K. Goyal: Multiple Description Coding: Compression Meets the Network; IEEE
Signal Processing Magazine, September 2001.
[10] Q. Chen: Robust Video Coding using Multiple Description Lattice Vector Quantization
with Channel Optimization; Master's Degree Project, KTH Signals, Sensors and
Systems, Stockholm, Sweden, 2005.
[11] T. Wiegand, G. J. Sullivan, G. Bjontegaard and A. Luthra: Overview of the H.264/AVC
Video Coding Standard; IEEE Transactions on Circuits and Systems for Video
Technology, July 2003.
[12] O. Campana, A. Cattani, A. Guisti, S. Milani, N. Zandonà and G. Calvagno: Multiple
Description Coding Schemes for the H.264/AVC Coder; Proc. of Wireless
Reconfigurable Terminals and Protocols (WiRTeP 2006), Rome, Italy, April 10-12, 2006.
[13] J. Yang, W. I. Choi and B. Jeon: Multiple Description Coding for H.264/AVC Motion
Vectors using Matching Criteria; SPIE - The International Society for Optical
Engineering, 2005.
[14] M. Zink and A. Mauthe: P2P Streaming using Multiple Description Coded Video; IEEE
Proceedings of the 30th EUROMICRO Conference, pp. 240-247, 2004.
[15] Java Media Framework; http://java.sun.com/products/java-media/jmf/
[16] FOBS4JMF; http://fobs.sourceforge.net/
[17] JFFMPEG; http://ffmpeg.mplayerhq.hu/
[18] Freedom for Media in Java; http://fmj-sf.net/
[19] VideoLAN - VLC media player; http://www.videolan.org/
[20] FreePastry; http://freepastry.rice.edu/FreePastry/


[21] X. Zhang, J. Liu, B. Li and T.-S. P. Yum: CoolStreaming/DONet: A data-driven overlay network for peer-to-peer live media streaming; in Proc. IEEE INFOCOM'05, Miami, FL, March 2005.
[22] F. Pianese, J. Keller and E. W. Biersack: PULSE, a Flexible P2P Live Streaming System; Proceedings of the Ninth IEEE Global Internet Workshop, Barcelona, Spain, 2006.
[23] R. Steinmetz and K. Wehrle: Peer-to-Peer Systems and Applications; Springer Verlag, Berlin Heidelberg, 2005.
[24] B. Ford, P. Srisuresh and D. Kegel: Peer-to-Peer Communication Across Network Address Translators; in USENIX Annual Technical Conference, 2005.


List of Figures
Figure 1 Basic approach of SplitStream .................................................................................14
Figure 2 Framework Overview ...............................................................................................18
Figure 3 P2P Manager ............................................................................................................19
Figure 4 Connection Manager ................................................................................................21
Figure 5 MDC Manager ..........................................................................................................22
Figure 6 Past Content Replication ..........................................................................................26
Figure 7 Actual implementation of the P2P Manager ............................................................27
Figure 8 Multiple Description Coding Example Scenario ......................................................29
Figure 9 Protocol for description request ................................................................................32
Figure 10 Disconnect stream protocol ....................................................................................33
Figure 11 Video playback with all descriptions ......................................................................35
Figure 12 Video playback with one missing description .........................................................36

List of Tables
Table 1 Qualitative comparison of MDC coding techniques ..................................................10
Table 2 List of available video support formats in JMF .........................................................11
Table 3 Video Supporting Projects for Java ...........................................................................12
Table 4 Comparison of Video Streaming Solutions ...............................................................16


Appendix A - Tutorial
This chapter gives a detailed tutorial on how to use the application.

A.1 Create new ring or join existing ring


When the application starts, the screen in Figure A.1 appears. First, the IP address and port number of the bootstrap node need to be entered, along with the path to the VLC media player. As soon as the user presses the “Connect” button, the application tries to connect to the bootstrap node. If the attempt fails, the node automatically initializes a new ring (cf. Figure A.2).

Figure A.1 Screenshot of the welcome screen

• Bootstrap Address: Enter the IP Address of the bootstrap node to connect to an existing
network. Otherwise enter “localhost” to create a new Pastry ring.
• Bootstrap Port: The port number of the bootstrap node.
• VLC Path: Path to the VLC media player.
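The connect-or-create fallback described above can be sketched as a plain TCP reachability check. This is only an illustration of the decision logic; the actual application delegates bootstrapping to FreePastry, and the class and method names here are hypothetical:

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class BootstrapCheck {
    // Returns true if a node accepts connections at host:port within
    // timeoutMs milliseconds; false means we should create a new ring.
    static boolean canReachBootstrap(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // With no bootstrap node listening on the given port, the
        // application falls back to initializing a new Pastry ring.
        if (!canReachBootstrap("localhost", 9001, 500)) {
            System.out.println("bootstrap unreachable, creating new ring");
        }
    }
}
```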

Figure A.2 Inform user that a new ring has been created


A.2 Starting a new channel


To create a new video streaming peer, the user needs to switch to the “Server” tab (cf. Figure A.3) and fill in the necessary input fields. The button “Create Server” starts the VLC media player and initializes the new channel.
• Channel Name: Enter a name for the new channel. The name should be unique in the
network.
• File to play: The path to the media file that should be played. If the field remains empty, the application will just start the VLC media player.
• VLC Parameters: This field can be used to enter the parameters for the VLC media
player directly. Details about the VLC parameters can be found on the VLC homepage
(www.videolan.org).
• Stream Bitrate: The bitrate of the video stream is used to determine the sending rate of
the UDP packets from the In-Buffer to the VLC media player.
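To illustrate how the stream bitrate determines the sending rate, the inter-packet delay for draining the In-Buffer can be derived as follows. This is a sketch of the underlying arithmetic; the method and parameter names are assumptions, not taken from the thesis code:

```java
public class SendRate {
    // Delay in milliseconds between two UDP packets of packetBytes bytes
    // so that the average rate matches bitrateKbps kbit/s.
    static long packetDelayMillis(int bitrateKbps, int packetBytes) {
        long bitsPerPacket = packetBytes * 8L;    // payload size in bits
        long bitsPerSecond = bitrateKbps * 1000L; // target stream rate
        return (bitsPerPacket * 1000L) / bitsPerSecond;
    }

    public static void main(String[] args) {
        // A 1024 kbit/s stream sent as 1316-byte MPEG-TS payloads:
        // 1316 * 8 = 10528 bits per packet, i.e. one packet every ~10 ms.
        System.out.println(packetDelayMillis(1024, 1316)); // → 10
    }
}
```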

Figure A.3 Create new channel

A.3 Configuring VLC as a streaming source


The VLC media player (cf. Figure A.4) can create a stream from an existing file or from a
TV signal input.


Figure A.4 VLC media player

File streaming: To stream from a file the following steps are needed:
1 Select “File -> Open File” to create a stream from a file.
2 Choose the correct file by using the “Browse” button (cf. Figure A.5).
3 Activate the checkbox “Stream/Save” and press the button “Settings”.
4 In the new window enter the settings as shown in Figure A.6.
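The same file-streaming setup can alternatively be expressed as VLC command-line parameters, for example in the “VLC Parameters” field. The address, port, and bitrate below are placeholder values, and the exact `--sout` syntax depends on the VLC version; consult the VLC documentation:

```shell
# Stream a file as an MPEG-TS over UDP to a local receiver:
# transcode{...} re-encodes the video at a fixed bitrate (kbit/s),
# std{...} sends the resulting transport stream to the given address/port.
vlc video.mpg --sout '#transcode{vcodec=mp4v,vb=1024}:std{access=udp,mux=ts,dst=127.0.0.1:1234}'
```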

Figure A.5 Window for file streaming


Figure A.6 Settings window

TV Input: To stream a TV signal the following steps are needed:


1 Select “File -> Open Capture Device” to capture a live stream.
2 Choose the correct capture device (cf. Figure A.7).
3 Activate the checkbox “Stream/Save” and press the button “Settings”.
4 In the new window enter the settings as shown in Figure A.6.


Figure A.7 Capture from TV source

A.4 Select a channel to play


This section explains how to retrieve the channel list from the network and start the
playback of a channel.

1 Switch to the tab named “Channel List” (cf. Figure A.8).
2 Press the button “Refresh List” to retrieve the entire channel list from the network.
3 Choose a channel and press the button “Play”.
4 The VLC media player opens. It may take some time until the video starts playing.


Figure A.8 Choose channel for playback

