Authenticating Streamed Data in the Presence of Random Packet Loss

We propose a scheme for authenticating streamed data delivered in real time over an insecure network.
The difficulty of signing live streams is twofold. First, authentication must be efficient so
the stream can be processed without delay. Second, authentication must be possible even if
some packets in the sequence are missing. Streams of audio or video provide a good example.
They must be processed in real-time and are commonly exchanged over UDP, with no guarantee that
every packet will be delivered. Existing solutions to the problem of signing streams have been
designed to resist worst-case packet loss. In practice however, network loss is not malicious
but occurs in patterns of consecutive packets known as bursts. Based on this realistic model of
network loss, we propose an authentication scheme for streams which achieves better performance
as well as much lower communication overhead than existing solutions.

There are two issues to consider when signing streams. On the one hand, the signature scheme must
be efficient enough to permit authentication on the fly without introducing delays. On the other
hand, the signature scheme must be robust enough that authentication remains possible even if some
packets are lost. The naive solution to authenticate a stream is to sign each packet in the stream
individually. The receiver checks the signatures of packets as they arrive and stops processing the
stream immediately if an invalid signature is discovered. Immediate authentication is possible, but
the computational load on both the sender and the receiver is too great to make this approach practical.
A more efficient solution is proposed by Gennaro and Rohatgi. They observe that one-time signatures
can be used in combination with a single digital signature to authenticate a sequence of packets. Each
packet carries a public-key, which is used in a one-time signature scheme to sign the following packet.
Only the first packet needs to be signed with a regular digital signature. Since one-time signatures
are an order of magnitude faster to apply than digital signatures, and can also be verified somewhat
more efficiently, this solution offers a significant improvement in execution speed.
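The chained construction can be sketched as follows. This is an illustrative implementation, not the authors' exact scheme: the one-time signatures below are a minimal Lamport scheme over SHA-256, and the initial public key is assumed to reach the receiver under an ordinary digital signature (elided here).

```python
import hashlib
import os

PK_LEN = 256 * 64  # 256 hash pairs of 32 bytes each

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def ots_keygen():
    """Lamport one-time key pair: one secret pair per digest bit."""
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(s0), H(s1)) for s0, s1 in sk]
    return sk, pk

def digest_bits(msg: bytes):
    d = H(msg)
    return [(d[i // 8] >> (i % 8)) & 1 for i in range(256)]

def ots_sign(sk, msg: bytes):
    # Reveal one secret of each pair, selected by the digest bits.
    return [sk[i][b] for i, b in enumerate(digest_bits(msg))]

def ots_verify(pk, msg: bytes, sig) -> bool:
    return all(H(s) == pk[i][b]
               for i, (s, b) in enumerate(zip(sig, digest_bits(msg))))

def pk_bytes(pk) -> bytes:
    return b"".join(h0 + h1 for h0, h1 in pk)

def sign_stream(payloads):
    """Each packet carries the next one-time public key and is signed
    under the previous packet's key (Gennaro-Rohatgi chaining)."""
    keys = [ots_keygen() for _ in payloads]
    packets = []
    for i, payload in enumerate(payloads):
        next_pk = pk_bytes(keys[i + 1][1]) if i + 1 < len(payloads) else b""
        body = next_pk + payload
        packets.append((body, ots_sign(keys[i][0], body)))
    # keys[0][1] is the anchor: it would be signed once with a regular
    # digital signature and sent ahead of the stream (assumed here).
    return packets, keys[0][1]

def verify_stream(packets, pk0):
    """Verify packets in order; raises if the chain is broken."""
    pk, payloads = pk0, []
    for idx, (body, sig) in enumerate(packets):
        if not ots_verify(pk, body, sig):
            raise ValueError("chain broken at packet %d" % idx)
        if idx + 1 < len(packets):
            raw, payload = body[:PK_LEN], body[PK_LEN:]
            pk = [(raw[i * 64:i * 64 + 32], raw[i * 64 + 32:(i + 1) * 64])
                  for i in range(256)]
        else:
            payload = body
        payloads.append(payload)
    return payloads
```

Note that the sketch makes the communication overhead concrete: each packet carries 16 KB of public key plus an 8 KB signature, which is exactly the overhead problem raised below.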

However, there is a major difficulty with this approach. Recall that audio and video streams are sent
using UDP, which provides only "best-effort" service and does not guarantee that all packets will be
delivered. If a packet is missing, the authentication chain is broken and subsequent packets cannot
be authenticated. (Another problem is that one-time signatures incur a substantial communication
overhead). If a sequence is received incomplete, we would still like to be able to authenticate all
the packets that were not lost. This defines resistance to loss in a strong sense: a packet is either
lost or authenticable. A weaker alternative would allow a few packets to be received unauthenticated
in case of packet loss. We offer two justifications for adopting the strong definition. First, it is
essential for some applications that only authenticated content be received. Consider a stream that
delivers stock quotes in real time. While it might be acceptable to lose a quote, we must ensure that
only authenticated quotes are ever displayed. Secondly, our constructions which resist loss in the
strong sense can easily be adapted to the weaker notion of resistance.

Existing authentication schemes that resist packet loss have been designed to resist worst-case packet
loss. Any number of packets may be lost anywhere in the sequence, without interfering with the receiver’s
ability to authenticate the packets that arrived. Studies conducted on packet loss in UDP suggest that
resisting worst-case packet loss is overkill. The focus should instead be on resisting random packet
loss. We will show that this leads to much more efficient constructions. Since packet loss on the network
is not malicious, it is natural to analyze the patterns of loss and design our authentication schemes
accordingly. Paxson shows that on the Internet consecutive packets tend to get lost together in a burst.
We adopt this model and propose authentication schemes designed to resist bursty loss. Specifically, our
goal is to maximize the size of the longest single burst of loss that our authenticated streams can
withstand. Of course, this is not to say that our constructions resist only a single burst. As will be
clear, once a few packets have been received after a burst, our scheme recovers and is ready to maintain
authentication even if further loss occurs.
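The burst-resistant idea can be sketched as follows. This is an illustrative hash-chain augmentation in the spirit of the construction, not its exact parameters: packet i embeds the hashes of packets i-1 and i-a, and a conventional digital signature (assumed here to reach the receiver) covers the hash of the final packet, so a burst of up to a-1 consecutive losses still leaves a verification path across the gap.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def encode(index: int, payload: bytes, links: dict) -> bytes:
    """Deterministic serialization of a packet for hashing."""
    out = index.to_bytes(4, "big") + payload
    for j in sorted(links):
        out += j.to_bytes(4, "big") + links[j]
    return out

def build_stream(payloads, a: int):
    """Packet i carries the hashes of packets i-1 and i-a. The hash of
    the final packet would be covered by one ordinary digital signature
    (the signed digest is simply returned here)."""
    packets, hashes = [], []
    for i, payload in enumerate(payloads):
        links = {j: hashes[j] for j in (i - 1, i - a) if 0 <= j < i}
        packets.append((i, payload, links))
        hashes.append(H(encode(i, payload, links)))
    return packets, hashes[-1]

def authenticable(received, signed_hash, total: int):
    """Return indices of received packets reachable from the signed
    final packet through intact hash links."""
    by_index = {i: (p, links) for i, p, links in received}
    ok, last = set(), total - 1
    if last in by_index and H(encode(last, *by_index[last])) == signed_hash:
        ok.add(last)
    frontier = list(ok)
    while frontier:
        i = frontier.pop()
        for j, h in by_index[i][1].items():
            if j in by_index and j not in ok and H(encode(j, *by_index[j])) == h:
                ok.add(j)
                frontier.append(j)
    return ok
```

For example, with a = 4 a burst of three consecutive losses leaves every surviving packet authenticable, because the stride-a link bridges the gap; a burst of four or more severs all paths to the packets before it, illustrating why the goal is to maximize the longest tolerable burst.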
