Archive for July, 2012


We assert that, in many delay tolerant networks, duplicates may pose a larger problem: they hinder
the ability to partially process data within the network. In-network processing is seen as
desirable because it can dramatically reduce bandwidth requirements. Unfortunately, as we will
see, if data is coarsely aggregated within the network, it can be difficult to detect or eliminate
duplicates, which can lead to incorrect answers.

In-network processing has been proposed in a number of delay-prone environments. For example, in
sensor networks, bandwidth is generally scarce, especially at the edges of the network, and thus
doing some fusion or aggregation of sensor readings as data is routed is potentially beneficial. A
number of papers note the benefits of in-network aggregation, citing order-of-magnitude or greater
bandwidth reductions for some classes of operations given particular network topologies. Similarly,
when moving data between different classes of networks (e.g. the Internet and GPRS), it may be
useful to transcode or downsample data items, sometimes in a non-deterministic way, as when
dithering an image.

If the network cannot guarantee duplicate-free semantics, some in-network operations might produce
incorrect answers: consider a sensor network attempting to compute an average over the readings
from a number of sensors. If one of these readings is duplicated, it will obviously skew answers.
We call such operations duplicate sensitive. Of course, some in-network operations are duplicate
insensitive – computing a minimum of a set of readings, for example, has this property.
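The distinction can be seen in a few lines of Python; the readings below are invented for illustration:

```python
# A duplicated reading skews a duplicate-sensitive aggregate (the average)
# but leaves a duplicate-insensitive one (the minimum) unchanged.
readings = [20.0, 22.0, 24.0]         # temperatures from three sensors
with_duplicate = readings + [24.0]    # one reading delivered twice

def average(xs):
    return sum(xs) / len(xs)

print(average(readings), average(with_duplicate))   # 22.0 vs 22.5: skewed
print(min(readings), min(with_duplicate))           # 20.0 vs 20.0: unaffected
```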

Thus, we have seen that, unless we wish to sacrifice the availability of our network, duplicates may
arise in disconnection-prone delay tolerant networks. Furthermore, because many such networks may
wish to perform in-network computation, duplicates can be more problematic than in traditional
networks. In the next section, we examine possible techniques for mitigating the overhead of
duplicate elimination.



The Multiplexing Transport Protocol Suite

The two transport protocols most commonly used in the Internet are TCP, which offers a reliable
stream, and UDP, which offers a connectionless datagram service. We do not offer a connectionless
protocol, because the mechanisms of a rate-based protocol need a longer-lived connection to
work, as they use feedback from the receiver. The interarrival time of packets is measured at the
receiver and is crucial for estimating the available bandwidth and for discriminating between
congestion losses and transmission losses. On the other hand, a multiplexing unreliable protocol
that offers congestion control can be used as a basis for other protocols. The regularity of a rate-based
protocol lends itself naturally to multimedia applications. Sound and video need bounds on arrival
time so that the playback can be done smoothly. A multimedia protocol is the natural offshoot.
Most multimedia applications need timely data. Data received after the playback time is useless.
Moreover, for a system with bandwidth constraints, late data is adverse to the quality of playback,
as it robs bandwidth from the flow. There are many strategies to deal with losses, from forgiving
applications to forward error correction (FEC) schemes. Retransmissions are rarely used, because
they take the place of new data, and the time to send a request and receive the retransmission may
exceed the timing constraints.

When multiple channels are available, and the aggregated bandwidth is greater than the bandwidth
necessary to transmit the multimedia stream, retransmissions can be done successfully without
harming the quality of playback. The simultaneous use of multiple link layers generates extra
bandwidth. The best-case scenario is the coupling of a low bandwidth, low delay interface with a
high bandwidth, high delay interface. The high bandwidth interface allows for a good quality
stream, while the low delay interface makes retransmissions possible by creating a good feedback
channel to request (and transmit) lost frames.

When the aggregated bandwidth is not enough to transmit packets at the rate required by the
application, packets have to be dropped or the application has to change the characteristics of
its stream. Adapting applications can change the quality of the stream on the fly to deal with
bandwidth variations, but for non-adapting applications, the best policy is to drop packets at
the sender. Sending packets that will arrive late will cause further problems by making other
packets late, which can have a snowball effect.
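The drop-at-sender policy can be sketched as follows; the packet sizes, deadlines, and link parameters are invented for illustration:

```python
# Sender-side late-packet dropping (a sketch, not a real protocol): a packet is
# dropped when its estimated arrival time would exceed its playback deadline,
# so it cannot make later packets late in turn.
def schedule(packets, bandwidth_bps, one_way_delay, now=0.0):
    sent, dropped, t = [], [], now
    for seq, size_bytes, deadline in packets:
        tx_time = size_bytes * 8 / bandwidth_bps
        arrival = t + tx_time + one_way_delay
        if arrival <= deadline:
            sent.append(seq)
            t += tx_time            # the link is busy while this packet transmits
        else:
            dropped.append(seq)     # would arrive late anyway: don't send it
    return sent, dropped

# three 1500-byte frames on a 100 kbit/s link with 50 ms one-way delay
packets = [(1, 1500, 0.20), (2, 1500, 0.25), (3, 1500, 0.30)]
print(schedule(packets, 100_000, 0.05))   # -> ([1, 3], [2])
```

Note that dropping packet 2 is what lets packet 3 meet its deadline: sending it anyway would have pushed packet 3 past 0.30 s, the snowball effect described above.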

In contrast to a multimedia protocol, a reliable protocol has to deliver intact every packet that
the application sent. In this case, time is not the most important factor. Lost or damaged frames
will have to be retransmitted until they are successfully received. If the application expects the
data to be received in the same order it was sent, the protocol will have to buffer packets
received after a loss until the retransmission of the lost packet is received. Using the channel
abstraction to multiplex the data increases the occurrence of out-of-order delivery, increasing the
burden on the receiving end.
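A minimal sketch of this receiver-side buffering; the sequence numbers and payloads are illustrative:

```python
# In-order delivery at the receiver: packets arriving after a loss are held in
# a buffer until the retransmitted packet fills the gap.
def deliver_in_order(arrivals):
    buffered, delivered, next_seq = {}, [], 0
    for seq, payload in arrivals:
        buffered[seq] = payload
        while next_seq in buffered:            # drain every contiguous packet
            delivered.append(buffered.pop(next_seq))
            next_seq += 1
    return delivered

# packet 1 is lost and its retransmission arrives last; 2 and 3 wait buffered
arrivals = [(0, "a"), (2, "c"), (3, "d"), (1, "b")]
print(deliver_in_order(arrivals))   # -> ['a', 'b', 'c', 'd']
```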



Signaling System Number 7 (SS7)

SS7 is the network control signaling protocol utilized by the Integrated Services
Digital Network (ISDN) services framework. ISDN control information for call handling
and network management is carried by SS7. SS7 is a large and complex network
designed to provide low latency and to have redundancy in many network elements. The
SS7 control-signaling network consists of signaling points, signaling links and signaling
transfer points. Signaling links or SS7 links interconnect signaling points. Signaling
points, or Service Switching Points (SSPs), use signaling to transmit and receive control information. A signaling point
that has the ability to transfer signaling messages from one link to another at level 3 (SS7
level 3 will be described in detail later) is a Signal Transfer Point (STP). There is a
fourth entity, the Service Control Point (SCP), which acts as a database for the SS7
network. The STP queries the SCP to locate the destination of the calls. The design of
the SS7 protocol is such that it is independent of the underlying message transport
network. The design of the signaling network is very important in that it will directly
impact the availability of the overall system. In general, the network will be designed to
provide redundancy for signaling links and for STPs. Figure 1 shows a basic SS7 signaling network.

Figure 1: SS7 Signaling Endpoints in a Switched-Circuit Network

A typical call can be illustrated using Figure 1. User A goes off-hook in New York
and begins dialing. User A is calling User C in San Francisco. The dialed digits are
transmitted across the local loop connection to a local switch that has signal point
functionality (SSP). The local switch translates the digits and determines the call is not
local to itself. The local switch will use its signal point functionality to signal into the
SS7 network to a Signal Transfer Point (STP). The STP queries a SCP to locate the
destination local switch. The STP signals to the destination local switch to alert it of the
incoming call. The destination local switch rings the phone of User C. User C answers
and the two local switches signal across the SS7 network and determine the bearer path
through the PSTN. Once the path is set up, the call begins. When either user goes on-hook,
the network signals the other end to tear down the bearer path and the call is
terminated. The worldwide SS7 network is divided into national and international levels.
This allows the numbering plans and administration to be separated.



Position and orientation tracking in VR devices

The absolute minimum of information that immersive VR (Virtual Reality) requires is the position and
orientation of the viewer’s head, needed for the proper rendering of images. Additionally, other
parts of the body may be tracked, e.g., hands – to allow interaction, or chest and legs – to allow a
graphical user representation, etc. Three-dimensional objects have six degrees of freedom
(DOF): position coordinates (x, y and z offsets) and orientation (yaw, pitch and roll angles for
example). Each tracker must support this data or a subset of it. In general there are
two kinds of trackers: those that deliver absolute data (total position/orientation values) and
those that deliver relative data (i.e. a change of data from the last state).

The most important properties of 6DOF trackers to be considered when choosing the right
device for a given application are:

  1. update rate – defines how many measurements per second (measured in Hz) are made.
    Higher update rate values support smoother tracking of movements, but require more
    processing and communication capacity.
  2. latency – the amount of time (usually measured in ms) between the user’s real (physical)
    action and the beginning of transmission of the report that represents this action. Lower
    values contribute to better performance.
  3. accuracy – the measure of error in the reported position and orientation. Defined
    generally in absolute values (e.g., in mm for position, or in degrees for orientation).
    Smaller values mean better accuracy.
  4. resolution – smallest change in position and orientation that can be detected by the
    tracker. Measured, like accuracy, in absolute values. Smaller values mean better
    resolution.
  5. range – working volume, within which the tracker can measure position and orientation
    with its specified accuracy and resolution, and the angular coverage of the tracker.

Beside these properties, some other aspects cannot be forgotten, like the ease of use, size and
weight etc. of the device. These characteristics will be further used to determine the quality and
usefulness of different kinds of trackers.
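These properties lend themselves to a simple comparison sketch; the tracker specs below are invented for illustration, not real devices:

```python
# Filtering candidate trackers on the properties listed above.
trackers = [
    {"name": "A", "update_hz": 120, "latency_ms": 4,  "accuracy_mm": 1.5, "range_m": 2},
    {"name": "B", "update_hz": 60,  "latency_ms": 12, "accuracy_mm": 0.8, "range_m": 5},
]

def suitable(t, min_hz, max_latency_ms, max_accuracy_mm):
    """A tracker qualifies when every requirement is met simultaneously."""
    return (t["update_hz"] >= min_hz
            and t["latency_ms"] <= max_latency_ms
            and t["accuracy_mm"] <= max_accuracy_mm)

# head tracking needs a high update rate and low latency above all
print([t["name"] for t in trackers if suitable(t, 90, 10, 2.0)])   # -> ['A']
```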



DMA Controller in Embedded System

Consider an office that contains a file cabinet. In that file cabinet is paperwork
that several office workers need to access. Some only need to access it once in
a while, while others require the paperwork much more often. The manager of the
office sets up a policy so that some of the individuals have a personal key to get
into the file cabinet, while others must get a shared key from the manager. In other
words, some office workers have direct access to the file cabinet, and others have
indirect access.

In many embedded system designs, the CPU is the only device that is connected
to the memory. This means that all transactions that deal with memory must use
the CPU to get the data portion of that transaction stored in memory, just as some
office workers had to obtain the manager’s key. Direct memory access (DMA) is a
feature that allows peripherals to access memory directly without CPU intervention.
These peripherals correspond to the office workers with their own keys.

For example, without DMA, an incoming character on a serial port would generate
an interrupt to the CPU, and the firmware would branch to the interrupt handler,
retrieve the character from the peripheral device, and then place the
character in a memory location. With DMA, the serial port places the incoming
character in memory directly. When a certain programmed threshold is reached, the
DMA controller (not the serial port) interrupts the CPU and forces it to act on
the data in memory. DMA is a much more efficient process. Many integrated
microprocessors have multiple DMA channels that they can use to perform
I/O-to-memory, memory-to-I/O, or memory-to-memory transfers.
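The efficiency gain can be sketched with a back-of-the-envelope comparison; the threshold and character counts are illustrative:

```python
# Interrupts seen by the CPU for n incoming serial characters: without DMA the
# CPU takes one interrupt per character, while a DMA controller interrupts only
# when a programmed threshold of characters has been placed in memory.
def interrupts_without_dma(n_chars):
    return n_chars                          # CPU handles every character itself

def interrupts_with_dma(n_chars, threshold):
    full_blocks, remainder = divmod(n_chars, threshold)
    return full_blocks + (1 if remainder else 0)   # one interrupt per filled block

n = 1024
print(interrupts_without_dma(n), interrupts_with_dma(n, 64))   # 1024 vs 16
```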



Trusted Internet Connection

Similar to Departments and Agencies that utilize Networx MTIPS, those using a TIC will already have a contractual relationship in place with their ISP, usually a Networx ISP. Pursuant to that relationship, the ISP, in its ordinary course of business, will use routing tables to ensure that only traffic intended for the Department or Agency’s IP addresses is routed to the Department or Agency’s networks. And the Department or Agency remains responsible for ensuring that only traffic intended for, or originating from, that Department or Agency is routed through the EINSTEIN sensor.

Since EINSTEIN collects network flow information for all traffic traversing a sensor, if, in the rare case that the required contractual routing protections fail, in the normal course only network flow information associated with the improperly routed traffic would be collected. This mechanism minimizes the possibility of capturing or releasing Personally Identifiable Information (PII). If improperly routed network traffic matched a pattern of known malicious activity, an alert would be triggered. In the event of an alert, and upon further inspection and investigation with the Department or Agency receiving the incorrectly routed traffic, a US-CERT analyst would be able to identify an incorrectly routed traffic error. US-CERT would then work with NCSD’s Network Security Deployment and Federal Network Security branches, the relevant Department or Agency, the ISP and, if necessary, the MTIPS vendor, to remedy the routing problem. In the unlikely event that an ISP’s routing tables mistakenly assign a government IP address to a commercial client, a routing loop would result. The routing loop would cause errors and break the commercial customer’s connection. When the ISP detects the routing loop or the customer reports its broken connections to the ISP, the ISP would correct the error in its ordinary course of business.



Power Analysis Attacks on Secure Embedded Systems

The power consumption of any hardware circuit (cryptographic ASICs or processors running
cryptographic software) is a function of the switching activity at the wires inside it.
Since the switching activity (and hence, power consumption) is data dependent, it is not
surprising that the key used in a cryptographic algorithm can be inferred from the power
consumption statistics gathered over a wide range of input data. These attacks are called
power analysis attacks and have been shown to be very effective in breaking embedded
systems such as smartcards. Power analysis attacks are categorized into two main classes:
Simple Power Analysis (SPA) attacks and Differential Power Analysis (DPA) attacks.

SPA attacks rely on the observation that in some systems, the power profile of
cryptographic computations can be directly used to reveal cryptographic information. For
example, Figure 1 shows the power consumption profile for an ASIC implementing the DES
algorithm. From the profile, one can easily identify the 16 rounds of the DES algorithm.
While SPA attacks have been useful in determining higher granularity information such as
the cryptographic algorithm used, the cryptographic operations being performed, etc.,
they require reasonably high resolution to reveal the cryptographic key directly. In
practice, SPA attacks have been found to be useful in augmenting or simplifying brute-force
attacks. For example, it has been shown that the brute-force search space for a software DES
implementation on an 8-bit processor with 7 bytes of key data can be reduced to 2^40 keys
from 2^56 keys with the help of SPA.

Figure 1: The power consumption profile of a custom hardware implementation
of the DES algorithm

DPA attacks employ statistical analysis to infer the cryptographic key from power
consumption data. These attacks use the notion of differential traces (difference between
traces) to overcome the disadvantages of measurement error and noise associated with SPA
techniques. DPA has been shown to be highly robust and effective in extracting keys from
several embedded systems, not limited to smartcards. Recent approaches enhance the
effectiveness of DPA attacks by providing techniques that improve the signal to noise
ratio. While the initial DPA attacks targeted DES implementations, DPA has also been used
to break public-key cryptosystems.
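A toy difference-of-means DPA can be sketched in a few lines. This is an illustration only: the S-box is a random permutation standing in for a real cipher's S-box, and the leakage model (power sample equal to the Hamming weight of the S-box output, with no measurement noise) is a simplifying assumption:

```python
import random

# Build a toy nonlinear S-box: a fixed random permutation of the byte values.
rng = random.Random(0)
SBOX = list(range(256))
rng.shuffle(SBOX)

def hamming_weight(x):
    return bin(x).count("1")

def trace(pt, key):
    """Simulated power sample: leaks the Hamming weight of the S-box output."""
    return hamming_weight(SBOX[pt ^ key])

def dpa_recover_key(plaintexts, traces):
    """For each key guess, split the traces by the predicted LSB of the S-box
    output and compute the difference of the two means; the correct guess
    produces the largest differential peak."""
    best_guess, best_diff = None, -1.0
    for guess in range(256):
        ones = [t for pt, t in zip(plaintexts, traces) if SBOX[pt ^ guess] & 1]
        zeros = [t for pt, t in zip(plaintexts, traces) if not SBOX[pt ^ guess] & 1]
        diff = abs(sum(ones) / len(ones) - sum(zeros) / len(zeros))
        if diff > best_diff:
            best_guess, best_diff = guess, diff
    return best_guess

secret_key = 0x3C
plaintexts = list(range(256))            # one trace per plaintext byte
traces = [trace(pt, secret_key) for pt in plaintexts]
print(hex(dpa_recover_key(plaintexts, traces)))
```

For a wrong guess the predicted partition is uncorrelated with the leaked Hamming weight, so its differential stays near zero, while the correct guess separates the traces cleanly; this is the noise-averaging property that makes DPA more robust than SPA.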


