Archive for August, 2012

ATM Benefits in the WAN

You could think of IMA as the unknown factor that adds cost effectiveness
to the ATM WAN equation. And because the result equals ATM benefits for all WAN
users, not just those with very high traffic loads, it’s worthwhile to quickly review
ATM’s WAN benefits.

Highly scalable bandwidth. ATM’s biggest claim to fame is its speed—from 1.544
Mbps to gigabit ranges, with 622 Mbps (SONET OC-12) as the maximum customer-premises
bandwidth available. The benefit: incremental costs for incremental bandwidth,
resulting in increased efficiency on high-traffic WAN links and an opportunity to
“right-size” bandwidth even to very high user demand.

Network simplification through consolidation. ATM is the answer for combining
applications that traditionally required different networks because of the different
transport requirements of their traffic. This in turn lets network planners stop the
proliferation of complex parallel networks: for example, one carrying data, another
carrying voice, and another carrying video. ATM’s ability to consolidate all types
of traffic onto a single WAN link greatly reduces complexity, and simplifies network
management by eliminating these separately managed lines.

Bandwidth efficiency. Consolidation of diverse traffic types also lets network
managers with high volumes of traffic fully utilize high-speed WAN links, instead of
partially filling separate links with different types of traffic.

Quality of service. ATM offers bandwidth allocation based on user-defined needs and
prioritization, as well as load sharing of multiple technology types for guaranteed
quality of service (QoS). ATM’s traffic management controls enable seamless integration
of voice, video, and data while providing the separate management techniques
required by each type of traffic.

Open connectivity. Because ATM is not based on a specific type of physical
transport, it is compatible with all currently deployed physical networks. It can be
transported over twisted pair, coax, and fiber optics. And since ATM is a standard rather
than a proprietary protocol, it can run on any vendor’s standards-compliant products or be
purchased from any carrier.

Excellent fault tolerance. ATM networks can be built with very high levels of fault
tolerance at relatively low cost. IMA, for example, allows for load sharing and maximum
network uptime.

ATM infrastructure availability. Service providers have invested heavily in the ATM
infrastructure for reasons similar to those of enterprises: consolidation of traffic/backbones,
better bandwidth utilization, and so on. ATM can also be deployed as a private
network built from leased lines such as T1/E1, T3/E3, or OC-3/STM-1.

Taken in sum, ATM’s capabilities— scalable bandwidth, network simplification,
bandwidth efficiency, guaranteed QoS, open connectivity, fault tolerance, and infrastructure
availability—make it invaluable for corporate WANs. ATM is also a stable WAN technology
with an extensive public infrastructure. Up until now, the primary barrier to securing
ATM benefits in the WAN has been the limited availability of carrier service.


Operation of TRIP

A TRIP Speaker, or Location Server (LS), establishes intra-domain and inter-domain
“peering sessions” with other TRIP Speakers to exchange routing information. The peering
sessions are established to exchange routes to telephony destinations. The peers update
each other with new routes they have learned. Each peer may in turn learn about new routes
from other peers, from gateways registering telephony prefixes with it, or through static
configuration on the Location Server. The peers also “withdraw” the routes they advertised to
the other peer on learning that those routes have become unavailable. TRIP peering sessions
use TCP for transport.

Apart from conveying the telephony destinations (prefixes) that a Location Server can
reach, a routing update also carries additional information about that route, called the
“attributes” associated with the route, such as capacity and cost. These attributes describe
the characteristics of the route, support correct operation of the protocol, and help
enforce policies and network design.
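The routes and attributes described above can be sketched as simple data structures. The field names below (`prefix`, `next_hop`, `capacity`, `cost`) are illustrative assumptions for this sketch, not the exact attribute encoding defined by the TRIP specification.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a TRIP routing update: reachable telephony
# prefixes plus the attributes that accompany each route.
@dataclass
class TripRoute:
    prefix: str      # telephony destination, e.g. "+1212"
    next_hop: str    # Location Server advertising the route
    capacity: int    # hypothetical capacity attribute
    cost: int        # hypothetical cost attribute

@dataclass
class TripUpdate:
    advertised: list = field(default_factory=list)  # new or changed routes
    withdrawn: list = field(default_factory=list)   # prefixes no longer reachable

update = TripUpdate(
    advertised=[TripRoute("+1212", "ls2.itad1.example", capacity=30, cost=5)],
    withdrawn=["+44"],
)
```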

TRIP qualifies inter-domain sessions as E-TRIP (External TRIP) sessions and
intra-domain sessions as I-TRIP (Internal TRIP) sessions. Figure 1 shows two ITADs. ITAD 1
has two Location Servers. Gateways G1 and G2 register with LS2, and gateways G3 and
G4 register with LS1. LS1 and LS2 have I-TRIP peering. LS1 peers with LS3 in ITAD 2
(E-TRIP peering).

Figure 1 TRIP operation

Internal TRIP uses a link-state mechanism to flood database updates over an arbitrary
topology, much like Open Shortest Path First (OSPF). An attempt is made to synchronize routing
information among TRIP LSs within an ITAD to maintain a single unified view. To
achieve internal synchronization, internal peer connections are configured between the LSs
of the same ITAD such that the resulting intra-domain Location Server topology is connected
and sufficiently redundant. When an update is received from an internal peer,
the routes in the update are checked to determine whether they are newer than the versions
already in the database. Newer routes are then flooded to all other peers in the same ITAD.
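The newer-route check and flooding step can be sketched as follows. The per-prefix sequence-number data model is an assumption made for illustration; actual TRIP updates carry richer attributes.

```python
# Minimal sketch of the I-TRIP flooding rule: a route from an internal
# peer is installed and re-flooded only if it is newer than the version
# already in the local database.

def flood_update(db, routes, from_peer, peers):
    """db maps prefix -> sequence number; peers are this LS's internal peers."""
    forwarded = []
    for prefix, seq in routes:
        if seq > db.get(prefix, -1):   # newer than the stored version?
            db[prefix] = seq           # install in the local database
            forwarded.append((prefix, seq))
    # flood the newer routes to every internal peer except the sender
    targets = [p for p in peers if p != from_peer]
    return forwarded, targets

db = {"+1212": 3}
fwd, tgts = flood_update(db, [("+1212", 4), ("+44", 1), ("+1212", 2)],
                         "LS2", ["LS1", "LS2", "LS3"])
```

The stale copy of "+1212" (sequence 2) is dropped, while the newer one (sequence 4) and the previously unknown "+44" are installed and forwarded.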

While updates within an ITAD are flooded to internal peers, external TRIP updates
are point-to-point, like the Border Gateway Protocol (BGP). TRIP updates received by ITAD
X from ITAD Y can be passed on to ITAD Z, with or without modification (with
X and Z not sharing any peering relation). Thus a route “advertisement” might reach a
peer after hopping through various TRIP Speakers in different ITADs.

Thus TRIP can be used for inter-domain as well as intra-domain routing. It is also
possible to use TRIP on a gateway as a registration protocol. When used in this way,
TRIP runs on the gateway in a “send-only” mode, only sending routing
information (prefixes supported by the gateway) to its peer (a Location Server).

TRIP – Telephony Routing over IP protocol


Virtual Humans in Virtual Environments

The participant should animate his virtual human representation in real
time; however, human control is not straightforward: the complexity
of the virtual human representation requires a large number of degrees of
freedom to be tracked. In addition, interaction with the environment
increases this difficulty even more. Therefore, human control should
use higher-level mechanisms to animate the representation
with maximal facility and minimal input. We can divide virtual
humans according to the methods used to control them:

  1. Directly controlled virtual humans
  2. User-guided virtual humans
  3. Autonomous virtual humans
  4. Interactive Perceptive Actors

Directly controlled virtual humans

A complete representation of the participant’s virtual body should have
the same movements as the real participant body for more immersive
interaction. This can be best achieved by using a large number of
sensors to track every degree of freedom in the real body.
However, many of the current VE systems use head and hand tracking.
Therefore, the limited tracking information should be connected with
human model information and different motion generators in order to
“extrapolate” the joints of the body which are not tracked. This is
more than a simple inverse kinematics problem, because there are
generally multiple solutions for the joint angles that reach the same
position, and the most realistic posture should be selected. In
addition, the joint constraints should be considered for setting the
joint angles.
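As a toy illustration of the last point, candidate joint angles produced by such an extrapolation can be clamped to per-joint limits before a posture is accepted. The joints and limit values below are made up for the sketch, not anatomical data.

```python
# Toy illustration: candidate joint angles produced when extrapolating
# untracked joints are clamped to per-joint limits before the posture
# is accepted. The limit values are invented for this example.

JOINT_LIMITS = {"elbow": (0.0, 150.0), "knee": (0.0, 160.0)}  # degrees

def clamp_posture(posture):
    """Clamp each candidate joint angle into its legal range."""
    clamped = {}
    for joint, angle in posture.items():
        lo, hi = JOINT_LIMITS[joint]
        clamped[joint] = min(max(angle, lo), hi)
    return clamped

demo = clamp_posture({"elbow": 170.0, "knee": -10.0})  # elbow -> 150.0, knee -> 0.0
```

A real system would additionally score the remaining feasible postures for realism, since clamping alone does not choose among the multiple inverse kinematics solutions.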

Guided virtual humans

Guided virtual humans are those which are driven by the user but which
do not correspond directly to the user motion. They are based on the
concept of the real-time direct metaphor, a method consisting of
recording input data from a VR device in real time, allowing us to
produce effects of a different nature that still correspond to the input data.
There is no analysis of the real meaning of the input data. The
participant uses the input devices to update the transformation of the
eye position of the virtual human. This local control is used by
computing the incremental change in the eye position, and estimating
the rotation and velocity of the body center. The walking motor uses the
instantaneous velocity of motion to compute the walking cycle length
and time, from which it computes the joint angles of the whole body. The
sensor information for walking can be obtained from various types of
input devices, such as special gestures with a DataGlove or a SpaceBall,
as well as other input methods.
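The walking motor's velocity-to-cycle computation can be sketched as below. The linear stride relation and its constants are illustrative assumptions, not the model used in the original system.

```python
# Hedged sketch of the walking motor: derive a walking-cycle length and
# duration from the instantaneous body velocity. The stride relation
# and constants are invented for illustration.

def walking_cycle(speed_m_s):
    """Return (cycle_length_m, cycle_time_s) for a given forward speed."""
    if speed_m_s <= 0:
        return 0.0, 0.0
    cycle_length = 0.5 + 0.4 * speed_m_s   # faster walk -> longer stride
    cycle_time = cycle_length / speed_m_s  # seconds to complete one cycle
    return cycle_length, cycle_time
```

The joint angles for the whole body would then be sampled along this cycle, which is the step the sketch omits.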

Autonomous virtual humans

Autonomous actors are able to have a behavior, which means they must
have a manner of conducting themselves. The virtual human is assumed
to have an internal state built from its goals and sensor
information from the environment; the participant modifies this
state by defining high-level motivations and state changes. Typically,
the actor should perceive the objects and the other actors in the
environment through virtual sensors: visual, tactile, and auditory
sensors. Based on the perceived information, the actor’s behavioral
mechanism determines the actions he will perform. An actor may
simply evolve in his environment, interact with this
environment, or even communicate with other actors. In this latter case,
we consider the actor an interactive perceptive actor.

The concept of virtual vision was first introduced by Renault
as a main information channel between the environment and the virtual
actor. The synthetic actor perceives his environment from a small
window in which the environment is rendered from his point of view. As
he can access z-buffer values of the pixels, the color of the pixels and
his own position, he can locate visible objects in his 3D environment. To
recreate virtual audition, a model of the sound environment is required
in which the Virtual Human can directly access positional and semantic
sound source information for an audible sound event. For
virtual tactile sensors, our approach is based on spherical multisensors
attached to the articulated figure. A sensor is activated for any
collision with other objects. These sensors have been integrated in a
general methodology for automatic grasping.

Interactive Perceptive Actors

We define an interactive perceptive synthetic actor as an actor
aware of other actors and real people. Such an actor is also assumed to
be autonomous, of course. Moreover, he is able to communicate
interactively with the other actors, whatever their type, and with real
people. For example, Emering et al. describe how a directly controlled
Virtual Human performs fight gestures which are recognized by an
autonomous virtual opponent.


DigiDoc security model

The general security model of the DigiDoc and OpenXAdES ideology works by obtaining
proof of validity of the signer’s X.509 digital certificate issued by a certificate authority (CA) at
the time of signature creation.

This proof is obtained in the form of an Online Certificate Status Protocol (OCSP) response
and stored within the signed document. Furthermore, a hash of the created signature is sent
within the OCSP request and received back within the response. This allows a positive
OCSP response to be interpreted as “at the time I saw this digitally signed file, the
corresponding certificate was valid”.

The OCSP service acts as a digital e-notary, confirming signatures created locally with a
smart card. On the infrastructure side, this security model requires a standard OCSP
responder. The hash of the signature is placed in the “nonce” field of the OCSP request
structure. In order to obtain the freshest certificate validity information, it is recommended
to run the OCSP responder in “real-time” mode, meaning that:

  • certificate validity information is obtained from a live database rather than from a
    CRL (Certificate Revocation List)
  • the time value in the OCSP response is actual (as precise as possible)
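The nonce trick described above can be sketched in a few lines. Building and transmitting a real OCSP request (for example with a library such as `cryptography`) is omitted; the signature bytes and the choice of SHA-1 are placeholders for illustration.

```python
import hashlib

# Sketch of the nonce binding: the hash of the freshly created
# signature value becomes the OCSP request nonce, tying the validity
# proof to this particular signature. SHA-1 and the signature bytes
# are assumptions for this sketch.
signature_value = b"...DER-encoded signature bytes..."
nonce = hashlib.sha1(signature_value).digest()  # goes into the request's nonce field

# The responder echoes the nonce inside the signed response; a verifier
# later recomputes the hash and compares it with the echoed value.
echoed_nonce = nonce                            # stand-in for the response field
assert echoed_nonce == hashlib.sha1(signature_value).digest()
```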

To achieve long-term validity of digital signatures, a secure log system is employed within the
model. All OCSP responses and changes in certificate validity are securely logged to
preserve digital signature validity even after a private key compromise of the CA or OCSP
responder. It is important to note that additional time-stamps are not necessary when
employing the security model described:

  • the time of signing and the time of obtaining validity information are indicated in the
    OCSP response
  • the secure log provides long-term validity without the need for archival
    timestamps


Basics of Digital Watermarking

Digital watermarking can be used to embed various types of data, depending on
the particular application and intended use. For example, a watermark in a
digital movie file might simply identify the name or version of the movie.
Alternatively, it might convey copyright or licensing information from the
movie’s creator. Or it might embed a customer or transaction number that could
be used to identify individual payment or transaction data relating to that
particular copy of the movie. But the number of bits that can be contained in a
watermark today is typically modest – enough to provide some basic codes
or identifiers, but not enough to include the equivalent of a full sentence of text.

The general elements of a digital watermarking system are as follows.

1. Embedding of watermark in content – Every watermarking application
starts by placing a watermark into digital content. This involves modifying
the content using a special algorithm. The algorithm translates the data to
be conveyed by the watermark into specific, subtle modifications to the
content.

2. Subsequent reading of watermark by device/software – Every
watermarking application includes some capability for the embedded
watermarks to be subsequently recognized. Recognizing the watermark
requires knowledge of the algorithm used to embed it, because the reader
device or software needs to know what modifications to look for. Therefore,
readers are system‑ or vendor‑specific; there are no readers capable of
recognizing and deciphering all watermarks from all watermarking vendors.

3. Back‑end database for determining meaning of watermark – Most
watermarking applications involve maintaining a database for storing and
looking up data associated with specific watermarks. For example, the
information contained in a watermark itself might be simply a serial
number, while the database would enable that serial number to be correlated
with rights information or a specific consumer. Similarly, the information in
a watermark might consist of some type of coded message, requiring access
to the database to decode its meaning.

4. Actions triggered upon reading of watermark – In many watermarking
applications, the recognition or reading of a watermark triggers or enables
some type of action. Some actions may occur automatically, via
appropriately programmed hardware or software that looks for watermarks
and responds in predetermined ways. Other actions may depend on the
individualized decisions and responses of people to whom the information
in the watermark has been communicated.
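Elements 1 and 2 above can be illustrated with a toy least-significant-bit scheme: embedding rewrites sample LSBs with subtle modifications, and reading requires knowing that same algorithm. Real watermarking algorithms are far more subtle and robust; this only shows the embed/read symmetry.

```python
# Toy least-significant-bit scheme illustrating elements 1 and 2:
# embedding rewrites each sample's LSB with one payload bit, and
# reading back requires knowing that same algorithm.

def embed(samples, bits):
    """Overwrite the LSB of each sample with one payload bit."""
    return [(s & ~1) | b for s, b in zip(samples, bits)]

def read(samples, n):
    """Recover n payload bits from the sample LSBs."""
    return [s & 1 for s in samples[:n]]

content = [200, 133, 78, 41, 250, 96]   # e.g. audio or pixel samples
payload = [1, 0, 1, 1, 0, 1]            # a short identifier code
marked = embed(content, payload)        # samples change by at most 1
```

In a full system, the recovered bits would then be looked up in the back-end database (element 3) to determine their meaning.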


WSQ Query Processing

Even with an ideal virtual table interface, traditional execution of queries involving WebCount or
WebPages would be extremely slow due to many high-latency calls to one or more Web search engines.
Optimizations can reduce the number of external calls, and caching techniques are
important for avoiding repeated ones. But these approaches can only go so far—even after
extensive optimization, a query involving WebCount or WebPages must still issue some number of search engine calls.

In many situations, the high latency of the search engine will dominate the entire execution time of the
WSQ query. Any traditional non-parallel query plan involving WebCount or WebPages will be forced to
issue Web searches sequentially, each of which could take one or more seconds, and the query processor
is idle during each request. Since Web search engines are built to support many concurrent requests, a
traditional query processor is making poor use of available resources.

Thus, we want to find a way to issue as many concurrent Web searches as possible during query
processing. While a parallel query processor (such as Oracle, Informix, Gamma, or Volcano)
is a logical option to evaluate, it is also a heavyweight approach for our problem. For
example, suppose a query requires 50 independent Web searches (for 50 U.S. states, say).
To perform all 50 searches concurrently, a parallel query processor must not only dynamically
partition the problem in the correct way, it must then launch 50 query threads or processes.
Supporting concurrent Web searches during query processing is a problem of restricted scope
that does not require a full parallel DBMS.
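The concurrency this paragraph calls for can be sketched with a thread pool. The `web_count` function here is a stand-in for a high-latency search-engine call, not WSQ's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of the idea motivating asynchronous iteration: issue the
# independent searches concurrently rather than one at a time.

def web_count(term):
    return len(term)  # stand-in: a real call would query a search engine

states = ["Alabama", "Alaska", "Arizona"]  # in practice, all 50 states
with ThreadPoolExecutor(max_workers=50) as pool:
    counts = list(pool.map(web_count, states))
```

With 50 one-second searches, overlapping the requests this way turns roughly 50 seconds of sequential latency into about one second of wall-clock time, which is the gap asynchronous iteration targets without the machinery of a full parallel DBMS.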

In the remainder of this section we describe asynchronous iteration, a new query processing
technique that can be integrated easily into a traditional non-parallel query processor to
achieve a high number of concurrent Web searches with low overhead. Asynchronous iteration is
in fact a general query processing technique that can be used to handle a high number of
concurrent calls to any external sources. (In future work, we plan to compare asynchronous
iteration against the performance of a parallel query processor over a range of queries
involving many calls to external sources.) As described in the following subsections,
asynchronous iteration also opens up interesting new query optimization problems.


The Role of Law versus Ethics

The law consists of rules that are recognized by a society and enforceable
by some authority. It can impose affirmative obligations to act
in certain ways or require people to refrain from certain actions. Although
laws are informed by ethics, they are not equivalent and therefore laws
aren’t entirely congruent with societal ethical norms. For example, we
might agree that lying to a friend is unethical, but lying to a friend is not
illegal. Lying under oath, on the other hand, is always illegal. Legal and
ethical considerations matter to security research in several ways:

• Adherence to ethical principles might be required to meet regulatory or
legal requirements (for example, common rule). Conversely, knowing
and respecting existing laws might be required by an ethical code (such
as ACM).

• A law might identify an individual party’s rights and responsibilities,
and clarify the line between beneficial acts and harmful ones by defining
harm.

• Ethical principles that are adopted by the computer security research
community can inform judicial, legislative, and regulatory decisions.

• Where a law is ill-fitting or its interpretation unclear, ethics creates an
objective and consistent way for us to reason about the acceptability of
our actions.
