Archive for February, 2012

Quantum Computers

Around 2030, computers might no longer contain any transistors or chips. Think of a computer that is much faster than a common classical silicon computer. This might be a quantum computer. Theoretically, it could run without energy consumption and be billions of times faster than today's PIII computers. Scientists are already thinking about the quantum computer as the next generation of classical computers.

Gershenfeld says that if transistors continue to shrink at the same rate as in past years, then by the year 2020 the width of a wire in a computer chip will be no more than the size of a single atom. At these sizes, the rules of classical physics no longer apply, so computers based on today's chip technology will not continue to get cheaper and better. Because of its great power, the quantum computer is an attractive next step in computer technology.

The technology of quantum computers is also very different. For its operation, a quantum computer uses quantum bits (qubits), which have a quaternary nature. The laws of quantum mechanics are completely different from the laws of classical physics. A qubit can exist not only in the states corresponding to the logical values 0 or 1, as in the case of a classical bit, but also in a superposition state.

A qubit is a bit of information that can be both zero and one simultaneously (a superposition state). Thus, a computer working on qubits rather than standard bits can make calculations using both values at once. A qubyte, made up of eight qubits, can hold all values from zero to 255 simultaneously. "Multi-qubyte systems have a power beyond anything possible with classical computers."

Forty qubits could have the same power as modern supercomputers. According to Chuang, a supercomputer would need about a month to find a phone number in a database consisting of the world's phone books, whereas a quantum computer could solve the task in 27 minutes.
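To make the superposition idea concrete: an n-qubit register can be modeled classically as a vector of 2^n complex amplitudes, one per basis value. The sketch below is our own illustration (the function name is not from any source); it shows that a qubyte in an equal superposition carries an amplitude for every value from 0 to 255 at once.

```python
import math

def uniform_superposition(n_qubits):
    """State vector of n qubits in an equal superposition:
    2**n amplitudes, one per classical value the register can hold."""
    dim = 2 ** n_qubits
    amp = 1 / math.sqrt(dim)
    return [amp] * dim

# A qubyte (8 qubits) holds amplitudes for all values 0..255 at once.
qubyte = uniform_superposition(8)
print(len(qubyte))                     # 256 basis states
print(sum(a * a for a in qubyte))      # probabilities sum to 1.0
```

Measuring such a register collapses it to a single classical value, which is why quantum speedups require algorithms that interfere the amplitudes rather than simply reading all of them out.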



Limitations of Modern Cryptosystems

Before exploring quantum key distribution, it is important to understand the state
of modern cryptography and how quantum cryptography may address the limitations of
current digital cryptography. Since public key cryptography involves complex
calculations that are relatively slow, it is employed to exchange keys rather
than to encrypt voluminous amounts of data. For example, widely deployed
solutions, such as the RSA and Diffie-Hellman key negotiation schemes, are
typically used to distribute symmetric keys among remote parties. Because
asymmetric encryption is significantly slower than symmetric encryption, many
institutions prefer a hybrid approach: a public key system secures the initial
exchange of the symmetric key, and the shared key system then encrypts the bulk
of the traffic. Thus, this approach exploits the speed and performance of a
symmetric key system while leveraging the scalability of a public key
infrastructure.
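The hybrid approach can be sketched with a toy Diffie-Hellman exchange. The parameters below are far too small to be secure and merely stand in for real negotiated groups; the point is only that both parties derive the same symmetric session key without ever transmitting it.

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters (illustrative only; real deployments use
# primes of 2048+ bits from standardized groups).
p = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59, a small prime stand-in
g = 5

# Each party picks a private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
A, B = pow(g, a, p), pow(g, b, p)

# Both sides derive the same shared secret from the other's public value.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob

# The shared secret is hashed into a symmetric session key, which then
# encrypts the bulk traffic with a fast symmetric cipher.
session_key = hashlib.sha256(shared_alice.to_bytes(8, "big")).digest()
```

An eavesdropper who sees only p, g, A, and B must solve a discrete-logarithm problem to recover the shared secret, which is the unproven hardness assumption discussed next.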

However, public key cryptosystems such as RSA and Diffie-Hellman are not
based on concrete mathematical proofs. Rather, these algorithms are
considered reasonably secure based on years of public scrutiny of the
fundamental problem of factoring large integers into their primes, which is said to
be "intractable". In other words, by the time the encryption algorithm could be
defeated, the information being protected would have already lost all of its value.
Thus, the power of these algorithms rests on the fact that there is no known
mathematical method for quickly factoring very large numbers given today's
computer processing power.
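To see why factoring is the bottleneck, consider the most naive attack, trial division, which takes on the order of sqrt(N) steps. The toy factorizer below is our own illustration; it handles small numbers instantly but would take longer than the age of the universe on an RSA-sized modulus of hundreds of digits.

```python
def trial_factor(n):
    """Factor n by trial division: roughly sqrt(n) steps in the worst
    case, which is hopeless at cryptographic sizes."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_factor(3 * 5 * 17))   # [3, 5, 17]
```

Better classical algorithms (e.g., the general number field sieve) improve on this substantially, but all known classical methods remain super-polynomial, and that gap is exactly what the security argument relies on.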

Second, it is uncertain whether a theorem that can factor large numbers into
their primes in a timely manner may be developed in the future, or is perhaps
already available. At present, there is no proof that it is impossible
to develop such a factoring theorem. As a result, public key systems are
vulnerable to the uncertainty regarding the future creation of such a theorem,
which would have a significant effect on the algorithms being mathematically
intractable. This uncertainty poses a potential risk to areas of national security
and intellectual property which require perfect security.

In sum, modern cryptography is vulnerable both to technological progress in
computing power and to advances in mathematics that could quickly reverse one-way
functions such as factoring large integers. If a factoring theorem were
publicized, or computing became powerful enough to defeat public key cryptography,
then businesses, governments, militaries, and other affected institutions would have
to spend significant resources to research the risk of damage and potentially
deploy a new and costly cryptography system quickly.


Existing Graph Based Stream Authentication Methods Using Crypto Signatures

For graph-based authentication, the main challenge is how to design a Directed Acyclic Graph (DAG)
with the lowest overhead, the highest verification probability, and the lowest sender and receiver delay.
However, there are tradeoffs among these performance criteria, which are summarized below.

  • Computation complexity: The number of hash operations and signature operations required at the
    sender and receiver. Note that computing a signature is much more complex than computing a hash.
  • Overhead size: The extra bytes introduced by stream authentication, including the hashes and
    signatures appended to the packets. The overhead size is determined by the number of edges in
    the authentication graph. Note that a signature is much bigger in size than a hash.
  • Verification percentage (or verification probability): The percentage of verifiable packets
    among all the received packets. Intuitively, the more redundant paths a packet has to the
    signature packet, the higher the probability of being verified.
  • Sender delay: The delay at the sender (in number of packets) from the time when the packet
    is produced by the encoder to the time that all authentication data appended to this packet
    is ready. Real-time communication scenarios require low sender delay. For non-real-time
    scenarios, e.g., pre-encoded content for VOD applications, sender delay is unimportant because
    the sender has a priori knowledge of all packets.
  • Receiver delay: The delay at the receiver (in number of packets) from the time a packet is
    received to the time that it can be verified. For authenticated video, each packet must be
    received and pass the verification before its playout deadline.
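The simplest instance of such a DAG is a hash chain: each packet carries the hash of its successor, and only the first packet is signed. The sketch below is our own minimal illustration, with an HMAC standing in for a real digital signature; note how it sits at one extreme of the tradeoffs above, with just one hash of overhead per packet but no redundant paths, so a single lost packet makes everything after it unverifiable.

```python
import hashlib
import hmac

SIGN_KEY = b"demo-signing-key"   # stand-in for a real signature scheme

def sign(data):
    return hmac.new(SIGN_KEY, data, hashlib.sha256).digest()

def build_chain(payloads):
    """Hash chain: packet i carries H(packet i+1); only packet 0 is signed,
    amortizing one signature over the whole stream."""
    packets = []
    next_hash = b""
    for payload in reversed(payloads):
        pkt = payload + next_hash
        next_hash = hashlib.sha256(pkt).digest()
        packets.append(pkt)
    packets.reverse()
    return packets, sign(packets[0])

def verify_chain(packets, signature):
    if not hmac.compare_digest(sign(packets[0]), signature):
        return False
    for cur, nxt in zip(packets, packets[1:]):
        if cur[-32:] != hashlib.sha256(nxt).digest():
            return False   # a lost or altered packet breaks all that follow
    return True

packets, sig = build_chain([b"frame1", b"frame2", b"frame3"])
print(verify_chain(packets, sig))   # True
```

Published schemes (e.g., EMSS or butterfly-graph constructions) add redundant edges precisely to raise the verification percentage under loss, at the cost of more overhead per packet.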


System Architecture and XDP Interfaces

The XTC database engine (XTC server) adheres to the widely used five-layer DBMS architecture. In Figure 1, we concentrate on the representation and mapping of XML documents. The file-services layer operates on the bit pattern stored on external, non-volatile storage devices. In collaboration with the OS file system, the I/O managers store the physical data in extensible container files; their uniform block length is configurable to the characteristics of the XML documents to be stored. A buffer manager per container file handles the fixing and unfixing of pages in main memory and provides a replacement algorithm for them, which can be optimized to the anticipated reference locality inherent in the respective XDP applications.

Using pages as basic storage units, the record, index, and catalog managers form the access services. The record manager maintains the tree-connected nodes of XML documents as physically adjacent records in a set of pages. Each record is addressed by a unique lifetime ID managed within a B-tree by the index manager. This is essential for fine-grained concurrency control, which requires lock acquisition on uniquely identifiable nodes. The catalog manager provides the database metadata. The node manager, implementing the navigational access layer, transforms the records from their internal physical representation into an external one, thereby managing the lock acquisition needed to isolate concurrent transactions. The XML-services layer contains the XML manager, which is responsible for declarative document access, e.g., evaluation of XPath queries or XSLT transformations.

Figure 1 XTC architecture overview
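The fix/unfix protocol of the per-container buffer manager described above can be sketched as follows. This is a generic illustration with hypothetical names and a plain LRU replacement policy; XTC's actual interfaces and its locality-tuned policy may differ.

```python
from collections import OrderedDict

class BufferManager:
    """Minimal fix/unfix page buffer with LRU replacement (illustrative
    only). A fixed page is pinned in memory and cannot be evicted."""
    def __init__(self, capacity, read_page):
        self.capacity = capacity
        self.read_page = read_page     # fetches a page from the container file
        self.frames = OrderedDict()    # page_no -> (data, fix_count), LRU first

    def fix(self, page_no):
        if page_no in self.frames:
            data, fixes = self.frames.pop(page_no)
            self.frames[page_no] = (data, fixes + 1)   # move to MRU end
            return data
        if len(self.frames) >= self.capacity:
            self._evict()
        data = self.read_page(page_no)
        self.frames[page_no] = (data, 1)
        return data

    def unfix(self, page_no):
        data, fixes = self.frames[page_no]
        self.frames[page_no] = (data, fixes - 1)

    def _evict(self):
        for page_no, (_, fixes) in self.frames.items():  # LRU order
            if fixes == 0:                               # only unfixed pages
                del self.frames[page_no]
                return
        raise RuntimeError("all pages are fixed")
```

A smarter replacement policy would exploit the reference locality of XDP workloads, e.g., by keeping inner B-tree pages resident longer than leaf pages.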

The agents of the interface layer make the functionality of the XML and node services
available to common internet browsers, ftp clients, and the XTC driver, thereby
achieving declarative / set-oriented as well as navigational / node-oriented
interfaces. The XTC driver, linked to client-side applications, provides methods to
execute XPath-like queries and to manipulate documents via the SAX or DOM API. Each
API accesses the stored documents within a transaction started by the XTC
driver. Transactions can be processed in the well-known isolation levels:
read uncommitted, read committed, repeatable read, and serializable.


Two Party Threat Model and Trust Assumptions

This threat model assumes that the databases (DBs) do not trust each other and that no information
from one set of data should be learned by another data owner. We also require that the computing
engine cannot learn anything about the underlying data, the exact query (for example, the
selection conditions are protected, although the computing engine must know the type of the query to
respond accordingly), or its result. We make the following trust assumptions.

  • All DBs are honest but curious.
  • For correctness of the result, the computing engine will perform the algorithm faithfully.
  • For confidentiality of the intermediate result, the computing engine will not collude with any
    of the DBs. Moreover, there is no communication between the randomizer and the computing engine.
  • The querier can encrypt to the randomizer, the randomizer can encrypt to all DBs (as a whole,
    not necessarily individually), and all DBs can encrypt to the querier. These can be easily realized
    if the randomizer, the DBs, and the querier have authenticated public keys.
  • To prevent replay attacks and to ensure the correctness of the end result received by the querier,
    these ciphertexts should contain a cryptographic checksum, e.g., a message authentication code, so
    that no outside adversary can inject arbitrary messages into the protocol.
  • The querier is distinct from the computing engine, and it cannot collude with any of the DBs.
    This can be realized by restricting access to the querier terminal, which also conforms to
    regulations; e.g., HIPAA requires that access to hardware and software be limited to
    properly authorized individuals.
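The cryptographic-checksum assumption above can be sketched with an HMAC over the ciphertext. This is our own simplified illustration (the key handling and names are not from any source); real deployments would also bind a nonce or sequence number into the tag so that an old message cannot be replayed.

```python
import hashlib
import hmac
import secrets

MAC_KEY = secrets.token_bytes(32)   # assumed shared by sender and receiver

def protect(ciphertext):
    """Append an HMAC tag so an outside adversary cannot inject or alter
    messages in the protocol without detection."""
    tag = hmac.new(MAC_KEY, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def check(message):
    ciphertext, tag = message[:-32], message[-32:]
    expected = hmac.new(MAC_KEY, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("checksum mismatch: message rejected")
    return ciphertext

msg = protect(b"encrypted-query")
assert check(msg) == b"encrypted-query"
```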


Reusing virtual pages via Automatic Pool Allocation

In Automatic Pool Allocation, each pool created by the program at run-time is essentially a distinct
heap, managed internally using some allocation algorithm. We can use the remapping approach
described above within each pool created by the program at run-time. The key benefit is that, at a
pool_destroy, we can release all (shadow and canonical) virtual memory pages of the pool to be
reused by future allocations. Note that physical pages will continue to be reused just as in the
original program, i.e., the physical memory consumption remains the same as in the original program
(except for minor differences potentially caused by using the pool allocator on top of the original heap).

The only significant change in the Allocation and Deallocation operations described above is for reusing
virtual pages. This is slightly tricky because we need to reuse virtual pages that might have been
aliased with other virtual pages previously. One simple solution would be to use the munmap system
call to release previous virtual-to-physical mappings for all pages in a pool after a pool_destroy.
munmap would work for both canonical and shadow pages because these are obtained from the Operating
System (OS) via mmap and mremap, respectively. Canonical pages are obtained in contiguous blocks from
the underlying system (via mmap), and the blocks can be unmapped efficiently. The shadow pages,
however, are potentially scattered around the heap, and in the worst case may require a separate
munmap operation for every individual object allocated from the pool (in addition to the earlier
mprotect call when the object was freed). This could be expensive. We avoid the explicit munmap
calls by maintaining a free list of
virtual pages shared across pools and adding all pool pages to this free list at a pool_destroy. We
modified the underlying pool allocator to obtain (canonical) pages from this free list, if available. If
this free list is empty, we use mmap to obtain fresh pages from the system as before. For each allocated
object, the shadow page(s) is(are) obtained using mremap as before to ensure a distinct virtual memory page.

A pool_free operation works just like the Deallocation case described previously, and invokes the underlying
pool_free on the canonical page. A pool_destroy operation simply returns all canonical and shadow pages in
the pool to the shared free list of pages.
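The free-list bookkeeping described above can be simulated without real mmap/munmap calls; the classes below are our own illustration of the reuse logic, with integer IDs standing in for virtual pages.

```python
class PageAllocator:
    """Simulates the shared free list of virtual pages. Real code would
    back fresh pages with mmap; here page IDs stand in for addresses."""
    def __init__(self):
        self.free_pages = []   # shared across all pools
        self.next_fresh = 0

    def get_page(self):
        if self.free_pages:
            return self.free_pages.pop()   # reuse a released virtual page
        page = self.next_fresh             # otherwise "mmap" a fresh one
        self.next_fresh += 1
        return page

class Pool:
    def __init__(self, allocator):
        self.allocator = allocator
        self.pages = []

    def alloc_page(self):
        page = self.allocator.get_page()
        self.pages.append(page)
        return page

    def destroy(self):
        # pool_destroy: return every page to the shared free list in one
        # step, avoiding a separate munmap call per allocated object.
        self.allocator.free_pages.extend(self.pages)
        self.pages = []

alloc = PageAllocator()
pool1 = Pool(alloc)
pages = [pool1.alloc_page() for _ in range(3)]
pool1.destroy()
pool2 = Pool(alloc)
print(pool2.alloc_page() in pages)   # True: virtual pages are reused
```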


Advantages of an FPGA embedded processor

An FPGA embedded processor system offers many exceptional advantages over a typical
microprocessor, including:

  • customization
  • obsolescence mitigation
  • component and cost reduction
  • hardware acceleration


Customization

The designer of an FPGA embedded processor system has complete flexibility to select any combination of
peripherals and controllers. In fact, the designer can invent new, unique peripherals that can be connected
directly to the processor’s bus. If a designer has a non-standard requirement for a peripheral set, it can be
met easily with an FPGA embedded processor system. For example, a designer would not easily find an
off-the-shelf processor with ten UARTs. However, in an FPGA, this configuration is very easily implemented.

Obsolescence mitigation

Some companies, in particular those supporting military contracts, have a design requirement to ensure a
product lifespan that is much longer than the lifespan of a standard electronics product. Component
obsolescence mitigation is a difficult issue. FPGA soft-processors are an excellent solution in this case
since the source HDL for the soft-processor can be purchased. Ownership of the processor’s HDL code
may fulfill the requirement for product lifespan guarantee.

Component and cost reduction

With the versatility of the FPGA, previous systems that required multiple components can be replaced with
a single FPGA. Certainly this is the case when an auxiliary I/O chip or a co-processor is required next to an
off-the-shelf processor. By reducing the component count in a design, a company can reduce board size
and inventory management, both of which will save design time and cost.

Hardware acceleration

Perhaps the most compelling reason to choose an FPGA embedded processor is the ability to make
tradeoffs between hardware and software to maximize efficiency and performance. If an algorithm is
identified as a software bottleneck, a custom co-processing engine can be designed in the FPGA specifically
for that algorithm. This co-processor can be attached to the FPGA embedded processor through special,
low-latency channels, and custom instructions can be defined to exercise the co-processor. With modern
FPGA hardware design tools, transitioning bottlenecks from software to hardware is much easier,
since the software C code can be readily adapted into hardware with only minor changes.
