Posts Tagged software product development
Arguably the most famous bug in this class is the bug exploited by the SQL Server “Slammer” worm. The SQL
Server Resolution Service operates over a UDP protocol, by default on port 1434. It exposes a number of
functions, two of which were vulnerable to buffer overflow issues (CAN-2002-0649). These bugs were
discovered by David Litchfield of NGS. Another SQL Server problem in the same category was the “hello”
bug (CAN-2002-1123) discovered by Dave Aitel of Immunity, Inc., which exploited a flaw in the initial
session setup code on TCP port 1433.
Oracle has not been immune to this category — most recently, David Litchfield found an issue with
environment variable expansion in Oracle’s “extproc” mechanism that can be exploited without a username
or password (CAN-2004-1363). Chris Anley of NGS discovered an earlier flaw in Oracle’s extproc
mechanism (CAN-2003-0634) that allowed for a remote, unauthenticated buffer overflow. Mark Litchfield
of NGS discovered a flaw in Oracle’s authentication handling code whereby an overly long username
would trigger an exploitable stack overflow (CAN-2003-0095). David Litchfield also found a flaw in
DB2’s JDBC Applet Server (no CVE, but bugtraq id 11401) that allows a remote, unauthenticated user
to trigger a buffer overflow.
The best way to defend yourself against this class of problem is first, to patch. Second, you should
attempt to ensure that only trusted hosts can connect to your database servers, possibly enforcing
that trust through some other authentication mechanism such as SSH or IPSec. Depending on the role
that your database server is fulfilling, this may be tricky. Another possibility for defense is to
implement an Intrusion Detection System (IDS) or an Intrusion Prevention System (IPS). These kinds
of systems have been widely discussed in security literature, and are of debatable value. Although
an IDS can (sometimes) tell you that you have been compromised, it won’t normally prevent that
compromise from happening. Signature-based IDS systems are only as strong as their signature
databases and in most cases signatures aren’t written by people who are capable of writing
exploits, so many loopholes in the signatures get missed.
“True anomaly” IDS systems are harder to bypass, but as long as you stick to a protocol that’s
already in use, and keep the exploit small, you can usually slip by. Although some IDS systems
are better than others, in general you need an IDS like you need someone telling you you’ve
got a hole in the head. IDS systems will certainly stop dumber attackers, or brighter attackers
who were unlucky, so they may be worthwhile provided they complement — and don’t replace — skilled
staff, good lockdown, and good procedures. IPS systems, on the other hand, do prevent some classes
of exploit from working but again, every IPS system the authors have examined can be bypassed with
a little work, so your security largely depends on the attacker not knowing which commercial IPS
you’re using. Someone may bring out an IPS that prevents all arbitrary code execution attacks at
some point, which would be a truly wonderful thing. Don’t hold your breath waiting for it, though.
Smartphone malware with access to on-board sensors opens new avenues for the illicit
collection of private information. While existing work shows that such “sensory malware” can convey
raw sensor data (e.g., video and audio) to a remote server, these approaches lack stealthiness, incur
significant communication and computation overhead during data transmission and processing, and can
easily be defeated by existing protections like denying installation of applications with access
to both sensitive sensors and the network. We present Soundcomber, a Trojan with few and innocuous
permissions that can extract a small amount of targeted private information from the audio sensor
of the phone. Using targeted profiles for context-aware analysis, Soundcomber intelligently
“pulls out” sensitive data such as credit card and PIN numbers from both tone- and speech-based
interaction with phone menu systems. Soundcomber performs efficient, stealthy local extraction,
thereby greatly reducing the communication cost for delivering stolen data. Soundcomber
automatically infers the destination phone number by analyzing audio, circumvents known security
defenses, and conveys information remotely without direct network access. We also design and
implement a defensive architecture that foils Soundcomber, identify new covert channels
specific to smartphones, and provide a video demonstration of Soundcomber.
In essence, all audio recording and phone call requests are mediated by a reference monitor,
which can disable (blank out) the recording when necessary. The decision on when to turn off
the switch is made according to the privacy policies that forbid audio recording for a set
of user-specified phone numbers, such as those of credit-card companies. We evaluate our
prototype defensive architecture and show that it can prevent our demonstrated attacks
with minimal processing overhead.
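The mediation step can be sketched as a simple policy check. The class and method names below are illustrative placeholders for the hooks such a reference monitor would need, not the Android API or the paper's actual implementation.

```python
# Sketch of the reference-monitor idea: audio recording is blanked out
# whenever the active call matches a user-specified sensitive number.
# All names are illustrative; this is not the Android API.

class AudioReferenceMonitor:
    def __init__(self, sensitive_numbers):
        # Privacy policy: numbers (e.g., credit-card hotlines) for which
        # audio recording must be disabled.
        self.sensitive_numbers = set(sensitive_numbers)
        self.active_call = None

    def on_call_started(self, number):
        self.active_call = number

    def on_call_ended(self):
        self.active_call = None

    def allow_recording(self):
        # Deny recording while a call to a sensitive number is in progress.
        return self.active_call not in self.sensitive_numbers

    def record(self, raw_audio_frame):
        # Deliver zeroed samples instead of real audio, so a malicious
        # recorder learns nothing about the sensitive call.
        if self.allow_recording():
            return raw_audio_frame
        return bytes(len(raw_audio_frame))

monitor = AudioReferenceMonitor(sensitive_numbers={"18005551234"})
monitor.on_call_started("18005551234")
frame = monitor.record(b"\x12\x34\x56")   # -> b"\x00\x00\x00" (blanked)
monitor.on_call_ended()
```

Blanking rather than rejecting the request keeps the monitor transparent to benign applications, which simply receive silence during the protected interval.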
We now summarize our major contributions:
Targeted, context-aware information discovery from sound recordings. We demonstrate that
smartphone-based malware can easily be made aware of the context of a phone
conversation, which allows it to selectively collect high-value information. This is
achieved through techniques we developed to profile the interactions with a phone menu,
and recover digits either through a side-channel in a mobile phone or by recognizing
speech. We also show how only limited permissions are needed and how Soundcomber
can determine the destination number of the phone call through IVR fingerprinting.
Stealthy data transmission. We studied various channels on the smartphone platform
that can be used to bypass existing security controls, including data transmission
via a legitimate network-facing application, a path that is not mediated by
existing approaches, and different types of covert channels. We also discovered
several new channels, such as vibration / volume settings, and demonstrated that
covert channel information leaks are completely realistic on smartphones.
Implementation and evaluation. We implemented Soundcomber on an Android phone and
evaluated our technique using realistic phone conversation data. Our study shows that
an individual’s credit-card number can be reliably identified and stealthily disclosed.
Therefore, the threat of such an attack is real.
Defensive architecture. We discuss security measures that could be used to mitigate
this threat, and in particular, we designed and implemented a defensive architecture
that prevents any application from recording audio to certain phone numbers specified
by privacy policies.
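The tone-based digit recovery described above can be approximated with the classic Goertzel algorithm, which measures signal power at the eight DTMF frequencies. The sketch below is a generic illustration of that technique, not Soundcomber's actual implementation; the block size and detection rule are common textbook choices.

```python
import math

def goertzel_power(samples, sample_rate, f):
    """Power of the sample block at target frequency f (Hz), via Goertzel."""
    n = len(samples)
    k = round(n * f / sample_rate)              # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s2 * s2 + s1 * s1 - coeff * s1 * s2

ROW_FREQS = (697, 770, 852, 941)                # DTMF low-frequency group
COL_FREQS = (1209, 1336, 1477)                  # DTMF high-frequency group
KEYPAD = (("1", "2", "3"), ("4", "5", "6"), ("7", "8", "9"), ("*", "0", "#"))

def detect_dtmf_digit(samples, sample_rate=8000):
    # Pick the strongest row and column frequency and map to the keypad.
    row = max(range(4), key=lambda i: goertzel_power(samples, sample_rate, ROW_FREQS[i]))
    col = max(range(3), key=lambda j: goertzel_power(samples, sample_rate, COL_FREQS[j]))
    return KEYPAD[row][col]

# Synthesize ~26 ms of the DTMF tone for "5" (770 Hz + 1336 Hz) and decode it.
fs, n = 8000, 205                               # n = 205 is the classic block size at 8 kHz
tone = [math.sin(2 * math.pi * 770 * t / fs) + math.sin(2 * math.pi * 1336 * t / fs)
        for t in range(n)]
print(detect_dtmf_digit(tone))                  # -> 5
```

Running such a detector locally on the phone is what allows the malware to transmit a few recovered digits rather than the raw audio, which is the source of the large reduction in communication cost claimed above.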
The main idea behind the scalable extension of H.264/AVC is to take the block based hybrid video
coding scheme one step further and achieve spatiotemporal and signal-to-noise-ratio (SNR)
scalability. The term scalability in the video coding context means that physically meaningful
video information can be recovered by decoding only a portion of the compressed bit stream.
For example, one should be able to recover from the compressed bit stream a video with lower
resolution than the original by decoding only the lowest spatial layer and discarding other
spatial layers. In SVC, scalability is achieved by taking advantage of the layered approach.
The structure of the encoding depends on which kind of scalability is needed. For example, Figure 1
depicts the block diagram of an SVC encoder with two spatial layers, which contain additional
SNR enhancement layers.
In each spatial layer, hierarchical motion compensation and prediction are performed. The redundancy
between adjacent pictures and layers is exploited through inter- and intra-prediction techniques. After
motion compensated prediction, transform coding is applied using the same transformation
techniques as in the H.264/AVC standard. SNR (quality) scalability is achieved by progressively coding
refinements of the transform coefficients. These progressively coded slices can then be truncated at
any position within each slice, with the user-perceived visual quality improving in proportion to the
number of bits retained in the truncated slice. Meanwhile, temporal scalability is achieved using
hierarchical B pictures, which provide a
predictive structure already included in H.264/AVC. Motion compensated temporal filtering can also
be used but it is, for the time being, included as a non-normative option only for achieving
temporal scalability. An example of a hierarchical coding structure for a group of pictures (GOP)
of length eight is illustrated in Figure 2. All of these scalability modes can be
combined to achieve three dimensional (spatial, temporal and SNR) scalability.
Figure 1: Block diagram for the H.264 scalable extension
Figure 2: Hierarchical GOP structure
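The dyadic hierarchy of Figure 2 can be illustrated with a short sketch that assigns each picture in a GOP of eight its temporal layer; dropping all pictures above a chosen layer halves the frame rate for each layer removed.

```python
def temporal_layer(poc, gop_size=8):
    """Temporal layer of a picture (by display order) in a dyadic
    hierarchical-B GOP: key pictures (multiples of gop_size) form
    layer 0, and each halving of the picture interval adds a layer."""
    levels = gop_size.bit_length() - 1       # log2(gop_size), e.g. 3 for GOP 8
    if poc % gop_size == 0:
        return 0
    # Number of times 2 divides poc (count of trailing zero bits).
    tz = (poc & -poc).bit_length() - 1
    return levels - tz

# For a GOP of 8 pictures (display order 0..8):
layers = [temporal_layer(p) for p in range(9)]
print(layers)  # [0, 3, 2, 3, 1, 3, 2, 3, 0]
```

Decoding only layers 0 and 1, for example, keeps pictures {0, 4, 8} and yields a quarter of the full frame rate.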
The dependency between layers in scalable video coding
Layers in scalable video coding are classified as a base layer and enhancement layer(s). In SVC, the
base layer can be decoded using a standard H.264/AVC decoder. Information from lower layers is used
to remove the redundancy between different layers. This increases coding efficiency, but it also
increases the importance of the lowest layers during the decoding process and reduces error resiliency.
If the base layer or one of the most important layers is lost, the less important layers are useless,
because decoding them requires data from the more important layers. This dependency between
layers makes prioritizing the different layers during transmission a natural fit. The base layer
also usually needs less transmission bandwidth than the enhancement layers, which is also quite
important when allocating resources to different prioritization classes. Building on this suitability
of SVC layers for prioritization, we propose a mechanism for adapting video transmission to rapidly
changing wireless channel and network conditions. One of the main requirements for the architecture
is to be general enough to work with different access networks from IEEE 802.11 (WiFi) and
IEEE 802.16 (WiMAX) to 3GPP and UMTS.
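A minimal sketch of the prioritized adaptation just described: given the currently estimated channel rate, transmit the base layer first and add enhancement layers in dependency order while they fit. The layer names and bitrates below are illustrative assumptions, not figures from any particular encoding.

```python
# Each layer: (name, bitrate in kbit/s), listed in dependency order with
# the base layer first. All rates here are illustrative only.
LAYERS = [
    ("base (QCIF, low SNR)",      128),
    ("SNR enhancement 1",         192),
    ("spatial enhancement (CIF)", 384),
    ("SNR enhancement 2",         256),
]

def select_layers(available_kbps):
    """Keep the base layer plus as many enhancement layers as fit,
    never skipping a layer (later layers depend on earlier ones)."""
    chosen, used = [], 0
    for name, rate in LAYERS:
        if used + rate > available_kbps:
            break                   # dependent layers would be useless anyway
        chosen.append(name)
        used += rate
    return chosen

print(select_layers(800))   # base + SNR enh. 1 + spatial enh. (704 kbit/s used)
print(select_layers(100))   # channel too poor even for the base layer
```

Because a layer is only ever sent together with everything it depends on, dropping from the top of the list degrades quality gracefully instead of producing undecodable data.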
The e-negotiation process model and protocol provide a plausible method for building negotiation
systems and remove redundant activities from the system. The model, with the activities related to
each phase, is presented in Fig. 1. The negotiation process model is formed by goal-driven negotiation
and by requirements assigned to agents. Agents are able to improve their behaviors over time: they may
get better with experience at selecting and achieving goals by taking correct actions, and so they can
take both proactive and reactive negotiation actions.
Nowadays, the e-market has an effectively unlimited number of participants, an exponentially growing
amount of available information, information transparency, information overload, different negotiation
mechanisms, price wars among buyers and among sellers, and all the complexity of modern business trade.
Intelligent-agent-based automated negotiation systems are therefore promising technologies that can
play an important role. Intelligent agent systems can adopt various mechanisms (e.g., bidding, auction,
bargaining, and arbitration), discover and collect information, select negotiators, generate and
evaluate proposals, play the role of an advisor and provide effective suggestions, fully understand the
owner's requirements and preferences, coordinate interdependent relationships between various
negotiations, build comprehensive user models, and finally establish trust between both users. In
addition, intelligent agents allow a negotiator to play the roles of buyer and seller at the same time.
Our proposed design takes these notions into account, and the results of this research show that the
proposed intelligent agent architecture reduces negotiation time and provides rapid responses for the
agents' owners. Next, we explain the design of the intelligent agent architecture in detail.
In order to explain the architecture, we first describe the types of agents employed in it and then
present the buyer and seller agent architectures.
Figure 3. Seller Agent Architecture based on Intelligent Agents
In the proposed model, the owner profile agent represents the owner's goals and aids the negotiator in
deciding on goals and strategies. In addition, agents of this type are able to adapt to changes in the
owner's behavior during the negotiation process. The searcher agent searches for potential
buyers or sellers that are in other distributed environments and performs the role of managing,
querying, or collating information from many distributed sources. A simple rule-matching mechanism is
developed to extract relevant negotiator information from the search results. This agent also performs
post-processing on the retrieved items and gives a list of qualified buyers or sellers with the
offer. The information mediator agent is involved in actively searching, fetching, filtering, and
delivering information relevant to the issues at hand from the market knowledge base. This agent type
is also able to identify the objectives, preferences, and strategies of the opponent. The recommender agent is
able to generate a set of likely offers to be considered for submission to the opponent. The purpose
of the advisor agent is to evaluate the offers received from the opponent and provide feedback on the
drawbacks and, possibly, benefits of these offers. The negotiator agent is responsible for negotiating with
potential buyers or sellers, with respect to the preferences that have been collected from a group of
participants. Here, the goal is to bargain with candidate sellers or buyers for the best offer
that satisfies the most demands and preferences of group members. In addition, this agent may be
capable of conducting negotiations by itself in a semi-autonomous or fully autonomous mode.
The applicability of full automation depends on the degree of certainty in the objectives, preferences,
and tactics of the negotiator; in effect, this agent optimizes the buyer or seller utility based upon
the owner's requirements and constraints. The buyer or seller mediator agent delivers status messages
about active services between negotiator agents and between peer agents. In addition, this agent can
play the role of an expert agent; in a sense, it is an intelligent administrator agent.
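The division of labor among these agents can be summarized in a schematic pipeline. The class names, the matching rule, and the scoring logic below are illustrative placeholders for the roles described, not an implementation of the cited system.

```python
# Schematic of the agent roles in the proposed negotiation architecture.
# All names, prices, and rules are illustrative assumptions.

class SearcherAgent:
    def find_candidates(self, market):
        # Stand-in for rule matching: keep sellers offering the wanted item.
        return [s for s in market if s["item"] == "widget"]

class RecommenderAgent:
    def generate_offers(self, budget):
        # Propose a few price points up to the owner's budget.
        return [round(budget * f, 2) for f in (0.7, 0.85, 1.0)]

class AdvisorAgent:
    def evaluate(self, counter_offer, budget):
        # Feedback on an opponent's offer: benefit or drawback.
        return "acceptable" if counter_offer <= budget else "over budget"

class NegotiatorAgent:
    def negotiate(self, candidates, budget):
        # Bargain with each candidate using the recommender's offers;
        # accept the first offer a seller would take.
        offers = RecommenderAgent().generate_offers(budget)
        for seller in candidates:
            for offer in offers:
                if offer >= seller["ask"]:
                    return seller["name"], offer
        return None

market = [{"name": "S1", "item": "widget", "ask": 90},
          {"name": "S2", "item": "gadget", "ask": 50}]
candidates = SearcherAgent().find_candidates(market)
print(NegotiatorAgent().negotiate(candidates, budget=100))  # -> ('S1', 100.0)
```

The point of the sketch is the separation of concerns: searching, recommending, advising, and negotiating are independent agents that can be replaced or distributed without changing the others.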
As more individuals transmit data through a computer network, the quality of service received by the users
begins to degrade. A major aspect of computer networks that is vital to quality of service is data routing.
A more effective method for routing data through a computer network can assist with the new problems being
encountered with today’s growing networks. Effective routing algorithms use various techniques to determine
the most appropriate route for transmitting data. Determining the best route through a wide area network
(WAN), requires the routing algorithm to obtain information concerning all of the nodes, links, and devices
present on the network. The most relevant routing information involves various measures that are often
obtained in an imprecise or inaccurate manner, thus suggesting that fuzzy reasoning is a natural method to
employ in an improved routing scheme. The neural network is deemed a suitable accompaniment because it
maintains the ability to learn in dynamic situations.
Once the neural network is initially designed, any alterations in the computer routing environment can
easily be learned by this adaptive artificial intelligence method. The capability to learn and adapt is
essential in today’s rapidly growing and changing computer networks. These techniques, fuzzy reasoning
and neural networks, when combined together provide a very effective routing algorithm for computer
networks. Computer simulation is employed to demonstrate that the new fuzzy routing algorithm outperforms the
Shortest Path First (SPF) algorithm in most computer network situations. The benefits increase as the
computer network migrates from a stable network to a more variable one. The advantages of applying this
fuzzy routing algorithm are apparent when considering the dynamic nature of modern computer networks.
Applying artificial intelligence to specific areas of network management allows the network engineer
to dedicate additional time and effort to the more specialized and intricate details of the system.
Many forms of artificial intelligence have previously been introduced to network management; however,
it appears that one of the more applicable areas, fuzzy reasoning, has been somewhat overlooked.
Computer network managers are often challenged with decision-making based on vague or partial
information. Similarly, computer networks frequently perform operational adjustments based on this
same vague or partial information. The imprecise nature of this information can lead to difficulties
and inaccuracies when automating network management using currently applied artificial intelligence
techniques. Fuzzy reasoning will allow this type of imprecise information to be dealt with in a
precise and well-defined manner, providing a more robust method of automating the network
management decision-making process.
The objective of this research is to explore the use of fuzzy reasoning in one area of network
management, namely the routing aspect of configuration management. A more effective method for
routing data through a computer network needs to be discovered to assist with the new problems
being encountered on today’s networks. Although traffic management is only one aspect of
configuration management, at this time it is one of the most visible networking issues. This
becomes apparent as consideration is given to the increasing number of network users and the
tremendous growth driven by Internet-based multimedia applications. Because of the number of
users and the distances between WAN users, efficient routing is more critical in wide area
networks than in LANs (also, many LAN architectures such as token ring do not allow any
flexibility in the nature of message passing). In order to determine the best route over the
WAN, it is necessary to obtain information concerning all of the nodes, links, and LANs present
in the wide area network. The most relevant routing information involves various measures
regarding each link. These measures include the distance a message will travel, bandwidth
available for transmitting that message (maximum signal frequency), packet size used to segment
the message (size of the data group being sent), and the likelihood of a link failure. These
are often measured in an imprecise or inaccurate manner, thus suggesting that fuzzy reasoning
is a natural method to employ in an improved routing scheme.
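As an illustration of how fuzzy reasoning can turn such imprecise measures into a single routing preference, the sketch below fuzzifies two of the measures mentioned (link delay and link load) with triangular membership functions and combines them with two simple rules. The breakpoints and rule combination are assumed for illustration, not taken from the study.

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def link_desirability(delay_ms, load):
    """Fuzzy score in [0, 1]; higher means a more attractive link.
    Membership breakpoints are illustrative assumptions."""
    delay_low  = tri(delay_ms, -1, 0, 60)        # "low delay"
    delay_high = tri(delay_ms, 40, 120, 1e9)     # "high delay"
    load_low   = tri(load, -0.01, 0.0, 0.7)      # "lightly loaded"
    load_high  = tri(load, 0.5, 1.0, 1.01)       # "congested"
    # Rule 1: low delay AND light load -> good link.
    good = min(delay_low, load_low)
    # Rule 2: high delay OR heavy load -> bad link.
    bad = max(delay_high, load_high)
    # Defuzzify with a simple weighted compromise around 0.5.
    return max(0.0, min(1.0, 0.5 + 0.5 * (good - bad)))

print(link_desirability(10, 0.2))    # fast, lightly loaded link -> high score
print(link_desirability(100, 0.9))   # slow, congested link -> low score
```

A route selector can then prefer the path whose worst (or average) link desirability is highest, which is exactly where imprecise measurements would otherwise cause brittle threshold behavior.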
Utilizing fuzzy reasoning should assist in expressing these imprecise network measures; however, there
still remains the massive growth issue concerning traffic levels. Most routing algorithms currently
being implemented as a means of transmitting data from a source node to a destination node cannot
effectively handle this large traffic growth. Most network routing methods are designed to be efficient
for a current network situation; therefore, when the network deviates from the original situation, the
methods begin to lose efficiency. This suggests that an effective routing method should also be capable
of learning how to successfully adapt to network growth. Neural networks are extremely capable of
adapting to system changes, and thus will be applied as a second artificial intelligence technique to
the proposed routing method in this research. The proposed routing approach incorporates fuzzy reasoning
in order to prepare a more accurate assessment of the network’s traffic conditions, and hence provide a
faster, more reliable, or more efficient route for data exchange. Neural networks will be incorporated
into the routing method as a means for the routing method to adapt and learn how to successfully handle
network traffic growth. The combination of these two tools is expected to produce a more effective
routing method than is currently available.
In order to achieve the primary objective of more efficient routing, several minor objectives also need
to be accomplished. A method of data collection is needed throughout the different phases of the study.
Data collection will be accomplished through the use of simulation methods; therefore, a simulation model
must be accurately designed before proceeding with experimenting or analysis. Additional requirements
include building and training the neural network and defining the fuzzy system. The objective of this
research is to demonstrate the effective applicability of fuzzy reasoning to only one area of network
management, traffic routing.
In the development of humanoids, both the appearance and behavior of the robots are significant issues.
However, designing the robot’s appearance, especially to give it a humanoid one, was always a role of the
industrial designer. To tackle the problem of appearance and behavior, two approaches are necessary: one
from robotics and the other from cognitive science. The approach from robotics tries to build very
humanlike robots based on knowledge from cognitive science. The approach from cognitive science uses the
robot for verifying hypotheses for understanding humans. We call this cross-interdisciplinary framework
android science. This conceptual paper introduces the developed androids and states the key issues in
android science.
Intelligence as a subjective phenomenon
How can we define intelligence? This fundamental question motivates researchers in artificial intelligence
and robotics. Previous works in artificial intelligence considered functions of memory and prediction to
realize the intelligence of artificial systems. After the big wave of artificial intelligence in the 1980s
and 1990s, researchers focused on the importance of embodiment and started to use robots. The
behavior-based system proposed by Brooks was a trigger for this new wave. This means the main focus of
artificial intelligence and robotics has shifted from internal mechanisms to interaction with the environment.
On the other hand, there are also two ideas in cognitive science. One is to focus on the internal
mechanism for understanding human intelligent behaviors, while the other focuses on the interactions
among people. The latter approach is studied in the framework of distributed cognition. The idea of
distributed cognition has aspects in common with the behavior-based system: the shared concept is to
understand intelligence through human-human or human-robot interactions. Our work also follows the ideas
of behavior-based systems and distributed cognition. Because intelligence is a subjective phenomenon,
it is important to implement rich interactive behaviors in the robot. The author believes the
development of rich interactions among robots will provide hints of principles of communication systems,
with the design methodology of intelligent robots then being derived from those principles.
Constructive approach in robotics
First we have the question of how to develop the robots. There are explicit evaluation criteria for robot
navigation such as speed, precision, etc. On the other hand, our purpose is also to develop interactive
robots. If we have enough knowledge of humans, we may have explicit evaluation criteria. However this
knowledge is not sufficient to provide a top-down design; instead the potential approach is rather
bottom-up. By utilizing available sensors and actuators, we can design the behaviors of a robot and then
decide the execution rules among those behaviors. During this development, we also evaluate the
robot's performance and modify the behaviors and execution rules. This bottom-up approach is called the
constructive approach. In the constructive approach, interactions between a robot and a human are often
evaluated and analyzed through discussions with cognitive scientists and psychologists, with the robot
then being improved by the knowledge obtained through the discussions.
Appearance and behavior
In the evaluation, the performance measures are the subjective impressions of human subjects who interact
with the robot and their unconscious reactions, such as synchronized human behaviors in the interactions
and eye movements. Obviously, both the appearance and behavior of the robots are important factors in
this evaluation. There are many technical reports that compare robots with different behaviors. However,
nobody has focused on appearance in previous robotics research; there are only empirical discussions of
very simplified static robots, say dolls. Designing the robot's appearance, especially to give it a
humanoid one, was always a role of the industrial designer. However, we consider this to be a serious
problem for developing and evaluating interactive robots, as appearance and behavior are tightly coupled
and the results of evaluation change with appearance. We developed several humanoids for communicating
with people, as shown in Figure 1. We empirically know that the effect of appearance is as significant
as behavior in communication. Human brain functions that recognize people support our
Figure 1: From humanoids to androids. The first robot (the left end) is Robovie II developed by ATR
Intelligent Robotics and Communications Laboratories. The second is Wakamaru developed by Mitsubishi
Heavy Industry Co. Ltd. The third is a child android, while the fourth is the master of the child android.
To tackle the problem of appearance and behavior, two approaches are necessary: one from robotics and the
other from cognitive science. The approach from robotics tries to build very humanlike robots based on
knowledge from cognitive science. The approach from cognitive science uses the robot for verifying hypotheses
for understanding humans. We call this cross-interdisciplinary framework android science.
Figure 2: The framework of android science
Previous robotics research also used knowledge of cognitive science while research in cognitive science
utilized robots. However the contribution from robotics to cognitive science was not enough as robot-like
robots were not sufficient as tools of cognitive science, because appearance and behavior cannot be separately
handled. We expect this problem to be solved by using an android that has an identical appearance to a human.
Robotics research utilizing hints from cognitive science also has a similar problem as it is difficult to
clearly recognize whether the hints are given for just robot behaviors isolated from their appearance or for
robots that have both the appearance and the behavior. In the framework of android science, androids enable us
to directly exchange knowledge between the development of androids in engineering and the understanding of
humans in cognitive science.
The optimizer considers each algebraic operation independently since the query decomposition step has
already taken global reorganization decisions. Optimization of all operations but the n-way Select
Project Join (SPJ) is quite straightforward: it consists of choosing the algorithm and the home of the
operation. The crucial issue in terms of the search strategy is the join ordering problem, which
is NP-hard in the number of relations.
The search space, or solution space, is the set of all QEPs that compute the same result. A point
in the solution space is one particular plan, i.e., a solution to the problem. A solution is
described by the query tree for executing the join expression. Every point of the search space
has a cost associated with it; a cost function maps query trees to their respective costs. The
query tree itself is a binary tree that consists of base relations as its leaves and join
operations as its inner nodes; edges denote the flow of data that takes place from the leaves
of the tree to the root. As the join operation is commutative and associative, the number of
possible query trees grows rapidly with the number of relations involved in the
query. Thus, the goal of the query optimizer reduces to finding the best join order, together
with the best join method for each join. For a complex query, involving many relations and
many operators, the number of equivalent operator trees can be very high. Investigating a large
search space may make optimization time prohibitive, sometimes much more expensive than the
execution time. Therefore, query optimizers typically restrict the size of the search space
they consider. The first restriction is to use heuristics. Another important restriction is the
shape of the join tree. The following sections discuss the characteristics of the most interesting
kinds of search spaces depending on the shape of the join tree they include.
Left-Deep and Right-Deep Trees
The search space restricted to only left-deep trees consists of all query trees where the inner
relation of each join is a base relation as in Figure 2-a. For a fixed number of base relations,
left-deep tree does not leave any degree of freedom concerning the shape of the tree, but there
are n! ways to allocate n base relations to the tree leaves.
Figure 2. Different query trees
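For small n, the n! left-deep orderings can simply be enumerated. The sketch below does so with an assumed toy cost model (the total size of intermediate results, estimated from relation cardinalities and a fixed join selectivity); both the statistics and the selectivity are illustrative, and real optimizers use far richer models.

```python
from itertools import permutations

# Toy statistics: relation name -> cardinality. The selectivity of every
# join predicate is fixed at 0.01; both numbers are illustrative.
CARD = {"R": 1000, "S": 200, "T": 5000, "U": 100}
SELECTIVITY = 0.01

def left_deep_cost(order):
    """Cost = total size of intermediate results for ((R1 ⋈ R2) ⋈ R3) ⋈ ..."""
    size = CARD[order[0]]
    cost = 0.0
    for rel in order[1:]:
        size = size * CARD[rel] * SELECTIVITY   # estimated join result size
        cost += size
    return cost

# Exhaustive search over all n! leaf orderings of the left-deep tree.
best = min(permutations(CARD), key=left_deep_cost)
print(best, left_deep_cost(best))
```

Even this tiny model reproduces the classic heuristic result: the cheapest orderings join the small relations first, keeping intermediate results small.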
In most sequential optimizers like the optimizer of IBM System R, the search space is restricted
to left-deep trees. The reason for this heuristic restriction is that the space including all trees
is much larger, while an optimal or nearly optimal plan can often be found in the much smaller
space of left-deep trees. When join algorithms are not symmetric, which is the case for hash-join,
it is useful to distinguish between left-deep and right-deep trees, see Figure 2-b. Hash-based join
algorithms have been shown to be the most efficient for equi-joins and are particularly interesting in a
parallel environment because they naturally induce intra-operation parallelism. A hash-join usually
consists of two consecutive phases: build and probe. In the build phase, a hash table is constructed
on the join attribute of the smaller relation. In the probe phase, the larger relation is
sequentially scanned and the hash table is consulted for each of its tuples.
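The build/probe structure described above can be sketched in a few lines, assuming relations of simple (key, payload) tuples joined on the first field.

```python
from collections import defaultdict

def hash_join(build_rel, probe_rel):
    """Equi-join of two relations of (key, payload) tuples.
    build_rel should be the smaller relation: a hash table is built on
    its join attribute, then the larger relation is scanned once."""
    table = defaultdict(list)
    for key, payload in build_rel:          # build phase
        table[key].append(payload)
    out = []
    for key, payload in probe_rel:          # probe phase
        for match in table.get(key, ()):    # consult the hash table per tuple
            out.append((key, match, payload))
    return out

depts = [(10, "Eng"), (20, "Sales")]                     # smaller: build side
emps  = [(10, "Ada"), (20, "Bob"), (30, "Eve"), (10, "Joe")]
print(hash_join(depts, emps))
# -> [(10, 'Eng', 'Ada'), (20, 'Sales', 'Bob'), (10, 'Eng', 'Joe')]
```

The asymmetry is visible here: swapping the arguments changes memory use (the hash table size) but not the result, which is why optimizers distinguish left-deep from right-deep trees for hash-joins.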
A new heuristic algorithm is proposed for constructing multicast tree for multimedia and real-time applications.
The tree is used to concurrently transmit packets from source to multiple destinations such that exactly one
copy of any packet traverses the links of the multicast tree. Since multimedia applications require some
Quality of Service, QoS, a multicast tree is needed to satisfy two main goals, the minimum path cost
from source to each destination (Shortest Path Tree) and a certain end-to-end delay constraint from source to
each destination. This problem is known to be NP-Complete. The proposed heuristic algorithm solves this
problem in polynomial time and gives a near-optimal tree. We first mention some related work in this area, then
we formalize the problem and introduce the new algorithm with its pseudo code and the proof of its complexity
and its correctness by showing that it always finds a feasible tree if one exists. Other heuristic algorithms
are examined and compared with the proposed algorithm via simulation.
Handling group communication is a key requirement for numerous applications in which one source sends
the same information concurrently to multiple destinations. Multicast routing refers to the construction
of a tree rooted at the source and spanning all destinations. Generally, there are two types of such a tree,
the Steiner tree and the shortest path tree. The Steiner tree, or group-shared tree, tends to minimize
the total cost of the resulting tree; this is an NP-Complete problem. The shortest path tree, or
source-based tree, tends to minimize the cost of each path from the source to any destination; this can
be achieved in polynomial time by using one of the two famous algorithms of Bellman and Dijkstra and
pruning the undesired links. Recently,
with the rapid evolution of multimedia and real-time applications like audio/video conferencing, interactive
distributed games and real-time remote control system, certain QoS need to be guaranteed in the resulted
tree. One such QoS, and the most important one is the end-to-end delay between source and each destination,
where the information must be sent within a certain delay constraint D. By adding this constraint to the
original problem of multicast routing, the problem is reformulated and the multicast tree should be either
delay constrained Steiner tree, or delay-constrained shortest path tree. Delay constrained Steiner tree
is an NP-Complete problem, several heuristics are introduced for this problem each trying to get near
optimal tree cost, without regarding to the cost of each individual path for each destination. Delay
constrained shortest path tree is also an NP-Complete problem. An optimal algorithm for this problem
is presented, but its execution time is exponential and used only for comparison with other algorithms.
Heuristic for this problem is presented, which tries to get a near optimal tree from the point of
view of each destination without regarding the total cost of the tree. An exhaustive comparison between
the previous heuristics for the two problems can be found. We investigate the problem of delay
constrained shortest path tree since it is appropriate in some applications like Video on Demand (VoD),
where the multicast group has a frequent change, and every user wants to get his information in the lowest
possible cost for him without regarding the total cost of the routing tree. Also shortest path tree always
gives average cost per destination less than Steiner tree. We present a new heuristic algorithm that finds
the required tree in polynomial time.
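A common building block in heuristics for this problem (this is an illustrative sketch, not the paper's algorithm) is to combine a least-cost Dijkstra pass with a least-delay fallback: for each destination, take the cheapest path when it meets the delay bound D, otherwise fall back to the minimum-delay path if that one is feasible.

```python
import heapq

def dijkstra(adj, src, weight):
    """Single-source shortest paths; adj[u] = list of (v, cost, delay)."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, cost, delay in adj[u]:
            nd = d + (cost if weight == "cost" else delay)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def path(prev, src, dst):
    """Reconstruct the src->dst path from the predecessor map."""
    p = [dst]
    while p[-1] != src:
        p.append(prev[p[-1]])
    return p[::-1]

def path_delay(adj, p):
    """Total delay along a path given as a node list."""
    return sum(next(dl for (x, c, dl) in adj[u] if x == v)
               for u, v in zip(p, p[1:]))

def constrained_paths(adj, src, dests, D):
    """Per destination: cheapest path if its delay is within D,
    else the minimum-delay path if feasible, else None."""
    _, prev_c = dijkstra(adj, src, "cost")
    dd, prev_d = dijkstra(adj, src, "delay")
    out = {}
    for t in dests:
        if t != src and t not in prev_c:
            out[t] = None            # unreachable
        elif path_delay(adj, p := path(prev_c, src, t)) <= D:
            out[t] = p               # cheapest path is feasible
        elif dd.get(t, float("inf")) <= D:
            out[t] = path(prev_d, src, t)  # fall back to least delay
        else:
            out[t] = None            # no path satisfies the bound
    return out

# Example (illustrative graph; each edge carries (neighbor, cost, delay)):
adj = {
    "s": [("a", 1, 5), ("b", 4, 1)],
    "a": [("t", 1, 5)],
    "b": [("t", 4, 1)],
    "t": [],
}
routes = constrained_paths(adj, "s", ["t"], D=4.0)
```

With D = 4 the cheap path s-a-t (cost 2, delay 10) violates the bound, so the sketch falls back to s-b-t (cost 8, delay 2); with a looser D = 12 the cheap path is chosen. Per-destination heuristics like the one in the abstract refine exactly this trade-off.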
A parameterizable version of Dijkstra’s shortest path algorithm was designed in VHDL with Synopsys’ FPGA
Express design software version 3.4. The design was targeted for Altera’s FLEX10K device family with the
Quartus design software. The parameterizable features of Dijkstra’s shortest path algorithm were compiled
into a separate VHDL package which was included in the main design file. This way the design of other
versions of Dijsktra’s algorithm with different accuracy and for networks of different sizes is made
easier, since all the changes are made only in the VHDL package.
Fig. 1. The top-level block diagram of the FPGA-based Dijkstra’s shortest path algorithm. The ROM block
contains the network description and the RAM block contains the temporary results of shortest path
computation. The comparator bank selects the smallest distance from the prefetched edge lengths.
The block diagram of the FPGA-based Dijkstra's shortest path algorithm is presented in Fig. 1. The
network structure is stored in the internal ROM block of the logic device. In the block diagram
of Fig. 1, there are six address lines. This is sufficient to represent all node-to-node links of
networks of size up to eight nodes (2^6 = 64 = 8 x 8 adjacency matrix entries), if the network is
represented as an adjacency matrix. The internal RAM blocks of the logic device hold, for each node,
its known status, the distance to it, and its previous node on the path. There are separate address
and data lines for the ROM and RAM blocks, which allows the parallel transfer of information to/from
the memory blocks. To speed up the search for the closest node in the set of unknown nodes, all
node-to-node links whose other endpoint is the most recently marked known node are prefetched from
ROM. Then the next known node is selected by the parallel comparator bank.
As an example, the compiled version of Dijkstra's algorithm for networks of at most eight nodes
fitted into an EPF10K20TC144-3 device, which has 1152 logic elements (LEs) corresponding to
approximately 20,000 available gates. The design required 72 per cent of all LEs and 5 per cent of
available memory bits. Logic element requirements increase linearly, since the size of the comparator
bank grows linearly. Memory requirements, on the other hand, increase quadratically, since the
description of a network of N nodes requires N x N memory locations. If the network description
requires an external memory chip, all that is needed is to add an external data and address bus and
to change the memory handling functions to address external instead of internal memory.
To compare the performance of the FPGA-based Dijkstra's algorithm with a microprocessor-based version,
an identical algorithm was coded in C and compiled with gcc under Red Hat Linux 6.2. The same network
descriptions were then tried in both the FPGA-based and software-based versions of the same algorithm.
The speedup factor in favor of the FPGA-based version depended on the number of network nodes. As
network sizes grew, the average execution time of the FPGA-based version grew only linearly, whereas
the average execution time of the microprocessor-based version displayed quadratic growth
(Table 2). This can be attributed to the more effective FPGA-based execution of Dijkstra's
algorithm, since multiple comparators are used in parallel.
Table 2. FPGA resources required by the implementation of Dijkstra’s shortest path algorithm and a
comparison between the execution times of FPGA-based and microprocessor-based versions of the same algorithm.
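The paper's C baseline is not reproduced here, but a microprocessor-style version of the same adjacency-matrix algorithm might look as follows (an illustrative sketch; the FPGA design replaces the sequential inner minimum search below with its parallel comparator bank, which is why its execution time grows only linearly with N):

```python
INF = float("inf")

def dijkstra_matrix(adj, src):
    """Dijkstra over an N x N adjacency matrix (INF = no link),
    mirroring the FPGA design's known/distance/previous RAM tables."""
    n = len(adj)
    known = [False] * n      # RAM: "known" status of each node
    dist = [INF] * n         # RAM: distance to each node
    prev = [None] * n        # RAM: previous node on the shortest path
    dist[src] = 0
    for _ in range(n):
        # Sequential minimum search -- the step the comparator bank
        # performs in parallel in the FPGA version.
        u, best = -1, INF
        for v in range(n):
            if not known[v] and dist[v] < best:
                u, best = v, dist[v]
        if u == -1:
            break
        known[u] = True
        # Relax all links out of u (one ROM row in the FPGA design).
        for v in range(n):
            if adj[u][v] < INF and dist[u] + adj[u][v] < dist[v]:
                dist[v] = dist[u] + adj[u][v]
                prev[v] = u
    return dist, prev

# Example: 4-node network as an adjacency matrix (INF = no link).
adj = [
    [INF, 1, 4, INF],
    [INF, INF, 2, 6],
    [INF, INF, INF, 3],
    [INF, INF, INF, INF],
]
dist, prev = dijkstra_matrix(adj, 0)
```

Both inner loops scan all N entries and the outer loop runs N times, giving the quadratic software behavior reported above; collapsing the minimum search into one hardware pass leaves only the linear factor.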
Embedded assessment leverages the capabilities of pervasive computing to advance early detection
of health conditions. In this approach, technologies embedded in the home setting are used to
establish personalized baselines against which later indices of health status can be compared.
Our ethnographic and concept feedback studies suggest that adoption of such health technologies
among end users will be increased if monitoring is woven into preventive and compensatory health
applications, such that the integrated system provides value beyond assessment. We review health
technology advances in the three areas of monitoring, compensation, and prevention. We then define
embedded assessment in terms of these three components. The validation of pervasive computing
systems for early detection involves unique challenges due to conflicts between the exploratory
nature of these systems and the validation criteria of medical research audiences. We discuss an
approach for demonstrating value that incorporates ethnographic observation and new ubiquitous
computing tools for behavioral observation in naturalistic settings such as the home.
Leveraging synergies in these three areas holds promise for advancing detection of disease states.
We believe this highly integrated approach will greatly increase adoption of home health
technologies among end users and ease the transition of embedded health assessment prototypes from
computing laboratories into medical research and practice. We derive our observations from a series
of exploratory and qualitative studies on ubiquitous computing for health and wellbeing.
These studies highlighted barriers to early detection in the clinical setting, concerns about home
assessment technologies among end users, and values of target user groups related to prevention and
detection. Observations from the studies are used to identify challenges that must be overcome by
pervasive computing developers if ubiquitous computing systems are to gain wide acceptance for early
detection of health conditions.
The motivation driving research on pervasive home monitoring is that clinical diagnostic practices
frequently fail to detect health problems in their early stages. Often, clinical testing is first
conducted after the onset of a health problem when there is no data about an individual’s previous
level of functioning. Subsequent clinical assessments are conducted periodically, often with no data
other than self-report about functioning between clinical visits. Self-report data on mundane or
repetitive health-related behaviors has repeatedly been shown to be unreliable. Clinical
diagnostics are also limited in ecological validity, not accounting for functioning in the home and
other daily environments. Another barrier to early detection is that age-based norms used to detect
impairment may fail to capture significant decline among people whose premorbid functioning was far
above average. Cultural differences have also been repeatedly shown to influence performance on
standardized tests. Although early detection can cut costs in the long term, most practitioners are
more accustomed to dealing with severe, late-stage health issues than subclinical patterns that may
or may not be markers for more serious problems. In our participatory design interviews, clinicians
voiced concerns about false positives causing unwarranted patient concerns and additional demands
on their time. Compounding the clinical barriers to early detection listed above are psychological
and behavioral patterns among individuals contending with the possibility of illness. Our interviews
highlighted denial, perceptual biases regarding variability of health states, over-confidence in
recall and insight, preference for preventive and compensatory directives over pure assessment
results, and a disinclination towards time consuming self-monitoring as barriers to early detection.
Our ethnographic studies of households coping with cognitive decline revealed a tension between a
desire for forecasting of what illness might lie ahead and a counter current of denial. Almost all
caregivers and patients wished that they had received an earlier diagnosis to guide treatment and
lifestyle choices, but they also acknowledged that they had overlooked blatant warning signs until
the occurrence of a catastrophic incident (e.g. a car accident). This lag between awareness and
actual decline caused them to miss out on the critical window for initiation of treatments and
planning that could have had a major impact on independence and quality of life. Ethnography and
concept feedback participants attributed this denial in part to a fear of being diagnosed with a
disease for which there is no cure. They also worried about the effect of this data on insurers and
other outside parties. Participants in the three cohorts included in our studies (boomers, healthy
older adults, and older adults coping with illness themselves or in a spouse) were much more
interested in, and less conflicted about, preventive and compensatory directives than pure assessment.
Perceptual biases also appear to impede traditional assessment and self-monitoring. Ethnography
participants reported consistently overestimating functioning before a catastrophic event and appeared,
during the interview, to consistently underestimate functioning following detection of cognitive
impairment. Additionally, we observed probable over-confidence among healthy adults in their ability to
recall behaviors and analyze their relationship to both environmental factors and wellbeing. This
confidence in recall and insight seemed exaggerated given findings that recall of frequent events is
generally poor. As a result of these health perceptions, many of those interviewed felt that the time
and discipline required for journaling (e.g. of eating, sleeping, mood, etc.) outweighed the benefits.
Additionally, they expressed wariness of confronting or being reprimanded about what is already obvious
to them. They would prefer to lead investigations and develop strategies for improving their lives.
Pervasive computing systems may enable this type of integrated, contextualized inquiry if they can also
overcome the clinical and individual barriers that might otherwise impede adoption of the new technologies.