Posts Tagged Business Practices
Arguably the most famous bug in this class is the bug exploited by the SQL Server “Slammer” worm. The SQL
Server Resolution Service operates over UDP, by default on port 1434. It exposes a number of
functions, two of which were vulnerable to buffer overflow issues (CAN-2002-0649). These bugs were
discovered by David Litchfield of NGS. Another SQL Server problem in the same category was the “hello”
bug (CAN-2002-1123) discovered by Dave Aitel of Immunity, Inc., which exploited a flaw in the initial
session setup code on TCP port 1433.
Oracle has not been immune to this category — most recently, David Litchfield found an issue with
environment variable expansion in Oracle’s “extproc” mechanism that can be exploited without a username
or password (CAN-2004-1363). Chris Anley of NGS discovered an earlier flaw in Oracle’s extproc
mechanism (CAN-2003-0634) that allowed for a remote, unauthenticated buffer overflow. Mark Litchfield
of NGS discovered a flaw in Oracle’s authentication handling code whereby an overly long username
would trigger an exploitable stack overflow (CAN-2003-0095). David Litchfield also found a flaw in
DB2’s JDBC Applet Server (no CVE, but Bugtraq ID 11401) that allows a remote, unauthenticated user
to trigger a buffer overflow.
The best way to defend yourself against this class of problem is first, to patch. Second, you should
attempt to ensure that only trusted hosts can connect to your database servers, possibly enforcing
that trust through some other authentication mechanism such as SSH or IPSec. Depending on the role
that your database server is fulfilling, this may be tricky. Another possibility for defense is to
implement an Intrusion Detection System (IDS) or an Intrusion Prevention System (IPS). These kinds
of systems have been widely discussed in security literature, and are of debatable value. Although
an IDS can (sometimes) tell you that you have been compromised, it won’t normally prevent that
compromise from happening. Signature-based IDS systems are only as strong as their signature
databases, and in most cases signatures aren’t written by people who are capable of writing
exploits, so many loopholes in the signatures get missed.
“True anomaly” IDS systems are harder to bypass, but as long as you stick to a protocol that’s
already in use, and keep the exploit small, you can usually slip by. Although some IDS systems
are better than others, in general you need an IDS like you need someone telling you you’ve
got a hole in the head. IDS systems will certainly stop dumber attackers, or brighter attackers
who were unlucky, so they may be worthwhile provided they complement — and don’t replace — skilled
staff, good lockdown, and good procedures. IPS systems, on the other hand, do prevent some classes
of exploit from working but again, every IPS system the authors have examined can be bypassed with
a little work, so your security largely depends on the attacker not knowing which commercial IPS
you’re using. Someone may bring out an IPS that prevents all arbitrary code execution attacks at
some point, which would be a truly wonderful thing. Don’t hold your breath waiting for it, though.
Smartphone malware with access to on-board sensors opens new avenues for the illicit
collection of private information. While existing work shows that such “sensory malware” can convey
raw sensor data (e.g., video and audio) to a remote server, these approaches lack stealthiness, incur
significant communication and computation overhead during data transmission and processing, and can
easily be defeated by existing protections like denying installation of applications with access
to both sensitive sensors and the network. We present Soundcomber, a Trojan with few and innocuous
permissions, that can extract a small amount of targeted private information from the audio sensor
of the phone. Using targeted profiles for context-aware analysis, Soundcomber intelligently
“pulls out” sensitive data such as credit card and PIN numbers from both tone- and speech-based
interaction with phone menu systems. Soundcomber performs efficient, stealthy local extraction,
thereby greatly reducing the communication cost for delivering stolen data. Soundcomber
automatically infers the destination phone number by analyzing audio, circumvents known security
defenses, and conveys information remotely without direct network access. We also design and
implement a defensive architecture that foils Soundcomber, identify new covert channels
specific to smartphones, and provide a video demonstration of Soundcomber.
In essence, all audio recording and phone call requests are mediated by a reference monitor,
which can disable (blank out) the recording when necessary. The decision on when to turn off
the switch is made according to the privacy policies that forbid audio recording for a set
of user-specified phone numbers, such as those of credit-card companies. We evaluate our
prototype defensive architecture and show that it can prevent our demonstrated attacks
with minimal processing overhead.
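To make this concrete, here is a minimal sketch (ours, not the paper’s code) of the policy check
such a reference monitor could perform before permitting audio capture; the class and method
names are hypothetical:

```java
// Minimal sketch of the policy check a reference monitor could apply before
// allowing audio capture. All names here are hypothetical assumptions, not
// taken from the Soundcomber paper's implementation.
import java.util.Set;

public class RecordingPolicyMonitor {
    // User-specified numbers for which recording is forbidden,
    // e.g. credit-card company hotlines.
    private final Set<String> protectedNumbers;

    public RecordingPolicyMonitor(Set<String> protectedNumbers) {
        this.protectedNumbers = protectedNumbers;
    }

    /** Returns true if audio recording may proceed during the current call. */
    public boolean recordingAllowed(String dialedNumber) {
        // Blank out the recording whenever the current call matches
        // a protected number in the privacy policy.
        return dialedNumber == null || !protectedNumbers.contains(dialedNumber);
    }
}
```

The monitor would consult recordingAllowed() on every recording request made during a call and
blank out the audio stream whenever it returns false.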
We now summarize our major contributions:
Targeted, context-aware information discovery from sound recordings. We demonstrate that
smartphone-based malware can easily be made aware of the context of a phone
conversation, which allows it to selectively collect high-value information. This is
achieved through techniques we developed to profile the interactions with a phone menu,
and recover digits either through a side-channel in a mobile phone or by recognizing
speech. We also show how only limited permissions are needed and how Soundcomber
can determine the destination number of the phone call through IVR fingerprinting.
Stealthy data transmission. We studied various channels on the smartphone platform
that can be used to bypass existing security controls, including data transmission
via a legitimate network-facing application (which is not mediated by existing
approaches) and different types of covert channels. We also discovered several new
channels, such as the vibration and volume settings, and demonstrated that covert
channel information leaks are completely realistic on smartphones (a sketch follows
this summary).
Implementation and evaluation. We implemented Soundcomber on an Android phone and
evaluated our technique using realistic phone conversation data. Our study shows that
an individual’s credit-card number can be reliably identified and stealthily disclosed.
Therefore, the threat of such an attack is real.
Defensive architecture. We discuss security measures that could be used to mitigate
this threat, and in particular, we designed and implemented a defensive architecture
that prevents any application from recording audio to certain phone numbers specified
by privacy policies.
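To illustrate the vibration/volume covert channel named above, the following sketch (ours; the
structure and names are assumptions, not the paper’s implementation) shows how one bit per
interval could be signaled through the shared ringer setting on Android versions contemporary
with the paper:

```java
// Illustrative sketch (ours) of a ringer-setting covert channel: a sender with
// no network permission toggles a globally visible setting, and a legitimate
// network-facing app polls the same setting to decode and exfiltrate the bits.
import android.content.Context;
import android.media.AudioManager;

public class RingerCovertChannel {
    private final AudioManager audio;

    public RingerCovertChannel(Context ctx) {
        audio = (AudioManager) ctx.getSystemService(Context.AUDIO_SERVICE);
    }

    /** Sender: encode one bit per interval as the ringer mode. */
    public void sendBit(boolean bit) {
        audio.setRingerMode(bit ? AudioManager.RINGER_MODE_VIBRATE
                                : AudioManager.RINGER_MODE_NORMAL);
    }

    /** Receiver: read the shared setting back to decode the bit. */
    public boolean readBit() {
        return audio.getRingerMode() == AudioManager.RINGER_MODE_VIBRATE;
    }
}
```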
The main idea behind the scalable extension of H.264/AVC is to take the block based hybrid video
coding scheme one step further and achieve spatiotemporal and signal-to-noise-ratio (SNR)
scalability. The term scalability in the video coding context means that physically meaningful
video information can be recovered by decoding only a portion of the compressed bit stream.
For example, one should be able to recover from the compressed bit stream a video with lower
resolution than the original by decoding only the lowest spatial layer and discarding other
spatial layers. In SVC, scalability is achieved by taking advantage of the layered approach.
The structure of the encoding depends on which kind of scalability is needed. For example, Figure 1
depicts the block diagram of an SVC encoder with two spatial layers, which contain additional
SNR enhancement layers.
Within each spatial layer, hierarchical motion compensation and prediction are performed.
The redundancy between adjacent pictures and layers is reduced using inter- and
intra-prediction techniques. After motion-compensated prediction, transform coding is
applied using the same transformation techniques as in the H.264/AVC standard. SNR
(quality) scalability is achieved by progressively coding the difference between the
transformed and untransformed slices. These progressively coded slices can then be
truncated at any position within each slice, so the user-perceived visual quality
improves in proportion to the number of bits included in the truncated slice.
Meanwhile, temporal scalability is achieved using hierarchical B pictures, which
provide a predictive structure already included in H.264/AVC. Motion-compensated
temporal filtering can also be used, but it is, for the time being, included only as a
non-normative option for achieving temporal scalability. An example of the hierarchical
coding structure for a group of pictures (GOP) of length eight is illustrated in
Figure 2. All of these scalability modes can be combined to achieve three-dimensional
(spatial, temporal, and SNR) scalability.
Figure 1: Block diagram for the H.264 scalable extension
Figure 2: Hierarchical GOP structure
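To make the hierarchical-B structure of Figure 2 concrete, the following small sketch (ours)
assigns each picture in a GOP of eight its temporal layer; discarding the highest layers halves
the frame rate each time:

```java
// Sketch (ours): temporal layer assignment for a hierarchical-B GOP.
// Discarding the highest temporal layer halves the frame rate, which is
// how temporal scalability falls out of the prediction structure.
public class HierarchicalGop {
    /** Temporal layer of picture p within a GOP of size gop (a power of two). */
    static int temporalLayer(int p, int gop) {
        if (p == 0) return 0;                           // key picture
        int depth = Integer.numberOfTrailingZeros(gop); // log2(gop)
        return depth - Integer.numberOfTrailingZeros(p);
    }

    public static void main(String[] args) {
        for (int p = 0; p < 8; p++)
            System.out.println("picture " + p + " -> layer " + temporalLayer(p, 8));
        // Output: 0,3,2,3,1,3,2,3 — decoding only layers 0..1 keeps pictures 0 and 4.
    }
}
```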
The dependency between layers in scalable video coding
Layers in scalable video coding are classified as a base layer and enhancement layer(s). In SVC, the
base layer can be decoded using a standard H.264/AVC decoder. Information from lower layers is used
to remove the redundancy between different layers. This increases coding efficiency, but it
also increases the importance of the lowest layers during the decoding process and reduces
error resiliency. If the base layer or one of the most important layers is lost, the less
important layers are useless, because decoding them requires data from the more important
layers. This dependency makes the layers well suited to prioritization during transmission.
The base layer also usually needs less transmission bandwidth than the enhancement layers,
which is also quite important when allocating resources to different prioritization classes.
Based on this suitability of SVC layers for prioritization, we propose a mechanism for
adapting video transmission to rapidly changing wireless channel and network conditions. One
of the main requirements for the architecture is that it be general enough to work with
different access networks, from IEEE 802.11 (WiFi) and IEEE 802.16 (WiMAX) to 3GPP UMTS.
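As a sketch of the prioritization this layer dependency suggests (ours, assuming layers arrive
sorted base-first with known per-layer bit rates), enhancement layers are dropped from the least
important downward when the available bandwidth shrinks:

```java
// Sketch (ours) of layer-priority scheduling: when bandwidth drops, discard
// enhancement layers from the least important downwards, never the base first.
import java.util.ArrayList;
import java.util.List;

public class LayerScheduler {
    record Layer(String name, int kbps) {}

    /** Keep layers, base-first, until the bandwidth budget is spent. */
    static List<Layer> selectLayers(List<Layer> layers, int budgetKbps) {
        List<Layer> sent = new ArrayList<>();
        int used = 0;
        // Assumes 'layers' is sorted base-first; a dependent layer is only
        // useful if every layer it depends on was kept.
        for (Layer l : layers) {
            if (used + l.kbps() > budgetKbps) break; // dependencies broken beyond here
            sent.add(l);
            used += l.kbps();
        }
        return sent;
    }

    public static void main(String[] args) {
        List<Layer> layers = List.of(
            new Layer("base", 300),
            new Layer("SNR-1", 200),
            new Layer("spatial-1", 500));
        System.out.println(selectLayers(layers, 600)); // -> [base, SNR-1]
    }
}
```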
An e-negotiation process model and protocol provide a plausible basis for negotiation systems
and remove redundant system activities. The model, with the activities related to each phase,
is presented in Fig. 1. The negotiation process model is built from goal-driven negotiation
and the requirements assigned to agents. Agents are able to improve their behavior over time:
with experience, they can become better at selecting and achieving goals by taking correct
actions, and they can therefore take both proactive and reactive negotiation actions.
Today’s e-markets involve a vast number of participants, an exponentially growing amount of
available information, information transparency and overload, diverse negotiation mechanisms,
price wars among buyers and among sellers, and all the complexity of modern business trade;
intelligent-agent-based automated negotiation systems are therefore promising technologies
with an important role to play. Intelligent agent systems can adopt various mechanisms (e.g.,
bidding, auctions, bargaining, and arbitration); perform information discovery and collection,
negotiator selection, and proposal generation and evaluation; act as advisors and provide
effective suggestions; fully understand their owners’ requirements and preferences; coordinate
interdependent relationships between negotiations; build comprehensive user models; and,
finally, establish trust between the two parties. In addition, intelligent agents allow a
negotiator to play the roles of buyer and seller at the same time. Our proposed design takes
these considerations into account, and the results of this research show that the proposed
intelligent-agent architecture reduces negotiation time and provides rapid responses for the
agents’ owners. Next, we explain the design of the intelligent-agent architecture: we first
describe the types of agents employed and then present the buyer and seller agent
architectures.
Figure 3. Seller Agent Architecture based on Intelligent Agents
In the proposed model, the owner profile agent represents the owner’s goals and helps the
negotiator decide on goals and strategies. Agents of this type can also adapt to changes in
the owner’s behavior over the course of negotiations. The searcher agent looks for potential
buyers or sellers in other distributed environments and performs the role of managing,
querying, and collating information from many distributed sources. A simple rule-matching
mechanism is developed to extract relevant negotiator information from the search results;
this agent also post-processes the retrieved items and produces a list of qualified buyers or
sellers together with their offers. The information mediator agent is engaged in actively
searching, fetching, filtering, and delivering information relevant to the negotiation issues
from the market knowledge base; this agent type is also able to identify the objectives,
preferences, and strategies of the opponent. The recommender agent generates a set of likely
offers to be considered for submission to the opponent. The purpose of the advisor agent is
to evaluate the offers received from the opponent and provide feedback on their drawbacks
and, possibly, their benefits. The negotiator agent is responsible for negotiating with
potential buyers or sellers with respect to the preferences collected from a group of
participants; here, the goal is to bargain with candidate sellers or buyers for the best
offer that satisfies most of the demands and preferences of the group members. This agent may
also be capable of conducting negotiations by itself in a semi-autonomous or fully autonomous
mode; the applicability of full automation depends on the degree of certainty in the
negotiator’s objectives, preferences, and tactics. The agent thus optimizes the buyer’s or
seller’s utility based on the owner’s requirements and constraints. The buyer or seller
mediator agent delivers status messages about active services between negotiator agents and
between peer agents; it can also play the role of an expert agent and is, in a sense, an
intelligent administrator agent.
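As a rough sketch of this architecture (ours; the interfaces mirror the agent roles described
in the text, not any published implementation), the agent types and one negotiation round might
be wired together as follows:

```java
import java.util.List;

// Hypothetical interfaces mirroring the agent roles described above.
interface SearcherAgent            { List<String> findCounterparts(String query); }
interface InformationMediatorAgent { String marketKnowledge(String issue); }
interface RecommenderAgent         { List<String> generateOffers(String context); }
interface AdvisorAgent             { String evaluate(String offer); }

// The negotiator agent consults the others to produce a counter-offer.
class NegotiatorAgent {
    private final RecommenderAgent recommender;
    private final AdvisorAgent advisor;

    NegotiatorAgent(RecommenderAgent r, AdvisorAgent a) {
        this.recommender = r;
        this.advisor = a;
    }

    /** One round: generate candidate offers, keep the first without reported defects. */
    String negotiateRound(String context) {
        List<String> candidates = recommender.generateOffers(context);
        // A real agent would rank candidates using the advisor's feedback;
        // this toy heuristic simply rejects offers flagged as defective.
        for (String offer : candidates) {
            if (!advisor.evaluate(offer).contains("defect")) return offer;
        }
        return candidates.isEmpty() ? null : candidates.get(0);
    }
}
```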
As more individuals transmit data through a computer network, the quality of service received by the users
begins to degrade. A major aspect of computer networks that is vital to quality of service is data routing.
A more effective method for routing data through a computer network can assist with the new problems being
encountered with today’s growing networks. Effective routing algorithms use various techniques to determine
the most appropriate route for transmitting data. Determining the best route through a wide area network
(WAN), requires the routing algorithm to obtain information concerning all of the nodes, links, and devices
present on the network. The most relevant routing information involves various measures that are often
obtained in an imprecise or inaccurate manner, thus suggesting that fuzzy reasoning is a natural method to
employ in an improved routing scheme. The neural network is deemed as a suitable accompaniment because it
maintains the ability to learn in dynamic situations.
Once the neural network is initially designed, any alterations in the computer routing environment can
easily be learned by this adaptive artificial intelligence method. The capability to learn and adapt is
essential in today’s rapidly growing and changing computer networks. These techniques, fuzzy reasoning
and neural networks, when combined together provide a very effective routing algorithm for computer
networks. Computer simulation is employed to prove the new fuzzy routing algorithm outperforms the
Shortest Path First (SPF) algorithm in most computer network situations. The benefits increase as the
computer network migrates from a stable network to a more variable one. The advantages of applying this
fuzzy routing algorithm are apparent when considering the dynamic nature of modern computer networks.
Applying artificial intelligence to specific areas of network management allows the network engineer
to dedicate additional time and effort to the more specialized and intricate details of the system.
Many forms of artificial intelligence have previously been introduced to network management; however,
it appears that one of the more applicable areas, fuzzy reasoning, has been somewhat overlooked.
Computer network managers are often challenged with decision-making based on vague or partial
information. Similarly, computer networks frequently perform operational adjustments based on this
same vague or partial information. The imprecise nature of this information can lead to difficulties
and inaccuracies when automating network management using currently applied artificial intelligence
techniques. Fuzzy reasoning will allow this type of imprecise information to be dealt with in a
precise and well-defined manner, providing a more dependable method of automating the network
management decision-making process.
The objective of this research is to explore the use of fuzzy reasoning in one area of network
management, namely the routing aspect of configuration management. A more effective method for
routing data through a computer network needs to be discovered to assist with the new problems
being encountered on today’s networks. Although traffic management is only one aspect of
configuration management, at this time it is one of the most visible networking issues. This
becomes apparent as consideration is given to the increasing number of network users and the
tremendous growth driven by Internet-based multimedia applications. Because of the number of
users and the distances between WAN users, efficient routing is more critical in wide area
networks than in LANs (also, many LAN architectures such as token ring do not allow any
flexibility in the nature of message passing). In order to determine the best route over the
WAN, it is necessary to obtain information concerning all of the nodes, links, and LANs present
in the wide area network. The most relevant routing information involves various measures
regarding each link. These measures include the distance a message will travel, bandwidth
available for transmitting that message (maximum signal frequency), packet size used to segment
the message (size of the data group being sent), and the likelihood of a link failure. These
are often measured in an imprecise or inaccurate manner, thus suggesting that fuzzy reasoning
is a natural method to employ in an improved routing scheme.
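As an illustration of how such imprecise measures can be expressed (our sketch, with made-up
thresholds), a membership function grades each link property on [0, 1] and a fuzzy AND combines
the grades into a single desirability value:

```java
// Sketch (ours): simple membership functions turn imprecise link measures
// into [0,1] desirability grades, combined with min (fuzzy AND), in the
// spirit of the fuzzy routing scheme described above. Thresholds are made up.
public class FuzzyLinkCost {
    /** Membership rising linearly from 0 at 'low' to 1 at 'high'. */
    static double gradeUp(double x, double low, double high) {
        if (x <= low) return 0;
        if (x >= high) return 1;
        return (x - low) / (high - low);
    }

    /** Desirability of a link from bandwidth (Mbps) and reliability (1 - failure likelihood). */
    static double linkDesirability(double bandwidthMbps, double reliability) {
        double bwGrade  = gradeUp(bandwidthMbps, 1.0, 100.0); // "high bandwidth"
        double relGrade = gradeUp(reliability, 0.9, 0.999);   // "reliable link"
        return Math.min(bwGrade, relGrade);                   // fuzzy AND
    }

    public static void main(String[] args) {
        System.out.printf("%.2f%n", linkDesirability(50.0, 0.995)); // bandwidth-limited: ~0.49
    }
}
```

A router could rank candidate links by this grade instead of a crisp shortest-path metric, so
that slightly stale or noisy measurements degrade the ranking gracefully rather than abruptly.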
Utilizing fuzzy reasoning should assist in expressing these imprecise network measures; however, there
still remains the massive growth issue concerning traffic levels. Most routing algorithms currently
being implemented as a means of transmitting data from a source node to a destination node cannot
effectively handle this large traffic growth. Most network routing methods are designed to be efficient
for a current network situation; therefore, when the network deviates from the original situation, the
methods begin to lose efficiency. This suggests that an effective routing method should also be capable
of learning how to successfully adapt to network growth. Neural networks are extremely capable of
adapting to system changes, and thus will be applied as a second artificial intelligence technique to
the proposed routing method in this research. The proposed routing approach incorporates fuzzy reasoning
in order to prepare a more accurate assessment of the network’s traffic conditions, and hence provide a
faster, more reliable, or more efficient route for data exchange. Neural networks will be incorporated
into the routing method as a means for the routing method to adapt and learn how to successfully handle
network traffic growth. The combination of these two tools is expected to produce a more effective
routing method than is currently available.
In order to achieve the primary objective of more efficient routing, several minor objectives also need
to be accomplished. A method of data collection is needed throughout the different phases of the study.
Data collection will be accomplished through the use of simulation methods; therefore, a simulation model
must be accurately designed before proceeding with experimenting or analysis. Additional requirements
include building and training the neural network and defining the fuzzy system. The objective of this
research is to demonstrate the effective applicability of fuzzy reasoning to only one area of network
management, traffic routing.
In the development of humanoids, both the appearance and behavior of the robots are significant issues.
However, designing the robot’s appearance, especially to give it a humanoid one, was always a role of the
industrial designer. To tackle the problem of appearance and behavior, two approaches are necessary: one
from robotics and the other from cognitive science. The approach from robotics tries to build very
humanlike robots based on knowledge from cognitive science. The approach from cognitive science uses the
robot for verifying hypotheses for understanding humans. We call this cross-interdisciplinary
framework android science. This conceptual paper introduces the developed androids and states
the key issues in android science.
Intelligence as a subjective phenomenon
How can we define intelligence? This fundamental question motivates researchers in artificial
intelligence and robotics. Previous work in artificial intelligence considered functions of
memory and prediction to realize the intelligence of artificial systems. After the big wave of
artificial intelligence in the 1980s and 1990s, researchers focused on the importance of
embodiment and started to use robots. The behavior-based system proposed by Brooks was a
trigger for this new wave, and the main focus of artificial intelligence and robotics has
shifted from internal mechanisms to interaction with the environment.
On the other hand, there are also two ideas in cognitive science. One is to focus on the
internal mechanism for understanding human intelligent behaviors, while the other focuses on
the interactions among people; the latter approach is studied in the framework of distributed
cognition. The idea of distributed cognition has aspects in common with the behavior-based
system: the shared concept is to understand intelligence through human-human or human-robot
interactions. Our work also follows the ideas of behavior-based systems and distributed
cognition. Because intelligence is a subjective phenomenon, it is important to implement rich
interactive behaviors in the robot. The author believes the development of rich interactions
among robots will provide hints about the principles of communication systems, with the design
methodology of intelligent robots then being derived from those principles.
Constructive approach in robotics
First we have the question of how to develop the robots. There are explicit evaluation criteria for robot
navigation such as speed, precision, etc. On the other hand, our purpose is also to develop interactive
robots. If we had enough knowledge of humans, we might have explicit evaluation criteria.
However, this knowledge is not sufficient to provide a top-down design; the practical approach
is instead bottom-up. By utilizing available sensors and actuators, we can design the behaviors
of a robot and then decide the execution rules among those behaviors. During this development,
we also evaluate the robot’s performance and modify the behaviors and execution rules. This
bottom-up approach is called the constructive approach. In the constructive approach,
interactions between a robot and a human are often evaluated and analyzed through discussions
with cognitive scientists and psychologists, with the robot then being improved based on the
knowledge obtained through those discussions.
Appearance and behavior
In the evaluation, the performance measures are the subjective impressions of human subjects
who interact with the robot and their unconscious reactions, such as synchronized human
behaviors in the interactions and eye movements. Obviously, both the appearance and behavior
of the robots are important factors in this evaluation. There are many technical reports that
compare robots with different behaviors; however, previous robotics research has not focused
on appearance. There have been many empirical discussions of very simplified static robots,
such as dolls. Designing the robot’s appearance, especially to give it a humanoid one, was
always a role of the industrial designer. However, we consider this to be a serious problem
for developing and evaluating interactive robots: appearance and behavior are tightly coupled
with each other, and the results of an evaluation change with appearance. We developed several
humanoids for communicating with people, as shown in Figure 1. We know empirically that the
effect of appearance in communication is as significant as that of behavior. Human brain
functions that recognize people support this empirical knowledge.
Figure 1: From humanoids to androids. The first robot (the left end) is Robovie II developed by ATR
Intelligent Robotics and Communications Laboratories. The second is Wakamaru developed by Mitsubishi
Heavy Industry Co. Ltd. The third is a child android, while the fourth is the master of the child android.
To tackle the problem of appearance and behavior, two approaches are necessary: one from robotics and the
other from cognitive science. The approach from robotics tries to build very humanlike robots based on
knowledge from cognitive science. The approach from cognitive science uses the robot for verifying hypotheses
for understanding humans. We call this cross-interdisciplinary framework android science.
Figure 2: The framework of android science
Previous robotics research also used knowledge from cognitive science, while research in
cognitive science utilized robots. However, the contribution from robotics to cognitive
science was limited, as robot-like robots were not sufficient as tools of cognitive science:
appearance and behavior cannot be handled separately. We expect this problem to be solved by
using an android that has an identical appearance to a human.
Robotics research utilizing hints from cognitive science also has a similar problem as it is difficult to
clearly recognize whether the hints are given for just robot behaviors isolated from their appearance or for
robots that have both the appearance and the behavior. In the framework of android science, androids enable us
to directly exchange knowledge between the development of androids in engineering and the understanding of
humans in cognitive science.
Embedded assessment leverages the capabilities of pervasive computing to advance early detection
of health conditions. In this approach, technologies embedded in the home setting are used to
establish personalized baselines against which later indices of health status can be compared.
Our ethnographic and concept feedback studies suggest that adoption of such health technologies
among end users will be increased if monitoring is woven into preventive and compensatory health
applications, such that the integrated system provides value beyond assessment. We review health
technology advances in the three areas of monitoring, compensation, and prevention. We then define
embedded assessment in terms of these three components. The validation of pervasive computing
systems for early detection involves unique challenges due to conflicts between the exploratory
nature of these systems and the validation criteria of medical research audiences. We discuss an
approach for demonstrating value that incorporates ethnographic observation and new ubiquitous
computing tools for behavioral observation in naturalistic settings such as the home.
Leveraging synergies in these three areas holds promise for advancing detection of disease states.
We believe this highly integrated approach will greatly increase adoption of home health
technologies among end users and ease the transition of embedded health assessment prototypes from
computing laboratories into medical research and practice. We derive our observations from a series
of exploratory and qualitative studies on ubiquitous computing for health and wellbeing.
These studies highlighted barriers to early detection in the clinical setting, concerns about home
assessment technologies among end users, and values of target user groups related to prevention and
detection. Observations from the studies are used to identify challenges that must be overcome by
pervasive computing developers if ubiquitous computing systems are to gain wide acceptance for early
detection of health conditions.
The motivation driving research on pervasive home monitoring is that clinical diagnostic practices
frequently fail to detect health problems in their early stages. Often, clinical testing is first
conducted after the onset of a health problem when there is no data about an individual’s previous
level of functioning. Subsequent clinical assessments are conducted periodically, often with no data
other than self-report about functioning in between clinical visits. Self-report data on mundane or
repetitive health-related behaviors has been repeatedly demonstrated as unreliable. Clinical
diagnostics are also limited in ecological validity, not accounting for functioning in the home and
other daily environments. Another barrier to early detection is that age-based norms used to detect
impairment may fail to capture significant decline among people whose premorbid functioning was far
above average. Cultural differences have also been repeatedly shown to influence performance on
standardized tests. Although early detection can cut costs in the long term, most practitioners are
more accustomed to dealing with severe, late stage health issues than subclinical patterns that may
or may not be markers for more serious problems. In our participatory design interviews, clinicians
voiced concerns about false positives causing unwarranted patient concerns and additional demands
on their time. Compounding the clinical barriers to early detection listed above are psychological
and behavioral patterns among individuals contending with the possibility of illness. Our interviews
highlighted denial, perceptual biases regarding variability of health states, over-confidence in
recall and insight, preference for preventive and compensatory directives over pure assessment
results, and a disinclination towards time consuming self-monitoring as barriers to early detection.
Our ethnographic studies of households coping with cognitive decline revealed a tension between a
desire for forecasting of what illness might lie ahead and a counter current of denial. Almost all
caregivers and patients wished that they had received an earlier diagnosis to guide treatment and
lifestyle choices, but they also acknowledged that they had overlooked blatant warning signs until
the occurrence of a catastrophic incident (e.g. a car accident). This lag between awareness and
actual decline caused them to miss out on the critical window for initiation of treatments and
planning that could have had a major impact on independence and quality of life. Ethnography and
concept feedback participants attributed this denial in part to a fear of being diagnosed with a
disease for which there is no cure. They also worried about the effect of this data on insurers and
other outside parties. Participants in the three cohorts included in our studies (boomers, healthy
older adults, and older adults coping with illness themselves or in a spouse) were much more
interested in, and less conflicted about, preventive and compensatory directives than pure assessment.
Perceptual biases also appear to impede traditional assessment and self-monitoring. Ethnography
participants reported consistently overestimating functioning before a catastrophic event and
appeared, during the interview, to consistently underestimate functioning following detection of
cognitive impairment. Additionally, we observed probable over-confidence among healthy adults in
their ability to
recall behaviors and analyze their relationship to both environmental factors and wellbeing. This
confidence in recall and insight seemed exaggerated given findings that recall of frequent events is
generally poor. As a result of these health perceptions, many of those interviewed felt that the time
and discipline required for journaling (e.g. of eating, sleeping, mood, etc.) outweighed the benefits.
Additionally, they expressed wariness of confronting or being reprimanded about what is already obvious
to them. They would prefer to lead investigations and develop strategies for improving their lives.
Pervasive computing systems may enable this type of integrated, contextualized inquiry if they can also
overcome the clinical and individual barriers that might otherwise impede adoption of the new technologies.
We envision a world where no exceptions are raised; instead, language semantics are changed
so that operations are total functions. Either an operation executes normally or tailored
recovery code is applied where exceptions would have been raised. As an initial step and
evaluation of this idea, we propose to transform programs so that null pointer dereferences
are handled automatically without a large runtime overhead. We increase robustness by replacing
code that raises null pointer exceptions with error handling code, allowing the program to
continue execution. Our technique first finds potential null pointer dereferences and then
automatically transforms programs to insert null checks and error-handling code. These
transformations are guided by composable, context sensitive recovery policies. Error-handling
code may, for example, create default objects of the appropriate types, or restore data
structure invariants. If no null pointers would be dereferenced, the transformed program behaves
just as the original. We applied our transformation in experiments involving multiple benchmarks,
the Java Standard Library, and externally reported null pointer exceptions. Our technique is
able to handle the reported exceptions and allow the programs to continue to do useful work, with
an average execution time overhead of less than 1% and an average byte code space overhead of 22%.
Null pointer exception management is a logical starting point for changing Java’s semantics for
exception handling, because of the simplicity and regularity of null pointer exceptions. Null pointer
exceptions, while conceptually simple, remain prevalent in practice. Null pointer dereferences are not
only frequent, but also catastrophic and are “a very serious threat to the safety of programs”. Many
classes of null pointer exceptions can be found automatically by static analyses, and they have been
reported as one of the top ten causes of common web application security risks. Addressing such risks
with fault-tolerance techniques is a promising avenue. For example, techniques that mask memory errors
have successfully eliminated security vulnerabilities in servers.
Though Java already provides an infrastructure for exceptions, the current state of the language is
only a partial solution. Java makes a clear distinction between checked and unchecked exceptions.
The former are included in method type signatures and must be addressed by the programmer; the latter
may be ignored without compiler complaint. Unchecked exceptions should also be documented and properly
handled by the language, in a systematic and universal manner. Java treats null pointer exceptions as
unchecked by default, while APPEND’s approach to null pointer prevention is similar to the way Java
treats checked exceptions: an undesirable situation or behavior is identified by the programmer, and
some error handling code is generated. One reason null pointer exceptions are not treated as checked by
Java is that there are many potential sources of null pointer dereferences and different recovery
situations would have to be embedded in multiple local catch blocks explicitly: a time-consuming and
error-prone task. First, it would be difficult to identify, for each null pointer exception that
propagated upwards, what kind of recovery code could be applied, without knowing context information.
Secondly, Java’s current exception handling mechanisms also open up the possibility of a breakdown of
encapsulation and information hiding, as implementation details from lower levels of scope are raised
to the top level of the program. A solution to null pointer exceptions that is able to prevent or mask
them in a way that is both reasonable and accessible to the programmer has yet to be implemented.
APPEND is a program transformation that changes Java’s null pointer exception handling by automatically
inserting null checks and error-handling code. No program annotations are required, and developers need
not wade through defect reports. Programs are modified according to composable recovery policies.
Recovery policies are executed at compile-time and, depending on the context, recovery code is inserted
that is then executed at run-time if the null checks fail. Recovery policies are conceptually related to
theorem prover tactics and tacticals or to certain classes of aspect-oriented programming. If no null
values are dereferenced at run-time, the transformed program behaves just as the original program. If the
original program would dereference a null value, the transformed program instead executes the policy
dictated error-handling code, such as creating a default value on the fly or not calculating that
expression. Previous research has suggested that programs might successfully continue even with discarded
instructions; we present and measure a concrete, low-level, annotation-free version of such a system, and
extend it to allow for user-specified actions.
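The following before/after sketch (ours, not taken from APPEND’s actual output) shows the shape
of such a transformation under a “create a default value” recovery policy:

```java
// Sketch (ours) of the kind of rewrite APPEND performs. The original code was:
//     int len = getName().length();
// If getName() can return null, the transformed code guards the dereference
// and applies policy-dictated recovery -- here, creating a default value.
public class AppendExample {
    static String getName() { return null; } // may return null

    public static void main(String[] args) {
        // Transformed form: null check inserted, recovery code supplied at
        // compile time by the recovery policy (e.g., "create default object").
        String name = getName();
        if (name == null) {
            name = "";               // default object of the appropriate type
            // a logging policy could also record the averted dereference here
        }
        int len = name.length();     // original dereference, now safe
        System.out.println(len);
    }
}
```

If getName() never returns null, the guard is never taken and the transformed program behaves
just as the original, which matches the behavior-preservation property described above.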
The idea behind this approach is that null pointer dereferences are undesirable, especially when
they occur in non-critical computations yet still force the program to crash. If we had a way of
preventing the program from ceasing execution, logging and performing some sort of recovery code
instead of raising an exception, we hypothesize that there are many applications where this
behavior would be preferred. It therefore becomes possible to check every pointer dereference in
the code for nullness, and to include recovery code for every such instance. We argue that this
is a practical and preferable way to deal with null pointer exceptions in Java. Because we intend
to check every pointer dereference for nullness, we could have chosen to take advantage of the
existing null checking of the Java virtual machine. Given the low overhead of our tool, we chose
to work at the application level instead, to remain portable and not rely on a single modified
JVM instantiation. The transformation
can be implemented directly atop existing program transformation frameworks and dovetails easily with
standard development processes.
Due to the poor random write performance of flash SSDs (Solid State Drives), write-optimized
tree indexes have been proposed to improve update performance. BFTL was proposed to balance the
inferior random write performance and fast random read performance of flash memory for sensor
nodes and embedded systems. It allows the index entries in one logical B-tree node to span
multiple physical pages, and maintains an in-memory table to map each B-tree node to multiple
physical pages. Newly inserted entries are packed and then written together to new blocks, and
the table entries of the corresponding B-tree nodes are updated, thus reducing the number of
random writes. However, BFTL entails a high search cost, since it accesses multiple disk pages
to search a single tree node. Furthermore, even though the in-memory mapping table is compact,
the memory consumption is still high. FlashDB was proposed to implement a self-tuning scheme
between a standard B+-tree and BFTL, depending on the workload and the type of flash device.
Since our proposed index mostly outperforms both the B+-tree and BFTL under various workloads on
different flash SSDs, we do not compare our index with this self-tuning index in this paper.
More recently, the LA-tree was proposed for flash memory devices, adding adaptive buffers
between tree nodes. The LA-tree focuses on raw, small-capacity, byte-addressable flash memory
devices, such as sensor nodes, whereas our work targets off-the-shelf, large flash SSDs, which
provide only a block-based access interface. The different target devices of these two indexes
account for their differences in design.
On the hard disk, many disk-based indexes optimized for write operations have also been proposed.
Graefe proposed a write-optimized B-tree by applying the idea of the log file system to the B-tree
index. Y-tree supports high volume insertions for data warehouses following the idea of buffer tree.
The logarithmic structures have been widely applied to optimize the write performance. O’Neil et al.
proposed LSM-tree and its variant LHAM for multi-version databases. Jagadish et al. used a similar
idea to design a stepped tree index and the hash index for data warehouses. Our FD-tree follows the
idea of logarithmic method. The major difference is that we propose a novel method based on the
fractional cascading technique to improve the search performance on the logarithmic structure.
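A minimal sketch of the logarithmic method (ours; the FD-tree itself stores sorted runs on flash
and adds fractional cascading, which this toy omits) shows how inserts cascade through
exponentially growing levels so that random writes become batched, sequential merges:

```java
// Sketch (ours) of the logarithmic method: inserts go to a small head level;
// when a level fills, it is merged into the next, larger, sorted level,
// turning many random writes into occasional sequential merges.
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

public class LogarithmicIndex {
    private final List<TreeSet<Integer>> levels = new ArrayList<>();
    private static final int HEAD_CAPACITY = 4;      // tiny, for illustration

    void insert(int key) {
        if (levels.isEmpty()) levels.add(new TreeSet<>());
        levels.get(0).add(key);
        // Cascade merges: level i holds at most HEAD_CAPACITY * 2^i keys.
        for (int i = 0; i < levels.size(); i++) {
            if (levels.get(i).size() <= HEAD_CAPACITY << i) break;
            if (i + 1 == levels.size()) levels.add(new TreeSet<>());
            levels.get(i + 1).addAll(levels.get(i)); // a sequential merge on an SSD
            levels.get(i).clear();
        }
    }

    boolean contains(int key) {                      // search every level, smallest first;
        for (TreeSet<Integer> level : levels)        // fractional cascading is what lets the
            if (level.contains(key)) return true;    // FD-tree shortcut these per-level searches
        return false;
    }
}
```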
Helix is a high-speed stream cipher with a built-in MAC functionality. On a Pentium II CPU it
is about twice as fast as Rijndael or Twofish, and comparable in speed to RC4. The overhead per
encrypted/authenticated message is low, making it suitable for small messages. It is efficient
in both hardware and software, and with some pre-computation can effectively switch keys on a
per-message basis without additional overhead.
Basic security services require both encryption and authentication. This is (almost) always done
using a symmetric cipher—public-key systems are only used to set up symmetric keys—and a Message
Authentication Code (MAC). The AES process provided a number of very good block cipher designs,
as well as a new block cipher standard. The cryptographic community learned a lot during the
selection process about the engineering criteria for a good cipher. AES candidates were compared
in performance and cost in many different implementation settings. We learned more about the
importance of fast rekeying and tiny-memory implementations, the cost of S-boxes and circuit
depth for hardware implementations, the slowness of multiplication on some platforms, and other
implementation issues.
The community also learned about the difference between cryptanalysis in theory and cryptanalysis
in practice. Many block cipher modes restrict the types of attack that can be performed on the
underlying block cipher. Yet the generally accepted attack model for block ciphers is very
liberal. Any method that distinguishes the block cipher from a random permutation is considered
an attack. Each block cipher operation must protect against all types of attack. The resulting
over-engineering leads to inefficiencies.
Computer network properties like synchronization and error correction have eliminated the
traditional synchronization problems of stream-cipher modes like OFB. Furthermore, stream ciphers
have different implementation properties that restrict the cryptanalyst. They only receive their
inputs once (a key and a nonce) and then produce a long stream of pseudo-random data. A stream
cipher can start with a strong cryptographic operation to thoroughly mix the key and nonce into
a state, and then use that state and a simpler mixing operation to produce the key stream. If the
attacker tries to manipulate the inputs to the cipher he encounters the strong cryptographic
operation. Alternatively he can analyse the key stream, but this is a static analysis only. As
far as we know, static attacks are much less powerful than dynamic attacks. As there are fewer
cryptographic requirements to fulfill, we believe that the key stream generation function can be
made significantly faster, per message byte, than a block cipher can be. Given the suitability of
stream ciphers for many practical tasks and the potential for faster implementations, we believe
that stream ciphers are a fruitful area of research.
Additionally, a stream cipher is often implemented—and from a cryptographic point of view, should
always be implemented—together with a MAC. Encryption and authentication go hand in hand, and
significant vulnerabilities can result if encryption is implemented without authentication.
Outside the cryptographic literature, not using a proper MAC is one of the commonly encountered
errors in stream cipher systems. A stream cipher with built-in MAC is much more likely to be used
correctly, because it provides a MAC without the associated performance penalties.
Helix is a combined stream cipher and MAC function, and directly provides the authenticated
encryption functionality. By incorporating the plaintext into the stream cipher state Helix can
provide the authentication functionality without extra cost. Helix’s design strength is 128 bits,
which means that we expect that no attack on the cipher exists that requires fewer than 2^128
Helix block function evaluations to be carried out. Helix can process data in less than 7 clock
cycles per byte on a Pentium II CPU, more than twice as fast as AES. Helix uses a 256-bit key and
a 128-bit nonce. The key is secret, and the nonce is typically public knowledge. Helix is
optimised for 32-bit platforms; all operations are on 32-bit words.
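As a sketch of the interface such a combined stream cipher and MAC presents (ours; this is only
the shape of the calls, not Helix’s actual key-stream or MAC computation):

```java
// Sketch (ours) of the interface a combined stream cipher + MAC like Helix
// exposes. Names are hypothetical; the body of the cipher is not shown.
public interface AuthenticatedStreamCipher {
    /** One-time setup; with key pre-computation, per-message cost stays low. */
    void init(byte[] key256, byte[] nonce128);

    /** Encrypts the buffer in place while absorbing the plaintext into the MAC state. */
    void encrypt(byte[] buffer);

    /** Returns the 128-bit tag computed from the final cipher state. */
    byte[] finishTag();
}
```

A sender would initialize with a fresh nonce per message, encrypt, and transmit the ciphertext
together with the tag; the receiver repeats the computation and compares tags before trusting
the plaintext.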