Archive for January, 2013

What is Extreme Programming?

Kent Beck, author of Extreme Programming Explained, says: “XP is a light-weight methodology for small-to-medium-sized teams developing software in the face of vague or rapidly changing requirements.” Simply stated, XP is a set of values, rights and best practices that support each other in incrementally developing software. XP values Communication, Simplicity, Feedback and Courage. Programmers and Customers have the right to do a quality job all the time and to have a real life outside of work.

XP is a collection of best practices. Some may sound familiar. Some may sound foreign.

Customer Team Member – Teams have someone (or a group of people) representing the interests of the customer. They decide what is in the product and what is not in the product.

Planning Game – XP is an iterative development process. In the planning game, the customer and the programmers determine the scope of the next release. Programmers estimate the feature costs. Customers select features and package the development of those features into small iterations (typically 2 weeks). Iterations are combined into meaningful end user releases.

User Story – A User Story represents a feature of the system. The customer writes the story on a note card. Stories are small. The estimate to complete a story is limited to no greater than what one person could complete within a single iteration.

Small Releases – Programmers build the system in small, well-defined releases. An iteration is typically two weeks. A release is a group of iterations that provide valuable features to the users of the system.

Acceptance Testing – The customer writes acceptance tests. The tests demonstrate that the story is complete. The programmers and the customer automate acceptance tests. Programmers run the tests multiple times per day.

Open Workspace – To facilitate communications the team works in an open workspace with all the people and equipment easily accessible.

Test Driven Design – Programmers write software in very small verifiable steps. First, we write a small test. Then we write enough code to satisfy the test. Then another test is written, and so on.
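As a concrete illustration of one micro-cycle (the `word_count` function here is a made-up example, not from any XP text):

```python
import unittest

# Step 1: write a small failing test describing the behavior we want.
class TestWordCount(unittest.TestCase):
    def test_empty_string_has_zero_words(self):
        self.assertEqual(word_count(""), 0)

    def test_counts_whitespace_separated_words(self):
        self.assertEqual(word_count("extreme programming explained"), 3)

# Step 2: write just enough code to make the tests pass.
def word_count(text):
    return len(text.split())
```

Run with `python -m unittest`; the next test then drives the next small, verifiable increment of code.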

Metaphor – The system metaphor provides an idea or a model for the system. It provides a context for naming things in the software, helping the software communicate its intent to the programmers.

Simple Design – The design in XP is kept as simple as possible for the current set of implemented stories. Programmers don’t build frameworks and infrastructure for the features that might be coming.

Refactoring – As programmers add new features to the project, the design may start to get messy. If this continues, the design will deteriorate. Refactoring is the process of keeping the design clean incrementally.

Continuous Integration – Programmers integrate and test the software many times a day. Big code branches and merges are avoided.

Collective Ownership – The team owns the code. Programmer pairs modify any piece of code they need to. Extensive unit tests help protect the team from coding mistakes.

Coding Standards – The code needs to have a common style to facilitate communication between programmers. The team owns the code; the team owns the coding style.

Pair Programming – Two programmers collaborate to solve one problem. Programming is not a spectator sport.

Sustainable Pace – The team needs to stay fresh to effectively produce software. One way to make sure the team makes many mistakes is to have them work a lot of overtime.



smartX model for smart card application deployment

In this model, we assume OCF (OpenCard Framework) and the smartX engine are initially installed and configured on the target terminal. As explained in the previous section, the terminal application consists of two blocks: the application process and the application protocol. The application process, which encapsulates the logic of the application, is compiled into a Java applet signed by a trusted entity. The application protocol is described inside an SML dictionary and is card-specific. Once the Java applet is downloaded, the smartX engine identifies the smart card inserted in the terminal. A simple identification consists of verifying the historical bytes of the card ATR (Answer To Reset). After correct identification, the smartX engine dynamically downloads the SML dictionary that contains the application protocol for the card inserted in the terminal. With this dynamic mechanism, you minimize the loading time since you only download the dictionary relevant to the card inside the terminal.
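The identification step amounts to a lookup keyed on the ATR's historical bytes. The sketch below is purely illustrative: the byte values, dictionary names, and function name are all invented, and smartX's real API is not shown in this excerpt:

```python
# Hypothetical mapping from ATR historical bytes to the card-specific
# SML dictionary that the smartX engine would then download.
KNOWN_CARDS = {
    bytes.fromhex("80318065"): "cardA-protocol.sml",
    bytes.fromhex("80318066"): "cardB-protocol.sml",
}

def dictionary_for_card(historical_bytes):
    """Return the SML dictionary for the inserted card, or None if the
    card is not recognized (the terminal would then reject it)."""
    return KNOWN_CARDS.get(bytes(historical_bytes))
```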

In the OCF model, you had to download all the CardService implementations along with the applet. With smartX, a terminal is also not limited to a predefined set of smart cards. As long as you provide the correct SML dictionary, a terminal can dynamically accept a new smart card that was not originally supported by the application. All these advantages make smartX a platform of choice for developing and deploying smart card applications on the Internet.


The Priority Ceiling Protocol

Each shared resource has a priority ceiling, defined as the priority of the highest-priority task that can ever access that shared resource. The protocol is defined as follows:

  • A task runs at its original (sometimes called its base) priority when it is outside a critical section.
  • A task can lock a shared resource only if its priority is strictly higher than the priority ceilings of all shared resources currently locked by other tasks. Otherwise, the task must block, and the task that has locked the shared resource with the highest priority ceiling inherits the priority of the blocked task.

An interesting consequence of the above protocol is that a task may block trying to lock a shared resource, even though the resource is not locked.  The priority ceiling protocol has the interesting and very useful property that no task can be blocked for longer than the duration of the longest critical section of any lower-priority task.
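The admission test above can be sketched in a few lines (Python, with a deliberately simplified task/resource model; the real protocol lives inside a kernel scheduler):

```python
def may_lock(task, task_priority, locked_resources):
    """Priority Ceiling Protocol admission test (sketch).

    locked_resources is a list of (holder_task, priority_ceiling) pairs,
    one per currently locked resource. The task may proceed only if its
    priority is strictly higher than the ceiling of every resource locked
    by *other* tasks."""
    return all(holder == task or task_priority > ceiling
               for holder, ceiling in locked_resources)

# T_high (priority 3) asks for a free resource while T_low holds a resource
# whose ceiling is 3: 3 is not strictly greater than 3, so T_high blocks
# even though the resource it actually wants is unlocked.
```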

Priority Ceiling Protocol Emulation

The priority ceiling of a shared resource is defined, as before, to be the priority of the highest-priority task that can ever access that resource. A task executes at a priority equal to (or higher than) the priority ceiling of a shared resource as soon as it enters a critical section associated with that resource. Because the priority is raised unconditionally on entry, no run-time comparison against the ceilings of other locked resources is required.
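The emulation rule is simple enough to sketch directly. The dict-based task representation below is purely illustrative:

```python
def enter_critical_section(task, ceiling):
    """On entry, immediately run at (at least) the resource's ceiling
    priority; unlike the full protocol, no run-time check against other
    tasks' locked resources is performed."""
    task["saved_priority"] = task["priority"]
    task["priority"] = max(task["priority"], ceiling)

def leave_critical_section(task):
    """On exit, drop back to the priority held before entering."""
    task["priority"] = task["saved_priority"]
```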



Levels of immersion in VR systems

In a virtual environment system, a computer generates sensory impressions that are delivered to the human senses. The type and quality of these impressions determine the level of immersion and the feeling of presence in VR. Ideally, high-resolution, high-quality information, consistent across all displays, should be presented to all of the user’s senses. Moreover, the environment itself should react realistically to the user’s actions. Practice, however, is very different from this ideal case: many applications stimulate only one or a few of the senses, very often with low-quality and unsynchronized information. We can group VR systems according to the level of immersion they offer to the user.

  1. Desktop VR – sometimes called Window on World (WoW) systems. This is the simplest type of virtual reality applications. It uses a conventional monitor to display the image (generally monoscopic) of the world. No other sensory output is supported.
  2. Fish Tank VR – an improved version of Desktop VR. These systems support head tracking and therefore improve the feeling of “being there” thanks to the motion parallax effect. They still use a conventional monitor (very often with LCD shutter glasses for stereoscopic viewing) but generally do not support other sensory output.
  3. Immersive systems – the ultimate version of VR systems. They let the user become totally immersed in a computer-generated world with the help of an HMD that provides a stereoscopic view of the scene according to the user’s position and orientation. These systems may be enhanced by audio, haptic and sensory interfaces.


The Problem with Dynamic DNS

Consider a business traveler who has a laptop configured to automatically update a remote DNS server with its current IP address. If the FQDN that was being updated by the laptop is known, or can be guessed, then anyone with modest computer skills can issue DNS queries on that name at regular intervals and monitor the current IP address.

As the traveler moves from one location to another, the IP address will change and the public DNS record for the FQDN will reflect this. The person monitoring the domain name will be able to observe the precise network locations used whenever the laptop connects to the Internet, as well as an approximate timestamp for when each event took place. Depending on the resources available to the monitor, most notably whether or not they work for law enforcement, they may be able to map that network location to a geographic location, possibly with a high degree of resolution.
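The monitoring described above needs nothing beyond ordinary lookups. A minimal sketch (Python; the resolver is injectable so the change-tracking logic can be shown without touching the network):

```python
import socket

def poll_once(fqdn, resolve=socket.gethostbyname):
    """One ordinary A-record lookup; returns None if resolution fails."""
    try:
        return resolve(fqdn)
    except OSError:
        return None

def track_changes(observations):
    """Collapse a sequence of polled addresses into the change events an
    observer would log: (poll_index, new_address) at every change."""
    events, last = [], None
    for i, addr in enumerate(observations):
        if addr != last:
            events.append((i, addr))
            last = addr
    return events

# A real watcher simply loops: call poll_once(name) every few minutes and
# feed the results to track_changes, timestamping each recorded event.
```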

The public DNS system is distributed across thousands of servers on the Internet and is used in a wide range of Internet protocols. Dynamic DNS monitoring uses nothing more than basic DNS queries, and as such it offers effectively complete anonymity to the person doing the surveillance. Moreover, the target is unable to detect that they are being observed in this manner. This represents a new form of surveillance that might be used by law enforcement for legitimate purposes, or for unethical reasons by co-workers, competitors, or even stalkers of the target.

Dynamic DNS is used by a large number of users for various reasons. For many of these, with static residential or business computers, monitoring poses no real privacy risk. But for those who travel with their laptop it could pose a serious risk to their personal privacy and business confidentiality. This risk has not been widely recognized thus far.


Stateful page evaluation

In stateful page evaluation, the browser history file and additional history stored by SpoofGuard are used to evaluate the referring page. Since it is important to minimize the number of false alarms, SpoofGuard does not issue any warnings for visiting a site that is in the user’s history file. The rationale for this is that if the user is warned the first time, and decides to proceed, the user is assumed to have sufficient reason to trust the site.

Domain check : If the domain of a page closely resembles a standard or previously visited domain, the page may be part of a spoof. Although crude, we currently compare domains by Hamming (edit) distance. For example, a domain that differs by only a character or two from an entry in the file of commonly spoofed sites, or from a domain in the user history, will raise the domain check. Clearly, it is possible to improve our comparison algorithm by studying the way people are fooled; this is a significant direction for future work.
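As a stand-in for SpoofGuard's comparison (the text describes it only as an edit-distance check, so the exact algorithm and threshold are assumptions), a classic Levenshtein distance with a small threshold behaves as described; the domains below are purely illustrative:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def domain_check(domain, known_domains, threshold=2):
    """Flag a domain that is suspiciously close to, but not identical to,
    a commonly spoofed or previously visited domain."""
    return any(0 < edit_distance(domain, d) <= threshold
               for d in known_domains)
```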

A related issue is that some businesses outsource some of their web operations to contractors with different domain names. This poses an interesting challenge that we believe can be addressed. However, outsourced web activity leads to false alarms in the current version of SpoofGuard.

Referring page : When a user follows a link, the browser maintains a record of the referring page. Since the typical web spoofing attack begins with an email message, a referring page from a web site where the user may have been reading email (such as Hotmail) raises the level of suspicion. One complication associated with Hotmail, for example, is that Hotmail uses numeric IP addresses instead of symbolic host names. Therefore, when a user clicks on a link in a Hotmail message, the browser provides a numeric IP address to SpoofGuard as the referring page. In this situation, SpoofGuard uses reverse DNS to find the domain name associated with a numeric address, allowing us to identify Hotmail as the referring site.
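The reverse-DNS step can be sketched with the standard library (the function name and the symbolic-name fast path are assumptions about how such a check might be structured, not SpoofGuard's actual code):

```python
import socket

def referring_domain(referrer_host):
    """Map a numeric referring address back to a host name via reverse DNS;
    symbolic names are returned unchanged."""
    try:
        socket.inet_aton(referrer_host)      # is it a dotted-quad address?
    except OSError:
        return referrer_host                 # already a symbolic name
    try:
        name, _aliases, _addresses = socket.gethostbyaddr(referrer_host)
        return name
    except OSError:
        return referrer_host                 # reverse lookup failed
```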

Image-domain associations : The image check described above relies on a database associating images such as corporate logos with domains.
The initial static database can be assembled using a web crawler or other tool, or it can be augmented using an individual’s browsing history. An early version of SpoofGuard used a fixed database; the current SpoofGuard implementation uses a hashed image history file.


How to determine applicable law in the cloud?

The identification of applicable laws in the absence of any explicit choice by the parties involved is difficult in relation to any information society service, and cloud computing service models are certainly no exception. In a European context, the provisions of the eCommerce Directive play a central role, as it contains specific rules on applicable law for information society services. However, it is clear that this will be insufficient to address all questions in this domain: the rules established by the Directive obviously apply only in Member States, and in a non-European international context will not be able to solve conflicts of law. In addition, applicability of the law remains linked to the geographical location of the information society service provider, and in a cloud model it may be difficult to identify this entity or its geographical location. Finally, certain issues including contractual consumer protection clauses and intellectual property protection are excluded from the Directive’s scope, meaning that answers to conflicts of law in these domains will have to be sought elsewhere.

Thus, it is already very complicated to identify the starting point for the establishment of trust, namely the specific laws that will apply in the absence of a choice by the parties. Globally, voluntary choice of applicable law by the stakeholders in a cloud service model may be the only viable solution to identify applicable law. In practice, the importance of this issue should not be overstated, as the choice of an applicable legal system on a contractual basis has indeed become standard practice in information society service contracts.

