Archive for December, 2012

METHODOLOGY OF KB’S FOR AUTOMATED NEGOTIATION

Most multi-agent systems that have applied ontology design focus on the use of domain ontology. In contrast with domain ontology, which characterizes the knowledge of the domain in which the task is performed, task ontology characterizes the computational architecture of a knowledge-based system that performs the task. To establish the task ontology based on the KB framework, we propose the methodology of KB’s (Knowledge Beads) for automated negotiation. Here the methodology is defined as the set of procedures employed by a discipline throughout the negotiation life cycle, where the discipline is determined by the function making use of the knowledge.


Fig. 1. Methodology of KB’s for automated negotiation

Fig. 1 shows how Negotiation Knowledge and Contextual e-Commerce Knowledge are used, respectively, for assisting the user in creating an RFQ and for driving the automated negotiation process. At the end of the process, log files are generated and added to the Contextual e-Commerce Knowledge database.

a. Meta-KB

To our knowledge, most current automated negotiation systems lack the ability to specify the explicit use of knowledge in a systematic way, and thus lack an efficient, knowledge-assisted automated negotiation process. For this purpose, we define the meta-KB as a meta-object describing the procedural knowledge necessary to perform a certain task in the e-Procurement context. It contains the meta-knowledge about KB’s, that is, knowledge about knowledge. The function that makes use of the meta-KB determines its discipline. Like an ordinary KB, a meta-KB contains attributes forming the knowledge. Depending on the meta-KB’s discipline, the attributes are either inherited from an existing KB or defined specifically for the function in question. For each attribute, the meta-KB specifies how the attribute value is obtained. The meta-KB for the evaluation of a supplier, shown in Table 1.1, inherits its attributes from the KB comprising knowledge about the supplier’s credit. The tag ‘Meta-KB’ denotes that it is a meta-KB, and the use of the meta-KB is declared at the top of the table, which then specifies the KB template from which the meta-KB inherits its attributes. The value of Base Reputation is input manually by a Negotiation Expert. The attribute Number of Contracts Made takes its value from a function, denoted f in the table, evaluated on the negotiation log; the attribute Average Utility likewise takes its value from a function, denoted g, evaluated on the negotiation log. The negotiation log contains all past successful deals committed with the particular supplier. Weights associated with the attributes are also inherited from the supplier credit profile; they are not shown here.


Table 1.1. Meta-KB for supplier evaluation
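
As a rough illustration only, the supplier-evaluation meta-KB of Table 1.1 could be held in a record like the following C sketch. All type, field, and function names here are our own assumptions for exposition; they are not part of the KB framework itself.

```c
/* Illustrative layout for the supplier-evaluation meta-KB (Table 1.1).
 * The negotiation log holds all past successful deals committed with
 * the particular supplier. */
struct negotiation_log;

/* f in Table 1.1: number of contracts made, evaluated on the log */
typedef int    (*contract_count_fn)(const struct negotiation_log *log);
/* g in Table 1.1: average utility, evaluated on the log */
typedef double (*average_utility_fn)(const struct negotiation_log *log);

struct supplier_meta_kb {
    /* attributes inherited from the supplier-credit KB template */
    double             base_reputation;  /* entered manually by a Negotiation Expert */
    contract_count_fn  num_contracts;    /* returned function value f */
    average_utility_fn avg_utility;      /* returned function value g */

    /* weights inherited from the supplier credit profile (not shown in Table 1.1) */
    double w_reputation;
    double w_contracts;
    double w_utility;
};
```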

b. Knowledge Management Life Cycle

Knowledge management is performed throughout the proposed negotiation life cycle. Correspondingly, we propose the concept of a knowledge management life cycle in automated negotiation. Our proposed knowledge management life cycle aims at facilitating the creation and learning of negotiation expertise in automated negotiation. It comprises the following three phases: knowledge creation; exchanging and use of knowledge; and knowledge evaluation and renewal.


Fig. 2. Knowledge management life cycle

Knowledge Creation

The knowledge creation phase corresponds to the specification and design phase in the proposed negotiation life cycle. It executes knowledge management tasks to assist in the specification of the negotiation context. Old and existing knowledge which is relevant to the current negotiation context is identified. New knowledge is then created with respect to the procurement requirements and constraints. This phase involves mainly the manipulation of the Contextual e-Commerce Knowledge items, which are represented in different KB’s.

Exchanging and Use of Knowledge

The phase of exchanging and use of knowledge corresponds to both the quotes evaluation and ranking phase and the negotiation execution phase in the proposed negotiation life cycle. The knowledge management task of verifying selected knowledge is performed in the screening and evaluation phase, which is the core model of the quotes evaluation and ranking phase. The task of learning and applying negotiation knowledge from the history is performed in the negotiation execution phase.

Knowledge Evaluation and Renewal

The knowledge evaluation and renewal phase corresponds to the post-negotiation processing phase in the proposed negotiation life cycle. The knowledge management tasks mainly involve the capture and organization of knowledge and the production of updated knowledge. This last phase involves re-evaluating old knowledge used in the past and using the evaluation result to create updated knowledge.



General HT enhancements in the Operating System

This post explains the implications of HT (Hyper-Threading Technology) for the OS. The following is a summary of the enhancements recommended in the OS.

Detection of HT – The OS needs to detect whether HT is available on the installed processor(s) and, if so, enumerate both the logical processors and the physical processor packages.
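
A minimal user-space sketch of this detection step, using the compiler’s cpuid.h helper (an OS would do the equivalent during CPU enumeration); the bit positions come from CPUID leaf 1, where EDX bit 28 is the HTT flag and EBX bits 23:16 report the maximum number of logical-processor IDs per physical package:

```c
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;                              /* CPUID leaf 1 not supported */

    int htt_capable     = (edx >> 28) & 1;     /* HTT feature flag */
    int logical_per_pkg = (ebx >> 16) & 0xff;  /* logical-processor IDs per package */

    printf("HTT flag: %d, logical processors per package: %d\n",
           htt_capable, logical_per_pkg);
    return 0;
}
```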

hlt at idle loop – The IA-32 Intel Architecture has an instruction called hlt (halt) that stops processor execution and normally allows the processor to go into a lower-power mode. On a processor with HT, executing hlt transitions the processor from multi-task mode to single-task mode, giving the other logical processor full use of all processor execution resources.
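
A sketch of such an idle loop is shown below; need_resched() and schedule() are stand-ins for the kernel’s own primitives, and the fragment only makes sense in privileged code, since hlt faults in user mode:

```c
/* Stand-ins for the kernel's own scheduling primitives. */
extern int  need_resched(void);
extern void schedule(void);

/* Idle loop that halts the logical processor until the next interrupt
 * instead of spinning, so the sibling logical processor gets full use
 * of the shared execution resources. */
static void cpu_idle_loop(void)
{
    for (;;) {
        while (!need_resched())
            __asm__ __volatile__("hlt");  /* sleep until an interrupt arrives */
        schedule();                       /* hand the CPU to a runnable task */
    }
}
```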

pause instruction at spin-waits – The OS typically uses synchronization primitives, such as spin locks, in multiprocessor systems. The pause instruction is equivalent to “rep; nop” on all known Intel architectures prior to the Pentium 4 or Intel Xeon processors. Using the instruction in spin-waits avoids the severe penalty incurred when a processor spins on a synchronization variable at full speed.
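
The following is a self-contained C11 sketch of such a spin-wait; the pause (encoded as “rep; nop”) keeps the spinning logical processor from monopolizing the shared execution resources:

```c
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

static void spin_lock(void)
{
    /* Spin until the flag was previously clear; relax the pipeline on
     * every failed attempt with pause (a.k.a. "rep; nop"). */
    while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
        __asm__ __volatile__("pause");
}

static void spin_unlock(void)
{
    atomic_flag_clear_explicit(&lock, memory_order_release);
}
```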

Special handling for shared physical resources – MTRRs (Memory Type Range Registers) and the microcode are shared by the logical processors on a processor package. The OS needs to ensure that updates to those registers are synchronized between the logical processors and happen just once per processor package, as opposed to once per logical processor, if required by the spec.
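
One way to express the “once per package” requirement is a per-package guard like the sketch below; program_shared_registers() is a hypothetical helper standing in for whatever code actually writes the shared registers, not a real kernel API:

```c
#include <stdatomic.h>
#include <stdbool.h>

#define MAX_PACKAGES 64

/* Hypothetical helper that performs the actual MTRR/microcode update. */
extern void program_shared_registers(int package_id);

static atomic_bool package_updated[MAX_PACKAGES];  /* all false at start */

/* Called on every logical processor during bring-up; only the first
 * logical processor of each physical package performs the update. */
void update_once_per_package(int package_id)
{
    if (!atomic_exchange(&package_updated[package_id], true))
        program_shared_registers(package_id);
}
```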

Preventing excessive eviction in the first-level data cache – Cached data in the first-level data cache are tagged and indexed by virtual addresses. This means that two processes running on different logical processors of the same processor package can cause repeated evictions and allocations of cache lines when they access the same or nearby virtual addresses in a competing fashion (e.g. the user stack).

The original Linux kernel, for example, sets the initial user stack pointer to the same value in every user process. In our enhancement, we resolve this issue by simply offsetting the stack pointer by a multiple of 128 bytes derived from the unique process ID modulo 64, i.e. ((pid % 64) << 7).
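
In code, the described offset is essentially a one-liner; the sketch below only illustrates the arithmetic, with names of our own choosing:

```c
/* Offset the initial user stack top by one of 64 slots, 128 bytes apart,
 * chosen from the process ID: ((pid % 64) << 7). Processes on sibling
 * logical processors then stop competing for the same virtually-indexed
 * L1 cache lines. */
static unsigned long offset_user_stack(unsigned long stack_top, int pid)
{
    return stack_top - (((unsigned long)(pid % 64)) << 7);
}
```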

Scalability issues – Current Linux, for example, is scalable in most cases, at least up to 8 CPUs. However, enabling HT doubles the number of processors in the system, which can expose scalability issues or result in no performance gain when HT is enabled.

Linux (2.4.17 or higher) supports HT and includes all of the above changes. We developed and identified the essential code (roughly 1,000 lines) for those changes (except the scalability issues) based on performance measurements, and then improved the code with the Linux community.



Determining InnoDB Resource Requirements

It is all well and good to wave one’s hands and say “InnoDB clearly requires far more memory for these reasons,” but it gets slightly difficult to pin down exactly how much more memory. This is true for several reasons:

1. How did you load your database?

InnoDB table size is not a constant. If you took a straight SQL dump from a MyISAM table and inserted it into an InnoDB table, the result is likely larger than it really needs to be, because the data was loaded out of primary key order and the index is therefore not tightly packed. If you took the dump with the --order-by-primary argument to mysqldump, you likely have a much smaller table and will need less memory to buffer it.

2. What exactly is your table size?

This is an easy question to answer with MyISAM: that information is directly in the output of “SHOW TABLE STATUS”. However, the numbers from that same source for InnoDB are known to be estimates only. The sizes shown are the physical sizes reserved for the tables and have nothing to do with the actual data size at that point. Even the row count is a best guess.

3. How large is your primary key?

It was mentioned above that InnoDB clusters the data for a table around the primary key. This means that any secondary index leaves must contain the primary key of the data they “point to.” Thus, if you have tables with a large primary key, you will need more memory to buffer a secondary index and more disk space to hold it. This is one of the reasons some people argue for short “artificial” primary keys for InnoDB tables when there isn’t one “natural” primary key.

There is no set method that will work for everyone to predict the needed resources. Worse than that, your needed resources will change over time as more inserts to your table increase its size and fragment the packing of the B-tree. It is important not to run at 100% usage of the InnoDB buffer pool, as this likely means that you are not buffering as much as you could for reads, and that you are starving your write buffer, which lives in the same global InnoDB buffer pool.



Trustworthy TCB

Over the past few years the embedded-systems industry has been moving toward the use of memory protection and toward operating systems that support it. With this comes the increasing popularity of commodity operating systems, particularly embedded versions of Linux and Windows. Those systems, if stripped to a bare minimum for embedded-systems use, may have a kernel (defined as the code executing in the hardware’s privileged mode) of perhaps 200,000 LOC (Lines Of Code), which is a lower bound on the size of the TCB (Trusted Computing Base). In practice, the TCB is larger than just the kernel; for example, in a Linux system every root daemon is part of the TCB. Hence the TCB will, at an optimistic estimate, still contain hundreds if not thousands of bugs, far too many for comfort.

If we want a secure system, we need a secure, trustworthy TCB, which really means one free of bugs. Is this possible? Methods for guaranteeing the correctness of code (exhaustive testing and mathematical proof, a.k.a. formal methods) scale very poorly; they are typically limited to hundreds or, at best, thousands of lines of code. Can the TCB be made so small?  Maybe not, but maybe it doesn’t have to be.

Modularity is a proven way of dealing with complexity, as it allows one to separate the problem into more tractable segments. However, with respect to trustworthiness, modularizing the kernel does not help, as there is no protection against kernel code violating module boundaries. As far as assurance goes, the kernel is atomic.

The situation is better for non-kernel code. If this is modularized, then individual modules (or components) can be encapsulated in their own address spaces, which means that the module boundaries are enforced by hardware mechanisms mediated by the kernel. If the kernel is trustworthy, then the trustworthiness of such a component can be established independently from other components. That way, the TCB can be made trustworthy even if it is larger than what is tractable by exhaustive testing or formal methods.



Crystal Reports Server

Crystal Reports Server is built on the services-oriented architecture of BusinessObjects Enterprise. BusinessObjects Enterprise is a complete business intelligence (BI) platform that provides specialized end-user tools, including Crystal Reports, Web Intelligence, OLAP Intelligence, Performance Manager, and Dashboard Manager. BusinessObjects Enterprise also includes data integration capabilities from Data Integrator. It is architected using modern web standards, with an industry-standard communication framework tying all the components and services together.

Crystal Reports Server harnesses the reporting services and components of the BusinessObjects Enterprise architecture to offer small and medium businesses a proven reporting solution. It addresses the complete reporting process—from data access and report design, to report management and delivery, to report integration with portals and enterprise applications.

Functional Architecture of Crystal Reports Server

Crystal Reports Server comprises separate yet interconnected components and services optimized for specific tasks. These components and services include:


  • Data services for comprehensive and flexible data access
  • Creation tool for flexible data formatting using Crystal Reports
  • Platform services for report publishing, security, and processing
  • Management tools for managing Crystal Reports Server services and objects
  • Web and application services for customized report integration with portals and applications
  • User interaction tier for end-user report viewing and interaction


