Sunday 5 August 2012

2012, Java IEEE Project Abstracts - Part 3

JAVA IEEE 2012 PROJECT ABSTRACTS

DOMAIN - MOBILE COMPUTING
APPROXIMATION ALGORITHMS FOR DATA BROADCAST IN WIRELESS NETWORKS
Broadcasting is a fundamental operation in wireless networks and plays an important role in communication protocol design. In multihop wireless networks, however, interference at a node due to simultaneous transmissions from its neighbors makes it nontrivial to design a minimum-latency broadcast schedule; the underlying problem is known to be NP-complete.
We present a simple 12-approximation algorithm for the one-to-all broadcast problem that improves all previously known guarantees for this problem. We then consider the all-to-all broadcast problem where each node sends its own message to all other nodes. For the all-to-all broadcast problem, we present two algorithms with approximation ratios of 20 and 34, improving the best result available in the literature.
Finally, we report an experimental evaluation of our algorithms. Our studies indicate that our algorithms perform much better in practice than the worst-case guarantees provided by the theoretical analysis and achieve up to 37 percent performance improvement over existing schemes.
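
To make the interference constraint concrete, below is a minimal greedy sketch in Java. It is illustrative only and is not the paper's 12-approximation algorithm: nodes relay in BFS order from the source, and each node transmits in the earliest slot after it has received the message such that no other transmitter within two hops uses the same slot, so no two transmitters share a potential receiver in any slot.

import java.util.*;

// Illustrative greedy one-to-all broadcast scheduling under a simple
// two-hop interference model (NOT the paper's 12-approximation algorithm).
public class GreedyBroadcast {

    static Map<Integer, Integer> schedule(List<List<Integer>> g, int source) {
        int n = g.size();
        int[] recv = new int[n];                 // slot in which a node hears the message
        Arrays.fill(recv, Integer.MAX_VALUE);
        recv[source] = -1;                       // the source holds the message from the start
        List<Integer> bfsOrder = bfs(g, source);
        Map<Integer, Integer> slot = new HashMap<>();
        for (int u : bfsOrder) {
            int t = recv[u] + 1;                 // a node may transmit only after receiving
            while (conflicts(g, slot, u, t)) t++;
            slot.put(u, t);
            for (int v : g.get(u)) recv[v] = Math.min(recv[v], t);
        }
        return slot;                             // broadcast latency = maximum slot value
    }

    static List<Integer> bfs(List<List<Integer>> g, int source) {
        boolean[] seen = new boolean[g.size()];
        List<Integer> order = new ArrayList<>();
        Deque<Integer> queue = new ArrayDeque<>(List.of(source));
        seen[source] = true;
        while (!queue.isEmpty()) {
            int u = queue.poll();
            order.add(u);
            for (int v : g.get(u))
                if (!seen[v]) { seen[v] = true; queue.add(v); }
        }
        return order;
    }

    // conflict: another transmitter within two hops of u already uses slot t
    static boolean conflicts(List<List<Integer>> g, Map<Integer, Integer> slot,
                             int u, int t) {
        for (int v : g.get(u)) {
            if (Objects.equals(slot.get(v), t)) return true;
            for (int w : g.get(v))
                if (w != u && Objects.equals(slot.get(w), t)) return true;
        }
        return false;
    }
}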


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
CHANNEL ESTIMATION FOR OPPORTUNISTIC SPECTRUM ACCESS: UNIFORM AND RANDOM SENSING
The knowledge of channel statistics can be very helpful in making sound opportunistic spectrum access decisions. It is therefore desirable to be able to efficiently and accurately estimate channel statistics. In this paper we study the problem of optimally placing sensing/sampling times over a time window so as to get the best estimate of the parameters of an on-off renewal channel. We are particularly interested in a sparse sensing regime with a small number of samples relative to the time window size.
Using Fisher information as a measure, we analytically derive the best and worst sensing sequences under a sparsity condition. We also present a way to derive the best/worst sequences without this condition using a dynamic programming approach. In both cases the worst turns out to be the uniform sensing sequence, where sensing times are evenly spaced within the window. Interestingly, the best sequence is also uniform but with a much smaller sensing interval that requires a priori knowledge of the channel parameters.
With these results we argue that, without a priori knowledge, a robust sensing strategy should be a randomized strategy. We then compare different random schemes using a family of distributions generated by the circular β ensemble, and propose an adaptive sensing scheme to effectively track time-varying channel parameters. We further discuss the applicability of compressive sensing to this problem.
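
For concreteness, a small Java sketch of the two sensing-time placements being compared; the window length, sample count, and random number generator are illustrative assumptions:

import java.util.Arrays;
import java.util.Random;

// Contrasts an evenly spaced (uniform) sensing sequence over a window of
// length T with a randomized sequence, as discussed in the abstract above.
public class SensingSequences {
    static double[] uniform(int n, double T) {
        double[] t = new double[n];
        for (int i = 0; i < n; i++) t[i] = i * T / (n - 1); // spacing T/(n-1)
        return t;
    }
    static double[] randomized(int n, double T, long seed) {
        Random rng = new Random(seed);
        double[] t = new double[n];
        for (int i = 0; i < n; i++) t[i] = rng.nextDouble() * T;
        Arrays.sort(t);                                     // sensing times in order
        return t;
    }
    public static void main(String[] args) {
        System.out.println(Arrays.toString(uniform(5, 100.0)));
        System.out.println(Arrays.toString(randomized(5, 100.0, 42)));
    }
}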


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
A MIMO CONFIGURATION MODE AND MCS LEVEL SELECTION SCHEME BY FUZZY Q-LEARNING FOR HSPA+ SYSTEMS
In this paper, we propose a fuzzy Q-learning-based MIMO configuration mode and MCS level (FQL-MOMS) selection scheme for high speed packet access evolution (HSPA+) systems. The FQL-MOMS selection scheme intends to enhance the system throughput while guaranteeing the block error rate (BLER) requirement.
It determines an appropriate MIMO configuration mode and MCS (modulation and coding scheme) level for packet data transmission in HSPA+ systems, under conditions where the channel status is varying and the channel quality indication (CQI) reports are delayed.
The FQL-MOMS scheme considers not only the reported CQI and the last transmission result but also the BLER performance metric and the transmission efficiency. Moreover, it is effectively configured: the fuzzy rules and the reinforcement signals for the Q-learning algorithm are carefully designed.
Simulation results show that the proposed FQL-MOMS scheme increases the system throughput by up to 49.3 and 35.9 percent compared to the conventional adaptive threshold selection (ATS) scheme [12] and the Q-HARQ scheme [14], respectively, while fulfilling the BLER requirement.
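
The reinforcement-learning core can be pictured with a plain tabular Q-learning sketch in Java. The state/action sizes, parameters, and reward shaping below are illustrative assumptions, and the fuzzy-rule machinery of FQL-MOMS is not reproduced:

import java.util.Random;

// Tabular Q-learning over (MIMO mode, MCS level) actions: the state is a
// quantized CQI index and each action indexes a (mode, MCS) pair. The
// reward is throughput minus a penalty when the BLER target is missed.
public class QLearningLinkAdaptation {
    static final int STATES = 31;       // quantized CQI levels (assumed)
    static final int ACTIONS = 2 * 15;  // 2 MIMO modes x 15 MCS levels (assumed)
    final double[][] q = new double[STATES][ACTIONS];
    final double alpha = 0.1, gamma = 0.9, epsilon = 0.05;
    final Random rng = new Random();

    // epsilon-greedy action selection for the current CQI state
    int selectAction(int cqi) {
        if (rng.nextDouble() < epsilon) return rng.nextInt(ACTIONS);
        int best = 0;
        for (int a = 1; a < ACTIONS; a++) if (q[cqi][a] > q[cqi][best]) best = a;
        return best;
    }

    // standard Q-learning update after observing one transmission outcome
    void update(int cqi, int action, double throughput, double bler,
                double blerTarget, int nextCqi) {
        double reward = throughput - (bler > blerTarget ? 1.0 : 0.0);
        double maxNext = q[nextCqi][0];
        for (int a = 1; a < ACTIONS; a++) maxNext = Math.max(maxNext, q[nextCqi][a]);
        q[cqi][action] += alpha * (reward + gamma * maxNext - q[cqi][action]);
    }
}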


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
A NOVEL MAC SCHEME FOR MULTICHANNEL COGNITIVE RADIO AD HOC NETWORKS
This paper proposes a novel medium access control (MAC) scheme for multichannel cognitive radio (CR) ad hoc networks, which achieves high throughput for the CR system while protecting primary users (PUs) effectively.
In designing the MAC scheme, we consider that the PU signal may cover only a part of the network and that nodes can obtain different sensing results for the same PU even on the same channel. By allowing the nodes to use the channel on which the PU exists as long as their transmissions do not disturb the PU, the proposed MAC scheme fully utilizes the spectrum access opportunity.
To mitigate the hidden PU problem inherent to multichannel CR networks, where the PU signal is detectable only to some nodes, the proposed MAC scheme adjusts the sensing priorities of channels at each node using the PU detection information of other nodes, and also limits the transmission power of a CR node to the maximum allowable power that guarantees the quality-of-service requirement of the PU.
The performance of the proposed MAC scheme is evaluated by using simulation. The simulation results show that the CR system with the proposed MAC accomplishes good performance in throughput and packet delay, while protecting PUs properly.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
A COST ANALYSIS FRAMEWORK FOR NEMO PREFIX DELEGATION-BASED SCHEMES
Network Mobility (NEMO) efficiently manages the mobility of multiple nodes that move together as a mobile network. A major limitation of the NEMO basic protocol is the inefficient route between end hosts. A number of prefix delegation-based schemes have been proposed in the literature to solve the route optimization problem in NEMO.
Approaches used by these schemes trade off delivery of packets through a partially optimized route against signaling and other processing overheads. The cost of delivering packets through the partially optimized route, along with the signaling and processing costs, needs to be measured to determine the gain from this tradeoff. However, cost analyses performed so far on NEMO protocols consider only the cost of signaling.
In this paper, we develop an analytical framework to measure the costs of the NEMO basic protocol and of four representative prefix delegation-based schemes. Our results show that the cost of packet delivery through the partially optimized route dominates the other costs.
Therefore, optimizing the route completely is preferable to reducing signaling as far as the cost of network mobility is concerned. Our cost analysis framework will help in selecting the best route optimization scheme depending on the load imposed by the scheme on the infrastructure.


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
USING ROTATABLE AND DIRECTIONAL (R&D) SENSORS TO ACHIEVE TEMPORAL COVERAGE OF OBJECTS AND ITS SURVEILLANCE APPLICATION
Due to hardware design or cost considerations, sensors may possess sector-like sensing coverage. Furthermore, using stepper motors, sensors can rotate to cover the objects around them. This type of sensor is called a rotatable and directional (R&D) sensor. Through rotation, R&D sensors provide temporal coverage of objects by “periodically” detecting their existence.
In this paper, we first develop an event-driven surveillance system based on R&D sensors, where objects are monitored by sensors equipped with infrared detectors and cameras. When an object is taken away, the sensor monitoring the object reports a warning message along with detailed snapshots of the surroundings.
Then, motivated by the system, we formulate an R&D sensor deployment problem, which tries to deploy the minimum number of R&D sensors to cover a given set of objects such that each object is covered for a ratio 0 < δ ≤ 1 of the time in every frame. We show this problem to be NP-hard and propose two efficient heuristics.
The maximum covering deployment (MCD) heuristic iteratively deploys a sensor to cover more objects, and performs well when objects congregate together. The disk-overlapping deployment (DOD) heuristic deploys sensors to cover the joint sectors of overlapped disks, so it works better when objects are arbitrarily placed in the sensing field.
The paper contributes by defining a new temporal coverage model for R&D sensors, developing a surveillance application for this model, and proposing efficient heuristics to reduce the deployment cost.
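
The flavor of the MCD heuristic can be conveyed with a standard greedy covering sketch in Java; the candidate positions and the plain distance-based coverage test (which ignores sector angles and rotation schedules) are simplifying assumptions, not the paper's full heuristic:

import java.util.*;

// Greedy covering: repeatedly place a sensor at the candidate position
// that covers the most still-uncovered objects, until all are covered.
public class GreedyDeployment {
    record Point(double x, double y) {}

    static List<Point> deploy(List<Point> objects, List<Point> candidates,
                              double range) {
        Set<Point> uncovered = new HashSet<>(objects);
        List<Point> placed = new ArrayList<>();
        while (!uncovered.isEmpty()) {
            Point best = null;
            List<Point> bestCovered = List.of();
            for (Point c : candidates) {
                List<Point> covered = uncovered.stream()
                        .filter(o -> dist(c, o) <= range).toList();
                if (covered.size() > bestCovered.size()) {
                    best = c;
                    bestCovered = covered;
                }
            }
            if (best == null) break;            // remaining objects unreachable
            placed.add(best);
            bestCovered.forEach(uncovered::remove);
        }
        return placed;
    }

    static double dist(Point a, Point b) {
        return Math.hypot(a.x() - b.x(), a.y() - b.y());
    }
}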


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
UNDERSTANDING NODE LOCALIZABILITY OF WIRELESS AD-HOC AND SENSOR NETWORKS
Location awareness is highly critical for wireless ad-hoc and sensor networks. Many efforts have been made to solve the problem of whether or not a network can be localized. Nevertheless, based on data collected from a working sensor network, it is observed that the network is not always entirely localizable.
Theoretical analyses also suggest that, in most cases, it is unlikely that all nodes in a network are localizable, although a (large) portion of the nodes can be uniquely located. Existing studies merely examine whether or not a network is localizable as a whole; yet two fundamental questions remain unaddressed: First, given a network configuration, is a specific node localizable? Second, how many nodes in a network can be located, and which are they? In this study, we analyze the limitations of previous works and propose a novel concept of node localizability.
By deriving the necessary and sufficient conditions for node localizability, for the first time, it is possible to analyze how many nodes one can expect to locate in sparsely or moderately connected networks.
To validate this design, we implement our solution on a real-world system, and the experimental results show that node localizability provides useful guidelines for network deployment and other location-based services.


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
STORM: A FRAMEWORK FOR INTEGRATED ROUTING, SCHEDULING AND TRAFFIC MANAGEMENT IN AD HOC NETWORKS
A cross-layer framework called STORM (Scheduling and Traffic Management in Ordered Routing Meshes) is introduced for the effective dissemination of real-time and elastic traffic in multihop wireless networks.
Unicast and multicast routes are established in coordination with the scheduling of transmissions and bandwidth reservations in a way that bandwidth and delay guarantees can be enforced on a per-hop and end-to-end basis. The routes established in STORM are shown to be loop-free, and real-time packets forwarded along these routes are shown to have bounded end-to-end delays.
Results from detailed simulation experiments show that, compared to a protocol stack consisting of 802.11 DCF for channel access, AODV or OLSR for unicast routing, and ODMRP for multicast routing, STORM attains similar or better performance for elastic traffic, and up to two orders of magnitude improvement in end-to-end delays, with twice the amount of data delivery for real-time traffic while inducing considerably less communication overhead.


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
THE BOOMERANG PROTOCOL: TYING DATA TO GEOGRAPHIC LOCATIONS IN MOBILE DISCONNECTED NETWORKS
We present the boomerang protocol to efficiently retain information at a particular geographic location in a sparse network of highly mobile nodes without using infrastructure networks. To retain information around a certain physical location, each mobile device passing that location carries the information for a short while.
This approach can become challenging for remote locations that only a few nodes pass by. To address this challenge, the boomerang protocol, similar to delay-tolerant communication, first allows a mobile node to carry packets away from their location of origin and then periodically returns them to the anchor location.
A unique feature of this protocol is that it records the geographical trajectory while moving away from the origin and exploits the recorded trajectory to optimize the return path. Simulations using automotive traffic traces for a southern New Jersey region show that the boomerang protocol improves packet return rate by 70 percent compared to a baseline shortest path routing protocol.
This performance gain can become even more significant when the road map is less connected. Finally, we look at adaptive protocols that can return information within specified time limits.


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
PROSPECT: A PROACTIVE SPECTRUM HANDOFF FRAMEWORK FOR COGNITIVE RADIO AD HOC NETWORKS WITHOUT COMMON CONTROL CHANNEL
Cognitive Radio (CR) technology is a promising solution to enhance the spectrum utilization by enabling unlicensed users to exploit the spectrum in an opportunistic manner. Since unlicensed users are temporary visitors to the licensed spectrum, they are required to vacate the spectrum when a licensed user reclaims it.
Due to the randomness of the appearance of licensed users, disruptions to both licensed and unlicensed communications are often difficult to prevent, which may lead to low throughput of both licensed and unlicensed communications. In this paper, a proactive spectrum handoff framework for CR ad hoc networks, ProSpect, is proposed to address these concerns.
In the proposed framework, Channel-Switching (CW) policies and a proactive spectrum handoff protocol are proposed to let unlicensed users vacate a channel before a licensed user utilizes it to avoid unwanted interference. Network coordination schemes for unlicensed users are also incorporated into the spectrum handoff protocol design.
Moreover, a distributed channel selection scheme to eliminate collisions among unlicensed users in a multiuser spectrum handoff scenario is proposed. In our proposed framework, unlicensed users coordinate with each other without using a Common Control Channel (CCC), which is highly adaptable in a spectrum-varying environment.
We compare our proposed proactive spectrum handoff protocol with a reactive spectrum handoff protocol, under which unlicensed users switch channels after collisions with licensed transmissions occur. Simulation results show that our proactive spectrum handoff outperforms the reactive spectrum handoff approach in terms of higher throughput and fewer collisions to licensed users.
Furthermore, our distributed channel selection can achieve a higher packet delivery rate in a multiuser spectrum handoff scenario compared with existing channel selection schemes.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
ROBUST RELATIVE LOCATION ESTIMATION IN WIRELESS SENSOR NETWORKS WITH INEXACT POSITION PROBLEMS
In this paper, the relative location estimation problem, a prominent issue faced by several applications in wireless sensor networks (WSNs), is considered. Sensors are classified into two categories: location-aware and location-unaware sensors. To estimate the positions of location-unaware sensors, exact positions are often assumed for location-aware sensors. However, in practice, such precise data may not be available.
Therefore, determining the positions of location-unaware sensors in the presence of inexact positions of location-aware sensors is the primary focus of this study. A robust min-max optimization method is proposed for the relative location estimation problem by minimizing the worst-case estimation error.
The corresponding optimization problem is originally nonconvex, but after it is transformed into a convex semidefinite program (SDP), it can be solved by existing numerical techniques. In the presence of inexact positions of location-aware sensors, the robustness of the proposed approach is validated by simulations under different WSN topologies.
Modified maximum-likelihood (ML) estimation and second-order cone programming (SOCP) relaxation methods are used as baselines for comparison with the proposed approach.
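
To make the min-max idea concrete, a generic robust formulation of this kind (with assumed notation, not necessarily the paper's exact model) estimates the unknown sensor position x from range measurements d_i and reported anchor positions a_i that may be perturbed by bounded errors Delta_i:

\min_{x} \; \max_{\|\Delta_i\| \le \epsilon} \; \sum_{i} \bigl( \| x - (a_i + \Delta_i) \| - d_i \bigr)^2

Minimizing against the worst admissible perturbations is what makes the estimate robust; as the abstract notes, the resulting nonconvex problem is then transformed into a convex SDP.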


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
ON THE VULNERABILITIES OF CSI IN MIMO WIRELESS COMMUNICATION SYSTEMS
Multiple-input multiple-output (MIMO) technologies are a popular choice for emerging wireless systems due to their promised gains in throughput and reliability. In order to realize any gains over traditional non-MIMO communication systems, these systems must possess accurate knowledge of the wireless channel.
In this paper, we investigate strategies for disrupting MIMO communications by developing attacks that target the often overlooked, but essential, channel estimation procedure.
Our study focuses on the two most popular and well-known MIMO techniques: the capacity-achieving SVD-based MIMO scheme, and the Alamouti space-time block code (STBC), which spans many protocols including 802.11n, WiMAX, and 3GPP. We augment theoretical and simulation results with real-world experimentation using the USRP/GNU Radio software-defined radio platform.
We also present a novel methodology to protect the channel estimation procedure from such attacks by embedding authentication messages into physical-layer features of the transmissions.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
NATURE-INSPIRED SELF-ORGANIZATION, CONTROL, AND OPTIMIZATION IN HETEROGENEOUS WIRELESS NETWORKS
In this paper, we present new models and algorithms for control and optimization of a class of next generation communication networks: Hierarchical Heterogeneous Wireless Networks (HHWNs), under real-world physical constraints. Two biology-inspired techniques, a Flocking Algorithm (FA) and a Particle Swarm Optimizer (PSO), are investigated in this context.
Our model is based on the control framework at the physical layer presented previously by the authors. We first develop a nonconvex mathematical model for HHWNs. Second, we propose a new FA for self-organization and control of the backbone nodes in an HHWN by collecting local information from end users.
Third, we employ PSO, a widely used artificial intelligence algorithm, to directly optimize the HHWN by collecting global information from the entire system. A comprehensive evaluation measurement during the optimization process is developed.
In addition, the relationship between HHWNs and the FA, and a comparison of the FA and PSO, are discussed. Our novel framework is examined in various dynamic scenarios. Experimental results demonstrate that the FA and PSO both outperform current algorithms for the self-organization and optimization of HHWNs while showing different characteristics with respect to convergence speed and quality of solutions.
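
For readers unfamiliar with PSO, here is a minimal Java sketch of the optimizer itself. The swarm size, coefficients, and toy objective are illustrative assumptions; the paper applies PSO to its HHWN model, not to this function:

import java.util.Random;

// Classic particle swarm optimization: each particle keeps its personal
// best, the swarm keeps a global best, and velocities are updated with
// inertia, cognitive, and social terms.
public class Pso {
    interface Objective { double eval(double[] x); }

    static double[] optimize(Objective f, int dim, int particles, int iters) {
        Random rng = new Random();
        double w = 0.7, c1 = 1.5, c2 = 1.5;  // inertia, cognitive, social weights
        double[][] x = new double[particles][dim], v = new double[particles][dim];
        double[][] pbest = new double[particles][dim];
        double[] pbestVal = new double[particles];
        double[] gbest = new double[dim];
        double gbestVal = Double.MAX_VALUE;
        for (int i = 0; i < particles; i++) {
            for (int d = 0; d < dim; d++) x[i][d] = rng.nextDouble() * 10 - 5;
            pbest[i] = x[i].clone();
            pbestVal[i] = f.eval(x[i]);
            if (pbestVal[i] < gbestVal) { gbestVal = pbestVal[i]; gbest = x[i].clone(); }
        }
        for (int it = 0; it < iters; it++) {
            for (int i = 0; i < particles; i++) {
                for (int d = 0; d < dim; d++) {
                    v[i][d] = w * v[i][d]
                            + c1 * rng.nextDouble() * (pbest[i][d] - x[i][d])
                            + c2 * rng.nextDouble() * (gbest[d] - x[i][d]);
                    x[i][d] += v[i][d];
                }
                double val = f.eval(x[i]);
                if (val < pbestVal[i]) { pbestVal[i] = val; pbest[i] = x[i].clone(); }
                if (val < gbestVal)    { gbestVal = val;    gbest = x[i].clone(); }
            }
        }
        return gbest;
    }

    public static void main(String[] args) {
        // toy usage: minimize the sphere function sum(x_d^2)
        double[] best = optimize(xs -> {
            double s = 0; for (double xd : xs) s += xd * xd; return s;
        }, 3, 30, 200);
        System.out.println(java.util.Arrays.toString(best));
    }
}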


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
MODELING AND PERFORMANCE EVALUATION OF BACKOFF MISBEHAVING NODES IN CSMA/CA NETWORKS
Backoff misbehavior, in which a wireless node deliberately manipulates its backoff time, can induce significant network problems, such as severe unfairness and denial-of-service. Although great progress has been made towards the design of countermeasures to backoff misbehavior, little attention has been focused on quantifying the gain of backoff misbehaviors.
In this paper, to assess the gain that misbehaving nodes can obtain, we define and study two general classes of backoff misbehavior: continuous misbehavior, which keeps manipulating the backoff time unless it is disabled by countermeasures, and intermittent misbehavior, which tends to evade the detection of countermeasures by performing misbehavior sporadically. Our approach is to introduce a new performance metric, namely order gain, to characterize the performance benefits of misbehaving nodes in comparison to legitimate nodes in CSMA/CA-based wireless networks.
We derive the order gains of both continuous and intermittent misbehaviors and further investigate the relation between our metric, order gain, and the throughput gain for a misbehaving node. We show that in IEEE 802.11 networks, the throughput ratio of a backoff misbehaving node to a legitimate node is either bounded above or proportional to the number of legitimate nodes.
We use both simulations and experiments to validate our theoretical analysis and to further demonstrate the impact of a wide range of backoff misbehaviors on network performance in CSMA/CA-based wireless networks.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
LOW POWER CONSUMPTION SOLUTIONS FOR MOBILE INSTANT MESSAGING
Instant messaging (IM) services enable real-time text and multimedia exchange and online presence awareness. Users typically log onto instant messaging services persistently to discover available friends and also to be discovered.
However, our analysis shows that the frequent exchange of presence information incurs massive power consumption on mobile devices over cellular or wireless local area networks. Such a power consumption penalty can render persistent instant messaging infeasible for battery-powered mobile devices.
In this paper, we propose several solutions to mitigate the power consumption problem. By reducing the network access and keeping mobile devices in the sleep mode as much as possible, these solutions achieve significant power saving. The power consumption of the proposed solutions is derived analytically in this paper and the proposed solutions are implemented using a Jabber-based architecture.
Actual power measurement results show that the power consumption of the proposed solutions agrees well with our analysis, and significant power saving can be achieved on mobile handsets with our low power consumption solutions implemented.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
GAME-THEORETIC ANALYSIS OF COOPERATION INCENTIVE STRATEGIES IN MOBILE AD HOC NETWORKS
In mobile ad hoc networks (MANETs), tasks are conducted based on the cooperation of nodes in the networks. However, since the nodes are usually constrained by limited computation resources, selfish nodes may refuse to be cooperative. Reputation systems and price-based systems are two main solutions to the node non-cooperation problem.
A reputation system evaluates node behaviors by reputation values and uses a reputation threshold to distinguish trustworthy nodes and untrustworthy nodes. A price-based system uses virtual cash to control the transactions of a packet forwarding service. Although these two kinds of systems have been widely used, very little research has been devoted to investigating the effectiveness of the node cooperation incentives provided by the systems.
In this paper, we use game theory to analyze the cooperation incentives provided by these two systems and by a system with no cooperation incentive strategy. We find that the strategies of using a threshold to determine the trustworthiness of a node in the reputation system and of rewarding cooperative nodes in the price-based system may be manipulated by clever or wealthy but selfish nodes.
Guided by these investigation results, we propose and study an integrated system. Theoretical and simulation results show the superiority of the integrated system over an individual reputation system and a price-based system in terms of the effectiveness of cooperation incentives and selfish node detection.


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
ERROR RESILIENT ESTIMATION AND ADAPTIVE BINARY SELECTION FOR FAST AND RELIABLE IDENTIFICATION OF RFID TAGS IN ERROR-PRONE CHANNEL
In RFID systems, far-field passive tags send information using backscattering. The signal level is typically very small, so channel errors during transmission may occur frequently. Due to channel errors, the performance of RFID tag identification under an error-prone channel is degraded compared to that under an error-free channel.
In this paper, we propose a novel error resilient estimation and adaptive binary selection to overcome the problem of channel errors. Our proposed error resilient estimation algorithm can estimate the number of tags and the channel state accurately regardless of frame errors.
Our proposed adaptive binary selection also reduces the idle slots caused by frame errors. Performance analysis and simulation results show that the proposed algorithm consumes up to 20 percent fewer time slots than the binary tree protocol and dynamic framed slotted ALOHA (DFSA) under various packet error rate (PER) conditions.


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
LEVERAGING THE ALGEBRAIC CONNECTIVITY OF A COGNITIVE NETWORK FOR ROUTING DESIGN
In this paper, we consider the implications of spectrum heterogeneity on connectivity and routing in a Cognitive Radio Ad Hoc Network (CRAHN). We study the Laplacian spectrum of the CRAHN graph when the activity of primary users is considered. We introduce the cognitive algebraic connectivity, i.e., the second smallest eigenvalue of the Laplacian of a graph, in a cognitive scenario.
Through this notion we provide a methodology to evaluate the connectivity of CRAHNs and consequently introduce a utility function that is shown to be effective in capturing key characteristics of CRAHN paths. This model provides a unique metric that captures network connectivity, path length, and the impact of primary users.
Moreover, the proposed metric penalizes paths where spectrum band switchings are highly probable. We design all the components of our routing framework, named Gymkhana, and we present a twofold performance verification: one from a topological perspective to show all the potentialities of the proposed routing approach, and the other considering network traffic to evaluate the performance in terms of end-to-end delay and packet delivery ratio.
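
For reference, the spectral quantity involved is standard: for a graph with adjacency matrix A and diagonal degree matrix D, the Laplacian is

L = D - A, \qquad 0 = \lambda_1 \le \lambda_2 \le \cdots \le \lambda_n,

and the second smallest eigenvalue \lambda_2(L) is the algebraic connectivity, which is strictly positive if and only if the graph is connected; larger values indicate a more robustly connected topology. The cognitive variant evaluates this quantity on the CRAHN graph as primary-user activity removes links.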


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
FAST RELEASE/CAPTURE SAMPLING IN LARGE-SCALE SENSOR NETWORKS
Efficient estimation of global information is a common requirement for many wireless sensor network applications. Examples include counting the number of nodes alive in the network and measuring the scale of physically correlated events.
These tasks must be accomplished at extremely low overhead due to the severe resource limitations of sensor nodes, which poses a challenge for large-scale sensor networks. In this paper, we develop a novel protocol, FLAKE, to efficiently and accurately estimate the global information of large-scale sensor networks based on sparse sampling theory.
Specifically, FLAKE disseminates a small number of messages called seeds to the network and issues a query about which nodes received a seed. The number of nodes that have the information of interest can be estimated by counting the seeds disseminated, the nodes queried, and the nodes that received a seed. FLAKE can be easily implemented in a distributed manner due to its simplicity.
Moreover, desirable trade-offs can be achieved between the accuracy of estimation and the system overhead. Our simulations show that FLAKE significantly outperforms several existing schemes in accuracy, delay, and message overhead.
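
The estimation principle is essentially classical capture-recapture: if s seeds are spread uniformly among n nodes and r of q queried nodes hold a seed, then r/q estimates s/n. A minimal Java sketch of this idea (an illustration of the principle, not the FLAKE protocol itself):

// Capture-recapture population estimate: n is estimated as seeds*queried/recaptured.
public class ReleaseCaptureEstimate {
    static double estimatePopulation(int seeds, int queried, int recaptured) {
        if (recaptured == 0) throw new IllegalArgumentException("no seeds recaptured");
        return (double) seeds * queried / recaptured;
    }
    public static void main(String[] args) {
        // e.g., 100 seeds released, 200 nodes queried, 4 held a seed -> ~5000 nodes
        System.out.println(estimatePopulation(100, 200, 4));
    }
}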


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
EFFICIENT AND FAIR BANDWIDTH ALLOCATION IN MULTI-CHANNEL COGNITIVE RADIO NETWORKS
Cognitive radio improves spectrum efficiency by allowing secondary users (SUs) to dynamically exploit the idle spectrum owned by primary users (PUs).
This paper studies optimal bandwidth allocation of SUs for throughput efficiency. Consider the following tradeoff: an SU increases its instantaneous throughput by accessing more spectrum, but channel access/switching overhead, contention among multiple SUs, and dynamic PU activity create higher liability for larger bandwidths. So how much is too much? In this paper, we study the optimal bandwidth allocation for multiple SUs.
Our approach is two-fold. We first study the optimal bandwidth an SU should use to maximize the per-SU throughput in the long term. The optimal bandwidth is derived in the context of dynamic PU activity, where we consider both independent and correlated PU channel scenarios while accounting for the effects of channel-switching overhead.
We further consider the case of suboptimal spectrum use by SUs in the short term due to PU activity dynamics. We propose an efficient channel reconfiguration scheme to improve SUs’ performance. We use real PU channel activity traces in the simulations to validate our results. This work sheds light on the design of spectrum sharing protocols in cognitive radio networks.


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
ENERGY-EFFICIENT COOPERATIVE VIDEO DISTRIBUTION WITH STATISTICAL QOS PROVISIONS OVER WIRELESS NETWORKS
For real-time video broadcast where multiple users are interested in the same content, mobile-to-mobile cooperation can be utilized to improve delivery efficiency and reduce network utilization. Under such cooperation, however, real-time video transmission requires end-to-end delay bounds.
Due to the inherently stochastic nature of wireless fading channels, deterministic delay bounds are prohibitively difficult to guarantee. For a scalable video structure, an alternative is to provide statistical guarantees using the concept of effective capacity/bandwidth by deriving quality of service exponents for each video layer.
Using this concept, we formulate the resource allocation problem for general multihop multicast network flows and derive the optimal solution that minimizes the total energy consumption while guaranteeing a statistical end-to-end delay bound on each network path. A method is described to compute the optimal resource allocation at each node in a distributed fashion.
Furthermore, we propose low complexity approximation algorithms for energy-efficient flow selection from the set of directed acyclic graphs forming the candidate network flows. The flow selection and resource allocation process is adapted for each video frame according to the channel conditions on the network links.
Considering different network topologies, results demonstrate that the proposed resource allocation and flow selection algorithms provide notable performance gains with small optimality gaps at a low computational cost.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
DELAY OPTIMAL SCHEDULING FOR COGNITIVE RADIOS WITH COOPERATIVE BEAMFORMING: A STRUCTURED MATRIX-GEOMETRIC METHOD
There has been increasing interest in integrating cooperative diversity into Cognitive Radios (CRs). However, conventional cooperative diversity protocols require at least two randomly available idle timeslots, or temporal spectrum holes, for one transmission, thus leading to limited throughput and/or large latency.
In this paper, we propose a novel cross-layer approach for efficient scheduling in CR systems with bursty secondary traffic. Specifically, cooperative beamforming is exploited for Secondary Users (SUs) to access busy timeslots, or spatial spectrum holes, without causing interference to primary users.
We first propose a basic cooperative beaMforming and Automatic repeat request aided oppoRtunistic speCtrum scHeduling (MARCH) scheme to balance available spectrum resources, namely temporal and spatial spectrum holes, between the source and the relays.
To analyze the proposed scheme, we develop a tandem queueing framework, which captures bursty traffic arrival, dynamic availability of spectrum holes, and time-varying channel fading. The stable throughput region and the average delay are characterized using a structured matrix-analytical method.
We then obtain delay-optimal scheduling schemes for various scenarios by jointly optimizing the scheduling parameters. Finally, we propose a modified scheme, MARCH-IR, which combines MARCH with Incremental Relay selection to further improve the system performance. Simulation results reveal that the proposed schemes provide significant Quality of Service (QoS) gains over conventional scheduling schemes that access only temporal spectrum holes.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - MOBILE COMPUTING
DETECTION OF SELFISH MANIPULATION OF CARRIER SENSING IN 802.11 NETWORKS
Recently, tuning the clear channel assessment (CCA) threshold in conjunction with power control has been considered for improving the performance of WLANs. However, we show that CCA tuning can be exploited by selfish nodes to obtain an unfair share of the available bandwidth.
Specifically, a selfish entity can manipulate the CCA threshold to ignore ongoing transmissions; this increases the probability of accessing the medium and provides the entity a higher, unfair share of the bandwidth. We experiment on our 802.11 testbed to characterize the effects of CCA tuning on both isolated links and in 802.11 WLAN configurations.
We focus on AP-client(s) configurations, proposing a novel approach to detect this misbehavior. A misbehaving client is unlikely to recognize low power receptions as legitimate packets; by intelligently sending low power probe messages, an AP can efficiently detect a misbehaving node.
Our key contributions are: 1) We are the first to quantify the impact of selfish CCA tuning via extensive experimentation on various 802.11 configurations. 2) We propose a lightweight scheme for detecting selfish nodes that inappropriately increase their CCA thresholds. 3) We extensively evaluate our system on our testbed; its accuracy is 95 percent, while the false positive rate is less than 5 percent.


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
CHARACTERIZING THE SECURITY IMPLICATIONS OF THIRD-PARTY EMERGENCY ALERT SYSTEMS OVER CELLULAR TEXT MESSAGING SERVICES
Cellular text messaging services are increasingly being relied upon to disseminate critical information during emergencies. Accordingly, a wide range of organizations including colleges and universities now partner with third-party providers that promise to improve physical security by rapidly delivering such messages.
Unfortunately, these products do not work as advertised due to limitations of cellular infrastructure and therefore provide a false sense of security to their users. In this paper, we perform the first extensive investigation and characterization of the limitations of an Emergency Alert System (EAS) using text messages as a security incident response mechanism.
We show that emergency alert systems built on text messaging not only cannot meet the 10-minute delivery requirement mandated by the WARN Act, but also potentially cause other voice and SMS traffic to be blocked at rates upward of 80 percent. We then show that our results are representative of reality by comparing them to a number of documented but not previously understood failures.
Finally, we analyze a targeted messaging mechanism as a means of efficiently using currently deployed infrastructure and third-party EAS. In so doing, we demonstrate that this increasingly deployed security infrastructure does not achieve its stated requirements for large populations.


*------------*------------*------------*------------*------------*------------*

DOMAIN - MOBILE COMPUTING
CONTROLLED MOBILITY SENSOR NETWORKS FOR TARGET TRACKING USING ANT COLONY OPTIMIZATION
In mobile sensor networks, it is important to manage the mobility of the nodes in order to improve the performance of the network. This paper addresses the problem of single-target tracking in controlled mobility sensor networks.
The proposed method consists of estimating the current position of a single target. Estimated positions are then used to predict the following location of the target. Once an area of interest is defined, the proposed approach moves the mobile nodes so as to cover the area in an optimal way, thus defining a strategy for choosing the set of new sensor locations.
Each node is then assigned one position within the set so as to minimize the total distance traveled by the nodes. While the estimation and prediction phases are performed using interval theory, relocating the nodes employs the ant colony optimization algorithm.
Simulation results corroborate the efficiency of the proposed method compared to target tracking methods designed for networks with static nodes.


*------------*------------*------------*------------*------------*------------*

DOMAIN - NETWORKING
SCALABLE LOOKAHEAD REGULAR EXPRESSION DETECTION SYSTEM FOR DEEP PACKET INSPECTION
Regular expressions (RegExes) are widely used, yet their inherent complexity often limits the total number of RegExes that can be detected using a single chip for a reasonable throughput. This limit on the number of RegExes impairs the scalability of today’s RegEx detection systems.
The scalability of existing schemes is generally limited by the traditional detection paradigm based on per-character-state processing and state transition detection. The main focus of existing schemes is on optimizing the number of states and the required transitions, but not on optimizing the suboptimal character-based detection method.
Furthermore, the potential benefits of allowing out-of-sequence detection, instead of detecting components of a RegEx in the order of appearance, have not been explored. Lastly, the existing schemes do not provide ways to adapt to evolving RegExes. In this paper, we propose Lookahead Finite Automata (LaFA) to perform scalable RegEx detection.
LaFA requires less memory due to these three contributions:
1) providing specialized and optimized detection modules to increase resource utilization;
2) systematically reordering the RegEx detection sequence to reduce the number of concurrent operations;
3) sharing states among automata for different RegExes to reduce resource requirements.
Here, we demonstrate that LaFA requires an order of magnitude less memory compared to today’s state-of-the-art RegEx detection systems.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - NETWORKING
ACCELERATING MULTIPATTERN MATCHING ON COMPRESSED HTTP TRAFFIC
Current security tools, using “signature-based” detection, do not handle compressed traffic, whose market share is constantly increasing.
This paper focuses on compressed HTTP traffic. HTTP uses GZIP compression and requires some kind of decompression phase before string matching can be performed. We present a novel algorithm, an Aho–Corasick-based algorithm for Compressed HTTP (ACCH), that takes advantage of information gathered by the decompression phase in order to accelerate the commonly used Aho–Corasick pattern-matching algorithm.
By analyzing real HTTP traffic and real Web application firewall signatures, we show that up to 84% of the data can be skipped in its scan. Surprisingly, we show that it is faster to perform pattern matching on the compressed data, with the penalty of decompression, than on regular traffic. As far as we know, this is the first paper that analyzes the problem of “on-the-fly” multipattern matching on compressed HTTP traffic and suggests a solution.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - NETWORKING
SPATIO-TEMPORAL COMPRESSIVE SENSING AND INTERNET TRAFFIC MATRICES (EXTENDED VERSION)
Despite advances in measurement technology, it is still challenging to reliably compile large-scale network datasets. For example, because of flaws in the measurement systems or difficulties posed by the measurement problem itself, missing, ambiguous, or indirect data are common.
In the case where such data have spatio-temporal structure, it is natural to try to leverage this structure to deal with the challenges posed by the problematic nature of the data. Our work involving network datasets draws on ideas from the areas of compressive sensing and matrix completion, where sparsity is exploited in estimating quantities of interest.
However, the standard results on compressive sensing: 1) rely on conditions that generally do not hold for network datasets; and 2) do not allow us to exploit all we know about their spatio-temporal structure.
In this paper, we overcome these limitations with an algorithm that has at its heart the same ideas espoused in compressive sensing, but adapted to the problem of network datasets.
We show how this algorithm can be used in a variety of ways, in particular on traffic data, to solve problems such as simple interpolation of missing values, traffic matrix inference from link data, prediction, and anomaly detection.
The elegance of the approach lies in the fact that it unifies all of these tasks and allows them to be performed even when as much as 98% of the data is missing.
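
A sketch of the kind of objective used in this line of work (notation assumed here, not necessarily the paper's exact formulation): the traffic matrix X is approximated by a low-rank factorization X ≈ L R^T fitted to whatever observations are available,

\min_{L,R} \; \bigl\| \mathcal{A}(L R^{\top}) - b \bigr\|_2^2 + \lambda \bigl( \|L\|_F^2 + \|R\|_F^2 \bigr)

where \mathcal{A} is the linear measurement operator (direct entries or link-load aggregates), b holds the observed values, and \lambda is a regularization weight. Missing entries are then read off L R^{\top}, and spatio-temporal smoothness terms can be added to the objective.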


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - NETWORKING
SPARSE WIFI DEPLOYMENT FOR VEHICULAR INTERNET ACCESS WITH BOUNDED INTERCONNECTION GAP
Vehicular Internet access via open WiFi access points (APs) has been demonstrated to be a feasible solution to provide opportunistic data service to moving vehicles. Using an in situ deployment, however, such a solution does not provide performance guarantees due to unpredictable intermittent connectivity.
On the other hand, a solution that tries to cover every point in an entire road network with APs (full coverage) is not very practical due to prohibitive deployment and operational costs. In this paper, we introduce a new notion of intermittent coverage for mobile users, called Alpha Coverage, which provides worst-case guarantees on the interconnection gap, i.e., the distance or expected delay between two consecutive mobile-AP contacts for a vehicle, while using significantly fewer APs than needed for full coverage.
We propose efficient algorithms to verify whether a given deployment provides Alpha Coverage. The problem of finding an economical deployment that provides Alpha Coverage turns out to be NP-hard.
We hence provide both approximation algorithms with provable performance guarantees and efficient heuristics that perform well in practice. The efficiency of our algorithms is demonstrated via simulations using data from real-world road networks.


*------------*------------*------------*------------*------------*------------*

DOMAIN - NETWORKING
RELIABLE COLLECTIVE COMMUNICATIONS WITH WEIGHTED SRLGS IN OPTICAL NETWORKS
In this paper, we study the problem of reliable collective communication (broadcast or gossip) with the objective of maximizing the reliability of the collective communication. The need for collective communication arises in many problems of parallel and distributed computing, including Grid or cloud computing and database management.
We describe the network model, formulate the reliable collective communication problem, prove that the maximum reliable collective communication problem is NP-hard, and provide an integer linear program (ILP) formulation for the problem.
We then provide a greedy approximation algorithm to construct the collective communication (through a spanning tree) whose approximation ratio is expressed in terms of the average number of shared risk link groups (SRLGs) per link and the total numbers of vertices and edges of the network.
Simulations demonstrate that our approximation algorithm achieves good performance in both small and large networks and that, in almost 95% of all cases, our algorithm outperforms the modified minimum spanning tree algorithms.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - NETWORKING
POLYNOMIAL-TIME ALGORITHMS FOR MULTIRATE ANYPATH ROUTING IN WIRELESS MULTIHOP NETWORKS
In this paper, we present a new routing paradigm that generalizes opportunistic routing for wireless multihop networks. In multirate anypath routing, each node uses both a set of next hops and a selected transmission rate to reach a destination. Using this rate, a packet is broadcast to the nodes in the set, and one of them forwards the packet on to the destination.
To date, there is no theory capable of jointly optimizing both the set of next hops and the transmission rate used by each node. We solve this by introducing two polynomial-time routing algorithms and provide the proof of their optimality.
The proposed algorithms have roughly the same running time as regular shortest-path algorithms and are therefore suitable for deployment in routing protocols. We conducted measurements in an 802.11b testbed network, and our trace-driven analysis shows that multirate anypath routing is on average 80% better than 11-Mbps anypath routing, with a factor of 6.4 improvement in the best case.
If the rate is fixed at 1 Mbps instead, performance improves by a factor of 5.4 on average.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - NETWORKING
ON IDENTIFYING ADDITIVE LINK METRICS USING LINEARLY INDEPENDENT CYCLES AND PATHS
In this paper, we study the problem of identifying constant additive link metrics using linearly independent monitoring cycles and paths. A monitoring cycle starts and ends at the same monitoring station, while a monitoring path starts and ends at distinct monitoring stations.
We show that three-edge connectivity is a necessary and sufficient condition to identify link metrics using one monitoring station and employing monitoring cycles. We develop a polynomial-time algorithm to compute the set of linearly independent cycles.
For networks that are less than three-edge-connected, we show how the minimum number of monitors required and their placement may be computed. For networks with symmetric directed links, we show the relationship between the number of monitors employed, the number of directed links for which the metric is known a priori, and the identifiability of the remaining links.
To the best of our knowledge, this is the first work that derives the necessary and sufficient conditions on the network topology for identifying additive link metrics and develops a polynomial-time algorithm to compute linearly independent cycles and paths.
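
The identifiability condition has a simple linear-algebraic reading: each monitoring cycle or path contributes one linear equation in the unknown link metrics, so stacking the measurements gives

M w = c,

where M \in \{0,1\}^{m \times |E|} indicates which links each of the m cycles/paths traverses, w is the vector of unknown additive link metrics, and c holds the measured end-to-end values. All link metrics are identifiable exactly when \mathrm{rank}(M) = |E|, which is why linearly independent cycles and paths are sought.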


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - NETWORKING
INSIGHTS ON MEDIA STREAMING PROGRESS USING BITTORRENT-LIKE PROTOCOLS FOR ON-DEMAND STREAMING
This paper develops analytical models that characterize the behavior of on-demand stored media content delivery using BitTorrent-like protocols. The models capture the effects of different piece selection policies, including Rarest-First, two variants of In-Order, and two probabilistic policies (Portion and Zipf).
Our models provide insight into system behavior and help explain the sluggishness of the system with In-Order streaming. We use the models to compare different retrieval policies across a wide range of system parameters, including peer arrival rate, upload/ download bandwidth, and seed residence time.
We also provide quantitative results on the startup delays and retrieval times for streaming media delivery. Our results provide insights into the design tradeoffs for on-demand media streaming in peer-to-peer networks. Finally, the models are validated using simulations.
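
A minimal Java sketch of the piece-selection policies being compared; the swarm-wide rarity counts and the Zipf weighting are illustrative assumptions about the models' details:

import java.util.Random;

// Piece-selection policies for a peer that still needs some of pieces [0..n).
public class PieceSelection {
    // In-Order: always fetch the earliest missing piece
    static int inOrder(boolean[] have) {
        for (int i = 0; i < have.length; i++) if (!have[i]) return i;
        return -1; // all pieces present
    }
    // Rarest-First: fetch the missing piece with the fewest copies in the swarm
    static int rarestFirst(boolean[] have, int[] copiesInSwarm) {
        int best = -1;
        for (int i = 0; i < have.length; i++)
            if (!have[i] && (best < 0 || copiesInSwarm[i] < copiesInSwarm[best]))
                best = i;
        return best;
    }
    // Zipf: the k-th earliest missing piece gets weight 1/(k+1)^theta, biasing
    // toward in-order download without its sluggishness
    static int zipf(boolean[] have, double theta, Random rng) {
        double total = 0;
        double[] w = new double[have.length];
        for (int i = 0, k = 0; i < have.length; i++)
            if (!have[i]) { w[i] = Math.pow(k + 1, -theta); total += w[i]; k++; }
        double u = rng.nextDouble() * total;
        for (int i = 0; i < have.length; i++)
            if (!have[i] && (u -= w[i]) <= 0) return i;
        return -1;
    }
}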


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - NETWORKING
GREEDY GEOGRAPHIC ROUTING IN LARGE-SCALE SENSOR NETWORKS: A MINIMUM NETWORK DECOMPOSITION APPROACH
In geographic (or geometric) routing, messages are by default routed in a greedy manner: The current node always forwards a message to its neighbor node that is closest to the destination. Despite its simplicity and general efficiency, this strategy alone does not guarantee delivery due to the existence of local minima (or dead ends).
Overcoming local minima requires nodes to maintain extra nonlocal state or to use auxiliary mechanisms. We study how to facilitate greedy forwarding by using a minimum amount of such nonlocal state in topologically complex networks. Specifically, we investigate the problem of decomposing a given network into a minimum number of greedily routable components (GRCs), where greedy routing is guaranteed to work.
We approach it by considering an approximate version of the problem in a continuous domain, with a central concept called the greedily routable region (GRR). A full characterization of GRRs is given concerning their geometric properties and routing capability. We then develop simple approximation algorithms for the problem. These results lead to a practical routing protocol with a routing stretch below 7 in a continuous domain, and close to 1 in several realistic network settings.
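
For reference, default greedy forwarding is only a few lines. This Java sketch (with assumed node and coordinate types) returns null at a local minimum, which is exactly the failure the GRC decomposition above is designed to confine:

import java.util.List;

// Greedy geographic forwarding: hand the message to the neighbor strictly
// closest to the destination; report a dead end when no neighbor improves
// on the current node.
public class GreedyForwarding {
    record Node(int id, double x, double y) {}

    // returns the next hop, or null at a local minimum where greedy fails
    static Node nextHop(Node current, List<Node> neighbors, Node dest) {
        Node best = null;
        double bestDist = dist(current, dest);
        for (Node nb : neighbors) {
            double d = dist(nb, dest);
            if (d < bestDist) { bestDist = d; best = nb; }
        }
        return best; // null => stuck; a recovery mechanism must take over
    }

    static double dist(Node a, Node b) {
        return Math.hypot(a.x() - b.x(), a.y() - b.y());
    }
}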


*------------*------------*------------*------------*------------*------------*

DOMAIN - NETWORKING
ESM: EFFICIENT AND SCALABLE DATA CENTER MULTICAST ROUTING
Multicast benefits group communications in saving network traffic and improving application throughput, both of which are important for data center applications. However, the technical trend of data center design poses new challenges for efficient and scalable multicast routing. First, the densely connected networks make traditional receiver-driven multicast routing protocols inefficient in multicast tree formation.
Second, it is quite difficult for the low-end switches widely used in data centers to hold the routing entries of massive multicast groups. In this paper, we propose ESM, an efficient and scalable multicast routing scheme for data center networks.
ESM addresses the challenges above by exploiting the features of modern data center networks. Based on the regular topology of data centers, ESM uses a source-to-receiver expansion approach to build efficient multicast trees, excluding many unnecessary intermediate switches used in receiver-driven multicast routing.
For scalable multicast routing, ESM combines both in-packet Bloom Filters and in-switch entries to trade off the number of multicast groups supported against the additional bandwidth overhead. Simulations show that ESM saves 40%-50% of network traffic and doubles the application throughput compared to receiver-driven multicast routing, and that the combination routing scheme significantly reduces the number of in-switch entries required. We implement ESM on a Linux platform.
The experimental results further demonstrate that ESM supports online tree building well for large-scale groups with churn, and that the overhead of the combined forwarding engine is lightweight.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - NETWORKING
EFFICIENT SCHEDULING FOR PERIODIC AGGREGATION QUERIES IN MULTIHOP SENSOR NETWORKS
In this paper, we study periodic query scheduling for data aggregation with minimum delay under various wireless interference models. Given a set of periodic aggregation queries, each query has its own period and a subset of source nodes containing the data.
We first propose a family of efficient and effective real-time scheduling protocols that can answer every job of each query task within a relative delay under resource constraints by addressing the following tightly coupled tasks: routing, transmission plan constructions, node activity scheduling, and packet scheduling. Based on our protocol design, we further propose schedulability test schemes to efficiently and effectively test whether, for a set of queries, each query job can be finished within a finite delay.
Our theoretical analysis shows that our methods achieve at least a constant fraction of the maximum possible total utilization for query tasks, where the constant depends on the wireless interference model. We also conduct extensive simulations to validate the proposed protocol and evaluate its practical performance. The simulations corroborate our theoretical analysis.


*------------*------------*------------*------------*------------*------------*

DOMAIN - NETWORKING
DIFFERENTIATED QUALITY-OF-RECOVERY IN SURVIVABLE OPTICAL MESH NETWORKS USING p-STRUCTURES
This paper investigates design methods for protection schemes in survivable WDM networks that use preconfigured protection structures (p-structures) in order to provide different quality-of-recovery (QoR) classes within 100% resilient single-link protection schemes.
QoR differentiation is a practical and effective approach for striking different balances among protection cost, recovery delay, and management complexity.
Based on the degree of pre-cross connectivity of the protection structures, we develop three design approaches for shared protection capacity schemes based on the following: 1) fully pre-cross-connected p-structures; 2) partially pre-cross-connected p-structures; and 3) dynamically reconfigured p-structures. In order to identify the optimal combinations of protection structures to meet the requirements of the three QoR classes, we use a column generation (CG) model that we solve using large-scale optimization techniques.
Our CG decomposition approach is based on separating the design and selection processes of the protection structures. In the design process, the shape and protection capability of each p-structure is decided dynamically during the selection process depending on the network topology and the targeted QoR parameters.
Extensive experiments are carried out on several data instances with different design constraints in order to measure the protection capacity cost and the recovery delay for the three QoR classes.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - NETWORKING
DESIGN OF WIRELESS SENSOR NETWORKS FOR MOBILE TARGET DETECTION
We consider surveillance applications through wireless sensor networks (WSNs) where the areas to be monitored are fully accessible and the WSN topology can be planned a priori to maximize application efficiency.
We propose an optimization framework for selecting the positions of wireless sensors to detect mobile targets traversing a given area. By leveraging the concept of path exposure as a measure of detection quality, we propose two problem versions: the minimization of the sensor installation cost while guaranteeing a minimum exposure, and the maximization of the exposure of the least-exposed path subject to a budget on the sensor installation cost.
We present compact mixed-integer linear programming formulations for these problems that can be solved to optimality for reasonably sized network instances. Moreover, we develop Tabu Search heuristics that are able to provide near-optimal solutions for the same instances in short computing times and also tackle large instances.
The basic versions are extended to account for constraints on wireless connectivity as well as heterogeneous devices and nonuniform sensing. Finally, we analyze an enhanced exposure definition based on mobile target detection probability.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - NETWORKING
CONGESTION-DEPENDENT PRICING AND FORWARD CONTRACTS FOR COMPLEMENTARY SEGMENTS OF A COMMUNICATION NETWORK
Congestion-dependent pricing is a form of traffic management that ensures the efficient allocation of bandwidth between users and applications. As the unpredictability of congestion prices creates revenue uncertainty for network providers and cost uncertainty for users, it has been suggested that forward contracts could be used to manage these risks.
We develop a novel game-theoretic model of a multiprovider communication network with two complementary segments and investigate whether forward contracts would be adopted by service providers.
Service on the upstream segment is provided by a single Internet service provider (ISP) and priced dynamically to maximize profit, while several smaller ISPs sell connectivity on the downstream network segment, with the advance possibility of entering into forward contracts with their users for some of their capacity.
We show that the equilibrium forward contracting volumes are necessarily asymmetric, with one downstream provider entering into fewer forward contracts than the other competitors, thus ensuring a high subsequent downstream price level.
In practice, network providers will choose the extent of forward contracting strategically based not only on their risk tolerance, but also on the market structure in the interprovider network and their peers’ actions.


*------------*------------*------------*------------*------------*------------*

DOMAIN - NETWORKING
DECLARATIVE POLICY-BASED ADAPTIVE MOBILE AD HOC NETWORKING
This paper presents DAWN, a declarative platform that creates highly adaptive policy-based mobile ad hoc network (MANET) protocols. DAWN leverages declarative networking techniques to achieve extensible routing and forwarding using declarative languages. We make the following contributions.
First, we demonstrate that traditional MANET protocols can be expressed concisely as declarative networks, and that policy-driven adaptation can be specified in the same language to dictate the dynamic selection of different protocols based on various network and traffic conditions.
Second, we propose interprotocol forwarding techniques that ensure packets are able to seamlessly traverse clusters of nodes running different protocols selected based on their respective policies.
Third, we have developed a full-fledged implementation of DAWN using the RapidNet declarative networking system. We experimentally validate a variety of policy-based adaptive MANETs in various dynamic settings using a combination of ns-3 simulations and deployment on the ORBIT testbed.
Our experimental results demonstrate that hybrid protocols developed using DAWN outperform traditional MANET routing protocols and are able to flexibly and dynamically adapt their routing mechanisms to achieve a good tradeoff between bandwidth utilization and route quality. We further demonstrate DAWN's capabilities to achieve interprotocol forwarding across different protocols


*------------*------------*------------*------------*------------*------------*

DOMAIN - NETWORKING
CONCISE LOOKUP TABLES FOR IPV4 AND IPV6 LONGEST PREFIX MATCHING IN SCALABLE ROUTERS
We present a distinct longest prefix matching (LPM) lookup scheme able to achieve exceedingly concise lookup tables (CoLT), suitable for scalable routers.
Based on unified hash tables for handling both IPv4 and IPv6 simultaneously, CoLT excels over previous mechanisms in: 1) lower on-chip storage for lookup tables; 2) simpler table formats that enjoy richer prefix aggregation and easier implementation; and 3) most importantly, being the only design deemed able to accommodate both IPv4 and IPv6 addresses uniformly and effectively.
As its hash tables permit multiple possible buckets to hold each prefix (following a migration rule to avoid false positives altogether), CoLT exhibits the best memory efficiency and can launch parallel searches over tables during every LPM lookup, involving fewer cycles per lookup when on-chip memory is used to implement hash tables.
With 16 (or 32) on-chip SRAM blocks clocked at 500 MHz (achievable in today's 65-nm technology), it takes 2 (or 1.6) cycles on average to complete a lookup, yielding 250 (or 310) million packets per second (MPPS) mean throughput. Being hash-oriented, CoLT well supports incremental table updates, besides its high table utilization and lookup throughput.
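
For intuition, the core of a hash-based LPM lookup can be sketched with one hash table per prefix length, probed from longest to shortest. The Java sketch below illustrates only that baseline; CoLT's unified IPv4/IPv6 tables, multiple candidate buckets, and migration rule go well beyond it.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch: hash-based longest prefix matching for IPv4 with one
// table per prefix length, probed longest-first. Not CoLT itself.
public class HashLpm {
    // tables.get(len) maps the top 'len' bits of an address to a next hop.
    private final List<Map<Integer, String>> tables = new ArrayList<>();

    public HashLpm() {
        for (int len = 0; len <= 32; len++) tables.add(new HashMap<>());
    }

    private static int topBits(int addr, int len) {
        return len == 0 ? 0 : addr >>> (32 - len);
    }

    public void insert(int addr, int len, String nextHop) {
        tables.get(len).put(topBits(addr, len), nextHop);
    }

    public String lookup(int addr) {
        for (int len = 32; len >= 0; len--) {          // longest match first
            String hop = tables.get(len).get(topBits(addr, len));
            if (hop != null) return hop;
        }
        return null;                                   // no matching prefix
    }

    public static void main(String[] args) {
        HashLpm t = new HashLpm();
        t.insert(0xC0A80000, 16, "if1");               // 192.168.0.0/16
        t.insert(0xC0A80100, 24, "if2");               // 192.168.1.0/24
        System.out.println(t.lookup(0xC0A80105));      // if2: longer prefix wins
    }
}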


*------------*------------*------------*------------*------------*------------*

DOMAIN – NETWORKING
BALANCING RELIABILITY AND UTILIZATION IN DYNAMIC SPECTRUM ACCESS
Future wireless networks will dynamically access spectrum to maximize its utilization. Conventional design of dynamic spectrum access focuses on maximizing spectrum utilization, but faces the problem of degraded reliability due to unregulated demands and access behaviors. Without providing proper reliability guarantee, dynamic spectrum access is unacceptable to many infrastructure networks and services.
In this paper, we propose SPARTA, a new architecture for dynamic spectrum access that balances access reliability and spectrum utilization. SPARTA includes two complementary techniques: proactive admission control, performed by a central entity to determine the set of wireless nodes to be supported using only statistical information about their spectrum demands, and online adaptation, performed by admitted wireless nodes to adjust their instantaneous spectrum usage to time-varying demand.
Using both theoretical analysis and simulation, we show that SPARTA fulfills the reliability requirements while dynamically multiplexing spectrum demands to improve utilization.
Compared to conventional solutions, SPARTA improves spectrum utilization by 80%–200%. Finally, SPARTA also allows service providers to explore the tradeoff between utilization and reliability to make the best use of the spectrum. To the best of our knowledge, our work is the first to identify and address such a tradeoff


*------------*------------*------------*------------*------------*------------*

DOMAIN – NETWORKING
ADAPTIVE SELECTIVE VERIFICATION: AN EFFICIENT ADAPTIVE COUNTERMEASURE TO THWART DOS ATTACKS
Denial-of-service (DoS) attacks are considered within the province of a shared channel model in which attack rates may be large but are bounded and client request rates vary within fixed bounds. In this setting, it is shown that clients can adapt effectively to an attack by increasing their request rate based on timeout windows to estimate attack rates.
The server will be able to process client requests with high probability while pruning out most of the attack by selective random sampling. The protocol introduced here, called Adaptive Selective Verification (ASV), is shown to use bandwidth efficiently and does not require any server state or assumptions about network congestion.
The main results of the paper are a formulation of optimal performance and a proof that ASV is optimal


*------------*------------*------------*------------*------------*------------*
 
DOMAIN – NETWORKING
A THEORY FOR THE CONNECTIVITY DISCOVERED BY ROUTING PROTOCOLS
Route-vector protocols, such as the Border Gateway Protocol (BGP), have nodes elect and exchange routes in order to discover paths over which to send traffic. We ask the following: What is the minimum number of links whose failure prevents a route-vector protocol from finding such paths?
The answer is not obvious because routing policies prohibit some paths from carrying traffic and because, on top of that, a route-vector protocol may hide paths the routing policies would allow. We develop an algebraic theory to address the above and related questions.
In particular, we characterize a broad class of routing policies for which we can compute in polynomial time the minimum number of links whose failure leaves a route-vector protocol without a communication path from one given node to another.
The theory is applied to a publicly available description of the Internet topology to quantify how much of its intrinsic connectivity is lost due to the traditional customer–provider and peer–peer routing policies, and how much can be regained with simple alternative policies


*------------*------------*------------*------------*------------*------------*

2012, Java IEEE Project Abstracts - Part 2

JAVA IEEE 2012 PROJECT ABSTRACTS

DOMAIN - KNOWLEDGE AND DATA ENGINEERING
PROTECTING LOCATION PRIVACY AGAINST LOCATION-DEPENDENT ATTACKS IN MOBILE SERVICES
Privacy protection has recently received considerable attention in location-based services. A large number of location cloaking algorithms have been proposed for protecting the location privacy of mobile users. In this paper, we consider the scenario where different location-based query requests are continuously issued by mobile users while they are moving.
We show that most of the existing k-anonymity location cloaking algorithms are concerned with snapshot user locations only and cannot effectively prevent location-dependent attacks when users' locations are continuously updated. Therefore, adopting both location k-anonymity and cloaking granularity as privacy metrics, we propose a new incremental clique-based cloaking algorithm, called ICliqueCloak, to defend against location-dependent attacks.
The main idea is to incrementally maintain maximal cliques needed for location cloaking in an undirected graph that takes into consideration the effect of continuous location updates. Thus, a qualified clique can be quickly identified and used to generate the cloaked region when a new request arrives.
The efficiency and effectiveness of the proposed ICliqueCloak algorithm are validated by a series of carefully designed experiments. The experimental results also show that the price paid for defending against location-dependent attacks is small
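
For flavor, the Java sketch below builds a cloaked region as the bounding box covering a user and its k−1 nearest neighbors; this is a generic k-anonymity cloak for illustration only, not ICliqueCloak's incremental maximal-clique maintenance against location-dependent attacks.

import java.util.Comparator;
import java.util.List;

// Minimal sketch: a cloaked region as the bounding box covering a user
// and its k-1 nearest neighbors. A generic k-anonymity cloak, not the
// paper's ICliqueCloak algorithm.
public class SimpleCloak {
    record User(String id, double x, double y) {}
    record Box(double minX, double minY, double maxX, double maxY) {}

    static Box cloak(User query, List<User> others, int k) {
        List<User> nearest = others.stream()
            .sorted(Comparator.comparingDouble(
                (User u) -> Math.hypot(u.x() - query.x(), u.y() - query.y())))
            .limit(k - 1)
            .toList();
        double minX = query.x(), minY = query.y();
        double maxX = query.x(), maxY = query.y();
        for (User u : nearest) {
            minX = Math.min(minX, u.x()); maxX = Math.max(maxX, u.x());
            minY = Math.min(minY, u.y()); maxY = Math.max(maxY, u.y());
        }
        return new Box(minX, minY, maxX, maxY);   // region covering k users
    }

    public static void main(String[] args) {
        List<User> others = List.of(new User("b", 1, 1), new User("c", 2, 0),
                                    new User("d", 9, 9));
        System.out.println(cloak(new User("a", 0, 0), others, 3));
    }
}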


*------------*------------*------------*------------*------------*------------*

DOMAIN - KNOWLEDGE AND DATA ENGINEERING
OPTIMIZING THE CALCULATION OF CONDITIONAL PROBABILITY TABLES IN HYBRID BAYESIAN NETWORKS USING BINARY FACTORIZATION
Reducing the computational complexity of inference in Bayesian Networks (BNs) is a key challenge. Current algorithms for inference convert a BN to a junction tree structure made up of clusters of the BN nodes and the resulting complexity is time exponential in the size of a cluster.
The need to reduce the complexity is especially acute where the BN contains continuous nodes. We propose a new method for optimizing the calculation of Conditional Probability Tables (CPTs) involving continuous nodes, approximated in Hybrid Bayesian Networks (HBNs), using an approximation algorithm called dynamic discretization.
We present an optimized solution to this problem involving binary factorization of the arithmetical expressions declared to generate the CPTs for continuous nodes for deterministic functions and statistical distributions.
The proposed algorithm is implemented and tested in a commercial Hybrid Bayesian Network software package and the results of the empirical evaluation show significant performance improvement over unfactorized models


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - KNOWLEDGE AND DATA ENGINEERING
MAXIMUM AMBIGUITY-BASED SAMPLE SELECTION IN FUZZY DECISION TREE INDUCTION
Sample selection chooses a number of representative samples from a large database so that a learning algorithm incurs a reduced computational cost and achieves improved learning accuracy. This paper gives a new sample selection mechanism, i.e., maximum ambiguity-based sample selection in fuzzy decision tree induction.
Compared with the existing sample selection methods, this mechanism selects the samples based on the principle of maximal classification ambiguity. The major advantage of this mechanism is that the adjustment of the fuzzy decision tree is minimized when adding selected samples to the training set.
This advantage is confirmed via the theoretical analysis of the leaf-nodes’ frequency in the decision trees. The decision tree generated from the selected samples usually has a better performance than that from the original database.
Furthermore, experimental results show that the generalization ability of the tree based on our selection mechanism is far superior to that based on a random selection mechanism


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - KNOWLEDGE AND DATA ENGINEERING
MODEL-BASED METHOD FOR PROJECTIVE CLUSTERING
Clustering high-dimensional data is a major challenge due to the curse of dimensionality. To solve this problem, projective clustering has been defined as an extension to traditional clustering that attempts to find projected clusters in subsets of the dimensions of a data space.
In this paper, a probability model is first proposed to describe projected clusters in high-dimensional data space. Then, we present a model-based algorithm for fuzzy projective clustering that discovers clusters with overlapping boundaries in various projected subspaces.
The suitability of the proposal is demonstrated in an empirical study on synthetic data sets and several widely used real-world data sets


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - KNOWLEDGE AND DATA ENGINEERING
EXTENDING RECOMMENDER SYSTEMS FOR DISJOINT USER/ITEM SETS: THE CONFERENCE RECOMMENDATION PROBLEM
In this paper, we describe the problem of recommending conference sessions to attendees and show how novel extensions to traditional model-based recommender systems, as suggested in Adomavicius and Tuzhilin [1], can address this problem.
We introduce Recommendation Engine by CONjoint Decomposition of ITems and USers (RECONDITUS), a technique that extends preference-based recommender systems to recommend items from a new disjoint set to users from a new disjoint set. The assumption is that preferences exhibited by users with known usage behavior (e.g., past conference session attendance), which can be abstracted by projections of user and item matrices, will be similar to those of new (different) users where the basic environment and item domain are the same (e.g., a new conference). RECONDITUS requires no item ratings, but operates on observed user behavior such as past conference session attendance.
The RECONDITUS paradigm consists of projections of both user and item data, and the learning of relationships in projected space. Once established, the relationships enable predicting new relationships and provide associated recommendations.
The approach can encompass several traditional data mining problems where both clustering and prediction are necessary.


*------------*------------*------------*------------*------------*------------*

DOMAIN - KNOWLEDGE AND DATA ENGINEERING
ENERGY-EFFICIENT REVERSE SKYLINE QUERY PROCESSING OVER WIRELESS SENSOR NETWORKS
Reverse skyline query plays an important role in many sensing applications, such as environmental monitoring, habitat monitoring, and battlefield monitoring. Due to the limited power supplies of wireless sensor nodes, the existing centralized approaches, which do not consider energy efficiency, cannot be directly applied to the distributed sensor environment.
In this paper, we investigate how to process reverse skyline queries energy efficiently in wireless sensor networks. We first theoretically analyze the properties of the reverse skyline query and propose a skyband-based approach to tackle reverse skyline query answering over wireless sensor networks.
Then, an energy-efficient approach is proposed to minimize the communication cost among sensor nodes of evaluating range reverse skyline query. Moreover, optimization mechanisms to improve the performance of multiple reverse skylines are also discussed. Extensive experiments on both real-world data and synthetic data have demonstrated the efficiency and effectiveness of our proposed approaches with various experimental settings
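
At the core of any (reverse) skyline computation is the dominance test. A minimal Java sketch, assuming smaller values are preferred in every dimension:

// Minimal sketch: the dominance test underlying (reverse) skyline
// processing. Point p dominates q if p is no worse in all dimensions
// and strictly better in at least one; smaller is assumed better.
public class Dominance {
    static boolean dominates(double[] p, double[] q) {
        boolean strictlyBetter = false;
        for (int i = 0; i < p.length; i++) {
            if (p[i] > q[i]) return false;        // worse somewhere: no
            if (p[i] < q[i]) strictlyBetter = true;
        }
        return strictlyBetter;
    }

    public static void main(String[] args) {
        System.out.println(dominates(new double[]{1, 2}, new double[]{2, 3})); // true
        System.out.println(dominates(new double[]{1, 4}, new double[]{2, 3})); // false
    }
}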


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - KNOWLEDGE AND DATA ENGINEERING
ENERGY EFFICIENT SCHEMES FOR ACCURACY-GUARANTEED SENSOR DATA AGGREGATION USING SCALABLE COUNTING
Sensor networks have received considerable attention in recent years and are employed in many applications. In these applications, statistical aggregates such as Sum over the readings of a group of sensor nodes are often needed. One challenge for computing sensor data aggregates comes from communication failures, which are common in sensor networks.
To enhance the robustness of the aggregate computation, multipath-based aggregation is often used. However, the multipath-based aggregation suffers from the problem of overcounting sensor readings. The approaches using the multipath-based aggregation therefore need to incorporate techniques that avoid overcounting sensor readings. In this paper, we present a novel technique named scalable counting for efficiently avoiding the overcounting problem.
We focus on having an (ε, δ) accuracy guarantee for computing an aggregate, which ensures that the error in computing the aggregate is within a factor of ε with probability (1 − δ). Our schemes using the scalable counting technique efficiently compute the aggregates under a given accuracy guarantee.
We provide theoretical analyses that show the advantages of the scalable counting technique over previously proposed techniques. Furthermore, extensive experiments are made to validate the theoretical results and manifest the advantages of using the scalable counting technique for sensor data aggregation
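
One classic way to make multipath aggregation duplicate-insensitive is a Flajolet-Martin-style sketch, shown below in Java for distinct counting. It illustrates the overcounting-avoidance property only and is not necessarily the paper's scalable counting construction.

// Minimal sketch: a Flajolet-Martin-style duplicate-insensitive counter.
// Re-inserting the same reading over multiple paths cannot inflate the
// estimate, which is exactly the property multipath aggregation needs.
public class FmSketch {
    private long bitmap = 0L;

    // Insert a reading's unique id; duplicates map to the same bit.
    void add(long id) {
        int rho = Long.numberOfTrailingZeros(hash(id)); // lowest 1-bit position
        bitmap |= 1L << Math.min(rho, 63);
    }

    // Classic FM estimate: R = lowest unset bit, n ~ 2^R / 0.77351.
    double estimate() {
        int r = Long.numberOfTrailingZeros(~bitmap);
        return Math.pow(2, r) / 0.77351;
    }

    private static long hash(long x) {                  // 64-bit mix hash
        x ^= x >>> 33; x *= 0xff51afd7ed558ccdL;
        x ^= x >>> 33; x *= 0xc4ceb9fe1a85ec53L;
        return x ^ (x >>> 33);
    }

    public static void main(String[] args) {
        FmSketch s = new FmSketch();
        for (long i = 1; i <= 1000; i++) { s.add(i); s.add(i); } // re-adds are free
        System.out.printf("distinct estimate = %.0f (true: 1000)%n", s.estimate());
    }
}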


*------------*------------*------------*------------*------------*------------*

DOMAIN - KNOWLEDGE AND DATA ENGINEERING
EFFICIENT AND PROGRESSIVE ALGORITHMS FOR DISTRIBUTED SKYLINE QUERIES OVER UNCERTAIN DATA
The skyline operator has received considerable attention from the database community due to its importance in many applications, including multicriteria decision making, preference answering, and so forth. In many applications, uncertain data inherently exist: data collected from different sources in distributed locations usually carry imprecise measurements and thus exhibit a degree of uncertainty.
Taking into account the network delay and economic cost associated with sharing and communicating large amounts of distributed data over the Internet, an important problem in this scenario is to retrieve the global skyline tuples from all the distributed local sites with minimum communication cost.
Building on the well-known notion of the probabilistic skyline query over centralized uncertain data, in this paper we propose the notion of distributed skyline queries over uncertain data. Furthermore, two communication- and computation-efficient algorithms are proposed to retrieve the qualified skylines from distributed local sites.
Extensive experiments have been conducted to verify the efficiency, effectiveness, and progressiveness of our algorithms on both synthetic and real data sets


*------------*------------*------------*------------*------------*------------*

DOMAIN - KNOWLEDGE AND DATA ENGINEERING
EDISC: A CLASS-TAILORED DISCRETIZATION TECHNIQUE FOR RULE-BASED CLASSIFICATION
Discretization is a critical component of data mining whereby continuous attributes of a data set are converted into discrete ones by creating intervals either before or during learning. There are many good reasons for preprocessing discretization, such as increased learning efficiency and classification accuracy, comprehensibility of data mining results, as well as the inherent limitation of a great majority of learning algorithms to handle only discrete data.
Many preprocessing discretization techniques have been proposed to date, of which the Entropy-MDLP discretization has been accepted as by far the most effective in the context of both decision tree learning and rule induction algorithms. This paper presents a new discretization technique EDISC which utilizes the entropy-based principle but takes a class-tailored approach to discretization.
The technique is applicable in general to any covering algorithm, including those that use the class-per-class rule induction methodology such as CN2 as well as those that use a seed example during the learning phase, such as the RULES family.
Experimental evaluation has demonstrated the efficiency and effectiveness of the technique as a preprocessing discretization procedure for CN2 as well as RULES-7, the latest algorithm in the RULES family of inductive learning algorithms.
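
For intuition, the entropy criterion underlying such discretizers can be sketched directly. The Java snippet below picks the cut point on a sorted attribute that minimizes weighted class entropy; EDISC's class-tailored interval construction builds on this quantity but is not reproduced here.

import java.util.Arrays;

// Minimal sketch: choose the cut point on a continuous attribute that
// minimizes weighted class entropy, the quantity underlying Entropy-MDLP
// discretization. Labels are assumed binary (0/1) for brevity.
public class EntropySplit {
    static double entropy(int[] classCounts) {
        int n = Arrays.stream(classCounts).sum();
        double h = 0;
        for (int c : classCounts) {
            if (c == 0) continue;
            double p = (double) c / n;
            h -= p * (Math.log(p) / Math.log(2));
        }
        return h;
    }

    // values[i] carries binary label labels[i]; values must be sorted.
    static double bestCut(double[] values, int[] labels) {
        int n = values.length;
        double cut = Double.NaN, best = Double.MAX_VALUE;
        for (int i = 1; i < n; i++) {
            if (values[i] == values[i - 1]) continue;
            int[] left = new int[2], right = new int[2];
            for (int j = 0; j < n; j++) (j < i ? left : right)[labels[j]]++;
            double score = (double) i / n * entropy(left)
                         + (double) (n - i) / n * entropy(right);
            if (score < best) {
                best = score;
                cut = (values[i - 1] + values[i]) / 2;   // midpoint boundary
            }
        }
        return cut;
    }

    public static void main(String[] args) {
        double[] v = {1.0, 1.5, 2.0, 6.0, 6.5, 7.0};
        int[] y =    {0,   0,   0,   1,   1,   1};
        System.out.println("best cut = " + bestCut(v, y)); // 4.0: a pure split
    }
}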


*------------*------------*------------*------------*------------*------------*

DOMAIN - KNOWLEDGE AND DATA ENGINEERING
DISCRIMINATIVE FEATURE SELECTION BY NONPARAMETRIC BAYES ERROR MINIMIZATION
Feature selection is fundamental to knowledge discovery from massive amounts of high-dimensional data. In an effort to establish theoretical justification for feature selection algorithms, this paper presents a theoretically optimal criterion, namely, the discriminative optimal criterion (DoC) for feature selection.
Compared with the existing representative optimal criterion (RoC, [21]) which retains maximum information for modeling the relationship between input and output variables, DoC is pragmatically advantageous because it attempts to directly maximize the classification accuracy and naturally reflects the Bayes error in the objective.
To make DoC computationally tractable for practical tasks, we propose an algorithmic framework, which selects a subset of features by minimizing the Bayes error rate estimated by a nonparametric estimator.
A set of existing algorithms as well as new ones can be derived naturally from this framework. As an example, we show that the Relief algorithm [20] greedily attempts to minimize the Bayes error estimated by the k-Nearest-Neighbor (kNN) method.
This new interpretation insightfully reveals the secret behind the family of margin-based feature selection algorithms [28], [14] and also offers a principled way to establish new alternatives for performance improvement.
In particular, by exploiting the proposed framework, we establish the Parzen-Relief (P-Relief) algorithm based on the Parzen window estimator, and the MAP-Relief (M-Relief) algorithm, which integrates the label distribution into the max-margin objective to effectively handle imbalanced and multiclass data. Experiments on various benchmark data sets demonstrate the effectiveness of the proposed algorithms.
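
To make the Relief connection concrete, the Java sketch below implements the classic Relief weight update (nearest hit and miss per instance). It is illustrative only; the paper's P-Relief and M-Relief variants modify this scheme.

// Minimal sketch of the classic Relief weight update: for each instance,
// find its nearest same-class neighbor (hit) and nearest other-class
// neighbor (miss), then reward features that separate the miss and
// penalize features that separate the hit. Assumes at least two
// instances per class.
public class Relief {
    static double[] weights(double[][] x, int[] y) {
        int n = x.length, d = x[0].length;
        double[] w = new double[d];
        for (int i = 0; i < n; i++) {
            int hit = -1, miss = -1;
            double hitDist = Double.MAX_VALUE, missDist = Double.MAX_VALUE;
            for (int j = 0; j < n; j++) {
                if (j == i) continue;
                double dist = 0;
                for (int f = 0; f < d; f++) dist += Math.abs(x[i][f] - x[j][f]);
                if (y[j] == y[i] && dist < hitDist) { hitDist = dist; hit = j; }
                if (y[j] != y[i] && dist < missDist) { missDist = dist; miss = j; }
            }
            for (int f = 0; f < d; f++)
                w[f] += Math.abs(x[i][f] - x[miss][f]) - Math.abs(x[i][f] - x[hit][f]);
        }
        return w;
    }

    public static void main(String[] args) {
        double[][] x = {{0, 5}, {0.1, 9}, {1, 4}, {0.9, 8}}; // feature 0 is informative
        int[] y = {0, 0, 1, 1};
        double[] w = weights(x, y);
        System.out.printf("w = [%.2f, %.2f]%n", w[0], w[1]); // feature 0 scores higher
    }
}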


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - KNOWLEDGE AND DATA ENGINEERING
DISCOVERY OF DELTA CLOSED PATTERNS AND NONINDUCED PATTERNS FROM SEQUENCES
Discovering patterns from sequence data has significant impact in many aspects of science and society, especially in genomics and proteomics. Here we consider multiple strings as input sequence data and substrings as patterns. In the real world, usually a large set of patterns could be discovered yet many of them are redundant, thus degrading the output quality.
This paper improves the output quality by removing two types of redundant patterns. First, the notion of delta tolerance closed itemset is employed to remove redundant patterns that are not delta closed. Second, the concept of statistically induced patterns is proposed to capture redundant patterns which seem to be statistically significant yet their significance is induced by their strong significant subpatterns.
It is computationally intense to mine these nonredundant patterns (delta closed patterns and noninduced patterns). To efficiently discover these patterns in very large sequence data, two efficient algorithms have been developed through innovative use of suffix trees.
Three sets of experiments were conducted to evaluate their performance. They render excellent results when applied to genomics. The experiments confirm that the proposed algorithms are efficient and that they produce a relatively small set of patterns which reveal interesting information in the sequences


*------------*------------*------------*------------*------------*------------*

DOMAIN - KNOWLEDGE AND DATA ENGINEERING
DATA MINING FOR XML QUERY-ANSWERING SUPPORT
Extracting information from semistructured documents is a very hard task, and is going to become more and more critical as the amount of digital information available on the Internet grows. Indeed, documents are often so large that the data set returned as answer to a query may be too big to convey interpretable knowledge.
In this paper, we describe an approach based on Tree-Based Association Rules (TARs): mined rules, which provide approximate, intensional information on both the structure and the contents of Extensible Markup Language (XML) documents, and can be stored in XML format as well.
This mined knowledge is later used to provide:
1) a concise idea—the gist—of both the structure and the content of the XML document and
2) quick, approximate answers to queries. In this paper, we focus on the second feature. A prototype system and experimental results demonstrate the effectiveness of the approach


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - KNOWLEDGE AND DATA ENGINEERING
CLUSTERING WITH MULTIVIEWPOINT-BASED SIMILARITY MEASURE
All clustering methods have to assume some cluster relationship among the data objects that they are applied on. Similarity between a pair of objects can be defined either explicitly or implicitly. In this paper, we introduce a novel multiviewpoint-based similarity measure and two related clustering methods.
The major difference between a traditional dissimilarity/similarity measure and ours is that the former uses only a single viewpoint, which is the origin, while the latter utilizes many different viewpoints, which are objects assumed to not be in the same cluster with the two objects being measured. Using multiple viewpoints, more informative assessment of similarity could be achieved. Theoretical analysis and empirical study are conducted to support this claim.
Two criterion functions for document clustering are proposed based on this new measure. We compare them with several well-known clustering algorithms that use other popular similarity measures on various document collections to verify the advantages of our proposal.
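
A minimal Java sketch of the idea: measure the similarity of two documents from viewpoints assumed to lie outside their cluster, averaging dot products of difference vectors. The paper's exact measure and criterion functions may differ in detail.

import java.util.List;

// Minimal sketch: similarity of documents a and b measured from many
// viewpoints h assumed to lie outside their cluster, averaging the dot
// product of the difference vectors (a - h) and (b - h). A single-
// viewpoint measure corresponds to h fixed at the origin.
public class MultiViewpointSim {
    static double dot(double[] u, double[] v) {
        double s = 0;
        for (int i = 0; i < u.length; i++) s += u[i] * v[i];
        return s;
    }

    static double mvs(double[] a, double[] b, List<double[]> viewpoints) {
        double sum = 0;
        for (double[] h : viewpoints) {
            double[] da = new double[a.length], db = new double[b.length];
            for (int i = 0; i < a.length; i++) {
                da[i] = a[i] - h[i];
                db[i] = b[i] - h[i];
            }
            sum += dot(da, db);                 // similarity seen from viewpoint h
        }
        return sum / viewpoints.size();         // average over all viewpoints
    }

    public static void main(String[] args) {
        double[] a = {1, 0}, b = {0.9, 0.2};
        List<double[]> outside = List.of(new double[]{-1, -1}, new double[]{-2, 0});
        System.out.printf("MVS = %.3f%n", mvs(a, b, outside));
    }
}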


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - KNOWLEDGE AND DATA ENGINEERING
CONTINUOUS DETOUR QUERIES IN SPATIAL NETWORKS
We study the problem of finding the shortest route between two locations that includes a stopover of a given type. An example scenario of this problem is: “On the way to Bob’s place, Alice searches for a nearby take-away Italian restaurant to buy a pizza.” Assuming that Alice is interested in minimizing the total trip distance, this scenario can be modeled as a query where Alice’s current location (start) and Bob’s place (destination) function as query points.
Based on these two query points, we find the minimum detour object (MDO), i.e., a stopover that minimizes the sum of the distances: 1) from the start to the stopover, and 2) from the stopover to the destination. In a realistic location-based application environment, a user can be indecisive about committing to a particular detour option.
The user may wish to browse multiple (k) MDOs before making a decision. Furthermore, when a user moves, the k MDO results at one location may become obsolete. We propose a method for continuous detour query (CDQ) processing based on incremental construction of a shortest path tree.
We conducted experimental studies to compare the performance of our proposed method against two methods derived from existing k-nearest neighbor querying techniques using real road-network data sets. Experimental results show that our proposed method significantly outperforms the two competitive techniques
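
The MDO definition itself reduces to an argmin over candidate stopovers. A minimal Java sketch, with Euclidean distance standing in for the road-network shortest-path distance used in the paper:

import java.util.List;

// Minimal sketch: the minimum detour object minimizes
// d(start, o) + d(o, destination). Euclidean distance stands in here
// for the road-network shortest-path distance of the paper.
public class MinDetour {
    record Point(String name, double x, double y) {}

    static double d(Point a, Point b) {
        return Math.hypot(a.x() - b.x(), a.y() - b.y());
    }

    static Point mdo(Point start, Point dest, List<Point> candidates) {
        Point best = null;
        double bestCost = Double.MAX_VALUE;
        for (Point o : candidates) {
            double cost = d(start, o) + d(o, dest);  // total trip with stopover
            if (cost < bestCost) { bestCost = cost; best = o; }
        }
        return best;
    }

    public static void main(String[] args) {
        Point alice = new Point("start", 0, 0), bob = new Point("dest", 10, 0);
        List<Point> pizzerias = List.of(new Point("p1", 5, 1), new Point("p2", 2, 6));
        System.out.println(mdo(alice, bob, pizzerias).name()); // p1
    }
}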


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - KNOWLEDGE AND DATA ENGINEERING
ANOMALY DETECTION FOR DISCRETE SEQUENCES: A SURVEY
This survey attempts to provide a comprehensive and structured overview of the existing research for the problem of detecting anomalies in discrete/symbolic sequences. The objective is to provide a global understanding of the sequence anomaly detection problem and how existing techniques relate to each other.
The key contribution of this survey is the classification of the existing research into three distinct categories, based on the problem formulation that they are trying to solve. These problem formulations are: 1) identifying anomalous sequences with respect to a database of normal sequences; 2) identifying an anomalous subsequence within a long sequence; and 3) identifying a pattern in a sequence whose frequency of occurrence is anomalous.
We show how each of these problem formulations is characteristically distinct from each other and discuss their relevance in various application domains. We review techniques from many disparate and disconnected application domains that address each of these formulations.
Within each problem formulation, we group techniques into categories based on the nature of the underlying algorithm. For each category, we provide a basic anomaly detection technique, and show how the existing techniques are variants of the basic technique.
This approach shows how different techniques within a category are related or different from each other. Our categorization reveals new variants and combinations that have not been investigated before for anomaly detection.
We also provide a discussion of relative strengths and weaknesses of different techniques. We show how techniques developed for one problem formulation can be adapted to solve a different formulation, thereby providing several novel adaptations to solve the different problem formulations. We also highlight the applicability of the techniques that handle discrete sequences to other related areas such as online anomaly detection and time series anomaly detection


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - KNOWLEDGE AND DATA ENGINEERING
ADDING TEMPORAL CONSTRAINTS TO XML SCHEMA
If past versions of XML documents are retained, what becomes of the various integrity constraints defined in XML Schema on those documents?
This paper describes how to interpret such constraints as sequenced constraints, applicable at each point in time. We also consider how to add new variants that apply across time, so-called nonsequenced constraints.
Our approach supports temporal documents that vary over both valid and transaction time, whose schema can vary over transaction time. We do this by replacing the schema with a (possibly time-varying) temporal schema and replacing the document with a temporal document, both of which are upward compatible with conventional XML and with conventional tools like XMLLINT, which we have extended to support the temporal constraints introduced here


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - KNOWLEDGE AND DATA ENGINEERING
A UNIFIED PROBABILISTIC FRAMEWORK FOR NAME DISAMBIGUATION IN DIGITAL LIBRARY
Despite years of research, the name ambiguity problem remains largely unresolved. Outstanding issues include how to capture all information for name disambiguation in a unified approach, and how to determine the number of people K in the disambiguation process.
In this paper, we formalize the problem in a unified probabilistic framework, which incorporates both attributes and relationships. Specifically, we define a disambiguation objective function for the problem and propose a two-step parameter estimation algorithm.
We also investigate a dynamic approach for estimating the number of people K. Experiments show that our proposed framework significantly outperforms four baseline methods using clustering algorithms and two other previous methods. Experiments also indicate that the number K automatically found by our method is close to the actual number.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - KNOWLEDGE AND DATA ENGINEERING
A PERFORMANCE ANOMALY DETECTION AND ANALYSIS FRAMEWORK FOR DBMS DEVELOPMENT
Detecting performance anomalies and finding their root causes are tedious tasks requiring much manual work. Functionality enhancements in DBMS development, as in most software development, often introduce performance problems in addition to bugs.
To detect the problems as soon as they are introduced, which often happens during the early phases of a development cycle, we adopt performance regression testing early in the process. In this paper, we describe a framework that we developed to manage performance anomalies after establishing a set of conditions for a problem to be considered an anomaly.
The framework uses Statistical Process Control (SPC) charts to detect performance anomalies and differential profiling to identify their root causes. By automating the tasks within the framework we were able to remove most of the manual overhead in detecting anomalies and reduce the analysis time for identifying the root causes by about 90 percent in most cases.
The tools developed and deployed based on the framework allow continuous, automated daily monitoring of performance in addition to the usual functionality monitoring in our DBMS development
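
For flavor, the basic SPC check can be sketched in a few lines of Java: flag a run whose metric falls outside the baseline mean ± 3σ control limits. The framework's actual anomaly conditions and differential profiling are richer than this.

import java.util.Arrays;

// Minimal sketch: a Shewhart-style control chart check. A new measurement
// is flagged when it falls outside mean +/- 3 sigma of a baseline run set.
public class SpcCheck {
    static boolean isAnomaly(double[] baseline, double value) {
        double mean = Arrays.stream(baseline).average().orElseThrow();
        double var = Arrays.stream(baseline)
                           .map(v -> (v - mean) * (v - mean))
                           .average().orElseThrow();
        double sigma = Math.sqrt(var);
        return Math.abs(value - mean) > 3 * sigma;     // outside control limits
    }

    public static void main(String[] args) {
        double[] runtimesMs = {102, 98, 101, 99, 100, 103, 97}; // baseline runs
        System.out.println(isAnomaly(runtimesMs, 101)); // false: within limits
        System.out.println(isAnomaly(runtimesMs, 140)); // true: regression flagged
    }
}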


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - KNOWLEDGE AND DATA ENGINEERING
A LOOK-AHEAD APPROACH TO SECURE MULTIPARTY PROTOCOLS
Secure multiparty protocols have been proposed to enable noncolluding parties to cooperate without a trusted server. Even though such protocols prevent information disclosure other than the objective function, they are quite costly in computation and communication.
The high overhead motivates parties to estimate the utility that can be achieved as a result of the protocol beforehand. In this paper, we propose a look-ahead approach, specifically for secure multiparty protocols to achieve distributed k-anonymity, which helps parties to decide if the utility benefit from the protocol is within an acceptable range before initiating the protocol.
The look-ahead operation is highly localized and its accuracy depends on the amount of information the parties are willing to share. Experimental results show the effectiveness of the proposed methods


*------------*------------*------------*------------*------------*------------*

DOMAIN - KNOWLEDGE AND DATA ENGINEERING
A KNOWLEDGE-DRIVEN APPROACH TO ACTIVITY RECOGNITION IN SMART HOMES
This paper introduces a knowledge-driven approach to real-time, continuous activity recognition based on multisensor data streams in smart homes. The approach goes beyond the traditional data-centric methods for activity recognition in three ways.
First, it makes extensive use of domain knowledge in the life cycle of activity recognition. Second, it uses ontologies for explicit context and activity modeling and representation. Third and finally, it exploits semantic reasoning and classification for activity inferencing, thus enabling both coarse-grained and fine-grained activity recognition.
In this paper, we analyze the characteristics of smart homes and Activities of Daily Living (ADL) upon which we built both context and ADL ontologies. We present a generic system architecture for the proposed knowledge-driven approach and describe the underlying ontology-based recognition process. Special emphasis is placed on semantic subsumption reasoning algorithms for activity recognition.
The proposed approach has been implemented in a function-rich software system, which was deployed in a smart home research laboratory. We evaluated the proposed approach and the developed system through extensive experiments involving a number of various ADL use scenarios. An average activity recognition rate of 94.44 percent was achieved and the average recognition runtime per recognition operation was measured as 2.5 seconds.


*------------*------------*------------*------------*------------*------------*

DOMAIN - KNOWLEDGE AND DATA ENGINEERING
A FRAMEWORK FOR PERSONAL MOBILE COMMERCE PATTERN MINING AND PREDICTION
Due to a wide range of potential applications, research on mobile commerce has received a lot of interest from both industry and academia. One of the active topic areas is the mining and prediction of users' mobile commerce behaviors, such as their movements and purchase transactions.
In this paper, we propose a novel framework, called Mobile Commerce Explorer (MCE), for mining and prediction of mobile users' movements and purchase transactions in the context of mobile commerce.
The MCE framework consists of three major components: 1) a Similarity Inference Model (SIM) for measuring the similarities among stores and items, the two basic mobile commerce entities considered in this paper; 2) a Personal Mobile Commerce Pattern Mine (PMCP-Mine) algorithm for efficient discovery of mobile users' Personal Mobile Commerce Patterns (PMCPs); and 3) a Mobile Commerce Behavior Predictor (MCBP) for prediction of possible mobile user behaviors.
To the best of our knowledge, this is the first work that facilitates mining and prediction of mobile users' commerce behaviors in order to recommend stores and items previously unknown to a user. We perform an extensive experimental evaluation by simulation and show that our proposals produce excellent results


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - KNOWLEDGE AND DATA ENGINEERING
A DECISION-THEORETIC FRAMEWORK FOR NUMERICAL ATTRIBUTE VALUE RECONCILIATION
One of the major challenges of data integration is to resolve conflicting numerical attribute values caused by data heterogeneity.
In addressing this problem, existing approaches proposed in prior literature often ignore such data inconsistencies or resolve them in an ad hoc manner. In this study, we propose a decision-theoretic framework that resolves numerical value conflicts in a systematic manner.
The framework takes into consideration the consequences of incorrect numerical values and selects the value that minimizes the expected cost of errors for all data application problems under consideration. Experimental results show that significant savings can be achieved by adopting the proposed framework instead of ad hoc approaches
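
A minimal Java sketch of the decision-theoretic choice: among conflicting candidate values, pick the one minimizing expected error cost. The source-reliability probabilities and quadratic cost used here are illustrative assumptions, not the paper's calibrated models.

// Minimal sketch: resolve conflicting numerical values by choosing the
// candidate that minimizes expected error cost. The probabilities and
// the quadratic cost function are illustrative assumptions.
public class ValueReconciliation {
    static double reconcile(double[] candidates, double[] probs) {
        double best = Double.NaN, bestCost = Double.MAX_VALUE;
        for (double chosen : candidates) {
            double expectedCost = 0;
            for (int i = 0; i < candidates.length; i++) {
                double err = chosen - candidates[i];   // error if candidate i is true
                expectedCost += probs[i] * err * err;  // quadratic cost, weighted
            }
            if (expectedCost < bestCost) { bestCost = expectedCost; best = chosen; }
        }
        return best;
    }

    public static void main(String[] args) {
        double[] reported = {98.0, 100.0, 250.0};      // conflicting source values
        double[] trust    = {0.45, 0.45, 0.10};        // source reliability estimates
        System.out.println(reconcile(reported, trust)); // 100.0
    }
}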


*------------*------------*------------*------------*------------*------------*

DOMAIN - KNOWLEDGE AND DATA ENGINEERING
VISUAL ROLE MINING: A PICTURE IS WORTH A THOUSAND ROLES
This paper offers a new role engineering approach to Role-Based Access Control (RBAC), referred to as visual role mining. The key idea is to graphically represent user-permission assignments to enable quick analysis and elicitation of meaningful roles.
First, we formally define the problem by introducing a metric for the quality of the visualization. Then, we prove that finding the best representation according to the defined metric is an NP-hard problem. In turn, we propose two algorithms: ADVISER and EXTRACT.
The former is a heuristic used to best represent the user-permission assignments of a given set of roles. The latter is a fast probabilistic algorithm that, when used in conjunction with ADVISER, allows for visual elicitation of roles even in the absence of predefined roles.
Besides being rooted in sound theory, our proposal is supported by extensive simulations run over real data. Results confirm the quality of the proposal and demonstrate its viability in supporting role engineering decisions


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - KNOWLEDGE AND DATA ENGINEERING
SEGMENTATION AND SAMPLING OF MOVING OBJECT TRAJECTORIES BASED ON REPRESENTATIVENESS
Moving Object Databases (MOD), although ubiquitous, still call for methods that will be able to understand, search, analyze, and browse their spatiotemporal content. In this paper, we propose a method for trajectory segmentation and sampling based on the representativeness of the (sub)trajectories in the MOD.
In order to find the most representative subtrajectories, the following methodology is proposed. First, a novel global voting algorithm is performed, based on local density and trajectory similarity information. This method is applied for each segment of the trajectory, forming a local trajectory descriptor that represents line segment representativeness.
The sequence of this descriptor over a trajectory gives the voting signal of the trajectory, where high values correspond to the most representative parts. Then, a novel segmentation algorithm is applied to this signal that automatically estimates the number of partitions and the partition borders, identifying homogeneous partitions with respect to their representativeness.
Finally, a sampling method over the resulting segments yields the most representative subtrajectories in the MOD. Our experimental results on synthetic and real MODs verify the effectiveness of the proposed scheme, also in comparison with other sampling techniques.


*------------*------------*------------*------------*------------*------------*

DOMAIN - KNOWLEDGE AND DATA ENGINEERING
SATURN: RANGE QUERIES, LOAD BALANCING AND FAULT TOLERANCE IN DHT DATA SYSTEMS
In this paper, we present Saturn, an overlay architecture for large-scale data networks maintained over Distributed Hash Tables (DHTs) that efficiently processes range queries and ensures access load balancing and fault-tolerance.
Placing consecutive data values in neighboring peers is desirable in DHTs since it accelerates range query processing; however, such a placement is highly susceptible to load imbalances.
At the same time, DHTs may be susceptible to node departures/failures and high data availability and fault tolerance are significant issues. Saturn deals effectively with these problems through the introduction of a novel multiple ring, order-preserving architecture. The use of a novel order-preserving hash function ensures fast range query processing.
Replication across and within data rings (termed vertical and horizontal replication) forms the foundation over which our mechanisms are developed, ensuring query load balancing and fault tolerance, respectively.
Our detailed experimentation study shows strong gains in range query processing efficiency, access load balancing, and fault tolerance, with low replication overheads.
The significance of Saturn is not only that it effectively tackles all three issues together—i.e., supporting range queries, ensuring load balancing, and providing fault tolerance over DHTs—but also that it can be applied on top of any order-preserving DHT enabling it to dynamically handle replication and, thus, to trade off replication costs for fair load distribution and fault tolerance.

2012, Java IEEE Project Abstracts - Part 1

JAVA IEEE 2012 PROJECT ABSTRACTS

DOMAIN – CLOUD COMPUTING
SLA-BASED OPTIMIZATION OF POWER AND MIGRATION COST IN CLOUD COMPUTING
Cloud computing systems (or hosting datacenters) have attracted a lot of attention in recent years. Utility computing, reliable data storage, and infrastructure-independent computing are example applications of such systems. Electrical energy cost of a cloud computing system is a strong function of the consolidation and migration techniques used to assign incoming clients to existing servers.
Moreover, each client typically has a service level agreement (SLA), which specifies constraints on performance and/or quality of service that it receives from the system. These constraints result in a basic trade-off between the total energy cost and client satisfaction in the system. In this paper, a resource allocation problem is considered that aims to minimize the total energy cost of the cloud computing system while meeting the specified client-level SLAs in a probabilistic sense.
The cloud computing system pays penalty for the percentage of a client’s requests that do not meet a specified upper bound on their service time. An efficient heuristic algorithm based on convex optimization and dynamic programming is presented to solve the aforesaid resource allocation problem. Simulation results demonstrate the effectiveness of the proposed algorithm compared to previous work
 

*------------*------------*------------*------------*------------*------------*

DOMAIN – CLOUD COMPUTING
SCALABLE JOIN QUERIES IN CLOUD DATA STORES
Cloud data stores provide scalability and high availability properties for Web applications, but do not support complex queries such as joins. Web application developers must therefore design their programs according to the peculiarities of NoSQL data stores rather than established software engineering practice.
This results in complex and error-prone code, especially with respect to subtle issues such as data consistency under concurrent read/write queries. We present join query support in CloudTPS, a middleware layer which stands between a Web application and its data store.
The system enforces strong data consistency and scales linearly under a demanding workload composed of join queries and read-write transactions. In large-scale deployments, CloudTPS outperforms replicated PostgreSQL by up to three times


*------------*------------*------------*------------*------------*------------*

DOMAIN – CLOUD COMPUTING
REVENUE MANAGEMENT FOR CLOUD PROVIDERS - A POLICY-BASED APPROACH UNDER STOCHASTIC DEMAND
Competition on global markets forces enterprises to make use of new applications, reduce process times, and cut the costs of their IT infrastructure. To achieve this, commercial users harness the benefits of Cloud computing, as they can outsource data storage and computation facilities while saving on the overall cost of IT ownership.
Cloud services can be accessed on demand at any time in a pay-as-you-go manner. However, it is this flexibility of customers that creates great challenges for Cloud service providers, who need to maximize their revenue in the presence of limited fixed resources and uncertainty regarding upcoming service requests, while contemporaneously considering their SLAs.
To address this challenge we introduce models that can predict the revenue and utilization achieved with admission-control-policy-based revenue management under stochastic demand. This allows providers to significantly increase revenue by choosing the optimum policy.


*------------*------------*------------*------------*------------*------------*
 
DOMAIN – CLOUD COMPUTING
PERFORMANCE EVALUATION OF MULTIMEDIA SERVICES OVER WIRELESS LAN USING ROUTING PROTOCOL OPTIMIZED LINK STATE ROUTING (OLSR)
The use of multimedia grows from year to year, supported by computer networks such as wireless LANs (WLANs) acting as the delivery medium. One common method is streaming multimedia over the Internet from a server to clients, in response to client requests for video and audio held in the network. A key factor affecting streaming is bandwidth.
When bandwidth is insufficient, the stream is frequently disrupted, resulting in loss and delay during delivery. To reduce loss and delay, a routing protocol is needed that can support the quality of multimedia service packets carried over WLAN networks.
In this paper, the performance of multimedia services over a WLAN is evaluated with the help of the routing protocol Optimized Link State Routing (OLSR).


 *------------*------------*------------*------------*------------*------------*
 
DOMAIN – CLOUD COMPUTING
PERFORMANCE ANALYSIS OF CLOUD COMPUTING CENTERS USING M/G/m/m+r QUEUING SYSTEMS
Successful development of cloud computing paradigm necessitates accurate performance evaluation of cloud data centers.
As exact modeling of cloud centers is not feasible due to the nature of cloud centers and diversity of user requests, we describe a novel approximate analytical model for performance evaluation of cloud server farms and solve it to obtain accurate estimation of the complete probability distribution of the request response time and other important performance indicators.
The model allows cloud operators to determine the relationship between the number of servers and input buffer size, on one side, and the performance indicators such as mean number of tasks in the system, blocking probability, and probability that a task will obtain immediate service, on the other
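
As a simpler, closed-form illustration of this modeling style, the Java sketch below computes the steady-state blocking probability of the Markovian M/M/m/m+r analogue via the birth-death recursion; the paper itself addresses the harder M/G/m/m+r case.

// Minimal illustration: steady-state probabilities of an M/M/m/m+r queue
// (m servers, r buffer slots) via the birth-death recursion, yielding the
// probability that an arriving request is blocked.
public class MmmQueue {
    static double blockingProbability(double lambda, double mu, int m, int r) {
        int capacity = m + r;
        double[] p = new double[capacity + 1];
        p[0] = 1.0;                                     // unnormalized
        for (int n = 1; n <= capacity; n++) {
            double serviceRate = Math.min(n, m) * mu;   // busy servers, capped at m
            p[n] = p[n - 1] * lambda / serviceRate;
        }
        double sum = 0;
        for (double v : p) sum += v;                    // normalize
        return p[capacity] / sum;                       // arrival finds system full
    }

    public static void main(String[] args) {
        // 50 servers, buffer of 10, offered load just under capacity.
        System.out.printf("P(block) = %.4f%n", blockingProbability(45.0, 1.0, 50, 10));
    }
}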

 
*------------*------------*------------*------------*------------*------------*
 
DOMAIN – CLOUD COMPUTING
MINING CONCEPT DRIFTING NETWORK TRAFFIC IN CLOUD COMPUTING ENVIRONMENTS
Anomaly-based network Intrusion Detection Systems (IDS) model patterns of normal activity and detect novel network attacks. However, these systems depend on the availability of the system's normal traffic pattern profile. The statistical fingerprint of the normal traffic pattern can change and shift over time due to changes in operational or user activity at the networked site, or even system updates.
Changes in normal traffic patterns over time lead to concept drift. Some changes can be temporal or cyclical, and can be short-lived or last for longer periods of time. Depending on a number of factors, the speed at which the change in traffic patterns occurs can also vary, ranging from near-instantaneous to occurring over the span of many months.
These changes in traffic patterns are a cause of concern for IDSs, as they can lead to a significant increase in false positive rates, thereby reducing overall system performance. To improve the reliability of the IDS, there is a need for an automated mechanism to detect valid traffic changes and avoid inappropriate ad hoc responses.
ROC curves have historically been used to evaluate the accuracy of IDSs. ROC curves generated using fixed, time-invariant classification thresholds do not characterize the best accuracy that an IDS can achieve in the presence of concept-drifting network traffic.
In this paper, we present an integrated supervised machine learning and control-theoretic model (especially for clouds) for detecting concept drift in network traffic patterns.
The model comprises an online support vector machine-based classifier (incremental anomaly-based detection), a Kullback-Leibler divergence-based relative entropy measurement scheme (quantifying concept drift), and a feedback control engine (adapting ROC thresholding). In our proposed system, any intrusion activity will cause significant variations, thereby causing a large error, while a minor aberration in the variations (concept drift) will not be immediately reported as an alert
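
A minimal Java sketch of the relative-entropy signal: KL divergence between a baseline and a current traffic-feature histogram, with add-one smoothing assumed to avoid zero probabilities. The SVM classifier and feedback-controlled thresholding are not shown.

// Minimal sketch: Kullback-Leibler divergence between a baseline and a
// current traffic-feature histogram, the drift signal in a relative-
// entropy scheme. Add-one smoothing avoids zero probabilities.
public class KlDrift {
    static double klDivergence(double[] baseline, double[] current) {
        double[] p = smooth(baseline), q = smooth(current);
        double kl = 0;
        for (int i = 0; i < p.length; i++)
            kl += p[i] * Math.log(p[i] / q[i]);
        return kl;
    }

    private static double[] smooth(double[] counts) {
        double total = 0;
        double[] out = new double[counts.length];
        for (double c : counts) total += c + 1;         // add-one smoothing
        for (int i = 0; i < counts.length; i++) out[i] = (counts[i] + 1) / total;
        return out;
    }

    public static void main(String[] args) {
        double[] base  = {400, 300, 200, 100};  // e.g., packets per port class
        double[] now   = {390, 310, 190, 110};  // minor aberration: small KL
        double[] shift = {100, 100, 100, 700};  // drift or attack: large KL
        System.out.printf("small drift KL = %.5f%n", klDivergence(base, now));
        System.out.printf("large drift KL = %.5f%n", klDivergence(base, shift));
    }
}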



*------------*------------*------------*------------*------------*------------*
 
DOMAIN – CLOUD COMPUTING
FRAMEWORK ON LARGE PUBLIC SECTOR IMPLEMENTATION OF CLOUD COMPUTING
Cloud computing enables IT systems to be scalable and elastic. One significant advantage is that users no longer need to determine their exact computing resource requirements upfront. Instead, they request computing resources as required, on demand.
This paper introduces a framework, specific to large public sector entities, for migrating to cloud computing. The paper can then also serve as a reference for organizations seeking to overcome their limitations and to convince their stakeholders to implement the various types of Cloud Computing service models.
 

*------------*------------*------------*------------*------------*------------*

DOMAIN – CLOUD COMPUTING
ENHANCED ENERGY-EFFICIENT SCHEDULING FOR PARALLEL APPLICATIONS IN CLOUD
Energy consumption has become a major concern to the widespread deployment of cloud data centers. The growing importance for parallel applications in the cloud introduces significant challenges in reducing the power consumption drawn by the hosted servers.
In this paper, we propose an enhanced energy-efficient scheduling (EES) algorithm to reduce energy consumption while meeting the performance-based service level agreement (SLA). Since slacking non-critical jobs can achieve significant power savings, we exploit the slack room and allocate it globally in our schedule.
Using randomly generated and real-life application workflows, our results demonstrate that EES is able to reduce considerable energy consumption while still meeting the SLA


*------------*------------*------------*------------*------------*------------*
 
DOMAIN – CLOUD COMPUTING
CLOUD CHAMBER: A SELF-ORGANIZING FACILITY TO CREATE, EXERCISE, AND EXAMINE SOFTWARE AS A SERVICE TENANTS
Cloud Chamber is a testbed for understanding how web services behave as tenants in a Software as a Service (SaaS) environment. This work describes the Cloud Chamber testbed for investigating autonomic resource management of web services in a cloud environment.
Cloud Chamber is a virtualized environment which provides web servers as services, facilities to apply loads to the tenant services, algorithms for autonomic organization and reconfiguration of service assignments as demand changes, and sensors to capture resource consumption and performance metrics.
The testbed inserts sensors into web servers to collect the resource utilization of CPU cycles, memory consumption, and bandwidth consumption of the individual web services, the web server, and the operating system. This high-resolution performance data generates profiles of the resource usage of each web service and the resource availability of each server.
The testbed, as described in this work, utilizes these profiles to efficiently place services on servers, thus balancing resource consumption, service performance, and service availability. Once services have been placed, the testbed monitors changes such as traffic levels, server churn, and the introduction of new services.
The information gathered is used to calculate configurations of service placement which better meet the changing requirements of the environment


*------------*------------*------------*------------*------------*------------*
 
DOMAIN – CLOUD COMPUTING
A TIME-SERIES PATTERN BASED NOISE GENERATION STRATEGY FOR PRIVACY PROTECTION IN CLOUD COMPUTING
Cloud computing promises an open environment where customers can deploy IT services in a pay-as-you-go fashion while saving huge capital investment in their own IT infrastructure. Due to the openness, various malicious service providers may exist. Such service providers may record service information in a service process from a customer and then collectively deduce the customer’s private information.
Therefore, from the perspective of cloud computing security, there is a need to take special actions to protect privacy at client sides. Noise obfuscation is an effective approach in this regard by utilising noise data.
For instance, it generates and injects noise service requests into real customer service requests so that service providers would not be able to distinguish which requests are real ones if their occurrence probabilities are about the same. However, existing typical noise generation strategies mainly focus on the entire service usage period to achieve about the same final occurrence probabilities of service requests.
In fact, such probabilities can fluctuate within a time interval, such as three months, and may differ significantly from those in other time intervals. In this case, service providers may still be able to deduce the customers' privacy from a specific time interval, although unlikely from the overall period.
That is to say, the existing typical noise generation strategies could fail to protect customers’ privacy for local time intervals. To address this problem, we develop a novel time-series pattern based noise generation strategy.
Firstly, we analyse previous probability fluctuations and propose a group of time-series patterns for predicting future fluctuated probabilities. Then, based on these patterns, we present our strategy by forecasting future occurrence probabilities of real service requests and generating noise requests to reach about the same final probabilities in the next time interval.
The simulation evaluation demonstrates that our strategy can cope with these fluctuations to significantly improve the effectiveness of customers’ privacy protection
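
For intuition, the probability-flattening idea can be sketched as follows in Java: within each interval, inject noise requests of whichever request type currently falls short, so occurrence probabilities even out locally. The paper's time-series forecasting of future fluctuations is not modeled in this sketch.

import java.util.HashMap;
import java.util.Map;

// Minimal sketch of probability flattening: inject noise requests of
// whichever service type currently falls short of the uniform target,
// so occurrence probabilities even out within the interval, not just
// over the whole usage period.
public class NoiseInjector {
    private final Map<String, Integer> counts = new HashMap<>();

    void recordReal(String type) { counts.merge(type, 1, Integer::sum); }

    // The next noise request to inject: the least-seen request type.
    String nextNoise() {
        return counts.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .orElseThrow()
                .getKey();
    }

    void recordNoise(String type) { counts.merge(type, 1, Integer::sum); }

    public static void main(String[] args) {
        NoiseInjector inj = new NoiseInjector();
        inj.recordReal("A"); inj.recordReal("A"); inj.recordReal("B");
        inj.counts.putIfAbsent("C", 0);          // known request vocabulary
        for (int i = 0; i < 3; i++) {            // inject three noise requests
            String noise = inj.nextNoise();
            inj.recordNoise(noise);
            System.out.println("inject noise: " + noise);
        }
        System.out.println(inj.counts);          // flattened: {A=2, B=2, C=2}
    }
}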


*------------*------------*------------*------------*------------*------------*
 
DOMAIN – CLOUD COMPUTING
A PRICE- AND-TIME-SLOT-NEGOTIATION MECHANISM FOR CLOUD SERVICE RESERVATIONS
When making reservations for Cloud services, consumers and providers need to establish service-level agreements through negotiation. Whereas it is essential for both a consumer and a provider to reach an agreement on the price of a service and when to use the service, to date, there is little or no negotiation support for both price and time-slot negotiations (PTNs) for Cloud service reservations.
This paper presents a multi-issue negotiation mechanism to facilitate the following: 1) PTNs between Cloud agents and 2) tradeoffs between price and time-slot utilities. Unlike many existing negotiation mechanisms in which a negotiation agent can only make one proposal at a time, agents in this work are designed to concurrently make multiple proposals in a negotiation round that generate the same aggregated utility, differing only in terms of individual price and time-slot utilities.
Another novelty of this work is formulating a novel time-slot utility function that characterizes preferences for different time slots. These ideas are implemented in an agent-based Cloud testbed.
Using the testbed, experiments were carried out to compare this work with related approaches. Empirical results show that PTN agents reach faster agreements and achieve higher utilities than other related approaches. A case study was carried out to demonstrate the application of the PTN mechanism for pricing Cloud resources
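
The concurrent-proposal idea can be illustrated with a small sketch: every proposal below lies on the same iso-utility line wP·uP + wT·uT = target, so all proposals yield the same aggregated utility while trading price utility against time-slot utility. The weights, step size, and Proposal type are assumptions for illustration.

    import java.util.*;

    public class ProposalGenerator {
        record Proposal(double priceUtility, double timeSlotUtility) {}

        /** All (priceUtility, timeSlotUtility) pairs on the iso-utility line
         *  wP*uP + wT*uT = target, sampled at the given step. */
        static List<Proposal> isoUtilityProposals(double target, double wP,
                                                  double wT, double step) {
            List<Proposal> out = new ArrayList<>();
            for (double uP = 0.0; uP <= 1.0; uP += step) {
                double uT = (target - wP * uP) / wT;
                if (uT >= 0.0 && uT <= 1.0) out.add(new Proposal(uP, uT));
            }
            return out;
        }
    }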


*------------*------------*------------*------------*------------*------------*

DOMAIN – CLOUD COMPUTING
A CLOUD INFRASTRUCTURE FOR OPTIMIZATION OF A MASSIVE PARALLEL SEQUENCING WORKFLOW
Massive Parallel Sequencing is a term used to describe several revolutionary approaches to DNA sequencing, the so-called Next Generation Sequencing technologies.
These technologies generate millions of short sequence fragments in a single run and can be used to measure levels of gene expression and to identify novel splice variants of genes, allowing more accurate analysis. The proposed solution is novel in two respects: first, an optimization of the read-mapping algorithm has been designed in order to parallelize processing; second, an architecture has been implemented that consists of a Grid platform composed of physical nodes, a Virtual platform composed of virtual nodes set up on demand, and a scheduler that integrates the two platforms
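
As a rough illustration of how such a scheduler might integrate the two platforms, the hedged sketch below prefers free physical Grid nodes and falls back to provisioning a virtual node on demand; the Node and VirtualProvisioner interfaces are hypothetical placeholders, not the paper's actual architecture.

    import java.util.*;

    public class HybridScheduler {
        interface Node { boolean isFree(); void run(Runnable job); }
        interface VirtualProvisioner { Node startVirtualNode(); }

        private final List<Node> gridNodes;
        private final VirtualProvisioner provisioner;

        HybridScheduler(List<Node> gridNodes, VirtualProvisioner provisioner) {
            this.gridNodes = gridNodes;
            this.provisioner = provisioner;
        }

        void submit(Runnable readMappingJob) {
            for (Node n : gridNodes) {
                if (n.isFree()) { n.run(readMappingJob); return; } // use the Grid first
            }
            provisioner.startVirtualNode().run(readMappingJob);    // scale out on demand
        }
    }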


*------------*------------*------------*------------*------------*------------*
 
DOMAIN – CLOUD COMPUTING
TIME AND COST SENSITIVE DATA-INTENSIVE COMPUTING ON HYBRID CLOUDS
Purpose-built clusters permeate many of today's organizations, providing both large-scale data storage and computing. Within local clusters, competition for resources complicates applications with deadlines. However, given the emergence of the cloud's pay-as-you-go model, users are increasingly storing portions of their data remotely and allocating compute nodes on-demand to meet deadlines.
This scenario gives rise to a hybrid cloud, where data stored across local and cloud resources may be processed over both environments. While a hybrid execution environment may be used to meet time constraints, users must now attend to the costs associated with data storage, data transfer, and node allocation time on the cloud.
In this paper, we describe a modeling-driven resource allocation framework to support both time and cost sensitive execution for data-intensive applications executed in a hybrid cloud setting. We evaluate our framework using two data-intensive applications and a number of time and cost constraints.
Our experimental results show that our system is capable of meeting execution deadlines within a 3.6% margin of error. Similarly, cost constraints are met within a 1.2% margin of error, while minimizing the application’s execution time
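
The flavor of the time/cost reasoning can be captured in a back-of-envelope sketch: compute the minimum number of cloud nodes needed to meet the deadline given local throughput, then price the allocation. The throughput and pricing parameters are illustrative assumptions, not the paper's calibrated model.

    public class HybridAllocator {
        /** Minimum cloud nodes so that local + cloud throughput finishes
         *  `work` units within `deadline` hours. */
        static int cloudNodesNeeded(double work, double deadline,
                                    double localRate, double cloudNodeRate) {
            double requiredRate = work / deadline;
            double deficit = requiredRate - localRate;
            return deficit <= 0 ? 0 : (int) Math.ceil(deficit / cloudNodeRate);
        }

        /** Dollar cost of an allocation: node-hours plus data transfer. */
        static double cost(int nodes, double hours, double pricePerNodeHour,
                           double gbTransferred, double pricePerGb) {
            return nodes * hours * pricePerNodeHour + gbTransferred * pricePerGb;
        }
    }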


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - DEPENDABLE AND SECURE COMPUTING
LARGE MARGIN GAUSSIAN MIXTURE MODELS WITH DIFFERENTIAL PRIVACY
As increasing amounts of sensitive personal information are aggregated into data repositories, it has become important to develop mechanisms for processing the data without revealing information about individual data instances.
The differential privacy model provides a framework for the development and theoretical analysis of such mechanisms. In this paper, we propose an algorithm for learning a discriminatively trained multiclass Gaussian mixture model-based classifier that preserves differential privacy using a large margin loss function with a perturbed regularization term.
We present a theoretical upper bound on the excess risk of the classifier introduced by the perturbation
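
A common way to realize a perturbed regularization term is to fold a random Laplace-distributed vector into the training objective; the sketch below shows that generic ingredient only, with an epsilon-based noise scale that is a placeholder rather than the paper's exact calibration.

    import java.util.Random;

    public class PerturbedRegularization {
        /** Sample one Laplace(0, scale) variate by inverse-CDF. */
        static double laplace(Random rng, double scale) {
            double u = rng.nextDouble() - 0.5;
            return -scale * Math.signum(u) * Math.log(1.0 - 2.0 * Math.abs(u));
        }

        /** Noise vector b added to the regularized large-margin objective,
         *  e.g., minimize  loss(w) + lambda*||w||^2 + b.w  over w. */
        static double[] noiseVector(int dim, double epsilon, Random rng) {
            double scale = 2.0 / epsilon;   // assumed sensitivity bound, for illustration
            double[] b = new double[dim];
            for (int i = 0; i < dim; i++) b[i] = laplace(rng, scale);
            return b;
        }
    }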


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - DEPENDABLE AND SECURE COMPUTING
ITERATIVE TRUST AND REPUTATION MANAGEMENT USING BELIEF PROPAGATION
In this paper, we introduce the first application of the belief propagation algorithm in the design and evaluation of trust and reputation management systems. We approach the reputation management problem as an inference problem and describe it as computing marginal likelihood distributions from complicated global functions of many variables.
However, we observe that computing the marginal probability functions is computationally prohibitive for large-scale reputation systems. Therefore, we propose to utilize the belief propagation algorithm to efficiently (in linear complexity) compute these marginal probability distributions, resulting in a fully iterative, probabilistic, belief propagation-based approach (referred to as BP-ITRM). BP-ITRM models the reputation system on a factor graph. By using a factor graph, we obtain a qualitative representation of how the consumers (buyers) and service providers (sellers) are related on a graphical structure.
Further, by using such a factor graph, the global functions factor into products of simpler local functions, each of which depends on a subset of the variables. Then, we compute the marginal probability distribution functions of the variables representing the reputation values (of the service providers) by message passing between nodes in the graph. We show that BP-ITRM is reliable in filtering out malicious/unreliable reports.
We provide a detailed evaluation of BP-ITRM via analysis and computer simulations. We prove that BP-ITRM iteratively reduces the error in the reputation values of service providers due to the malicious raters with a high probability. Further, we observe that this probability drops suddenly if a particular fraction of malicious raters is exceeded, which introduces a threshold property to the scheme.
Furthermore, comparison of BP-ITRM with some well-known and commonly used reputation management techniques (e.g., Averaging Scheme, Bayesian Approach, and Cluster Filtering) indicates the superiority of the proposed scheme in terms of robustness against attacks (e.g., ballot stuffing, bad mouthing). Finally, BP-ITRM's complexity is linear in the number of service providers and consumers, making it far more efficient than the other schemes
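
The iterative spirit of the scheme, stripped of the factor-graph machinery, can be sketched as alternating updates between provider reputations and rater trust; this simplified loop is a stand-in for BP-ITRM's actual message passing, not the paper's algorithm.

    /** ratings[i][j] is rater i's rating of provider j in [0,1], or -1 if absent. */
    public class IterativeReputation {
        static double[] run(double[][] ratings, int iterations) {
            int raters = ratings.length, providers = ratings[0].length;
            double[] trust = new double[raters];
            double[] rep = new double[providers];
            java.util.Arrays.fill(trust, 1.0);
            for (int it = 0; it < iterations; it++) {
                // Provider update: trust-weighted average of its ratings.
                for (int j = 0; j < providers; j++) {
                    double num = 0, den = 0;
                    for (int i = 0; i < raters; i++)
                        if (ratings[i][j] >= 0) { num += trust[i] * ratings[i][j]; den += trust[i]; }
                    if (den > 0) rep[j] = num / den;
                }
                // Rater update: trust reflects agreement with current reputations.
                for (int i = 0; i < raters; i++) {
                    double err = 0; int n = 0;
                    for (int j = 0; j < providers; j++)
                        if (ratings[i][j] >= 0) { err += Math.abs(ratings[i][j] - rep[j]); n++; }
                    trust[i] = n == 0 ? 1.0 : 1.0 - err / n;
                }
            }
            return rep;
        }
    }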


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - DEPENDABLE AND SECURE COMPUTING
INCENTIVE COMPATIBLE PRIVACY-PRESERVING DISTRIBUTED CLASSIFICATION
In this paper, we propose game-theoretic mechanisms to encourage truthful data sharing for distributed data mining. One proposed mechanism uses the classic Vickrey-Clarke-Groves (VCG) mechanism, and the other relies on the Shapley value.
Neither relies on the ability to verify the data of the parties participating in the distributed data mining protocol. Instead, we incentivize truth telling based solely on the data mining result. This is especially useful for situations where privacy concerns prevent verification of the data.
Under reasonable assumptions, we prove that these mechanisms are incentive compatible for distributed data mining. In addition, through extensive experimentation, we show that they are applicable in practice
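
For the VCG-based mechanism, the structural idea is the Clarke pivot: each party pays (or is paid) the externality its participation imposes on the others. The sketch below assumes a welfare function stub, for example, the accuracy of the jointly mined classifier for a given set of beneficiaries and participants; it illustrates the payment structure only, not the paper's full mechanism.

    import java.util.*;

    public class VcgPayments {
        interface Welfare {
            /** Welfare accruing to the members of `beneficiaries` when exactly
             *  the parties in `participants` contribute truthful data. */
            double of(Set<Integer> beneficiaries, Set<Integer> participants);
        }

        /** Clarke-pivot payment for party i: how much worse off everyone else
         *  is when i joins (a negative value acts as a subsidy to i). */
        static double payment(int i, Set<Integer> all, Welfare w) {
            Set<Integer> others = new HashSet<>(all);
            others.remove(i);
            double othersWithoutI = w.of(others, others);
            double othersWithI = w.of(others, all);
            return othersWithoutI - othersWithI;
        }
    }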


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - DEPENDABLE AND SECURE COMPUTING
ENSURING DISTRIBUTED ACCOUNTABILITY FOR DATA SHARING IN THE CLOUD
Cloud computing enables highly scalable services to be easily consumed over the Internet on an as-needed basis. A major feature of the cloud services is that users’ data are usually processed remotely in unknown machines that users do not own or operate.
While enjoying the convenience brought by this new emerging technology, users’ fears of losing control of their own data (particularly, financial and health data) can become a significant barrier to the wide adoption of cloud services.
To address this problem, in this paper, we propose a novel highly decentralized information accountability framework to keep track of the actual usage of the users’ data in the cloud. In particular, we propose an object-centered approach that enables enclosing our logging mechanism together with users’ data and policies.
We leverage the programmable capabilities of JAR (Java ARchive) files both to create a dynamic and traveling object and to ensure that any access to users' data will trigger authentication and automated logging local to the JARs. To strengthen users' control, we also provide distributed auditing mechanisms. We provide extensive experimental studies that demonstrate the efficiency and effectiveness of the proposed approaches.
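
Since the framework is built around JARs, the object-centric idea lends itself naturally to a Java sketch: data travels together with its policy, and every access attempt is authenticated and logged before content is released. The class below is a toy illustration; the real framework's policy language, authentication, and logging are far richer.

    public class AccountableData {
        private final byte[] content;
        private final String policy; // enclosed policy, e.g., list of authorized ids
        private final java.util.List<String> log = new java.util.ArrayList<>();

        public AccountableData(byte[] content, String policy) {
            this.content = content.clone();
            this.policy = policy;
        }

        public byte[] access(String requesterId, String credential) {
            if (!authenticate(requesterId, credential) || !policy.contains(requesterId)) {
                log.add(java.time.Instant.now() + " DENIED " + requesterId);
                throw new SecurityException("access denied by enclosed policy");
            }
            log.add(java.time.Instant.now() + " GRANTED " + requesterId);
            return content.clone();
        }

        private boolean authenticate(String id, String credential) {
            return credential != null && !credential.isEmpty(); // stub check only
        }
    }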


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - DEPENDABLE AND SECURE COMPUTING
ENHANCED PRIVACY ID: A DIRECT ANONYMOUS ATTESTATION SCHEME WITH ENHANCED REVOCATION CAPABILITIES
Direct Anonymous Attestation (DAA) is a scheme that enables the remote authentication of a Trusted Platform Module (TPM) while preserving the user’s privacy. A TPM can prove to a remote party that it is a valid TPM without revealing its identity and without linkability.
In the DAA scheme, a TPM can be revoked only if the DAA private key in the hardware has been extracted and published widely so that verifiers obtain the corrupted private key. If the unlinkability requirement is relaxed, a TPM suspected of being compromised can be revoked even if the private key is not known.
However, with the full unlinkability requirement intact, if a TPM has been compromised but its private key has not been distributed to verifiers, the TPM cannot be revoked. Furthermore, a TPM cannot be revoked by the issuer if it is found to be compromised after DAA issuing has occurred. In this paper, we present a new DAA scheme called the Enhanced Privacy ID (EPID) scheme that addresses the above limitations.
While still providing unlinkability, our scheme provides a method to revoke a TPM even if the TPM private key is unknown. This expanded revocation property makes the scheme useful for other applications, such as driver's licenses. Our EPID scheme is efficient and provably secure in the same security model as DAA, i.e., in the random oracle model under the strong RSA assumption and the decisional Diffie-Hellman assumption
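
Without reproducing the cryptography, the enhanced revocation capability can be conveyed by the shape of the verification API: EPID-style verification consults both a private-key revocation list and a signature-based revocation list, which is what allows revoking a platform whose key was never extracted. The interface below is a hypothetical sketch; the group signatures and zero-knowledge proofs are elided entirely.

    import java.util.List;

    public interface EpidLikeScheme {
        byte[] sign(byte[] privateKey, byte[] message);

        /** Verification takes two revocation lists: one of extracted private
         *  keys, and one of signatures produced by suspected platforms, so a
         *  platform can be revoked even when its private key is unknown. */
        boolean verify(byte[] signature, byte[] message,
                       List<byte[]> privateKeyRevocationList,
                       List<byte[]> signatureRevocationList);
    }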


*------------*------------*------------*------------*------------*------------*

DOMAIN - DEPENDABLE AND SECURE COMPUTING
ENFORCING MANDATORY ACCESS CONTROL IN COMMODITY OS TO DISABLE MALWARE
Enforcing practical Mandatory Access Control (MAC) in a commercial operating system to tackle the malware problem is a grand challenge but also a promising approach. The firmest barriers to applying MAC to defeat malware are the incompatibility and poor usability of existing MAC systems.
To address these issues, we manually analyze 2,600 malware samples one by one, along with two types of MAC-enforced operating systems, and then design a novel MAC enforcement approach, named Tracer, which incorporates intrusion detection and tracing in a commercial operating system. The approach conceptually consists of three actions: detecting, tracing, and restricting suspected intruders.
One novelty is that it leverages light-weight intrusion detection and tracing techniques to automate security label configuration that is widely acknowledged as a tough issue when applying a MAC system in practice. The other is that, rather than restricting information flow as a traditional MAC does, it traces intruders and restricts only their critical malware behaviors, where intruders represent processes and executables that are potential agents of a remote attacker.
Our prototyping and experiments on Windows show that Tracer can effectively defeat all tested malware samples by blocking their malicious behaviors, while not causing significant compatibility problems
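
The detect-trace-restrict loop can be sketched as a small state machine: processes touched by untrusted input become suspected, the label propagates to spawned children, and only critical operations of suspected processes are blocked. Event and operation names here are illustrative assumptions, not Tracer's actual interfaces.

    import java.util.*;

    public class TracerMonitor {
        private final Set<Integer> suspected = new HashSet<>();

        void onUntrustedInput(int pid) { suspected.add(pid); }        // detect

        void onSpawn(int parentPid, int childPid) {                   // trace
            if (suspected.contains(parentPid)) suspected.add(childPid);
        }

        boolean allow(int pid, String operation) {                    // restrict
            boolean critical = operation.equals("WRITE_SYSTEM_DIR")
                            || operation.equals("MODIFY_AUTOSTART");
            return !(suspected.contains(pid) && critical);
        }
    }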


*------------*------------*------------*------------*------------*------------*

DOMAIN - DEPENDABLE AND SECURE COMPUTING
DOUBLEGUARD: DETECTING INTRUSIONS IN MULTITIER WEB APPLICATIONS
Internet services and applications have become an inextricable part of daily life, enabling communication and the management of personal information from anywhere. To accommodate this increase in application and data complexity, web services have moved to a multitiered design wherein the webserver runs the application front-end logic and data are outsourced to a database or file server.
In this paper, we present DoubleGuard, an intrusion detection system (IDS) that models the network behavior of user sessions across both the front-end webserver and the back-end database. By monitoring both web and subsequent database requests, we are able to ferret out attacks that an independent IDS would not be able to identify.
Furthermore, we quantify the limitations of any multitier IDS in terms of training sessions and functionality coverage. We implemented DoubleGuard using an Apache webserver with MySQL and lightweight virtualization. We then collected and processed real-world traffic over a 15-day period of system deployment in both dynamic and static web applications.
Finally, using DoubleGuard, we were able to expose a wide range of attacks with 100 percent accuracy while maintaining 0 percent false positives for static web services and 0.6 percent false positives for dynamic web services
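
The session-mapping idea can be sketched simply: during training, each front-end request type is associated with the set of database query templates it legitimately triggers, and at detection time any session issuing an unmapped query is flagged. The normalization below is a toy stand-in for DoubleGuard's container-based session isolation.

    import java.util.*;

    public class SessionMappingIds {
        private final Map<String, Set<String>> model = new HashMap<>();

        void train(String webRequest, List<String> sqlQueries) {
            model.computeIfAbsent(normalize(webRequest), k -> new HashSet<>())
                 .addAll(sqlQueries);
        }

        boolean isIntrusion(String webRequest, List<String> sqlQueries) {
            Set<String> allowed = model.getOrDefault(normalize(webRequest), Set.of());
            return !allowed.containsAll(sqlQueries); // unmapped query => alert
        }

        private String normalize(String request) {
            return request.replaceAll("\\?.*", ""); // strip query parameters (toy)
        }
    }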


*------------*------------*------------*------------*------------*------------*

DOMAIN - DEPENDABLE AND SECURE COMPUTING
DETECTING ANOMALOUS INSIDERS IN COLLABORATIVE INFORMATION SYSTEMS
Collaborative information systems (CISs) are deployed within a diverse array of environments that manage sensitive information. Current security mechanisms detect insider threats, but they are ill-suited to monitor systems in which users function in dynamic teams.
In this paper, we introduce the community anomaly detection system (CADS), an unsupervised learning framework to detect insider threats based on the access logs of collaborative environments.
The framework is based on the observation that typical CIS users tend to form community structures based on the subjects accessed (e.g., patients’ records viewed by healthcare providers). CADS consists of two components: 1) relational pattern extraction, which derives community structures and 2) anomaly prediction, which leverages a statistical model to determine when users have sufficiently deviated from communities.
We further extend CADS into MetaCADS to account for the semantics of subjects (e.g., patients’ diagnoses). To empirically evaluate the framework, we perform an assessment with three months of access logs from a real electronic health record (EHR) system in a large medical center.
The results illustrate that our models exhibit significant performance gains over state-of-the-art competitors. When the number of illicit users is low, MetaCADS is the best model, but as the number grows, commonly accessed semantics allow malicious users to hide in a crowd, such that CADS is the more prudent choice
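
A simplified stand-in for the community-deviation idea: compare users by the sets of subjects they access and score a user by average dissimilarity to the k most similar peers. CADS's actual relational pattern extraction and statistical model are more sophisticated than this nearest-neighbor sketch.

    import java.util.*;

    public class CommunityAnomaly {
        static double jaccard(Set<String> a, Set<String> b) {
            Set<String> inter = new HashSet<>(a); inter.retainAll(b);
            Set<String> union = new HashSet<>(a); union.addAll(b);
            return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
        }

        /** Higher score = further from any access community. */
        static double anomalyScore(Set<String> user, List<Set<String>> peers, int k) {
            List<Double> sims = new ArrayList<>();
            for (Set<String> p : peers) sims.add(jaccard(user, p));
            sims.sort(Collections.reverseOrder());
            int n = Math.min(k, sims.size());
            double s = 0;
            for (int i = 0; i < n; i++) s += sims.get(i);
            return n == 0 ? 1.0 : 1.0 - s / n;
        }
    }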


*------------*------------*------------*------------*------------*------------*

DOMAIN - DEPENDABLE AND SECURE COMPUTING
AUTOMATED SECURITY TEST GENERATION WITH FORMAL THREAT MODELS
Security attacks typically result from unintended behaviors or invalid inputs. Security testing is labor intensive because a real-world program usually has too many invalid inputs. It is highly desirable to automate or partially automate the security testing process.
This paper presents an approach to the automated generation of security tests by using formal threat models represented as Predicate/Transition nets. It generates all attack paths, i.e., security tests, from a threat model and converts them into executable test code according to the given Model-Implementation Mapping (MIM) specification.
We have applied this approach to two real-world systems, Magento (a web-based shopping system being used by many online stores) and FileZilla Server (a popular FTP server implementation in C++). Threat models are built systematically by examining all potential STRIDE (spoofing identity, tampering with data, repudiation, information disclosure, denial of service, and elevation of privilege) threats to system functions. The security tests generated from these models have found multiple security risks in each system.
The test code for most of the security tests can be generated and executed automatically. To further evaluate the vulnerability detection capability of the testing approach, the security tests have been applied to a number of security mutants where vulnerabilities are injected deliberately.
The mutants are created according to the common vulnerabilities in C++ and web applications. Our experiments show that the security tests have killed the majority of the mutants
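
Reduced to its graph-search core, attack-path generation can be sketched as enumerating all acyclic paths from an initial state to a violation state; the full predicate/token semantics of Predicate/Transition nets, and the MIM-driven test-code generation, are omitted here.

    import java.util.*;

    public class AttackPathGenerator {
        private final Map<String, List<String>> transitions = new HashMap<>();

        void addTransition(String from, String to) {
            transitions.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
        }

        /** Every path from `initial` to `violation` is one security test. */
        List<List<String>> attackPaths(String initial, String violation) {
            List<List<String>> paths = new ArrayList<>();
            dfs(initial, violation, new ArrayList<>(List.of(initial)), paths);
            return paths;
        }

        private void dfs(String state, String goal, List<String> path,
                         List<List<String>> out) {
            if (state.equals(goal)) { out.add(new ArrayList<>(path)); return; }
            for (String next : transitions.getOrDefault(state, List.of())) {
                if (path.contains(next)) continue;   // avoid cycles
                path.add(next);
                dfs(next, goal, path, out);
                path.remove(path.size() - 1);
            }
        }
    }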


*------------*------------*------------*------------*------------*------------*

DOMAIN - DEPENDABLE AND SECURE COMPUTING
A LEARNING-BASED APPROACH TO REACTIVE SECURITY
Despite the conventional wisdom that proactive security is superior to reactive security, we show that reactive security can be competitive with proactive security as long as the reactive defender learns from past attacks instead of myopically overreacting to the last attack.
Our game-theoretic model follows common practice in the security literature by making worst case assumptions about the attacker: we grant the attacker complete knowledge of the defender’s strategy and do not require the attacker to act rationally.
In this model, we bound the competitive ratio between a reactive defense algorithm (which is inspired by online learning theory) and the best fixed proactive defense. Additionally, we show that, unlike proactive defenses, this reactive strategy is robust to a lack of information about the attacker’s incentives and knowledge
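
Since the defense algorithm is described as inspired by online learning theory, a multiplicative-weights sketch conveys the flavor: the defense budget is allocated in proportion to weights, and weights are reinforced on attack surfaces that were recently exploited. The update rule and parameters are assumptions, not the paper's exact algorithm.

    public class ReactiveDefense {
        private final double[] weights;
        private final double learningRate;

        ReactiveDefense(int surfaces, double learningRate) {
            this.weights = new double[surfaces];
            java.util.Arrays.fill(weights, 1.0);
            this.learningRate = learningRate;
        }

        /** Fraction of the defense budget assigned to each attack surface. */
        double[] allocation() {
            double sum = 0;
            for (double w : weights) sum += w;
            double[] a = new double[weights.length];
            for (int i = 0; i < a.length; i++) a[i] = weights[i] / sum;
            return a;
        }

        /** After observing an attack, reinforce the surfaces it exploited. */
        void observeAttack(double[] lossPerSurface) {
            for (int i = 0; i < weights.length; i++)
                weights[i] *= Math.exp(learningRate * lossPerSurface[i]);
        }
    }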


*------------*------------*------------*------------*------------*------------*

DOMAIN - DEPENDABLE AND SECURE COMPUTING
SECURE FAILURE DETECTION AND CONSENSUS IN TRUSTEDPALS
We present a modular redesign of TrustedPals, a smart card-based security framework for solving Secure Multiparty Computation (SMC). Originally, TrustedPals assumed a synchronous network setting and allowed SMC to be reduced to the problem of fault-tolerant consensus among smart cards.
We explore how to make TrustedPals applicable in environments with less synchrony and show how it can be used to solve asynchronous SMC. Within the redesign, we investigate the problem of solving consensus in a general omission failure model augmented with failure detectors.
To this end, we give novel definitions of both consensus and the class ◇P of failure detectors in the omission model, which we call ◇P(om), and show how to implement ◇P(om) and solve consensus in such a system under very weak synchrony assumptions.
The integration of failure detection and consensus into the TrustedPals framework uses tools from privacy enhancing techniques such as message padding and dummy traffic
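
A heartbeat-based sketch conveys the kind of failure detection that ◇P(om) generalizes: a process is suspected once its heartbeat is overdue, and the timeout doubles after each false suspicion so that suspicions eventually become accurate. The omission-model subtleties the paper actually addresses are elided.

    import java.util.*;

    public class HeartbeatDetector {
        private final Map<Integer, Long> lastHeartbeat = new HashMap<>();
        private final Map<Integer, Long> timeoutMs = new HashMap<>();

        void onHeartbeat(int pid, long nowMs) {
            long timeout = timeoutMs.getOrDefault(pid, 1000L);
            if (suspected(pid, nowMs)) timeout *= 2;   // false suspicion: back off
            lastHeartbeat.put(pid, nowMs);
            timeoutMs.put(pid, timeout);
        }

        /** A process is suspected when its heartbeat is overdue. */
        boolean suspected(int pid, long nowMs) {
            Long last = lastHeartbeat.get(pid);
            return last != null && nowMs - last > timeoutMs.getOrDefault(pid, 1000L);
        }
    }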


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - DEPENDABLE AND SECURE COMPUTING
RESILIENT AUTHENTICATED EXECUTION OF CRITICAL APPLICATIONS IN UNTRUSTED ENVIRONMENTS
Modern computer systems are built on a foundation of software components from a variety of vendors. While critical applications may undergo extensive testing and evaluation procedures, the heterogeneity of software sources threatens the integrity of the execution environment for these trusted programs.
For instance, if an attacker can combine an application exploit with a privilege escalation vulnerability, the operating system (OS) can become corrupted. Alternatively, a malicious or faulty device driver running with kernel privileges could threaten the application. While the importance of ensuring application integrity has been studied in prior work, proposed solutions immediately terminate the application once corruption is detected.
Although this approach is sufficient in some cases, it is undesirable for many critical applications. In order to overcome this shortcoming, we have explored techniques for leveraging a trusted virtual machine monitor (VMM) to observe the application and potentially repair damage that occurs.
In this paper, we describe our system design, which leverages efficient coding and authentication schemes, and we present the details of our prototype implementation to quantify the overhead of our approach. Our work shows that it is feasible to build a resilient execution environment, even in the presence of a corrupted OS kernel, with a reasonable amount of storage and performance overhead.
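
One plausible building block for the authentication side is a per-block HMAC kept by the trusted VMM, which localizes corruption to specific blocks so a repair layer can target them. The sketch below shows only that detection step, with the coding-based repair omitted; names are illustrative.

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class BlockAuthenticator {
        private final SecretKeySpec key;

        BlockAuthenticator(byte[] keyBytes) {
            this.key = new SecretKeySpec(keyBytes, "HmacSHA256");
        }

        byte[] tag(byte[] block) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(key);
            return mac.doFinal(block);
        }

        /** Returns the index of the first corrupted block, or -1 if all verify. */
        int firstCorrupted(byte[][] blocks, byte[][] tags) throws Exception {
            for (int i = 0; i < blocks.length; i++)
                if (!java.util.Arrays.equals(tag(blocks[i]), tags[i])) return i;
            return -1;
        }
    }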


*------------*------------*------------*------------*------------*------------*

DOMAIN - DEPENDABLE AND SECURE COMPUTING
ON PRIVACY OF ENCRYPTED SPEECH COMMUNICATIONS
Silence suppression, an essential feature of speech communications over the Internet, saves bandwidth by disabling voice packet transmissions when silence is detected. However, silence suppression enables an adversary to recover talk patterns from packet timing.
In this paper, we investigate privacy leakage through the silence suppression feature. More specifically, we propose a new class of traffic analysis attacks against encrypted speech communications, with the goal of detecting the speakers of those communications. These attacks are based on packet timing information only and can detect speakers of speech communications made with different codecs.
We evaluate the proposed attacks with extensive experiments over different types of networks, including commercial anonymity networks and campus networks. The experiments show that the proposed traffic analysis attacks can detect speakers of encrypted speech communications with high accuracy, based on traces that are 15 minutes long on average
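
The timing-only feature extraction can be sketched directly: with silence suppression, a gap between packets longer than some threshold marks a silence period, so packet timestamps alone reveal the talk pattern. The threshold is an illustrative assumption.

    import java.util.*;

    public class TalkPatternExtractor {
        /** Returns talk-spurt durations (ms) recovered from packet arrival
         *  times, split at gaps longer than silenceGapMs. */
        static List<Long> talkSpurts(long[] packetTimesMs, long silenceGapMs) {
            List<Long> spurts = new ArrayList<>();
            if (packetTimesMs.length == 0) return spurts;
            long start = packetTimesMs[0];
            for (int i = 1; i < packetTimesMs.length; i++) {
                if (packetTimesMs[i] - packetTimesMs[i - 1] > silenceGapMs) {
                    spurts.add(packetTimesMs[i - 1] - start); // spurt ended
                    start = packetTimesMs[i];                 // next spurt begins
                }
            }
            spurts.add(packetTimesMs[packetTimesMs.length - 1] - start);
            return spurts;
        }
    }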


*------------*------------*------------*------------*------------*------------*
 
DOMAIN - DEPENDABLE AND SECURE COMPUTING
MITIGATING DISTRIBUTED DENIAL OF SERVICE ATTACKS IN MULTIPARTY APPLICATIONS IN THE PRESENCE OF CLOCK DRIFTS
Network-based applications commonly open some known communication port(s), making themselves easy targets for (distributed) Denial of Service (DoS) attacks. Earlier solutions for this problem are based on port-hopping between pairs of processes which are synchronous or exchange acknowledgments.
However, acknowledgments, if lost, can cause a port to be open for a longer time and thus be vulnerable, while time servers can become targets of DoS attacks themselves. Here, we extend port-hopping to support multiparty applications by proposing the BIGWHEEL algorithm, which allows each application server to communicate with multiple clients in a port-hopping manner without the need for group synchronization.
Furthermore, we present an adaptive algorithm, HOPERAA, for enabling hopping in the presence of bounded asynchrony, namely, when the communicating parties have clocks with clock drifts. The solutions are simple, based on each client interacting with the server independently of the other clients, without the need of acknowledgments or time server(s).
Further, they do not rely on the application having a fixed port open in the beginning, nor do they require the clients to get a "first-contact" port from a third party. We show the properties of the algorithms analytically and also study their success rates experimentally, confirming the relation with the analytical bounds.
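
A minimal sketch of time-based port hopping, assuming a shared secret and synchronized time slots: both ends derive the current contact port from an HMAC of the slot index, so no fixed port stays open. The drift-compensation logic that HOPERAA contributes is omitted, and the parameters are assumptions.

    import java.nio.charset.StandardCharsets;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class PortHopper {
        private final SecretKeySpec key;
        private final long slotMillis;
        private final int firstPort, portRange;

        PortHopper(byte[] secret, long slotMillis, int firstPort, int portRange) {
            this.key = new SecretKeySpec(secret, "HmacSHA256");
            this.slotMillis = slotMillis;
            this.firstPort = firstPort;
            this.portRange = portRange;
        }

        /** Port in use during the current time slot. */
        int currentPort(long nowMillis) throws Exception {
            long slot = nowMillis / slotMillis;
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(key);
            byte[] h = mac.doFinal(Long.toString(slot).getBytes(StandardCharsets.US_ASCII));
            int v = ((h[0] & 0x7f) << 24) | ((h[1] & 0xff) << 16)
                  | ((h[2] & 0xff) << 8) | (h[3] & 0xff);
            return firstPort + v % portRange;
        }
    }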


*------------*------------*------------*------------*------------*------------*