
Sunday 7 July 2013

Java Project Titles and IEEE Project Abstracts for CSE, IT, and MCA (Download Java Project Titles and Abstracts)

JAVA PROJECTS - ABSTRACTS
A Fast Clustering-Based Feature Subset Selection Algorithm for High-Dimensional Data
ABSTRACT:
Feature selection involves identifying a subset of the most useful features that produces results comparable to those of the original, entire set of features. A feature selection algorithm may be evaluated from both the efficiency and effectiveness points of view. While efficiency concerns the time required to find a subset of features, effectiveness is related to the quality of the subset of features. Based on these criteria, a fast clustering-based feature selection algorithm (FAST) is proposed and experimentally evaluated in this paper.
The FAST algorithm works in two steps. In the first step, features are divided into clusters by using graph-theoretic clustering methods. In the second step, the most representative feature that is strongly related to the target classes is selected from each cluster to form a subset of features. Because features in different clusters are relatively independent, the clustering-based strategy of FAST has a high probability of producing a subset of useful and independent features. To ensure the efficiency of FAST, we adopt the efficient minimum-spanning tree (MST) clustering method.
The efficiency and effectiveness of the FAST algorithm are evaluated through an empirical study. Extensive experiments are carried out to compare FAST and several representative feature selection algorithms, namely, FCBF, ReliefF, CFS, Consist, and FOCUS-SF, with respect to four types of well-known classifiers, namely, the probability-based Naive Bayes, the tree-based C4.5, the instance-based IB1, and the rule-based RIPPER, before and after feature selection.
The results, on 35 publicly available real-world high-dimensional image, microarray, and text datasets, demonstrate that FAST not only produces smaller subsets of features but also improves the performance of the four types of classifiers.
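A minimal Java sketch of the two-step idea described above, assuming stand-in correlation() and relevance() measures (the paper uses symmetric uncertainty): build a complete feature graph, take its minimum spanning tree, cut weak edges to form clusters, and keep the most class-relevant feature from each cluster.

import java.util.*;

/** Sketch of FAST-style feature selection: MST clustering of features,
 *  then one representative feature per cluster. Measures are placeholders. */
public class FastFeatureSelectionSketch {

    // Stand-in similarity between two feature columns (absolute Pearson correlation).
    static double correlation(double[] a, double[] b) {
        double ma = Arrays.stream(a).average().orElse(0), mb = Arrays.stream(b).average().orElse(0);
        double num = 0, da = 0, db = 0;
        for (int i = 0; i < a.length; i++) {
            num += (a[i] - ma) * (b[i] - mb);
            da += (a[i] - ma) * (a[i] - ma);
            db += (b[i] - mb) * (b[i] - mb);
        }
        return (da == 0 || db == 0) ? 0 : Math.abs(num / Math.sqrt(da * db));
    }

    // Stand-in relevance of a feature column to the class labels.
    static double relevance(double[] feature, double[] labels) {
        return correlation(feature, labels);
    }

    /** features[j] is the column of feature j; returns indices of the selected subset. */
    static List<Integer> select(double[][] features, double[] labels, double cutWeight) {
        int n = features.length;
        // Step 1a: Prim's MST over the complete feature graph, edge weight = 1 - correlation.
        int[] parent = new int[n];
        double[] best = new double[n];
        boolean[] inTree = new boolean[n];
        Arrays.fill(best, Double.MAX_VALUE);
        best[0] = 0; parent[0] = -1;
        for (int it = 0; it < n; it++) {
            int u = -1;
            for (int v = 0; v < n; v++)
                if (!inTree[v] && (u == -1 || best[v] < best[u])) u = v;
            inTree[u] = true;
            for (int v = 0; v < n; v++) {
                double w = 1.0 - correlation(features[u], features[v]);
                if (!inTree[v] && w < best[v]) { best[v] = w; parent[v] = u; }
            }
        }
        // Step 1b: cut MST edges heavier than cutWeight; the remaining components are clusters.
        int[] cluster = new int[n];
        Arrays.fill(cluster, -1);
        for (int v = 0; v < n; v++) findCluster(v, parent, best, cutWeight, cluster);
        // Step 2: keep the most class-relevant feature from each cluster.
        Map<Integer, Integer> repOfCluster = new HashMap<>();
        for (int v = 0; v < n; v++) {
            Integer rep = repOfCluster.get(cluster[v]);
            if (rep == null || relevance(features[v], labels) > relevance(features[rep], labels))
                repOfCluster.put(cluster[v], v);
        }
        return new ArrayList<>(repOfCluster.values());
    }

    // Follow uncut MST edges toward the root to label each feature with a cluster id.
    static int findCluster(int v, int[] parent, double[] weightToParent, double cut, int[] cluster) {
        if (cluster[v] != -1) return cluster[v];
        if (parent[v] == -1 || weightToParent[v] > cut) return cluster[v] = v;  // new cluster root
        return cluster[v] = findCluster(parent[v], parent, weightToParent, cut, cluster);
    }
}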


A Globally-Variant Locally-Constant Model for Fusion of Labels from Multiple Diverse Experts without Using Reference Labels
Abstract:
Researchers have shown that fusion of categorical labels from multiple experts - humans or machine classifiers - improves the accuracy and generalizability of the overall classification system. Simple plurality is a popular technique for performing this fusion, but it gives equal importance to labels from all experts, who may not be equally reliable or consistent across the dataset. Estimation of expert reliability without knowing the reference labels is, however, a challenging problem. 
Most previous works deal with these challenges by modeling expert reliability as constant over the entire data (feature) space. This paper presents a model based on the consideration that in dealing with real-world data, expert reliability is variable over the complete feature space but constant over local clusters of homogeneous instances. 
This model jointly learns a classifier and expert reliability parameters without assuming knowledge of the reference labels using the Expectation-Maximization (EM) algorithm. Classification experiments on simulated data, data from the UCI Machine Learning Repository, and two emotional speech classification datasets show the benefits of the proposed model. 
Using a metric based on the Jensen-Shannon divergence, we empirically show that the proposed model gives greater benefit for datasets where expert reliability is highly variable over the feature space.
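The Jensen-Shannon divergence used in that metric compares two distributions against their average; a minimal Java sketch over discrete label distributions (inputs assumed to be probability vectors summing to one):

/** Jensen-Shannon divergence between two discrete distributions p and q
 *  (base-2 logarithms, so the result lies in [0, 1]). */
public final class JensenShannon {

    static double klDivergence(double[] p, double[] q) {
        double d = 0;
        for (int i = 0; i < p.length; i++)
            if (p[i] > 0 && q[i] > 0) d += p[i] * (Math.log(p[i] / q[i]) / Math.log(2));
        return d;
    }

    public static double jsDivergence(double[] p, double[] q) {
        double[] m = new double[p.length];
        for (int i = 0; i < p.length; i++) m[i] = 0.5 * (p[i] + q[i]);
        return 0.5 * klDivergence(p, m) + 0.5 * klDivergence(q, m);
    }

    public static void main(String[] args) {
        double[] expertA = {0.7, 0.2, 0.1};   // illustrative label distributions
        double[] expertB = {0.1, 0.3, 0.6};
        System.out.println(jsDivergence(expertA, expertB));
    }
}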


A Privacy Leakage Upper Bound Constraint-Based Approach for Cost-Effective Privacy Preserving of Intermediate Data Sets in Cloud
ABSTRACT:
Cloud computing provides massive computation power and storage capacity which enable users to deploy computation and data-intensive applications without infrastructure investment. Along the processing of such applications, a large volume of intermediate data sets will be generated, and often stored to save the cost of recomputing them. 
However, preserving the privacy of intermediate data sets becomes a challenging problem because adversaries may recover privacy-sensitive information by analyzing multiple intermediate data sets. Encrypting all data sets in the cloud is widely adopted in existing approaches to address this challenge. But we argue that encrypting all intermediate data sets is neither efficient nor cost-effective because it is very time consuming and costly for data-intensive applications to en/decrypt data sets frequently while performing any operation on them.
In this paper, we propose a novel upper bound privacy leakage constraint-based approach to identify which intermediate data sets need to be encrypted and which do not, so that privacy-preserving cost can be saved while the privacy requirements of data holders can still be satisfied. Evaluation results demonstrate that the privacy-preserving cost of intermediate data sets can be significantly reduced with our approach over existing ones where all data sets are encrypted.


A Resource Allocation Scheme for Scalable Video Multicast in WiMAX Relay Networks
Abstract:
This paper proposes the first resource allocation scheme in the literature to support scalable-video multicast for WiMAX relay networks. We prove that when the available bandwidth is limited, the bandwidth allocation problems of 1) maximizing network throughput and 2) maximizing the number of satisfied users are NP-hard. 
To find the near-optimal solutions to this type of maximization problem in polynomial time, this study first proposes a greedy weighted algorithm, GWA, for bandwidth allocation. By incorporating table-consulting mechanisms, the proposed GWA can intelligently avoid redundant bandwidth allocation and thus accomplish high network performance (such as high network throughput or large number of satisfied users). 
To maintain the high performance gained by GWA and simultaneously improve its worst case performance, this study extends GWA to a bounded version, BGWA, which guarantees that its performance gains are lower bounded. This study shows that the computational complexity of BGWA is also in polynomial time and proves that BGWA can provide at least 1/ρ times the performance of the optimal solution, where ρ is a finite value no less than one.
Finally, simulation results show that the proposed BGWA bandwidth allocation scheme can effectively achieve different performance objectives with different parameter settings.


A Scalable Server Architecture for Mobile Presence Services in Social Network Applications
Abstract:
Social network applications are becoming increasingly popular on mobile devices. A mobile presence service is an essential component of a social network application because it maintains each mobile user's presence information, such as the current status (online/offline), GPS location and network address, and also updates the user's online friends with the information continually. 
If presence updates occur frequently, the enormous number of messages distributed by presence servers may lead to a scalability problem in a large-scale mobile presence service. To address the problem, we propose an efficient and scalable server architecture, called PresenceCloud, which enables mobile presence services to support large-scale social network applications. 
When a mobile user joins a network, PresenceCloud searches for the presence of his/her friends and notifies them of his/her arrival. PresenceCloud organizes presence servers into a quorum-based server-to-server architecture for efficient presence searching. It also leverages a directed search algorithm and a one-hop caching strategy to achieve small constant search latency. 
We analyze the performance of PresenceCloud in terms of the search cost and search satisfaction level. The search cost is defined as the total number of messages generated by the presence server when a user arrives; and search satisfaction level is defined as the time it takes to search for the arriving user's friend list. The results of simulations demonstrate that PresenceCloud achieves performance gains in the search cost without compromising search satisfaction.


A Stochastic Model to Investigate Data Center Performance and QoS in IaaS Systems
Abstract:
Cloud data center management is a key problem due to the numerous and heterogeneous strategies that can be applied, ranging from the VM placement to the federation with other clouds. Performance evaluation of Cloud Computing infrastructures is required to predict and quantify the cost-benefit of a strategy portfolio and the corresponding Quality of Service (QoS) experienced by users. 
Such analyses are not feasible by simulation or on-the-field experimentation, due to the great number of parameters that have to be investigated. In this paper, we present an analytical model, based on Stochastic Reward Nets (SRNs), that is both scalable to model systems composed of thousands of resources and flexible to represent different policies and cloud-specific strategies. 
Several performance metrics are defined and evaluated to analyze the behavior of a Cloud data center: utilization, availability, waiting time, and responsiveness. A resiliency analysis is also provided to take into account load bursts. 
Finally, a general methodology is presented that, starting from the concept of system capacity, can help system managers to opportunely set the data center parameters under different working conditions.


Active Learning and Effort Estimation: Finding the Essential Content of Software Effort Estimation Data
Abstract:
Background: Do we always need complex methods for software effort estimation (SEE)? Aim: To characterize the essential content of SEE data, i.e., the least number of features and instances required to capture the information within SEE data. 
If the essential content is very small, then 1) the contained information must be very brief and 2) the value added by complex learning schemes must be minimal. Method: Our QUICK method computes the Euclidean distance between rows (instances) and columns (features) of SEE data, then prunes synonyms (similar features) and outliers (distant instances), and then assesses the reduced data by comparing predictions from 1) a simple learner using the reduced data and 2) a state-of-the-art learner (CART) using all data. Performance is measured using hold-out experiments and expressed in terms of mean and median MRE, MAR, PRED(25), MBRE, MIBRE, or MMER. Results: For 18 datasets, QUICK pruned 69 to 96 percent of the training data (median = 89 percent).
K = 1 nearest neighbor predictions (in the reduced data) performed as well as CART's predictions (using all data). Conclusion: The essential content of some SEE datasets is very small. Complex estimation methods may be overelaborate for such datasets and can be simplified. We offer QUICK as an example of such a simpler SEE method.
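As a rough illustration of the prediction step that follows the pruning (not the QUICK pruning itself), a k = 1 nearest-neighbor effort estimate over an already-reduced project table, using Euclidean distance on the retained features; all values in main() are illustrative:

/** k = 1 nearest-neighbor effort prediction on a reduced SEE table:
 *  each row is a project described by the retained features, with a known effort. */
public class NearestNeighborEffort {

    static double euclidean(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(sum);
    }

    /** Returns the effort of the historical project closest to the query project. */
    static double predict(double[][] reducedRows, double[] efforts, double[] query) {
        int best = 0;
        for (int i = 1; i < reducedRows.length; i++)
            if (euclidean(reducedRows[i], query) < euclidean(reducedRows[best], query)) best = i;
        return efforts[best];
    }

    public static void main(String[] args) {
        double[][] rows = {{2, 40}, {5, 120}, {9, 300}};   // e.g., {team size, KLOC} (made-up values)
        double[] efforts = {12, 55, 180};                   // person-months (made-up values)
        System.out.println(predict(rows, efforts, new double[]{4, 100}));
    }
}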


AMES-Cloud: A Framework of Adaptive Mobile Video Streaming and Efficient Social Video Sharing in the Clouds
ABSTRACT
While demand for video traffic over mobile networks has been soaring, the wireless link capacity cannot keep up with the traffic demand. The gap between the traffic demand and the link capacity, along with time-varying link conditions, results in poor service quality of video streaming over mobile networks, such as long buffering times and intermittent disruptions.
Leveraging the cloud computing technology, we propose a new mobile video streaming framework, dubbed AMES-Cloud, which has two main parts: AMoV (adaptive mobile video streaming) and ESoV (efficient social video sharing). AMoV and ESoV construct a private agent to provide video streaming services efficiently for each mobile user. 
For a given user, AMoV lets her private agent adaptively adjust her streaming flow with a scalable video coding technique based on the feedback of link quality. Likewise, ESoV monitors the social network interactions among mobile users, and their private agents try to prefetch video content in advance. 
We implement a prototype of the AMES-Cloud framework to demonstrate its performance. It is shown that the private agents in the clouds can effectively provide the adaptive streaming, and perform video sharing (i.e., prefetching) based on the social network analysis.


An Efficient and Robust Addressing Protocol for Node Autoconfiguration in Ad Hoc Networks
ABSTRACT:
Address assignment is a key challenge in ad hoc networks due to the lack of infrastructure. Autonomous addressing protocols require a distributed and self-managed mechanism to avoid address collisions in a dynamic network with fading channels, frequent partitions, and joining/leaving nodes. 
We propose and analyze a lightweight protocol that configures mobile ad hoc nodes based on a distributed address database stored in filters, which reduces the control load and makes the proposal robust to packet losses and network partitions.
We evaluate the performance of our protocol, considering joining nodes, partition merging events, and network initialization. Simulation results show that our protocol resolves all the address collisions and also reduces the control traffic when compared to previously proposed protocols.


Anomaly Detection via Online Over-Sampling Principal Component Analysis
ABSTRACT:
Anomaly detection has been an important research topic in data mining and machine learning. Many real-world applications such as intrusion or credit card fraud detection require an effective and efficient framework to identify deviated data instances. However, most anomaly detection methods are typically implemented in batch mode, and thus cannot be easily extended to large-scale problems without sacrificing computation and memory requirements. 
In this paper, we propose an online over-sampling principal component analysis (osPCA) algorithm to address this problem, and we aim at detecting the presence of outliers from a large amount of data via an online updating technique. Unlike prior PCA based approaches, we do not store the entire data matrix or covariance matrix, and thus our approach is especially of interest in online or large-scale problems. 
By over-sampling the target instance and extracting the principal direction of the data, the proposed osPCA allows us to determine the anomaly of the target instance according to the variation of the resulting dominant eigenvector. Since osPCA does not need to perform eigenanalysis explicitly, the proposed framework is favored for online applications that have computation or memory limitations.
Compared with the well-known power method for PCA and other popular anomaly detection algorithms, our experimental results verify the feasibility of our proposed method in terms of both accuracy and efficiency.
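A minimal Java sketch of the over-sampling idea only (not the authors' online update rule): duplicate the target instance, recompute the dominant eigenvector of the data covariance by power iteration, and score the target by how much that eigenvector rotates.

/** Over-sampling PCA sketch: the anomaly score of a target point is how much the
 *  dominant eigenvector of the data covariance rotates when the target is duplicated. */
public class OsPcaSketch {

    // Dominant eigenvector of the covariance of mean-centered rows, by power iteration.
    static double[] dominantEigenvector(double[][] rows, int iterations) {
        int d = rows[0].length, n = rows.length;
        double[] mean = new double[d];
        for (double[] r : rows) for (int j = 0; j < d; j++) mean[j] += r[j] / n;
        double[] v = new double[d];
        java.util.Arrays.fill(v, 1.0 / Math.sqrt(d));
        for (int it = 0; it < iterations; it++) {
            double[] next = new double[d];
            for (double[] r : rows) {                 // next = sum_i (x_i - mean)(x_i - mean)^T v
                double dot = 0;
                for (int j = 0; j < d; j++) dot += (r[j] - mean[j]) * v[j];
                for (int j = 0; j < d; j++) next[j] += dot * (r[j] - mean[j]);
            }
            double norm = 0;
            for (double x : next) norm += x * x;
            norm = Math.sqrt(norm);
            for (int j = 0; j < d; j++) v[j] = next[j] / norm;
        }
        return v;
    }

    /** 1 - |cos angle| between the eigenvectors with and without over-sampling the target. */
    static double anomalyScore(double[][] data, double[] target, int copies) {
        double[][] augmented = new double[data.length + copies][];
        System.arraycopy(data, 0, augmented, 0, data.length);
        for (int k = 0; k < copies; k++) augmented[data.length + k] = target;
        double[] u = dominantEigenvector(data, 100);
        double[] w = dominantEigenvector(augmented, 100);
        double dot = 0;
        for (int j = 0; j < u.length; j++) dot += u[j] * w[j];
        return 1.0 - Math.abs(dot);   // larger rotation -> more likely an outlier
    }
}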


Automatic Semantic Content Extraction in Videos Using a Fuzzy Ontology and Rule-Based Model
Abstract:
The recent increase in the use of video-based applications has revealed the need for extracting the content of videos. Raw data and low-level features alone are not sufficient to fulfill the user's needs; that is, a deeper understanding of the content at the semantic level is required.
Currently, manual techniques, which are inefficient, subjective and costly in time and limit the querying capabilities, are being used to bridge the gap between low-level representative features and high-level semantic content. Here, we propose a semantic content extraction system that allows the user to query and retrieve objects, events, and concepts that are extracted automatically. 
We introduce an ontology-based fuzzy video semantic content model that uses spatial/temporal relations in event and concept definitions. This metaontology definition provides a wide-domain applicable rule construction standard that allows the user to construct an ontology for a given domain. In addition to domain ontologies, we use additional rule definitions (without using ontology) to lower spatial relation computation cost and to be able to define some complex situations more effectively. 
The proposed framework has been fully implemented and tested on three different domains. We have obtained satisfactory precision and recall rates for object, event and concept extraction.


Bicriteria Optimization in Multihop Wireless Networks: Characterizing the Throughput-Energy Envelope
Abstract:
Network throughput and energy consumption are two important performance metrics for a multihop wireless network. Current state-of-the-art research is limited to either maximizing throughput under some energy constraint or minimizing energy consumption while satisfying some throughput requirement. Although many of these prior efforts were able to offer some optimal solutions, there is still a critical need to have a systematic study on how to optimize both objectives simultaneously. 
In this paper, we take a multicriteria optimization approach to offer a systematic study on the relationship between the two performance objectives. To focus on throughput and energy performance, we simplify link layer scheduling by employing orthogonal channels among the links. We show that the solution to the multicriteria optimization problem characterizes the envelope of the entire throughput-energy region, i.e., the so-called optimal throughput-energy curve. 
We prove some important properties of the optimal throughput-energy curve. For case study, we consider both linear and nonlinear throughput functions. For the linear case, we characterize the optimal throughput-energy curve precisely through parametric analysis, while for the nonlinear case, we use a piecewise linear approximation to approximate the optimal throughput-energy curve with arbitrary accuracy. Our results offer important insights on exploiting the tradeoff between the two performance metrics.


Cloud Computing for Mobile Users: Can Offloading Computation Save Energy?
Abstract
The cloud heralds a new era of computing where application services are provided through the Internet. Cloud computing can enhance the computing capability of mobile systems, but is it the ultimate solution for extending such systems’ battery lifetimes? Cloud computing is a new paradigm in which computing resources such as processing, memory, and storage are not physically present at the user’s location.
Instead, a service provider owns and manages these resources, and users access them via the Internet. For example, Amazon Web Services lets users store personal data via its Simple Storage Service (S3) and perform computations on stored data using the Elastic Compute Cloud (EC2). 
This type of computing provides many advantages for businesses—including low initial capital investment, shorter start-up time for new services, lower maintenance and operation costs, higher utilization through virtualization, and easier disaster recovery that make cloud computing an attractive option. 
Reports suggest that there are several benefits in shifting computing from the desktop to the cloud [1, 2]. What about cloud computing for mobile users? The primary constraints for mobile computing are limited energy and wireless bandwidth. Cloud computing can provide energy savings as a service to mobile users, though it also poses some unique challenges.
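A back-of-the-envelope Java sketch of the kind of comparison the article discusses; every parameter below is an assumed, illustrative value, and the simple energy model (local energy from CPU time, offload energy from radio transfer plus idle waiting) is a hypothetical one, not the article's:

/** Hypothetical back-of-the-envelope check: does offloading a task save energy?
 *  All parameters below are assumptions for illustration, not measured values. */
public class OffloadDecision {

    public static void main(String[] args) {
        double cycles = 8e9;          // computation required by the task (CPU cycles)
        double mobileSpeed = 1e9;     // mobile CPU speed (cycles/s)
        double pCompute = 0.9;        // mobile power while computing (W)
        double pIdle = 0.3;           // mobile power while waiting for the cloud (W)
        double pRadio = 1.3;          // mobile power while sending/receiving (W)
        double bytes = 2e6;           // data to exchange with the cloud (bytes)
        double bandwidth = 1e6;       // wireless throughput (bytes/s)
        double cloudSpeedup = 10;     // how much faster the cloud executes the task

        double localEnergy = (cycles / mobileSpeed) * pCompute;
        double offloadEnergy = (bytes / bandwidth) * pRadio
                + (cycles / (mobileSpeed * cloudSpeedup)) * pIdle;

        System.out.printf("local: %.2f J, offload: %.2f J -> %s%n",
                localEnergy, offloadEnergy,
                offloadEnergy < localEnergy ? "offload" : "run locally");
    }
}

Under this toy model, offloading pays off when the task is computation-heavy relative to the data that must cross the wireless link, which is the intuition the article develops.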


Comparable Entity Mining from Comparative Questions
Abstract:
Comparing one thing with another is a typical part of the human decision-making process. However, it is not always easy to know what to compare and what the alternatives are. In this paper, we present a novel way to address this difficulty by automatically mining comparable entities from comparative questions that users post online. 
To ensure high precision and high recall, we develop a weakly supervised bootstrapping approach for comparative question identification and comparable entity extraction by leveraging a large collection of online question archives. 
The experimental results show that our method achieves an F1-measure of 82.5 percent in comparative question identification and 83.3 percent in comparable entity extraction, both significantly outperforming an existing state-of-the-art method. Additionally, our ranking results are highly relevant to users' comparison intents.


Extracting Spread-Spectrum Hidden Data from Digital Media
Abstract:
We consider the problem of extracting blindly data embedded over a wide band in a spectrum (transform) domain of a digital medium (image, audio, video). We develop a novel multicarrier/signature iterative generalized least-squares (M-IGLS) core procedure to seek unknown data hidden in hosts via multicarrier spread-spectrum embedding. 
Neither the original host nor the embedding carriers are assumed available. Experimental studies on images show that the developed algorithm can achieve recovery probability of error close to what may be attained with known embedding carriers and host autocorrelation matrix.


Detection and Localization of Multiple Spoofing Attackers in Wireless Networks
ABSTRACT:
Wireless spoofing attacks are easy to launch and can significantly impact the performance of networks. Although the identity of a node can be verified through cryptographic authentication, conventional security approaches are not always desirable because of their overhead requirements. 
In this paper, we propose to use spatial information, a physical property associated with each node that is hard to falsify and not reliant on cryptography, as the basis for (1) detecting spoofing attacks; (2) determining the number of attackers when multiple adversaries masquerade as the same node identity; and (3) localizing multiple adversaries. We propose to use the spatial correlation of received signal strength (RSS) inherited from wireless nodes to detect the spoofing attacks.
We then formulate the problem of determining the number of attackers as a multi-class detection problem. Cluster-based mechanisms are developed to determine the number of attackers. When the training data is available, we explore using Support Vector Machines (SVM) method to further improve the accuracy of determining the number of attackers. 
In addition, we developed an integrated detection and localization system that can localize the positions of multiple attackers. We evaluated our techniques through two testbeds using both an 802.11 (WiFi) network and an 802.15.4 (ZigBee) network in two real office buildings. 
Our experimental results show that our proposed methods can achieve over 90% Hit Rate and Precision when determining the number of attackers. Our localization results using a representative set of algorithms provide strong evidence of high accuracy of localizing multiple adversaries.
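As a toy illustration of the cluster-based intuition only (not the authors' full SVM-based mechanism), assuming each reading is the vector of RSS values observed by a set of monitors for one claimed node identity:

import java.util.*;

/** Toy illustration of RSS-based spoofing detection: split the RSS readings observed
 *  for one node identity into two clusters; widely separated cluster centers suggest
 *  that more than one physical device is using that identity. */
public class RssSpoofingHint {

    static double distance(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);
    }

    /** rss[i] is one reading: signal strength seen by each landmark/monitor (dBm). */
    static boolean looksSpoofed(double[][] rss, double thresholdDb) {
        double[] c1 = rss[0].clone(), c2 = rss[rss.length - 1].clone();
        for (int iter = 0; iter < 20; iter++) {           // plain 2-means clustering
            List<double[]> g1 = new ArrayList<>(), g2 = new ArrayList<>();
            for (double[] r : rss) (distance(r, c1) <= distance(r, c2) ? g1 : g2).add(r);
            c1 = mean(g1, c1);
            c2 = mean(g2, c2);
        }
        return distance(c1, c2) > thresholdDb;            // large gap -> likely two transmitters
    }

    static double[] mean(List<double[]> group, double[] fallback) {
        if (group.isEmpty()) return fallback;
        double[] m = new double[fallback.length];
        for (double[] r : group) for (int j = 0; j < m.length; j++) m[j] += r[j] / group.size();
        return m;
    }
}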


Distributed Cooperative Caching in Social Wireless Networks
ABSTRACT:
This paper introduces cooperative caching policies for minimizing electronic content provisioning cost in Social Wireless Networks (SWNETs). SWNETs are formed by mobile devices, such as data-enabled phones and electronic book readers, sharing common interests in electronic content and physically gathering together in public places.
Electronic object caching in such SWNETs is shown to reduce the content provisioning cost, which depends heavily on the service and pricing dependencies among various stakeholders, including content providers (CPs), network service providers, and end consumers (ECs). Drawing motivation from Amazon’s Kindle electronic book delivery business, this paper develops practical network, service, and pricing models which are then used for creating two object caching strategies for minimizing content provisioning costs in networks with homogenous and heterogeneous object demands.
The paper constructs analytical and simulation models for analyzing the proposed caching strategies in the presence of selfish users that deviate from network-wide cost-optimal policies. It also reports results from an Android phone based prototype SWNET, validating the presented analytical and simulation results.


Distributed Processing of Probabilistic Top-k Queries in Wireless Sensor Networks
ABSTRACT:
In this paper, we introduce the notion of sufficient set and necessary set for distributed processing of probabilistic top-k queries in cluster-based wireless sensor networks. These two concepts have very nice properties that can facilitate localized data pruning in clusters. 
Accordingly, we develop a suite of algorithms, namely, sufficient set-based (SSB), necessary set-based (NSB), and boundary-based (BB), for intercluster query processing with bounded rounds of communications. Moreover, in responding to dynamic changes of data distribution in the network, we develop an adaptive algorithm that dynamically switches among the three proposed algorithms to minimize the transmission cost. 
We show the applicability of sufficient set and necessary set to wireless sensor networks with both two-tier hierarchical and tree-structured network topologies. Experimental results show that the proposed algorithms reduce data transmissions significantly and incur only small constant rounds of data communications. The experimental results also demonstrate the superiority of the adaptive algorithm, which achieves a near-optimal performance under various conditions.


Dynamic Query Forms for Database Queries
Abstract:
Modern scientific and web databases maintain large and heterogeneous data. These real-world database schemas contain hundreds or even thousands of attributes and relations. Traditional predefined query forms are not able to satisfy various ad-hoc queries from users.
This paper proposes DQF, a novel database query form interface, which is able to dynamically generate query forms. The essence of DQF is to capture the user’s preference and rank query form components. The generation of the query form is an iterative process and is guided by the user. At each iteration, the system automatically generates ranking lists of form components and the user then adds the desired form components into the query form. 
The ranking of form components is based on the captured user preference. The user can also fill the query form and submit queries to view the query result at each iteration. In this way, the query form can be dynamically refined until the user is satisfied with the query results.
We propose a metric for measuring the goodness of a query form. A probabilistic model is developed for estimating the goodness of a query form in DQF. Our experimental evaluation and user study demonstrate the effectiveness and efficiency of the system.


Dynamic Trust Management for Delay Tolerant Networks and Its Application to Secure Routing
Abstract:
Delay tolerant networks (DTNs) are characterized by high end-to-end latency, frequent disconnection, and opportunistic communication over unreliable wireless links. In this paper, we design and validate a dynamic trust management protocol for secure routing optimization in DTN environments in the presence of well-behaved, selfish and malicious nodes. 
We develop a novel model-based methodology for the analysis of our trust protocol and validate it via extensive simulation. Moreover, we address dynamic trust management, i.e., determining and applying the best operational settings at runtime in response to dynamically changing network conditions to minimize trust bias and to maximize the routing application performance. 
We perform a comparative analysis of our proposed routing protocol against Bayesian trust-based and non-trust based (PROPHET and epidemic) routing protocols. The results demonstrate that our protocol is able to deal with selfish behaviors and is resilient against trust-related attacks. 
Furthermore, our trust-based routing protocol can effectively trade off message overhead and message delay for a significant gain in delivery ratio. Our trust-based routing protocol operating under identified best settings outperforms Bayesian trust-based routing and PROPHET, and approaches the ideal performance of epidemic routing in delivery ratio and message delay without incurring high message or protocol maintenance overhead.


Two-Dimensional Orthogonal DCT Expansion in Triangular and Trapezoid Regions
ABSTRACT
It is known that the 2-D DCT basis is complete and orthogonal in a rectangular region. In this paper, we introduce the way to generate the complete and orthogonal 2-D DCT basis in a trapezoid region or a triangular region without using the complicated Gram-Schmidt method. 
Moreover, since a polygon can be decomposed into several triangular regions, the proposed method is also suitable for polygonal regions. Our algorithm substantially generalizes the JPEG algorithm: instead of dividing an image into 8 x 8 blocks, we can divide an image into trapezoid or triangular regions and then transform and code each of them.
In addition to the DCT basis, our method can also be used for generating the 2-D complete and orthogonal DFT basis, KLT basis, Legendre basis, Hadamard (Walsh) basis, and polynomial basis in the trapezoid and triangular regions.


A System to Filter Unwanted Messages from OSN User Walls
ABSTRACT:
One fundamental issue in today’s Online Social Networks (OSNs) is to give users the ability to control the messages posted on their own private space so that unwanted content is not displayed. Up to now, OSNs have provided little support for this requirement.
To fill the gap, in this paper, we propose a system allowing OSN users to have direct control over the messages posted on their walls. This is achieved through a flexible rule-based system, which allows users to customize the filtering criteria to be applied to their walls, and a Machine Learning-based soft classifier that automatically labels messages in support of content-based filtering.


Analysis of Distance-Based Location Management in Wireless Communication Networks
Abstract
The performance of dynamic distance-based location management schemes (DBLMS) in wireless communication networks is analyzed. A Markov chain is developed as a mobility model to describe the movement of a mobile terminal in 2D cellular structures. 
The paging area residence time is characterized for arbitrary cell residence time by using the Markov chain. The expected number of paging area boundary crossings and the cost of the distance-based location update method are analyzed by using the classical renewal theory for two different call handling models. For the call plus location update model, two cases are considered. 
In the first case, the intercall time has an arbitrary distribution and the cell residence time has an exponential distribution. In the second case, the intercall time has a hyper-Erlang distribution and the cell residence time has an arbitrary distribution. For the call without location update model, both intercall time and cell residence time can have arbitrary distributions. 
Our analysis makes it possible to find the optimal distance threshold that minimizes the total cost of location management in a DBLMS.


Entrusting Private Computation and Data to Untrusted Networks
Abstract:
We present sTile, a technique for distributing trust-needing computation onto insecure networks, while providing probabilistic guarantees that malicious agents that compromise parts of the network cannot learn private data. With sTile, we explore the fundamental cost of achieving privacy through data distribution and bound how much less efficient a privacy-preserving system is than a nonprivate one. 
This paper focuses specifically on NP-complete problems and demonstrates how sTile-based systems can solve important real-world problems, such as protein folding, image recognition, and resource allocation. 
We present the algorithms involved in sTile and formally prove that sTile-based systems preserve privacy. We develop a reference sTile-based implementation and empirically evaluate it on several physical networks of varying sizes, including the globally distributed PlanetLab testbed. 
Our analysis demonstrates sTile's scalability and ability to handle varying network delay, as well as verifies that problems requiring privacy-preservation can be solved using sTile orders of magnitude faster than using today's state-of-the-art alternatives.


Delay-Based Network Utility Maximization
ABSTRACT:
It is well known that max-weight policies based on a queue backlog index can be used to stabilize stochastic networks, and that similar stability results hold if a delay index is used. 
Using Lyapunov optimization, we extend this analysis to design a utility maximizing algorithm that uses explicit delay information from the head-of-line packet at each user. 
The resulting policy is shown to ensure deterministic worst-case delay guarantees and to yield a throughput utility that differs from the optimally fair value by an amount that is inversely proportional to the delay guarantee. Our results hold for a general class of 1-hop networks, including packet switches.


Discovery and Verification of Neighbor Positions in Mobile Ad Hoc Networks
ABSTRACT:
A growing number of ad hoc networking protocols and location-aware services require that mobile nodes learn the position of their neighbors. However, such a process can be easily abused or disrupted by adversarial nodes. 
In the absence of a priori trusted nodes, the discovery and verification of neighbor positions presents challenges that have been scarcely investigated in the literature.
In this paper, we address this open issue by proposing a fully distributed cooperative solution that is robust against independent and colluding adversaries, and can be impaired only by an overwhelming presence of adversaries. 
Results show that our protocol can thwart more than 99 percent of the attacks under the best possible conditions for the adversaries, with minimal false positive rates.


Dynamic Resource Allocation Using Virtual Machines for Cloud Computing Environment
ABSTRACT:
Cloud computing allows business customers to scale up and down their resource usage based on needs. Many of the touted gains in the cloud model come from resource multiplexing through virtualization technology. 
In this paper, we present a system that uses virtualization technology to allocate data center resources dynamically based on application demands and support green computing by optimizing the number of servers in use. 
We introduce the concept of “skewness” to measure the unevenness in the multidimensional resource utilization of a server. By minimizing skewness, we can combine different types of workloads nicely and improve the overall utilization of server resources. 
We develop a set of heuristics that prevent overload in the system effectively while saving energy used. Trace driven simulation and experiment results demonstrate that our algorithm achieves good performance.
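The "skewness" above measures uneven multidimensional resource use. A minimal Java sketch of one such measure, using an assumed formula for illustration (not necessarily the paper's exact definition): the root of the squared deviations of each per-resource utilization from their average, so lower values mean the server's resources are used more evenly.

/** Skewness-style measure of how unevenly a server's resources are utilized.
 *  utilization[i] is the load on resource i (e.g., CPU, memory, network I/O) in [0, 1]. */
public class ServerSkewness {

    static double skewness(double[] utilization) {
        double avg = 0;
        for (double u : utilization) avg += u / utilization.length;
        if (avg == 0) return 0;
        double sum = 0;
        for (double u : utilization) sum += Math.pow(u / avg - 1, 2);
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        System.out.println(skewness(new double[]{0.6, 0.6, 0.6}));  // 0.0: perfectly even usage
        System.out.println(skewness(new double[]{0.9, 0.2, 0.1}));  // high: CPU-bound server
    }
}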


Efficient Classification for Additive Kernel SVMs
Abstract
We show that a class of non-linear kernel SVMs admits approximate classifiers with run-time and memory complexity that is independent of the number of support vectors. This class of kernels, which we refer to as additive kernels, includes the widely used kernels for histogram-based image comparison, such as the intersection and chi-squared kernels.
Additive kernel SVMs can offer significant improvements in accuracy over linear SVMs on a wide variety of tasks while having the same run-time, making them practical for large scale recognition or real-time detection tasks. 
We present experiments on a variety of datasets including the INRIA person, Daimler-Chrysler pedestrians, UIUC Cars, Caltech-101, MNIST and USPS digits, to demonstrate the effectiveness of our method for efficient evaluation of SVMs with additive kernels. Since its introduction, our method has become integral to various state of the art systems for PASCAL VOC object detection/image classification, ImageNet Challenge, TRECVID, etc. 
The techniques we propose can also be applied to settings where evaluation of weighted additive kernels is required, which include kernelized versions of PCA, LDA, regression, k-means, as well as speeding up the inner loop of SVM classifier training algorithms.
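As a concrete instance of an additive kernel named above, the histogram intersection kernel sums a per-dimension minimum; below is a minimal Java sketch of the exact (slow) kernel SVM evaluation whose cost the paper's method removes. The per-dimension (additive) structure is what makes the fast approximation possible.

/** Histogram intersection kernel: K(x, y) = sum_i min(x_i, y_i).
 *  Because the kernel decomposes per dimension, it is "additive" in the paper's sense. */
public class IntersectionKernel {

    static double intersection(double[] x, double[] y) {
        double k = 0;
        for (int i = 0; i < x.length; i++) k += Math.min(x[i], y[i]);
        return k;
    }

    /** Exact (slow) decision function of a kernel SVM with support vectors sv and weights alpha. */
    static double decision(double[][] sv, double[] alpha, double bias, double[] x) {
        double f = bias;
        for (int i = 0; i < sv.length; i++) f += alpha[i] * intersection(sv[i], x);
        return f;   // the paper's contribution is approximating this in time independent of sv.length
    }
}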


Enabling Dynamic Data and Indirect Mutual Trust for Cloud Computing Storage Systems
Abstract:
Storage-as-a-Service (SaaS) offered by cloud service providers (CSPs) is a paid facility that enables organizations to outsource their sensitive data to be stored on remote servers. Thus, SaaS reduces the maintenance cost and mitigates the burden of large local data storage at the organization's end. A data owner pays for a desired level of security and must get some compensation in case of any misbehavior committed by the CSP. 
On the other hand, the CSP needs a protection from any false accusation that may be claimed by the owner to get illegal compensations. In this paper, we propose a cloud-based storage scheme that allows the data owner to benefit from the facilities offered by the CSP and enables indirect mutual trust between them. 
The proposed scheme has four important features: (i) it allows the owner to outsource sensitive data to a CSP, and perform full block-level dynamic operations on the outsourced data, i.e., block modification, insertion, deletion, and append, (ii) it ensures that authorized users (i.e., those who have the right to access the owner's file) receive the latest version of the outsourced data, (iii) it enables indirect mutual trust between the owner and the CSP, and (iv) it allows the owner to grant or revoke access to the outsourced data.
We discuss the security issues of the proposed scheme. In addition, we justify its performance through theoretical analysis and a prototype implementation on the Amazon cloud platform to evaluate storage, communication, and computation overheads.


Facilitating Effective User Navigation through Website Structure Improvement
Abstract:
Designing well-structured websites to facilitate effective user navigation has long been a challenge. A primary reason is that the web developers' understanding of how a website should be structured can be considerably different from that of the users. While various methods have been proposed to relink webpages to improve navigability using user navigation data, the completely reorganized new structure can be highly unpredictable, and the cost of disorienting users after the changes remains unanalyzed. 
This paper addresses how to improve a website without introducing substantial changes. Specifically, we propose a mathematical programming model to improve the user navigation on a website while minimizing alterations to its current structure. Results from extensive tests conducted on a publicly available real data set indicate that our model not only significantly improves the user navigation with very few changes, but also can be effectively solved. 
We have also tested the model on large synthetic data sets to demonstrate that it scales up very well. In addition, we define two evaluation metrics and use them to assess the performance of the improved website using the real data set. Evaluation results confirm that the user navigation on the improved structure is indeed greatly enhanced. 
More interestingly, we find that heavily disoriented users are more likely to benefit from the improved structure than the less disoriented users.


Fast and Accurate Matrix Completion via Truncated Nuclear Norm Regularization
Abstract:
Recovering a large matrix from a small subset of its entries is a challenging problem arising in many real applications, such as image inpainting and recommender systems. Many existing approaches formulate this problem as a general low-rank matrix approximation problem. 
Since the rank operator is nonconvex and discontinuous, most of the recent theoretical studies use the nuclear norm as a convex relaxation. One major limitation of the existing approaches based on nuclear norm minimization is that all the singular values are simultaneously minimized, and thus the rank may not be well approximated in practice. 
In this paper, we propose to achieve a better approximation to the rank of a matrix by the truncated nuclear norm, which is the nuclear norm minus the sum of the largest few singular values. In addition, we develop a novel matrix completion algorithm by minimizing the truncated nuclear norm. We further develop three efficient iterative procedures, TNNR-ADMM, TNNR-APGL, and TNNR-ADMMAP, to solve the optimization problem. TNNR-ADMM utilizes the alternating direction method of multipliers (ADMM), while TNNR-APGL applies the accelerated proximal gradient line search method (APGL) for the final optimization. 
For TNNR-ADMMAP, we make use of an adaptive penalty according to a novel update rule for ADMM to achieve a faster convergence rate. Our empirical study shows encouraging results of the proposed algorithms in comparison to the state-of-the-art matrix completion algorithms on both synthetic and real visual datasets.
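In symbols, restating the abstract's definition, with r the number of leading singular values excluded and \sigma_i(X) the i-th largest singular value of an m x n matrix X:

\|X\|_r \;=\; \|X\|_* - \sum_{i=1}^{r} \sigma_i(X) \;=\; \sum_{i=r+1}^{\min(m,n)} \sigma_i(X)

Minimizing this quantity penalizes only the tail singular values, so the dominant ones that carry the true low-rank structure are left free.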


Mobile Relay Configuration in Data-Intensive Wireless Sensor Networks
ABSTRACT:
Wireless Sensor Networks (WSNs) are increasingly used in data-intensive applications such as microclimate monitoring, precision agriculture, and audio/video surveillance. A key challenge faced by data-intensive WSNs is to transmit all the data generated within an application’s lifetime to the base station despite the fact that sensor nodes have limited power supplies. We propose using low-cost disposable mobile relays to reduce the energy consumption of data-intensive WSNs. Our approach differs from previous work in two main aspects.
First, it does not require complex motion planning of mobile nodes, so it can be implemented on a number of low-cost mobile sensor platforms. Second, we integrate the energy consumption due to both mobility and wireless transmissions into a holistic optimization framework. 
Our framework consists of three main algorithms. The first algorithm computes an optimal routing tree assuming no nodes can move. The second algorithm improves the topology of the routing tree by greedily adding new nodes exploiting mobility of the newly added nodes. The third algorithm improves the routing tree by relocating its nodes without changing its topology. 
This iterative algorithm converges on the optimal position for each node given the constraint that the routing tree topology does not change. We present efficient distributed implementations for each algorithm that require only limited, localized synchronization. Because we do not necessarily compute an optimal topology, our final routing tree is not necessarily optimal. However, our simulation results show that our algorithms significantly outperform the best existing solutions.


A Secure Protocol for Spontaneous Wireless Ad Hoc Networks Creation
Abstract:
This paper presents a secure protocol for spontaneous wireless ad hoc networks which uses a hybrid symmetric/asymmetric scheme and the trust between users to exchange the initial data and the secret keys that will be used to encrypt the data. Trust is based on the first visual contact between users. Our proposal is a complete self-configured secure protocol that is able to create the network and share secure services without any infrastructure.
The network allows sharing resources and offering new services among users in a secure environment. The protocol includes all functions needed to operate without any external support. We have designed and developed it in devices with limited resources. Network creation stages are detailed and the communication, protocol messages, and network management are explained. 
Our proposal has been implemented in order to test the protocol procedure and performance. Finally, we compare the protocol with other spontaneous ad hoc network protocols in order to highlight its features and we provide a security analysis of the system.


Adaptive Network Coding for Broadband Wireless Access Networks
Abstract:
Broadband wireless access (BWA) networks, such as LTE and WiMAX, are inherently lossy due to wireless medium unreliability. Although the Hybrid Automatic Repeat reQuest (HARQ) error-control method recovers from packet loss, it has low transmission efficiency and is unsuitable for delay-sensitive applications. 
Alternatively, network coding techniques improve the throughput of wireless networks, but incur significant overhead and ignore network constraints such as Medium Access Control (MAC) layer transmission opportunities and physical (PHY) layer channel conditions. The present study provides analysis of Random Network Coding (RNC) and Systematic Network Coding (SNC) decoding probabilities. 
Based on the analytical results, SNC is selected for developing an adaptive network coding scheme designated as Frame-by-frame Adaptive Systematic Network Coding (FASNC). According to network constraints per frame, FASNC dynamically utilizes either Modified Systematic Network Coding (M-SNC) or Mixed Generation Coding (MGC). 
An analytical model is developed for evaluating the mean decoding delay and mean goodput of the proposed FASNC scheme. The results derived using this model agree with those obtained from computer simulations. Simulations show that FASNC results in both lower decoding delay and reduced buffer requirements compared to MRNC and N-in-1 ReTX, while also yielding higher goodput than HARQ, MRNC, and N-in-1 ReTX.


An Adaptive System Based on Roadmap Profiling to Enhance Warning Message Dissemination in VANETs
Abstract:
In recent years, new applications, architectures, and technologies have been proposed for vehicular ad hoc networks (VANETs). Regarding traffic safety applications for VANETs, warning messages have to be quickly and smartly disseminated in order to reduce the required dissemination time and to increase the number of vehicles receiving the traffic warning information. 
In the past, several approaches have been proposed to improve the alert dissemination process in multihop wireless networks, but none of them were tested in real urban scenarios, adapting its behavior to the propagation features of the scenario. In this paper, we present the Profile-driven Adaptive Warning Dissemination Scheme (PAWDS) designed to improve the warning message dissemination process. 
With respect to previous proposals, our proposed scheme uses a mapping technique based on adapting the dissemination strategy according to both the characteristics of the street area where the vehicles are moving and the density of vehicles in the target scenario. 
Our algorithm reported a noticeable improvement in the performance of alert dissemination processes in scenarios based on real city maps.


Attribute-Based Encryption With Verifiable Outsourced Decryption
Abstract:
Attribute-based encryption (ABE) is a public-key-based one-to-many encryption that allows users to encrypt and decrypt data based on user attributes. A promising application of ABE is flexible access control of encrypted data stored in the cloud, using access polices and ascribed attributes associated with private keys and ciphertexts. 
One of the main efficiency drawbacks of the existing ABE schemes is that decryption involves expensive pairing operations and the number of such operations grows with the complexity of the access policy. Recently, Green et al. proposed an ABE system with outsourced decryption that largely eliminates the decryption overhead for users. 
In such a system, a user provides an untrusted server, say a cloud service provider, with a transformation key that allows the cloud to translate any ABE ciphertext satisfied by that user's attributes or access policy into a simple ciphertext, and it only incurs a small computational overhead for the user to recover the plaintext from the transformed ciphertext. 
Security of an ABE system with outsourced decryption ensures that an adversary (including a malicious cloud) will not be able to learn anything about the encrypted message; however, it does not guarantee the correctness of the transformation done by the cloud. 
In this paper, we consider a new requirement of ABE with outsourced decryption: verifiability. Informally, verifiability guarantees that a user can efficiently check if the transformation is done correctly. We give the formal model of ABE with verifiable outsourced decryption and propose a concrete scheme. 
We prove that our new scheme is both secure and verifiable, without relying on random oracles. Finally, we show an implementation of our scheme and the results of performance measurements, which indicate a significant reduction in the computing resources required of users.


A Generalized Flow-Based Method for Analysis of Implicit Relationships on Wikipedia
Abstract:
We focus on measuring relationships between pairs of objects in Wikipedia whose pages can be regarded as individual objects. Two kinds of relationships between two objects exist: in Wikipedia, an explicit relationship is represented by a single link between the two pages for the objects, and an implicit relationship is represented by a link structure containing the two pages. 
Some of the previously proposed methods for measuring relationships are cohesion-based methods, which underestimate objects having high degrees, although such objects could be important in constituting relationships in Wikipedia. The other methods are inadequate for measuring implicit relationships because they use only one or two of the following three important factors: distance, connectivity, and cocitation. 
We propose a new method using a generalized maximum flow which reflects all the three factors and does not underestimate objects having high degree. We confirm through experiments that our method can measure the strength of a relationship more appropriately than these previously proposed methods do. 
Another remarkable aspect of our method is mining elucidatory objects, that is, objects constituting a relationship. We explain that mining elucidatory objects would open a novel way to deeply understand a relationship.


A Load Balancing Model Based on Cloud Partitioning for the Public Cloud
ABSTRACT
Load balancing in the cloud computing environment has an important impact on performance. Good load balancing makes cloud computing more efficient and improves user satisfaction. 
This article introduces a better load-balancing model for the public cloud based on the cloud partitioning concept, with a switch mechanism to choose different strategies for different situations. The algorithm applies game theory to the load-balancing strategy to improve efficiency in the public cloud environment.


A Rough-Set-Based Incremental Approach for Updating Approximations under Dynamic Maintenance Environments
Abstract:
Approximations of a concept by a variable precision rough-set model (VPRS) usually vary under a dynamic information system environment. It is thus effective to carry out incremental updating of approximations by utilizing previous data structures. This paper focuses on a new incremental method for updating approximations of VPRS while objects in the information system change dynamically.
It discusses properties of information granulation and approximations under the dynamic environment while objects in the universe evolve over time. The variation of an attribute's domain is also considered to perform incremental updating for approximations under VPRS. Finally, an extensive experimental evaluation validates the efficiency of the proposed method for dynamic maintenance of VPRS approximations.


Optimizing Cloud Resources for Delivering IPTV Services through Virtualization
ABSTRACT
Virtualized cloud-based services can take advantage of statistical multiplexing across applications to yield significant cost savings to the operator. However, achieving similar benefits with real-time services can be a challenge. We seek to lower a provider’s costs of real-time IPTV services through a virtualized IPTV architecture and through intelligent time shifting of service delivery. We take advantage of the differences in the deadlines associated with Live TV versus Video-on-Demand to effectively multiplex these services.
We provide a generalized framework for computing the amount of resources needed to support multiple services, without missing the deadline for any service. We construct the problem as an optimization formulation that uses a generic cost function. We consider multiple forms for the cost function to reflect the different pricing options. 
The solution to this formulation gives the number of servers needed at different time instants to support these services. We implement a simple mechanism for time-shifting scheduled jobs in a simulator and study the reduction in server load using real traces from an operational IPTV network. 
Our results show that we are able to reduce the load by ~24%. We also show that there are interesting open problems in designing mechanisms that allow time-shifting of load in such environments.


Fast Transmission to Remote Cooperative Groups: A New Key Management Paradigm
ABSTRACT:
The problem of efficiently and securely broadcasting to a remote cooperative group occurs in many newly emerging networks. A major challenge in devising such systems is to overcome the obstacles of the potentially limited communication from the group to the sender, the unavailability of a fully trusted key generation center, and the dynamics of the sender. The existing key management paradigms cannot deal with these challenges effectively. 
In this paper, we circumvent these obstacles and close this gap by proposing a novel key management paradigm. The new paradigm is a hybrid of traditional broadcast encryption and group key agreement. In such a system, each member maintains a single public/secret key pair. Upon seeing the public keys of the members, a remote sender can securely broadcast to any intended subgroup chosen in an ad hoc way. Following this model, we instantiate a scheme that is proven secure in the standard model. 
Even if all the non-intended members collude, they cannot extract any useful information from the transmitted messages. After the public group encryption key is extracted, both the computation overhead and the communication cost are independent of the group size. 
Furthermore, our scheme facilitates simple yet efficient member deletion/addition and flexible rekeying strategies. Its strong security against collusion, its constant overhead, and its implementation friendliness without relying on a fully trusted authority render our protocol a very promising solution to many applications.


Mining Contracts for Business Events And Temporal Constraints in Service Engagements
ABSTRACT
Contracts are legally binding descriptions of business service engagements. In particular, we consider business events as elements of a service engagement. Business events such as purchase, delivery, bill payment, and bank interest accrual not only correspond to essential processes but are also inherently temporally constrained.
Identifying and understanding the events and their temporal relationships can help a business partner determine what to deliver and what to expect from others as it participates in the service engagement specified by a contract. However, contracts are expressed in unstructured text and their insights are buried therein. Our contributions are threefold. 
We develop a novel approach employing a hybrid of surface patterns, parsing, and classification to extract (1) business events and (2) their temporal constraints from contract text. We use topic modeling to (3) automatically organize the event terms into clusters. An evaluation on a real-life contract dataset demonstrates the viability and promise of our hybrid approach, yielding an F-measure of 0.89 in event extraction and 0.90 in temporal constraints extraction. 
The topic model yields event term clusters with an average match of 85% between two independent human annotations and an expert-assigned set of class labels for the clusters.
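As a toy illustration of the surface-pattern component only, the Java snippet below pulls event verbs and "within N days"-style temporal constraints out of a contract sentence with regular expressions; the patterns, the event vocabulary, and the sample sentence are made up, and the parsing, classification, and topic-modeling stages of the actual approach are not shown.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ContractPatternSketch {
    private static final Pattern TEMPORAL =
            Pattern.compile("within\\s+(\\d+)\\s+(day|week|month|year)s?", Pattern.CASE_INSENSITIVE);
    private static final Pattern EVENT =
            Pattern.compile("\\b(deliver|pay|purchase|invoice|ship|refund)\\w*", Pattern.CASE_INSENSITIVE);

    public static void main(String[] args) {
        String sentence = "The supplier shall deliver the goods within 30 days of purchase, "
                        + "and the buyer shall pay the invoice within 2 weeks of delivery.";
        Matcher e = EVENT.matcher(sentence);
        while (e.find()) System.out.println("event: " + e.group());
        Matcher t = TEMPORAL.matcher(sentence);
        while (t.find()) System.out.println("constraint: within " + t.group(1) + " " + t.group(2) + "(s)");
    }
}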


Content Sharing over Smartphone-Based Delay-Tolerant Networks
ABSTRACT:
With the growing number of smartphone users, peer-to-peer ad hoc content sharing is expected to occur more often. Thus, new content sharing mechanisms should be developed as traditional data delivery schemes are not efficient for content sharing due to the sporadic connectivity between smartphones. 
To accomplish data delivery in such challenging environments, researchers have proposed the use of store-carry-forward protocols, in which a node stores a message and carries it until a forwarding opportunity arises through an encounter with other nodes. Most previous works in this field have focused on the prediction of whether two nodes would encounter each other, without considering the place and time of the encounter.
In this paper, we propose discover-predict-deliver as an efficient content sharing scheme for delay-tolerant smartphone networks. In our proposed scheme, contents are shared using the mobility information of individuals. Specifically, our approach employs a mobility learning algorithm to identify places indoors and outdoors. A hidden Markov model is used to predict an individual’s future mobility information. 
Evaluation based on real traces indicates that with the proposed approach, 87 percent of contents can be correctly discovered and delivered within 2 hours when the content is available only in 30 percent of nodes in the network. 
We implement a sample application on commercial smartphones and validate its efficiency to analyze the practical feasibility of the content sharing application. Our system incurs approximately a 2 percent CPU overhead and reduces the battery lifetime of a smartphone by 15 percent.
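A minimal Java sketch of the prediction idea, simplified from the hidden Markov model used in the paper to a first-order Markov chain over visited places: transition counts are learned from a hypothetical visit history, and the most frequent successor of the current place is predicted as the next location.

import java.util.HashMap;
import java.util.Map;

public class MobilityPredictionSketch {
    // transitions.get(a).get(b) = number of observed moves from place a to place b
    private final Map<String, Map<String, Integer>> transitions = new HashMap<>();

    void observe(String from, String to) {
        transitions.computeIfAbsent(from, k -> new HashMap<>()).merge(to, 1, Integer::sum);
    }

    // Predict the most likely next place given the current place.
    String predictNext(String current) {
        Map<String, Integer> counts = transitions.get(current);
        if (counts == null) return null;
        return counts.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey).orElse(null);
    }

    public static void main(String[] args) {
        MobilityPredictionSketch m = new MobilityPredictionSketch();
        String[] history = {"home", "office", "cafe", "office", "home", "office", "cafe"};
        for (int i = 0; i + 1 < history.length; i++) m.observe(history[i], history[i + 1]);
        System.out.println("next after office: " + m.predictNext("office")); // "cafe"
    }
}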


Back-Pressure-Based Packet-by-Packet Adaptive Routing in Communication Networks
ABSTRACT:
Back-pressure-based adaptive routing algorithms where each packet is routed along a possibly different path have been extensively studied in the literature. However, such algorithms typically result in poor delay performance and involve high implementation complexity. 
In this paper, we develop a new adaptive routing algorithm built upon the widely studied back-pressure algorithm. We decouple the routing and scheduling components of the algorithm by designing a probabilistic routing table that is used to route packets to per-destination queues. 
The scheduling decisions in the case of wireless networks are made using counters called shadow queues. The results are also extended to the case of networks that employ simple forms of network coding. In that case, our algorithm provides a low-complexity solution to optimally exploit the routing–coding tradeoff.
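For reference, the Java sketch below shows the classical back-pressure rule that the algorithm builds on: for a given node, pick the neighbor and destination with the largest positive per-destination queue differential. The queue values and topology are illustrative, and the paper's probabilistic routing tables and shadow-queue counters are not modeled.

public class BackPressureSketch {
    // q[node][dest] = backlog of packets at 'node' destined for 'dest'
    static int[] bestChoice(int[][] q, int node, int[] neighbors) {
        int bestNeighbor = -1, bestDest = -1, bestWeight = 0;
        for (int nb : neighbors) {
            for (int d = 0; d < q[node].length; d++) {
                int w = q[node][d] - q[nb][d];          // backlog differential over link (node, nb)
                if (w > bestWeight) { bestWeight = w; bestNeighbor = nb; bestDest = d; }
            }
        }
        return new int[]{bestNeighbor, bestDest, bestWeight}; // -1s if no positive differential exists
    }

    public static void main(String[] args) {
        int[][] q = { {5, 9}, {2, 7}, {6, 1} };          // 3 nodes, 2 destinations
        int[] result = bestChoice(q, 0, new int[]{1, 2});
        System.out.println("forward to node " + result[0] + " for destination " + result[1]
                + " (weight " + result[2] + ")");
    }
}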


Collaboration in Multicloud Computing Environments: Framework and Security Issues
Abstract:
Although the cloud computing model is considered to be a very promising internet-based computing platform, it results in a loss of security control over the cloud-hosted assets. This is due to the outsourcing of enterprise IT assets hosted on third-party cloud computing platforms. 
Moreover, the lack of security constraints in the Service Level Agreements between cloud providers and consumers results in a loss of trust as well. Obtaining a security certification such as ISO 27000 or NIST-FISMA would help cloud providers improve consumers' trust in their cloud platforms' security. 
However, such standards are still far from covering the full complexity of the cloud computing model. We introduce a new cloud security management framework based on aligning the FISMA standard to fit with the cloud computing model, enabling cloud providers and consumers to be security certified. 
Our framework is based on improving collaboration between cloud providers, service providers and service consumers in managing the security of the cloud platform and the hosted services. It is built on top of a number of security standards that assist in automating the security management process. 
We have developed a proof of concept of our framework using .NET and deployed it on a test bed cloud platform. We evaluated the framework by managing the security of a multi-tenant SaaS application exemplar.


Discovery and Verification of Neighbor Positions in Mobile Ad Hoc Networks
ABSTRACT:
A growing number of ad hoc networking protocols and location-aware services require that mobile nodes learn the position of their neighbors. However, such a process can be easily abused or disrupted by adversarial nodes. 
In absence of a priori trusted nodes, the discovery and verification of neighbor positions presents challenges that have been scarcely investigated in the literature. 
In this paper, we address this open issue by proposing a fully distributed cooperative solution that is robust against independent and colluding adversaries, and can be impaired only by an overwhelming presence of adversaries. Results show that our protocol can thwart more than 99 percent of the attacks under the best possible conditions for the adversaries, with minimal false positive rates.


Fuzzy C-Means Clustering With Local Information and Kernel Metric for Image Segmentation
In this paper, we present an improved fuzzy C-means (FCM) algorithm for image segmentation by introducing a tradeoff weighted fuzzy factor and a kernel metric. The tradeoff weighted fuzzy factor depends on the space distance of all neighboring pixels and their gray-level difference simultaneously. 
By using this factor, the new algorithm can accurately estimate the damping extent of neighboring pixels. In order to further enhance its robustness to noise and outliers, we introduce a kernel distance measure to its objective function. 
The new algorithm adaptively determines the kernel parameter by using a fast bandwidth selection rule based on the distance variance of all data points in the collection. 
Furthermore, the tradeoff weighted fuzzy factor and the kernel distance measure are both parameter free. Experimental results on synthetic and real images show that the new algorithm is effective and efficient, and is relatively independent of the type of noise.
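A minimal Java sketch of the kernel-based membership update at the core of such an algorithm, under simplifying assumptions: pixels are scalar gray values, the Gaussian kernel bandwidth is fixed rather than chosen by the paper's bandwidth selection rule, and the tradeoff weighted fuzzy factor over neighboring pixels is omitted.

public class KernelFcmSketch {
    // Gaussian kernel between a pixel gray value and a cluster center.
    static double kernel(double x, double v, double sigma) {
        double d = x - v;
        return Math.exp(-(d * d) / (sigma * sigma));
    }

    // One membership-update step of kernel-based fuzzy C-means for a single pixel.
    // m is the fuzzifier (> 1); centers are the current cluster prototypes.
    static double[] memberships(double pixel, double[] centers, double m, double sigma) {
        int c = centers.length;
        double[] u = new double[c];
        double sum = 0.0;
        for (int i = 0; i < c; i++) {
            double dist = 1.0 - kernel(pixel, centers[i], sigma);   // kernel-induced distance
            u[i] = Math.pow(Math.max(dist, 1e-12), -1.0 / (m - 1.0));
            sum += u[i];
        }
        for (int i = 0; i < c; i++) u[i] /= sum;                    // normalize memberships to sum to 1
        return u;
    }

    public static void main(String[] args) {
        double[] centers = {50.0, 200.0};
        System.out.println(java.util.Arrays.toString(memberships(60.0, centers, 2.0, 100.0)));
    }
}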


General Framework to Histogram-Shifting-Based Reversible Data Hiding
Abstract
Histogram shifting (HS) is a useful technique of reversible data hiding (RDH). With HS-based RDH, high capacity and low distortion can be achieved efficiently. In this paper, we revisit the HS technique and present a general framework to construct HS-based RDH. With the proposed framework, one can obtain an RDH algorithm by simply designing the so-called shifting and embedding functions. 
Moreover, by taking specific shifting and embedding functions, we show that several RDH algorithms reported in the literature are special cases of this general construction. In addition, two novel and efficient RDH algorithms are also introduced to further demonstrate the universality and applicability of our framework. 
It is expected that more efficient RDH algorithms can be devised according to the proposed framework by carefully designing the shifting and embedding functions.
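To make the shifting-and-embedding idea concrete, here is a Java sketch of the classical peak/zero-bin histogram-shifting embedder that such frameworks generalize; the sample image and payload are illustrative, and the location map, overflow handling, and extraction step are left out.

public class HistogramShiftingSketch {

    static int[] embed(int[] pixels, boolean[] bits) {
        int[] hist = new int[256];
        for (int p : pixels) hist[p]++;

        int peak = 0, zero = -1;
        for (int v = 0; v < 256; v++) if (hist[v] > hist[peak]) peak = v;
        for (int v = peak + 1; v < 256; v++) if (hist[v] == 0) { zero = v; break; }
        if (zero < 0) throw new IllegalStateException("no zero bin above the peak");

        int[] out = pixels.clone();
        int bitIdx = 0;
        for (int i = 0; i < out.length; i++) {
            if (out[i] > peak && out[i] < zero) {
                out[i]++;                                  // shift bins between peak and zero up by one
            } else if (out[i] == peak && bitIdx < bits.length) {
                if (bits[bitIdx++]) out[i]++;              // bit 1 -> peak+1, bit 0 -> stay at peak
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[] image = {10, 10, 11, 10, 12, 13, 10};
        boolean[] payload = {true, false, true};
        System.out.println(java.util.Arrays.toString(embed(image, payload)));
    }
}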


Generating Domain-Specific Visual Language Tools from Abstract Visual Specifications
Domain-specific visual languages support high-level modeling for a wide range of application domains. However, building tools to support such languages is very challenging. We describe a set of key conceptual requirements for such tools and our approach to addressing these requirements, a set of visual language-based meta tools. 
These support definition of meta models, visual notations, views, modeling behaviors, design critics, and model transformations and provide a platform to realize target visual modeling tools. Extensions support collaborative work, human-centric tool interaction, and multiplatform deployment. 
We illustrate application of the meta toolset on tools developed with our approach. We describe tool developer and cognitive evaluations of our platform and our exemplar tools, and summarize key future research directions.


Geo-Community-Based Broadcasting for Data Dissemination in Mobile Social Networks
In this paper, we consider the issue of data broadcasting in mobile social networks (MSNets). The objective is to broadcast data from a super user to other users in the network. 
There are two main challenges under this paradigm, namely 1) how to represent and characterize user mobility in realistic MSNets, and 2) given the knowledge of regular users' movements, how to design an efficient super-user route to broadcast data actively. We first explore several realistic datasets to reveal both the geographic and social regularities of human mobility, and further introduce the concepts of geo-community and geo-centrality into MSNet analysis. Then, we employ a semi-Markov process to model user mobility based on the geo-community structure of the network. 
Correspondingly, the geo-centrality indicating the “dynamic user density” of each geo-community can be derived from the semi-Markov model. Finally, considering the geo-centrality information, we provide different route algorithms to cater to the super user that wants to either minimize total duration or maximize dissemination ratio. 
To the best of our knowledge, this work is the first to study data broadcasting in a realistic MSNet setting. Extensive trace-driven simulations show that our approach consistently outperforms other existing super-user route design algorithms in terms of dissemination ratio and energy efficiency.


Harnessing the Cloud for Securely Outsourcing Large-Scale Systems of Linear Equations
Cloud computing economically enables customers with limited computational resources to outsource large-scale computations to the cloud. However, how to protect customers' confidential data involved in the computations then becomes a major security concern. 
In this paper, we present a secure outsourcing mechanism for solving large-scale systems of linear equations (LE) in the cloud. Because applying traditional approaches like Gaussian elimination or LU decomposition (the so-called direct methods) to such large-scale LEs would be prohibitively expensive, we build the secure LE outsourcing mechanism via a completely different approach, the iterative method, which is much easier to implement in practice and demands only relatively simple matrix-vector operations. 
Specifically, our mechanism enables a customer to securely harness the cloud for iteratively finding successive approximations to the LE solution, while keeping both the sensitive input and output of the computation private. 
For robust cheating detection, we further explore the algebraic property of matrix-vector operations and propose an efficient result verification mechanism, which allows the customer to verify all answers received from previous iterative approximations in one batch with high probability. Thorough security analysis and prototype experiments on Amazon EC2 demonstrate the validity and practicality of our proposed design.
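The Java sketch below shows only the plain iterative (Jacobi) solver that such a mechanism is built around, i.e., successive approximations expressed as matrix-vector operations; the blinding of the input, the interaction with the cloud, and the batch result verification are omitted, and the small system shown is illustrative.

public class JacobiSketch {
    // Solve A x = b iteratively; requires a diagonally dominant A for convergence.
    static double[] jacobi(double[][] A, double[] b, int maxIters, double tol) {
        int n = b.length;
        double[] x = new double[n];
        for (int iter = 0; iter < maxIters; iter++) {
            double[] xNew = new double[n];
            double diff = 0.0;
            for (int i = 0; i < n; i++) {
                double s = 0.0;
                for (int j = 0; j < n; j++) if (j != i) s += A[i][j] * x[j];
                xNew[i] = (b[i] - s) / A[i][i];
                diff = Math.max(diff, Math.abs(xNew[i] - x[i]));
            }
            x = xNew;
            if (diff < tol) break;                       // successive approximations converged
        }
        return x;
    }

    public static void main(String[] args) {
        double[][] A = { {4, 1, 0}, {1, 5, 1}, {0, 1, 3} };
        double[] b = { 9, 14, 7 };
        System.out.println(java.util.Arrays.toString(jacobi(A, b, 100, 1e-9)));
    }
}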


Image Inpainting on the Basis of Spectral Structure From 2-D Nonharmonic Analysis
The restoration of images by digital inpainting is an active field of research and such algorithms are, in fact, now widely used. Conventional methods generally apply textures that are most similar to the areas around the missing region or use a large image database. However, this produces discontinuous textures and thus unsatisfactory results. 
Here, we propose a new technique to overcome this limitation by using signal prediction based on the nonharmonic analysis (NHA) technique proposed by the authors. NHA can be used to extract accurate spectra, irrespective of the window function, and its frequency resolution is less than that of the discrete Fourier transform. 
The proposed method sequentially generates new textures on the basis of the spectrum obtained by NHA. Missing regions of the spectrum are repaired using an improved cost function for 2-D NHA. The proposed method is evaluated using the standard images Lena, Barbara, Airplane, Pepper, and Mandrill. The results show an improvement in MSE of about 10-20 compared with the exemplar-based method and good subjective quality.


Image Quality Assessment Using Multi-Method Fusion
A new methodology for objective image quality assessment (IQA) with multi-method fusion (MMF) is presented in this paper. The research is motivated by the observation that there is no single method that can give the best performance in all situations. To achieve MMF, we adopt a regression approach. 
The new MMF score is set to be the nonlinear combination of scores from multiple methods with suitable weights obtained by a training process. In order to improve the regression results further, we divide distorted images into three to five groups based on the distortion types and perform regression within each group, which is called “context-dependent MMF” (CD-MMF). One task in CD-MMF is to determine the context automatically, which is achieved by a machine learning approach. 
To further reduce the complexity of MMF, we propose algorithms to select a small subset of the candidate method set. The result is very good even if only three quality assessment methods are included in the fusion process. The proposed MMF method using support vector regression is shown to outperform a large number of existing IQA methods by a significant margin when tested on six representative databases.


Image Transformation Based on Learning Dictionaries across Image Spaces
In this paper, we propose a framework of transforming images from a source image space to a target image space, based on learning coupled dictionaries from a training set of paired images. The framework can be used for applications such as image super-resolution and estimation of image intrinsic components (shading and albedo). It is based on a local parametric regression approach, using sparse feature representations over learned coupled dictionaries across the source and target image spaces. 
After coupled dictionary learning, sparse coefficient vectors of training image patch pairs are partitioned into easily retrievable local clusters. For any test image patch, we can fast index into its closest local cluster and perform a local parametric regression between the learned sparse feature spaces. The obtained sparse representation (together with the learned target space dictionary) provides multiple constraints for each pixel of the target image to be estimated. 
The final target image is reconstructed based on these constraints. The contributions of our proposed framework are three-fold. 1) We propose a concept of coupled dictionary learning based on coupled sparse coding which requires the sparse coefficient vectors of a pair of corresponding source and target image patches to have the same support, i.e., the same indices of nonzero elements. 2) We devise a space partitioning scheme to divide the high-dimensional but sparse feature space into local clusters. The partitioning facilitates extremely fast retrieval of closest local clusters for query patches. 3) Benefiting from sparse feature-based image transformation, our method is more robust to corrupted input data, and can be considered as a simultaneous image restoration and transformation process. 
Experiments on intrinsic image estimation and super-resolution demonstrate the effectiveness and efficiency of our proposed method.


Incentive Compatible Privacy-Preserving Data Analysis
In many cases, competing parties who have private data may collaboratively conduct privacy-preserving distributed data analysis (PPDA) tasks to learn beneficial data models or analysis results. Most often, the competing parties have different incentives. Although certain PPDA techniques guarantee that nothing other than the final analysis result is revealed, it is impossible to verify whether participating parties are truthful about their private input data. 
Unless proper incentives are set, current PPDA techniques cannot prevent participating parties from modifying their private inputs. This raises the question of how to design incentive compatible privacy-preserving data analysis techniques that motivate participating parties to provide truthful inputs. 
In this paper, we first develop key theorems and then, based on these theorems, analyze certain important privacy-preserving data analysis tasks that can be conducted in such a way that telling the truth is the best choice for any participating party.


IP-Geolocation Mapping for Moderately Connected Internet Regions
Most IP-geolocation mapping schemes take a delay-measurement approach, based on the assumption of a strong correlation between networking delay and the geographical distance between the targeted client and the landmarks. In this paper, however, we investigate a large region of moderately connected Internet and find that the delay-distance correlation is weak. 
Instead, we discover a more probable rule: with high probability, the shortest delay comes from the closest distance. Based on this closest-shortest rule, we develop a simple and novel IP-geolocation mapping scheme for moderately connected Internet regions, called GeoGet. In GeoGet, we take a large number of web servers as passive landmarks and map a targeted client to the geolocation of the landmark that has the shortest delay. 
We further use JavaScript at targeted clients to generate HTTP/Get probing for delay measurement. To control the measurement cost, we adopt a multistep probing method to refine the geolocation of a targeted client, finally to city level. 
The evaluation results show that when probing about 100 landmarks, GeoGet correctly maps 35.4 percent of clients to city level, which outperforms current schemes such as GeoLim [16] and GeoPing [14] by 270 and 239 percent, respectively, and the median error distance in GeoGet is around 120 km, outperforming GeoLim and GeoPing by 37 and 70 percent, respectively.
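A minimal Java sketch of the closest-shortest rule itself: the target client is mapped to the city of the landmark with the smallest measured delay. The landmark cities and delays are invented for illustration, and the JavaScript-based HTTP probing and multistep refinement are not shown.

import java.util.Map;

public class ClosestShortestSketch {
    // Map the target client to the location of the landmark with the shortest measured delay.
    static String mapToCity(Map<String, Double> delayMsByLandmarkCity) {
        String best = null;
        double bestDelay = Double.MAX_VALUE;
        for (Map.Entry<String, Double> e : delayMsByLandmarkCity.entrySet()) {
            if (e.getValue() < bestDelay) { bestDelay = e.getValue(); best = e.getKey(); }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, Double> probes = Map.of("CityA", 42.0, "CityB", 18.5, "CityC", 73.2);
        System.out.println("estimated city: " + mapToCity(probes));   // CityB has the shortest delay
    }
}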


ITA: Innocuous Topology Awareness for Unstructured P2P Networks
One of the most appealing characteristics of unstructured P2P overlays is their enhanced self-* properties, which result from their loose, random structure. In addition, most of the algorithms that make searching in unstructured P2P systems scalable, such as dynamic querying and 1-hop replication, rely on the random nature of the overlay to function efficiently. The underlying communications network (i.e., the Internet), however, is not as randomly constructed. 
This leads to a mismatch between the distance of two peers on the overlay and that of the hosts they reside on at the IP layer, which in turn leads to misuse of the underlying network. The crux of the problem arises from the fact that any effort to provide a better match between the overlay and the IP layer will inevitably lead to a reduction in the random structure of the P2P overlay, with many adverse results. 
With this in mind, we propose ITA, an algorithm which creates a random overlay of randomly connected neighborhoods providing topology awareness to P2P systems, while at the same time has no negative effect on the self-* properties or the operation of the other P2P algorithms. Using extensive simulations, both at the IP router level and autonomous system level, we show that ITA reduces communication latencies by as much as 50 percent. 
Furthermore, it not only reduces by 20 percent the number of IP network messages which is critical for ISPs carrying the burden of transporting P2P traffic, but also distributes the traffic load more evenly on the routers of the IP network layer.


Joint Optimal Sensor Selection and Scheduling in Dynamic Spectrum Access Networks
Spectrum sensing is key to the realization of dynamic spectrum access. To protect primary users' communications from the interference caused by secondary users, spectrum sensing must meet the strict detectability requirements set by regulatory bodies, such as the FCC. Such strict detection requirements, however, can hardly be achieved using PHY-layer sensing techniques alone with one-time sensing by only a single sensor. 
In this paper, we jointly exploit two MAC-layer sensing methods—cooperative sensing and sensing scheduling— to improve spectrum sensing performance, while incurring minimum sensing overhead. While these sensing methods have been studied individually, little has been done on their combinations and the resulting benefits. 
Specifically, we propose to construct a profile of the primary signal's RSSs and design a simple, yet near-optimal, incumbent detection rule. Based on this constructed RSS profile, we develop an algorithm to find 1) an optimal set of sensors; 2) an optimal point at which to stop scheduling additional sensing; and 3) an optimal sensing duration for one-time sensing, so as to make a tradeoff between detection performance and sensing overhead.
 Our evaluation results show that the proposed sensing algorithms reduce the sensing overhead by up to 65 percent, while meeting the requirements of both false-alarm and misdetection probabilities of less than 0.01.


Learning Dynamic Hybrid Markov Random Field for Image Labeling
Using shape information has attracted increasing attention in the task of image labeling. In this paper, we present a dynamic hybrid Markov random field (DHMRF), which explicitly captures middle-level object shape and low-level visual appearance (e.g., texture and color) for image labeling. 
Each node in the DHMRF is described by either a deformable template or an appearance model as its visual prototype. On the other hand, the edges encode two types of interactions, co-occurrence and spatially layered context, with respect to the labels and prototypes of connected nodes. 
To learn the DHMRF model, an iterative algorithm is designed to automatically select the most informative features and estimate model parameters. The algorithm achieves high computational efficiency since a branch-and-bound schema is introduced to estimate model parameters. Compared with previous methods, which usually employ implicit shape cues, our DHMRF model seamlessly integrates color, texture, and shape cues to infer the labeling output, and thus produces more accurate and reliable results. 
Extensive experiments validate its superiority over other state-of-the-art methods in terms of recognition accuracy and implementation efficiency on the MSRC 21-class dataset and the Lotus Hill Institute 15-class dataset.


Load Rebalancing for Distributed File Systems in Clouds
Distributed file systems are key building blocks for cloud computing applications based on the MapReduce programming paradigm. In such file systems, nodes simultaneously serve computing and storage functions; a file is partitioned into a number of chunks allocated to distinct nodes so that MapReduce tasks can be performed in parallel over the nodes. 
However, in a cloud computing environment, failure is the norm, and nodes may be upgraded, replaced, and added in the system. Files can also be dynamically created, deleted, and appended. This results in load imbalance in a distributed file system; that is, the file chunks are not distributed as uniformly as possible among the nodes. Emerging distributed file systems in production systems strongly depend on a central node for chunk reallocation. 
This dependence is clearly inadequate in a large-scale, failure-prone environment because the central load balancer is put under considerable workload that is linearly scaled with the system size, and may thus become the performance bottleneck and the single point of failure. In this paper, a fully distributed load rebalancing algorithm is presented to cope with the load imbalance problem. 
Our algorithm is compared against a centralized approach in a production system and a competing distributed solution presented in the literature. The simulation results indicate that our proposal is comparable with the existing centralized approach and considerably outperforms the prior distributed algorithm in terms of load imbalance factor, movement cost, and algorithmic overhead. The performance of our proposal implemented in the Hadoop distributed file system is further investigated in a cluster environment.
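As a rough illustration of what load rebalancing computes, the Java sketch below measures each node's deviation from the uniform chunk distribution; the node names and chunk counts are illustrative, and unlike the paper's algorithm, which lets nodes pair up and migrate chunks in a fully distributed way, this toy version computes the imbalance centrally.

import java.util.HashMap;
import java.util.Map;

public class LoadRebalanceSketch {
    // chunksPerNode: current number of file chunks stored on each node.
    // Returns how many chunks each node should give away (positive) or take in (negative)
    // to reach the uniform ideal.
    static Map<String, Integer> imbalance(Map<String, Integer> chunksPerNode) {
        int total = chunksPerNode.values().stream().mapToInt(Integer::intValue).sum();
        double ideal = (double) total / chunksPerNode.size();
        Map<String, Integer> delta = new HashMap<>();
        for (Map.Entry<String, Integer> e : chunksPerNode.entrySet()) {
            delta.put(e.getKey(), (int) Math.round(e.getValue() - ideal));
        }
        return delta;
    }

    public static void main(String[] args) {
        Map<String, Integer> load = Map.of("node1", 120, "node2", 40, "node3", 80);
        System.out.println(imbalance(load));   // node1 should give away chunks, node2 should take them in
    }
}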


Local Structure-Based Image Decomposition for Feature Extraction With Applications to Face Recognition
This paper presents a robust but simple image feature extraction method, called image decomposition based on local structure (IDLS). It is assumed that in the local window of an image, the macro-pixel (patch) of the central pixel, and those of its neighbors, are locally linear. 
IDLS captures the local structural information by describing the relationship between the central macro-pixel and its neighbors. This relationship is represented with the linear representation coefficients determined using ridge regression. One image is actually decomposed into a series of sub-images (also called structure images) according to a local structure feature vector.
All the structure images, after being down-sampled for dimensionality reduction, are concatenated into one super-vector. Fisher linear discriminant analysis is then used to provide a low-dimensional, compact, and discriminative representation for each super-vector. 
The proposed method is applied to face recognition and examined using our real-world face image database, NUST-RWFR, and five popular, publicly available, benchmark face image databases (AR, Extended Yale B, PIE, FERET, and LFW). Experimental results show the performance advantages of IDLS over state-of-the-art algorithms.


Location-Aware and Safer Cards: Enhancing RFID Security and Privacy via Location Sensing
In this paper, we report on a new approach for enhancing security and privacy in certain RFID applications whereby location or location-related information (such as speed) can serve as a legitimate access context. Examples of these applications include access cards, toll cards, credit cards, and other payment tokens. We show that location awareness can be used by both tags and back-end servers for defending against unauthorized reading and relay attacks on RFID systems. 
On the tag side, we design a location-aware selective unlocking mechanism using which tags can selectively respond to reader interrogations rather than doing so promiscuously. On the server side, we design a location-aware secure transaction verification scheme that allows a bank server to decide whether to approve or deny a payment transaction and detect a specific type of relay attack involving malicious readers. 
The premise of our work is a current technological advancement that can enable RFID tags with low-cost location (GPS) sensing capabilities. Unlike prior research on this subject, our defenses do not rely on auxiliary devices or require any explicit user involvement.


MADMatch: Many-to-Many Approximate Diagram Matching for Design Comparison
Matching algorithms play a fundamental role in many important but difficult software engineering activities, especially design evolution analysis and model comparison. We present MADMatch, a fast and scalable many-to-many approximate diagram matching approach based on an error-tolerant graph matching (ETGM) formulation. 
Diagrams are represented as graphs, costs are assigned to possible differences between two given graphs, and the goal is to retrieve the cheapest matching. We address the resulting optimization problem with a tabu search enhanced by the novel use of lexical and structural information. 
Through several case studies with different types of diagrams and tasks, we show that our generic approach obtains better results than dedicated state-of-the-art algorithms, such as AURA, PLTSDiff, or UMLDiff, on the exact same datasets used to introduce (and evaluate) these algorithms.


Maximizing Transmission Opportunities in Wireless Multihop Networks
Being readily available in most 802.11 radios, multirate capability appears to be useful as WiFi networks are getting more prevalent and crowded. More specifically, it would be helpful in high-density scenarios because the internode distance is short enough to employ high data rates. 
However, communication at high data rates mandates a large number of hops for a given node pair in a multihop network and thus, can easily be depreciated as per-hop overhead at several layers of network protocol is aggregated over the increased number of hops. This paper presents a novel multihop, multirate adaptation mechanism, called multihop transmission opportunity (MTOP), that allows a frame to be forwarded a number of hops consecutively to minimize the MAC-layer overhead between hops. 
This seemingly collision-prone nonstop forwarding is proved to be safe via analysis and USRP/GNU Radio-based experiment in this paper. The idea of MTOP is in clear contrast to the conventional opportunistic transmission mechanism, known as TXOP, where a node transmits multiple frames back-to-back when it gets an opportunity in a single-hop WLAN. 
We conducted an extensive simulation study via OPNET, demonstrating the performance advantage of MTOP under a wide range of network scenarios.


m-Privacy for Collaborative Data Publishing
ABSTRACT:
In this paper, we consider the collaborative data publishing problem for anonymizing horizontally partitioned data at multiple data providers. We consider a new type of “insider attack” by colluding data providers who may use their own data records (a subset of the overall data) to infer the data records contributed by other data providers. The paper addresses this new threat, and makes several contributions. 
First, we introduce the notion of m-privacy, which guarantees that the anonymized data satisfies a given privacy constraint against any group of up to m colluding data providers. 
Second, we present heuristic algorithms exploiting the monotonicity of privacy constraints for efficiently checking m-privacy given a group of records. 
Third, we present a data provider-aware anonymization algorithm with adaptive m-privacy checking strategies to ensure high utility and m-privacy of anonymized data with efficiency. Finally, we propose secure multi-party computation protocols for collaborative data publishing with m-privacy. All protocols are extensively analyzed and their security and efficiency are formally proved. 
Experiments on real-life datasets suggest that our approach achieves better or comparable utility and efficiency than existing and baseline algorithms while satisfying m-privacy.
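A minimal Java sketch of a conservative m-privacy check, assuming k-anonymity as the underlying privacy constraint: for every equivalence class of the anonymized table, the m providers contributing the most records to that class are assumed to collude and discount their own records, and whatever remains must be empty or still contain at least k records. The class compositions, m, and k are illustrative, and the paper's adaptive checking strategies and secure multi-party protocols are not shown.

import java.util.Arrays;
import java.util.List;

public class MPrivacySketch {
    // For one equivalence class, counts[i] = number of records provider i contributed.
    // The class remains k-anonymous against m colluding providers if, after removing
    // the m largest contributions, the remaining count is 0 or at least k.
    static boolean classSurvives(int[] counts, int m, int k) {
        int[] sorted = counts.clone();
        Arrays.sort(sorted);                                   // ascending
        int remaining = 0;
        for (int i = 0; i < sorted.length - m; i++) remaining += sorted[i];
        return remaining == 0 || remaining >= k;
    }

    // Conservative check: every equivalence class must survive its own worst-case coalition.
    static boolean isMPrivate(List<int[]> classes, int m, int k) {
        for (int[] counts : classes) if (!classSurvives(counts, m, k)) return false;
        return true;
    }

    public static void main(String[] args) {
        List<int[]> classes = List.of(new int[]{3, 2, 2, 1}, new int[]{4, 1, 1});
        System.out.println(isMPrivate(classes, 1, 3));   // second class drops to 2 < 3 -> false
        System.out.println(isMPrivate(classes, 1, 2));   // both classes keep at least 2 records -> true
    }
}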


Multi-View Video Representation Based on Fast Monte Carlo Surface Reconstruction
Abstract
This paper provides an alternative solution to the costly representation of multi-view video data, which can be used for both rendering and scene analyses. Initially, a new efficient Monte Carlo discrete surface reconstruction method for foreground objects with static background is presented, which outperforms volumetric techniques and is suitable for GPU environments. 
Some extensions are also presented, which allow a speeding up of the reconstruction by exploiting multi-resolution and temporal correlations. Then, a fast meshing algorithm is applied, which allows interpolating a continuous surface from the discrete reconstructed points. As shown by the experimental results, the original video frames can be approximated with high accuracy by projecting the reconstructed foreground objects onto the original viewpoints.
Furthermore, the reconstructed scene can be easily projected onto any desired virtual viewpoint, thus simplifying the design of free-viewpoint video applications. In our experimental results, we show that our techniques for reconstruction and meshing compare favorably with the state-of-the-art, and we also introduce a rule-of-thumb for effective application of the method with a good quality versus representation cost trade-off.


NICE: Network Intrusion Detection and Countermeasure Selection in Virtual Network Systems
ABSTRACT:
Cloud security is one of the most important issues that have attracted a great deal of research and development effort in the past few years. In particular, attackers can explore vulnerabilities of a cloud system and compromise virtual machines to deploy further large-scale Distributed Denial-of-Service (DDoS) attacks. 
DDoS attacks usually involve early stage actions such as multi-step exploitation, low frequency vulnerability scanning, and compromising identified vulnerable virtual machines as zombies, and finally DDoS attacks through the compromised zombies. Within the cloud system, especially the Infrastructure-as-a-Service (IaaS) clouds, the detection of zombie exploration attacks is extremely difficult.
This is because cloud users may install vulnerable applications on their virtual machines. To prevent vulnerable virtual machines from being compromised in the cloud, we propose a multi-phase distributed vulnerability detection, measurement, and countermeasure selection mechanism called NICE, which is built on attack graph based analytical models and reconfigurable virtual network-based countermeasures. 
The proposed framework leverages OpenFlow network programming APIs to build a monitor and control plane over distributed programmable virtual switches in order to significantly improve attack detection and mitigate attack consequences. The system and security evaluations demonstrate the efficiency and effectiveness of the proposed solution.


On Distributed and Coordinated Resource Allocation for Interference Mitigation in Self-Organizing LTE Networks
We propose a distributed and coordinated radio resource allocation algorithm for orthogonal frequency division multiple access (OFDMA)-based cellular networks to self-organize efficient and stable frequency reuse patterns. 
In the proposed radio resource allocation algorithm, each cell independently and dynamically allocates modulation and coding scheme (MCS), resource block (RB), and transmit power to its users in a way that its total downlink (DL) transmit power is minimized, while users' throughput demands are satisfied. Moreover, each cell informs neighboring cells of the RBs that have been scheduled for its cell-edge users' DL transmissions through message passing. 
Accordingly, the neighboring cells abstain from assigning high transmit powers to the specified RBs. Extensive simulation results demonstrate that DL power control on a per-RB basis may play a key role in future networks, and show that the distributed minimization of DL transmit power at each cell, supported by intercell interference coordination, provides a 20% improvement in network throughput, considerably reduces the number of user outages, and significantly enhances spatial reuse, as compared to cutting-edge resource allocation schemes.


On the MDP-Based Cost Minimization for Video-on-Demand Services in a Heterogeneous Wireless Network with Multihomed Terminals
In this paper, we deal with a cost minimization problem for a multihomed mobile terminal that downloads and plays a video-on-demand (VoD) stream. The cost consists of the user's dissatisfaction due to playback disruptions and communication cost for downloading the VoD stream. 
There are three components in our approach: parameter estimation, threshold adjustment, and threshold compensation. Since we do not assume any a priori knowledge of the underlying random variables, the necessary parameter values are estimated online. Using the resulting estimates, we formulate the problem as a Markov decision process (MDP), treating the random variables as if they were exponentially distributed. 
To solve the MDP problem efficiently, we propose a threshold adjustment algorithm that exploits some structural properties of any optimal solution that are specific to our problem. 
Finally, to handle the cases where the random variables are not exponentially distributed, we propose a threshold compensation algorithm to compensate for the modeling error. Through extensive simulations, we compare the performance of our scheme with those of static threshold schemes.


On the Node Clone Detection in Wireless Sensor Networks
Wireless sensor networks are vulnerable to the node clone, and several distributed protocols have been proposed to detect this attack. However, they require too strong assumptions to be practical for large-scale, randomly deployed sensor networks. In this paper, we propose two novel node clone detection protocols with different tradeoffs on network conditions and performance. 
The first one is based on a distributed hash table (DHT), by which a fully decentralized, key-based caching and checking system is constructed to catch cloned nodes effectively. The protocol's performance in terms of efficient storage consumption and high security level is theoretically deduced through a probability model, and the resulting equations, with necessary adjustments for real application, are supported by the simulations. 
Although the DHT-based protocol incurs similar communication cost as previous approaches, it may be considered a little high for some scenarios. To address this concern, our second distributed detection protocol, named randomly directed exploration, presents good communication performance for dense sensor networks, by a probabilistic directed forwarding technique along with random initial direction and border determination. The simulation results uphold the protocol design and show its efficiency in communication overhead and satisfactory detection probability.
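A toy Java sketch of the key-based checking idea behind the DHT protocol: location claims for a node ID end up at a witness (here just an in-memory table) that flags a clone whenever two claims for the same ID report different locations; the IDs and locations are invented, and the DHT routing, signatures, and caching policy are omitted.

import java.util.HashMap;
import java.util.Map;

public class CloneDetectionSketch {
    // witness table: nodeId -> first location claim seen for that id
    private final Map<String, String> claims = new HashMap<>();

    // Returns true if this claim reveals a clone (same id, conflicting locations).
    boolean receiveClaim(String nodeId, String location) {
        String previous = claims.putIfAbsent(nodeId, location);
        return previous != null && !previous.equals(location);
    }

    public static void main(String[] args) {
        CloneDetectionSketch witness = new CloneDetectionSketch();
        System.out.println(witness.receiveClaim("node42", "cell(3,7)"));   // false: first claim
        System.out.println(witness.receiveClaim("node42", "cell(9,1)"));   // true: cloned identity
    }
}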


On the Privacy Risks of Virtual Keyboards: Automatic Reconstruction of Typed Input from Compromising Reflections
We investigate the implications of the ubiquity of personal mobile devices and reveal new techniques for compromising the privacy of users typing on virtual keyboards. Specifically, we show that so-called compromising reflections (in, for example, a victim's sunglasses) of a device's screen are sufficient to enable automated reconstruction, from video, of text typed on a virtual keyboard. 
Through the use of advanced computer vision and machine learning techniques, we are able to operate under extremely realistic threat models, in real-world operating conditions, which are far beyond the range of more traditional OCR-based attacks. In particular, our system does not require expensive and bulky telescopic lenses: rather, we make use of off-the-shelf, handheld video cameras.
In addition, we make no limiting assumptions about the motion of the phone or of the camera, nor the typing style of the user, and are able to reconstruct accurate transcripts of recorded input, even when using footage captured in challenging environments (e.g., on a moving bus). 
To further underscore the extent of this threat, our system is able to achieve accurate results even at very large distances: up to 61 m for direct surveillance and 12 m for sunglass reflections. We believe these results highlight the importance of adjusting privacy expectations in response to emerging technologies.


Opportunistic MANETs: Mobility Can Make Up for Low Transmission Power
ABSTRACT:
Opportunistic mobile ad hoc networks (MANETs) are a special class of sparse and disconnected MANETs where data communication exploits sporadic contact opportunities among nodes. We consider opportunistic MANETs where nodes move independently at random over a square of the plane. 
Nodes exchange data if they are within distance r of each other, where r is the node transmission radius. The flooding time is the number of time steps required to broadcast a message from a source node to every node of the network. Flooding time is an important measure of how fast information can spread in dynamic networks. 
We derive the first upper bound on the flooding time, which is a decreasing function of the maximal speed of the nodes. The bound holds with high probability, and it is nearly tight. Our bound shows that, thanks to node mobility, even when the network is sparse and disconnected, information spreading can be fast.


Performance Evaluation Methodology for Historical Document Image Binarization
Document image binarization is of great importance in the document image analysis and recognition pipeline since it affects further stages of the recognition process. The evaluation of a binarization method aids in studying its algorithmic behavior, as well as verifying its effectiveness, by providing qualitative and quantitative indication of its performance. This paper addresses a pixel-based binarization evaluation methodology for historical handwritten/machine-printed document images. 
In the proposed evaluation scheme, the recall and precision evaluation measures are properly modified using a weighting scheme that diminishes any potential evaluation bias. 
Additional performance metrics of the proposed evaluation scheme consist of the percentage rates of broken and missed text, false alarms, background noise, character enlargement, and merging. Several experiments conducted in comparison with other pixel-based evaluation measures demonstrate the validity of the proposed evaluation scheme.
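The Java sketch below shows the weighted recall and precision computation in its simplest form, with generic per-pixel weights standing in for the paper's specific bias-diminishing weighting scheme; the ground-truth and result masks are illustrative.

public class BinarizationEvalSketch {
    // gt[i]     : true if pixel i is foreground (text) in the ground truth
    // result[i] : true if the binarization method marked pixel i as foreground
    // weight[i] : per-pixel importance weight
    static double[] weightedRecallPrecision(boolean[] gt, boolean[] result, double[] weight) {
        double tp = 0, gtSum = 0, detSum = 0;
        for (int i = 0; i < gt.length; i++) {
            if (gt[i] && result[i]) tp += weight[i];
            if (gt[i])              gtSum += weight[i];
            if (result[i])          detSum += weight[i];
        }
        double recall = gtSum == 0 ? 0 : tp / gtSum;
        double precision = detSum == 0 ? 0 : tp / detSum;
        return new double[]{recall, precision};
    }

    public static void main(String[] args) {
        boolean[] gt     = {true, true, false, true, false};
        boolean[] result = {true, false, false, true, true};
        double[] w       = {1.0, 0.5, 1.0, 1.0, 1.0};
        double[] rp = weightedRecallPrecision(gt, result, w);
        System.out.printf("recall=%.2f precision=%.2f%n", rp[0], rp[1]);
    }
}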


Predicting Architectural Vulnerability on Multithreaded Processors under Resource Contention and Sharing
Architectural vulnerability factor (AVF) characterizes a processor's vulnerability to soft errors. Interthread resource contention and sharing on a multithreaded processor (e.g., SMT, CMP) have a nonuniform impact on a program's AVF when it is co-scheduled with different programs. However, measuring the AVF is extremely expensive in terms of hardware and computation. 
This paper proposes a scalable two-level predictive mechanism capable of predicting a program's AVF on an SMT/CMP architecture from easily measured metrics. Essentially, the first-level model correlates the AVF in a contention-free environment with important performance metrics and the processor configuration, while the second-level model captures interthread resource contention and sharing via the occupancies of processor structures. 
By utilizing the proposed scheme, we can accurately estimate any unseen program's soft error vulnerability under resource contention and sharing with any other program(s), on an arbitrarily configured multithreaded processor. In practice, the proposed model can be used to find soft error resilient thread-to-core scheduling for multithreaded processors.


Predicting the Impact of Measures Against P2P Networks: Transient Behavior and Phase Transition
The paper has two objectives. The first is to study rigorously the transient behavior of some peer-to-peer (P2P) networks whenever information is replicated and disseminated according to epidemic-like dynamics. The second is to use the insight gained from the previous analysis in order to predict how efficient are measures taken against P2P networks. 
We first introduce a stochastic model that extends a classical epidemic model and characterize the P2P swarm behavior in presence of free-riding peers. We then study a second model in which a peer initiates a contact with another peer chosen randomly. 
In both cases, the network is shown to exhibit phase transitions: A small change in the parameters causes a large change in the behavior of the network. We show, in particular, how phase transitions affect measures of content providers against P2P networks that distribute nonauthorized music, books, or articles and what is the efficiency of countermeasures. 
In addition, our analytical framework can be generalized to characterize the heterogeneity of cooperative peers.


Privacy-Preserving Public Auditing for Secure Cloud Storage
Using cloud storage, users can remotely store their data and enjoy the on-demand high-quality applications and services from a shared pool of configurable computing resources, without the burden of local data storage and maintenance. 
However, the fact that users no longer have physical possession of the outsourced data makes the data integrity protection in cloud computing a formidable task, especially for users with constrained computing resources. Moreover, users should be able to just use the cloud storage as if it is local, without worrying about the need to verify its integrity. 
Thus, enabling public auditability for cloud storage is of critical importance so that users can resort to a third-party auditor (TPA) to check the integrity of outsourced data and be worry free. To securely introduce an effective TPA, the auditing process should bring in no new vulnerabilities toward user data privacy and introduce no additional online burden to the user. 
In this paper, we propose a secure cloud storage system supporting privacy-preserving public auditing. We further extend our result to enable the TPA to perform audits for multiple users simultaneously and efficiently. Extensive security and performance analysis show the proposed schemes are provably secure and highly efficient. 
Our preliminary experiment conducted on Amazon EC2 instance further demonstrates the fast performance of the design.


QoS Ranking Prediction for Cloud Services
ABSTRACT:
Cloud computing is becoming popular. Building high-quality cloud applications is a critical research problem. QoS rankings provide valuable information for making optimal cloud service selection from a set of functionally equivalent service candidates. To obtain QoS values, real-world invocations on the service candidates are usually required. 
To avoid the time-consuming and expensive real-world service invocations, this paper proposes a QoS ranking prediction framework for cloud services by taking advantage of the past service usage experiences of other consumers. Our proposed framework requires no additional invocations of cloud services when making QoS ranking prediction. 
Two personalized QoS ranking prediction approaches are proposed to predict the QoS rankings directly. Comprehensive experiments are conducted employing real-world QoS data, including 300 distributed users and 500 real-world web services located all over the world. The experimental results show that our approaches outperform other competing approaches.


Randomized Information Dissemination in Dynamic Environments
ABSTRACT
We consider randomized broadcast or information dissemination in wireless networks with switching network topologies. 
We show that an upper bound for the dissemination time consists of the conductance bound for a network without switching and an adjustment that accounts for the number of informed nodes in each period between topology changes. Through numerical simulations, we show that our bound is asymptotically tight. 
We apply our results to the case of mobile wireless networks with unreliable communication links and establish an upper bound for the dissemination time when the network undergoes topology changes and periods of communication link erasures.


Ranking and Clustering Software Cost Estimation Models through a Multiple Comparisons Algorithm
Software Cost Estimation can be described as the process of predicting the most realistic effort required to complete a software project. Due to the strong relationship of accurate effort estimations with many crucial project management activities, the research community has been focused on the development and application of a vast variety of methods and models trying to improve the estimation procedure. 
From the diversity of methods emerged the need for comparisons to determine the best model. However, the inconsistent results brought to light significant doubts and uncertainty about the appropriateness of the comparison process in experimental studies. 
Overall, there exist several potential sources of bias that have to be considered in order to reinforce the confidence of experiments. In this paper, we propose a statistical framework based on a multiple comparisons algorithm in order to rank several cost estimation models, identifying those which have significant differences in accuracy, and clustering them in nonoverlapping groups. 
The proposed framework is applied in a large-scale setup of comparing 11 prediction models over six datasets. The results illustrate the benefits and the significant information obtained through the systematic comparison of alternative methods.


Regional Bit Allocation and Rate Distortion Optimization for Multiview Depth Video Coding With View Synthesis Distortion Model
In this paper, we propose a view synthesis distortion model (VSDM) that establishes the relationship between depth distortion and view synthesis distortion for regions with different characteristics: the color texture area corresponding depth (CTAD) region and the color smooth area corresponding depth (CSAD) region. 
With this VSDM, we propose regional bit allocation (RBA) and rate distortion optimization (RDO) algorithms for multiview depth video coding (MDVC) by allocating more bits on CTAD for rendering quality and fewer bits on CSAD for compression efficiency. Experimental results show that the proposed VSDM based RBA and RDO can improve the coding efficiency significantly for the test sequences. 
In addition, the proposed overall MDVC algorithm that integrates the VSDM-based RBA and RDO achieves 9.99% and 14.51% bit rate reductions on average at the high and low bit rates, respectively. It improves virtual view image quality by 0.22 and 0.24 dB on average at the high and low bit rates, respectively, when compared with the original joint multiview video coding model. 
The RD performance comparisons using five different metrics also validate the effectiveness of the proposed overall algorithm. In addition, the proposed algorithms can be applied to both INTRA and INTER frames.


Rotation Invariant Local Frequency Descriptors for Texture Classification
This paper presents a novel rotation invariant method for texture classification based on local frequency components. The local frequency components are computed by applying 1-D Fourier transform on a neighboring function defined on a circle of radius R at each pixel. We observed that the low frequency components are the major constituents of the circular functions and can effectively represent textures. 
Three sets of features are extracted from the low frequency components, two based on the phase and one based on the magnitude. The proposed features are invariant to rotation and linear changes of illumination. Moreover, by using low frequency components, the proposed features are very robust to noise. 
While the proposed method uses a relatively small number of features, it outperforms state-of-the-art methods in three well-known datasets: Brodatz, Outex, and CUReT. In addition, the proposed method is very robust to noise and can remarkably improve the classification accuracy especially in the presence of high levels of noise.
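A simplified Java sketch of the feature computation: gray values are sampled on a circle of radius R around a pixel, a 1-D DFT is applied, and the magnitudes of the low-frequency components are kept; since an image rotation only circularly shifts the samples, these magnitudes are (approximately) rotation invariant. Nearest-neighbor sampling replaces interpolation here, the phase-based feature sets are not shown, and the test image is synthetic.

public class LocalFrequencySketch {
    // Magnitudes of the first 'numFreq' Fourier components of the circular neighborhood of pixel (x, y).
    static double[] localFrequencyMagnitudes(double[][] img, int x, int y,
                                             double radius, int samples, int numFreq) {
        double[] f = new double[samples];
        for (int p = 0; p < samples; p++) {
            double angle = 2 * Math.PI * p / samples;
            int sx = (int) Math.round(x + radius * Math.cos(angle));   // nearest-neighbor sampling
            int sy = (int) Math.round(y + radius * Math.sin(angle));
            f[p] = img[sy][sx];
        }
        double[] mags = new double[numFreq];
        for (int k = 0; k < numFreq; k++) {                            // 1-D DFT, low frequencies only
            double re = 0, im = 0;
            for (int p = 0; p < samples; p++) {
                double phase = -2 * Math.PI * k * p / samples;
                re += f[p] * Math.cos(phase);
                im += f[p] * Math.sin(phase);
            }
            mags[k] = Math.sqrt(re * re + im * im) / samples;
        }
        return mags;
    }

    public static void main(String[] args) {
        double[][] img = new double[16][16];
        for (int i = 0; i < 16; i++) for (int j = 0; j < 16; j++) img[i][j] = (i + j) % 7;
        System.out.println(java.util.Arrays.toString(
                localFrequencyMagnitudes(img, 8, 8, 3.0, 8, 3)));
    }
}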


Routing-Toward-Primary-User Attack and Belief Propagation-Based Defense in Cognitive Radio Networks
Cognitive radio (CR) networks have attracted much attention recently, but their security issues have not been fully studied yet. In this paper, we propose a new and powerful network-layer attack in CR networks, the routing-toward-primary-user (RPU) attack. 
In this attack, malicious nodes intentionally route a large amount of packets toward the primary users (PUs), aiming to cause interference to the PUs and to increase delay in the data transmission among the secondary users. In the RPU attack, it is difficult to detect the malicious nodes since the malicious nodes may claim that those nodes, to which they forward the packets, behave dishonestly and cause problems in the data transmission. 
To defend against this attack without introducing high complexity, we develop a defense strategy using belief propagation. First, an initial route is found from the source to the destination. Each node keeps a table recording the feedbacks from the other nodes on the route, exchanges feedback information and computes beliefs. 
Finally, the source node can detect the malicious nodes based on the final belief values. Simulation results show that the proposed defense strategy against the RPU attack is effective and efficient in terms of significant reduction in the delay and interference caused by the RPU attack.


Scalable and Secure Sharing of Personal Health Records in Cloud Computing Using Attribute-Based Encryption
ABSTRACT:
Personal health record (PHR) is an emerging patient-centric model of health information exchange, which is often outsourced to be stored at a third party, such as cloud providers. However, there have been wide privacy concerns as personal health information could be exposed to those third party servers and to unauthorized parties. 
To assure the patients’ control over access to their own PHRs, it is a promising method to encrypt the PHRs before outsourcing. Yet, issues such as risks of privacy exposure, scalability in key management, flexible access, and efficient user revocation, have remained the most important challenges toward achieving fine-grained, cryptographically enforced data access control. 
In this paper, we propose a novel patient-centric framework and a suite of mechanisms for data access control to PHRs stored in semitrusted servers. 
To achieve fine-grained and scalable data access control for PHRs, we leverage attribute-based encryption (ABE) techniques to encrypt each patient’s PHR file. Different from previous works in secure data outsourcing, we focus on the multiple data owner scenario, and divide the users in the PHR system into multiple security domains that greatly reduces the key management complexity for owners and users. 
A high degree of patient privacy is guaranteed simultaneously by exploiting multiauthority ABE. Our scheme also enables dynamic modification of access policies or file attributes, supports efficient on-demand user/attribute revocation and break-glass access under emergency scenarios. Extensive analytical and experimental results are presented which show the security, scalability, and efficiency of our proposed scheme.


Scalable Coding of Depth Maps With R-D Optimized Embedding
Recent work on depth map compression has revealed the importance of incorporating a description of discontinuity boundary geometry into the compression scheme. We propose a novel compression strategy for depth maps that incorporates geometry information while achieving the goals of scalability and embedded representation. 
Our scheme involves two separate image pyramid structures, one for breakpoints and the other for sub-band samples produced by a breakpoint-adaptive transform. Breakpoints capture geometric attributes, and are amenable to scalable coding. We develop a rate-distortion optimization framework for determining the presence and precision of breakpoints in the pyramid representation. 
We employ a variation of the EBCOT scheme to produce embedded bit-streams for both the breakpoint and sub-band data. Compared to JPEG 2000, our proposed scheme enables the same scalability features while achieving substantially improved rate-distortion performance at the higher bit-rate range and comparable performance at the lower rates.


Scaling Up Spike-and-Slab Models for Unsupervised Feature Learning
Abstract
We describe the use of two spike-and-slab models for modeling real-valued data, with an emphasis on their applications to object recognition. The first model, which we call spike-and-slab sparse coding (S3C), is a preexisting model for which we introduce a faster approximate inference algorithm. 
We introduce a deep variant of S3C, which we call the partially directed deep Boltzmann machine (PD-DBM) and extend our S3C inference algorithm for use on this model. We describe learning procedures for each. 
We demonstrate that our inference procedure for S3C enables scaling the model to unprecedentedly large problem sizes, and that using S3C as a feature extractor yields very good object recognition performance, particularly when the number of labeled examples is low. 
We show that the PD-DBM generates better samples than its shallow counterpart, and that unlike DBMs or DBNs, the PD-DBM may be trained successfully without greedy layerwise training.


Secure and Efficient Data Transmission for Cluster-Based Wireless Sensor Networks
Secure data transmission is a critical issue for wireless sensor networks (WSNs). Clustering is an effective and practical way to enhance the system performance of WSNs. In this paper, we study secure data transmission for cluster-based WSNs (CWSNs), where the clusters are formed dynamically and periodically. 
We propose two Secure and Efficient data Transmission (SET) protocols for CWSNs, called SET-IBS and SET-IBOOS, by using the Identity-Based digital Signature (IBS) scheme and the Identity-Based Online/Offline digital Signature (IBOOS) scheme, respectively. In SET-IBS, security relies on the hardness of the Diffie-Hellman problem in the pairing domain. 
SET-IBOOS further reduces the computational overhead for protocol security, which is crucial for WSNs, while its security relies on the hardness of the discrete logarithm problem. We show the feasibility of the SET-IBS and SET-IBOOS protocols with respect to the security requirements and provide a security analysis against various attacks. Calculations and simulations illustrate the efficiency of the proposed protocols. 
The results show that the proposed protocols have better performance than existing secure protocols for CWSNs in terms of security overhead and energy consumption.
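Identity-based and online/offline signature schemes rely on pairing-based cryptography that is not part of the standard JDK, so the sketch below uses ordinary ECDSA from java.security purely as a stand-in to show where signing and verification sit in the cluster data path. It is not the SET-IBS/SET-IBOOS construction.

```java
import java.nio.charset.StandardCharsets;
import java.security.*;

// Stand-in sketch: SET uses identity-based (pairing-based) signatures, which
// the standard JDK does not provide. Plain ECDSA is used here only to show
// the sign-then-verify flow between a sensor node and its cluster head.
public class ClusterMessageSigning {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(256);
        KeyPair sensorKeys = kpg.generateKeyPair();

        byte[] message = "cluster-head=CH7;reading=23.4C".getBytes(StandardCharsets.UTF_8);

        // Sensor node signs its report before transmission.
        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(sensorKeys.getPrivate());
        signer.update(message);
        byte[] signature = signer.sign();

        // Cluster head verifies the report before aggregation.
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(sensorKeys.getPublic());
        verifier.update(message);
        System.out.println("Report accepted: " + verifier.verify(signature));
    }
}
```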


Secure and Reliable Routing Protocols for Heterogeneous Multihop Wireless Networks
In this paper, we propose E-STAR for establishing stable and reliable routes in heterogeneous multihop wireless networks. E-STAR combines payment and trust systems with a trust-based and energy-aware routing protocol. The payment system rewards the nodes that relay others' packets and charges those that send packets. 
The trust system evaluates the nodes' competence and reliability in relaying packets in terms of multi-dimensional trust values. The trust values are attached to the nodes’ public-key certificates for use in routing decisions. We develop two routing protocols that direct traffic through highly trusted nodes with sufficient energy, minimizing the probability of route breakage. In this way, E-STAR stimulates the nodes not only to relay packets, but also to maintain route stability and to report their battery energy capability honestly. 
This is because any loss of trust will result in loss of future earnings. Moreover, for the efficient implementation of the trust system, the trust values are computed by processing the payment receipts. Analytical results demonstrate that E-STAR can secure the payment and trust calculation without false accusations. Simulation results demonstrate that our routing protocols can improve the packet delivery ratio and route stability.
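The routing idea, preferring highly trusted next hops that still have enough energy, can be illustrated with a short Java sketch. The Node fields, the minimum-energy cutoff, and the pick-highest-trust rule are illustrative assumptions; E-STAR's payment and certificate machinery is not modeled here.

```java
import java.util.List;

// Minimal sketch of the routing idea only: among candidate next hops, prefer
// highly trusted nodes with enough residual energy. Field names and the
// selection rule are assumptions for illustration.
public class NextHopSelector {

    static class Node {
        final String id;
        final double trust;          // in [0, 1], from the trust system
        final double residualEnergy; // e.g. joules remaining
        Node(String id, double trust, double residualEnergy) {
            this.id = id; this.trust = trust; this.residualEnergy = residualEnergy;
        }
    }

    // Pick the most trusted candidate that still has enough energy.
    static Node selectNextHop(List<Node> candidates, double minEnergy) {
        Node best = null;
        for (Node n : candidates) {
            if (n.residualEnergy < minEnergy) continue;
            if (best == null || n.trust > best.trust) best = n;
        }
        return best;
    }

    public static void main(String[] args) {
        List<Node> candidates = List.of(
            new Node("A", 0.9, 2.0),
            new Node("B", 0.95, 0.4),   // trusted but nearly depleted
            new Node("C", 0.7, 5.0));
        Node hop = selectNextHop(candidates, 1.0);
        System.out.println("Next hop: " + (hop == null ? "none" : hop.id)); // A
    }
}
```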


Secure Encounter-based Mobile Social Networks Requirements Designs and Tradeoffs
Encounter-based social networks link users who share a location at the same time, as opposed to traditional social network paradigms of linking users who have an offline friendship. This approach presents fundamentally different challenges from those tackled by previous designs. 
In this paper, we explore functional and security requirements for these new systems, such as availability, security, and privacy, and present several design options for building secure encounter-based social networks. We examine one recently proposed encounter-based social network design and compare it to a set of idealized security and functionality requirements. 
We show that it is vulnerable to several attacks, including impersonation, collusion, and privacy breaching, even though it was designed specifically for security. Mindful of the possible pitfalls, we construct a flexible framework for secure encounter-based social networks, which can be used to construct networks that offer different security, privacy, and availability guarantees. 
We describe two example constructions derived from this framework, and consider each in terms of the ideal requirements. Some of our new designs fulfill more requirements in terms of system security, reliability, and privacy than previous work. 
We also evaluate real-world performance of one of our designs by implementing a proof-of-concept iPhone application called MeetUp. Experiments highlight the potential of our system.


Security and Privacy-Enhancing Multi-cloud Architectures
Security challenges are still among the biggest obstacles to the adoption of cloud services. This has triggered a great deal of research activity, resulting in a variety of proposals targeting the various cloud security threats. 
Alongside these security issues, the cloud paradigm comes with a new set of unique features that open the path toward novel security approaches, techniques, and architectures. 
This paper provides a survey on the achievable security merits by making use of multiple distinct clouds simultaneously. Various distinct architectures are introduced and discussed according to their security and privacy capabilities and prospects.


Self-Supervised Online Metric Learning With Low Rank Constraint for Scene Categorization
Conventional visual recognition systems usually train an image classifier in batch mode, with all training data provided in advance. However, in many practical applications, only a small number of training samples are available at the beginning, and many more arrive sequentially during online recognition. 
Because the image data characteristics could change over time, it is important for the classifier to adapt to the new data incrementally. In this paper, we present an online metric learning method to address the online scene recognition problem via adaptive similarity measurement. Given a number of labeled data followed by a sequential input of unseen testing samples, the similarity metric is learned to maximize the margin of the distance among different classes of samples. 
By incorporating a low-rank constraint, our online metric learning model not only provides competitive performance compared with state-of-the-art methods, but also guarantees convergence. A bi-linear graph is also defined to model pair-wise similarity; an unseen sample is labeled through graph-based label propagation, and the model can self-update using the more confident new samples. 
With its online learning capability, our method can handle large-scale streaming video data and update itself incrementally. We apply our model to online scene categorization; experiments on various benchmark datasets and comparisons with state-of-the-art methods demonstrate the effectiveness and efficiency of our algorithm.


SORT: A Self-Organizing Trust Model for Peer-to-Peer Systems
The open nature of peer-to-peer systems exposes them to malicious activity. Building trust relationships among peers can mitigate attacks by malicious peers. This paper presents distributed algorithms that enable a peer to reason about the trustworthiness of other peers based on past interactions and recommendations. 
Peers create their own trust network in their proximity using locally available information and do not try to learn global trust information. Two contexts of trust, service and recommendation, are defined to measure trustworthiness in providing services and in giving recommendations. 
Interactions and recommendations are evaluated based on importance, recentness, and peer satisfaction parameters. Additionally, recommender's trustworthiness and confidence about a recommendation are considered while evaluating recommendations. 
Simulation experiments on a file sharing application show that the proposed model can mitigate attacks on 16 different malicious behavior models. In the experiments, good peers were able to form trust relationships in their proximity and isolate malicious peers.
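A hedged sketch of the kind of trust computation described here: interactions are combined into a service-trust score using satisfaction, importance, and a recency discount. The exact SORT formulas differ; the exponential decay and the neutral 0.5 prior below are assumptions chosen only to illustrate the weighting idea.

```java
import java.util.List;

// Illustrative sketch: combine past interactions into a service-trust score
// using satisfaction, importance, and recency weights. Not the exact SORT
// formulas; the decay and prior are assumptions.
public class ServiceTrust {

    static class Interaction {
        final double satisfaction; // in [0, 1]
        final double importance;   // in [0, 1]
        final long ageInDays;      // how long ago it happened
        Interaction(double satisfaction, double importance, long ageInDays) {
            this.satisfaction = satisfaction; this.importance = importance; this.ageInDays = ageInDays;
        }
    }

    // Recency-discounted, importance-weighted average of satisfaction.
    static double trustScore(List<Interaction> history, double dailyDecay) {
        double weighted = 0.0, totalWeight = 0.0;
        for (Interaction i : history) {
            double recency = Math.pow(dailyDecay, i.ageInDays); // older = lower weight
            double w = i.importance * recency;
            weighted += w * i.satisfaction;
            totalWeight += w;
        }
        return totalWeight == 0 ? 0.5 : weighted / totalWeight; // neutral prior if no history
    }

    public static void main(String[] args) {
        List<Interaction> history = List.of(
            new Interaction(1.0, 0.8, 2),    // recent, important, satisfying
            new Interaction(0.2, 0.5, 30));  // old, mildly unsatisfying
        System.out.printf("trust = %.3f%n", trustScore(history, 0.97));
    }
}
```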


Throughput-Optimal Scheduling in Multihop Wireless Networks Without Per-Flow Information
In this paper, we consider the problem of link scheduling in multihop wireless networks under general interference constraints. Our goal is to design scheduling schemes that do not use per-flow or per-destination information, maintain a single data queue for each link, and exploit only local information, while guaranteeing throughput optimality. Although the celebrated back-pressure algorithm maximizes throughput, it requires per-flow or per-destination information. 
It is usually difficult to obtain and maintain this type of information, especially in large networks, where there are numerous flows. Also, the back-pressure algorithm maintains a complex data structure at each node, keeps exchanging queue-length information among neighboring nodes, and commonly results in poor delay performance. 
In this paper, we propose scheduling schemes that can circumvent these drawbacks and guarantee throughput optimality. These schemes use either the readily available hop-count information or only the local information for each link. We rigorously analyze the performance of the proposed schemes using fluid limit techniques via an inductive argument and show that they are throughput-optimal. 
We also conduct simulations to validate our theoretical results in various settings and show that the proposed schemes can substantially improve the delay performance in most scenarios.
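As a rough illustration of scheduling from per-link queues alone, the sketch below greedily activates links in decreasing order of queue length while respecting a conflict (interference) set. This is not the paper's throughput-optimal algorithm, which has a more careful construction and fluid-limit analysis; it only shows the "single queue per link, no per-flow state" flavor.

```java
import java.util.*;

// Simplified illustration only: greedily activate links in decreasing order of
// their single per-link queue length, skipping links that conflict with one
// already scheduled. The paper's schemes are more subtle; this just shows
// scheduling without per-flow or per-destination state.
public class GreedyLinkScheduler {

    // conflicts.get(i) lists links that interfere with link i.
    static Set<Integer> schedule(int[] queueLength, Map<Integer, Set<Integer>> conflicts) {
        Integer[] order = new Integer[queueLength.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, (a, b) -> queueLength[b] - queueLength[a]); // longest queue first

        Set<Integer> active = new HashSet<>();
        for (int link : order) {
            boolean clash = false;
            for (int other : conflicts.getOrDefault(link, Set.of())) {
                if (active.contains(other)) { clash = true; break; }
            }
            if (!clash && queueLength[link] > 0) active.add(link);
        }
        return active;
    }

    public static void main(String[] args) {
        int[] queues = {5, 9, 3, 7};
        Map<Integer, Set<Integer>> conflicts = Map.of(
            0, Set.of(1), 1, Set.of(0, 2), 2, Set.of(1), 3, Set.of());
        System.out.println("Activated links: " + schedule(queues, conflicts)); // links 1 and 3
    }
}
```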


Time-Bounded Essential Localization for Wireless Sensor Networks
In many practical applications of wireless sensor networks, it is crucial to accomplish the localization of sensors within a given time bound. We find that the traditional definition of relative localization is inappropriate for evaluating its actual overhead. 
To address this problem, we define a novel problem called essential localization, and present the first rigorous study on the essential localizability of a wireless sensor network within a given time bound.
We propose an efficient distributed algorithm for time-bounded essential localization over a sensor network, and evaluate the performance of our algorithm with extensive simulations.


Toward Secure Multi-keyword Top-k Retrieval over Encrypted Cloud Data
ABSTRACT:
Cloud computing has emerged as a promising paradigm for data outsourcing and high-quality data services. However, concerns about sensitive information in the cloud raise privacy problems. Data encryption protects data security to some extent, but at the cost of compromised efficiency. Searchable symmetric encryption (SSE) allows retrieval of encrypted data from the cloud. 
In this paper, we focus on addressing data privacy issues using searchable symmetric encryption (SSE). For the first time, we formulate the privacy issue from the aspect of similarity relevance and scheme robustness. We observe that server-side ranking based on order-preserving encryption (OPE) inevitably leaks data privacy. To eliminate the leakage, we propose a two-round searchable encryption (TRSE) scheme that supports top-k multi-keyword retrieval.
In TRSE, we employ a vector space model and homomorphic encryption. The vector space model helps to provide sufficient search accuracy, and the homomorphic encryption enables users to participate in the ranking while the majority of the computing work is done on the server side by operations on ciphertext only. 
As a result, information leakage can be eliminated and data security is ensured. Thorough security and performance analysis show that the proposed scheme guarantees high security and practical efficiency.
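The ranking itself follows a standard vector space model, so the relevance-scoring and top-k selection step can be sketched in plain Java as below. The homomorphic-encryption layer, which is the actual contribution of TRSE (scores computed over ciphertexts, ranking finished by the user), is deliberately omitted; the term weights and file names are made up for illustration.

```java
import java.util.*;
import java.util.stream.Collectors;

// Sketch of the vector space model part only: score files by a dot product of
// query and document term weights and keep the top-k. In TRSE these scores are
// computed over ciphertexts; that layer is omitted here.
public class TopKRelevance {

    static double score(Map<String, Double> docWeights, Map<String, Double> queryWeights) {
        double s = 0.0;
        for (Map.Entry<String, Double> q : queryWeights.entrySet()) {
            s += q.getValue() * docWeights.getOrDefault(q.getKey(), 0.0);
        }
        return s;
    }

    static List<String> topK(Map<String, Map<String, Double>> index,
                             Map<String, Double> query, int k) {
        return index.entrySet().stream()
                .sorted((a, b) -> Double.compare(score(b.getValue(), query), score(a.getValue(), query)))
                .limit(k)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, Map<String, Double>> index = Map.of(
            "file1", Map.of("cloud", 0.9, "privacy", 0.1),
            "file2", Map.of("cloud", 0.4, "encryption", 0.8),
            "file3", Map.of("sports", 1.0));
        Map<String, Double> query = Map.of("cloud", 1.0, "encryption", 0.5);
        System.out.println(topK(index, query, 2)); // [file1, file2]
    }
}
```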


Towards a Statistical Framework for Source Anonymity in Sensor Networks
In certain applications, the locations of events reported by a sensor network need to remain anonymous. That is, unauthorized observers must be unable to detect the origin of such events by analyzing the network traffic. Known as the source anonymity problem, this has emerged as an important topic in the security of wireless sensor networks, with a variety of techniques based on different adversarial assumptions being proposed. In this work, we present a new framework for modeling, analyzing, and evaluating anonymity in sensor networks. 
The novelty of the proposed framework is twofold: first, it introduces the notion of "interval indistinguishability” and provides a quantitative measure to model anonymity in wireless sensor networks; second, it maps source anonymity to the statistical problem of binary hypothesis testing with nuisance parameters. 
We then analyze existing solutions for designing anonymous sensor networks using the proposed model. We show how mapping source anonymity to binary hypothesis testing with nuisance parameters converts the problem of exposing private source information into a search for an appropriate data transformation that removes or minimizes the effect of the nuisance information. 
By doing so, we transform the problem from analyzing real-valued sample points to binary codes, which opens the door for coding theory to be incorporated into the study of anonymous sensor networks. Finally, we discuss how existing solutions can be modified to improve their anonymity.


Towards Differential Query Services in Cost-Efficient Clouds
Cloud computing as an emerging technology trend is expected to reshape the advances in information technology. In a cost-efficient cloud environment, a user can tolerate a certain degree of delay while retrieving information from the cloud to reduce costs. 
In this paper, we address two fundamental issues in such an environment: privacy and efficiency. We first review a private keyword-based file retrieval scheme originally proposed by Ostrovsky et al. 
Their scheme allows a user to retrieve files of interest from an untrusted server without leaking any information. Its main drawback is the heavy querying overhead incurred on the cloud, which runs counter to the original intention of cost efficiency. In this paper, we present three efficient information retrieval for ranked query (EIRQ) schemes to reduce the querying overhead incurred on the cloud. 
In EIRQ, queries are classified into multiple ranks, where a higher-ranked query can retrieve a higher percentage of matched files. A user can retrieve files on demand by choosing queries of different ranks. This feature is useful when there are a large number of matched files but the user needs only a small subset of them. Extensive evaluations have been conducted under different parameter settings, on both analytical models and a real cloud environment, to examine the effectiveness of our schemes.
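The rank-to-percentage idea can be sketched very simply: a query's rank decides what fraction of its matched files the cloud returns. The halving rule and the assumption that matches are pre-sorted by relevance are illustrative choices, not the EIRQ schemes themselves.

```java
import java.util.List;
import java.util.stream.Collectors;

// Minimal sketch of the ranked-query idea: a query's rank determines what
// percentage of its matched files are actually returned. The rank-to-percentage
// mapping is an assumption for illustration.
public class RankedQueryFilter {

    // Example mapping: rank 0 -> 100%, rank 1 -> 50%, rank 2 -> 25%.
    static double percentageForRank(int rank) {
        return 1.0 / (1 << rank);
    }

    // Keep only the top fraction of matched files (assumed pre-sorted by relevance).
    static List<String> filter(List<String> matchedFilesByRelevance, int rank) {
        int keep = (int) Math.ceil(matchedFilesByRelevance.size() * percentageForRank(rank));
        return matchedFilesByRelevance.stream().limit(keep).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> matches = List.of("f1", "f2", "f3", "f4", "f5", "f6", "f7", "f8");
        System.out.println(filter(matches, 0)); // all 8 files
        System.out.println(filter(matches, 2)); // only the top 2
    }
}
```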


TrPF: A Trajectory Privacy-Preserving Framework for Participatory Sensing
The ubiquity of cheap embedded sensors on mobile devices, for example cameras, microphones, and accelerometers, is enabling the emergence of participatory sensing applications. While participatory sensing can greatly benefit individuals and communities, the collection and analysis of participants' location and trajectory data may jeopardize their privacy. 
However, existing proposals mostly focus on participants' location privacy, and little has been done on their trajectory privacy. Effective analysis of trajectories that contain spatial-temporal history information can reveal participants' whereabouts and related personal information. 
In this paper, we propose a trajectory privacy-preserving framework, named TrPF, for participatory sensing. Based on this framework, we improve the theoretical mix-zones model by taking the time factor into account, from the perspective of graph theory.
Finally, we analyze the threat models with different background knowledge and evaluate the effectiveness of our proposal on the basis of information entropy, and then compare the performance of our proposal with previous trajectory privacy protections. 
The analysis and simulation results show that our proposal can protect participants' trajectory privacy effectively, with lower information loss and cost than other proposals.
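The evaluation metric mentioned above, the information entropy of the adversary's guess about who produced a trajectory, is easy to make concrete. The sketch below computes Shannon entropy for two example distributions; the distributions are invented for illustration, and the TrPF mix-zone construction itself is not modeled.

```java
// Sketch of the evaluation metric only: the Shannon entropy of an adversary's
// probability distribution over which participant produced a trajectory.
// Higher entropy means better anonymity.
public class TrajectoryEntropy {

    // Shannon entropy in bits: H = -sum p_i * log2(p_i).
    static double entropyBits(double[] probabilities) {
        double h = 0.0;
        for (double p : probabilities) {
            if (p > 0) h -= p * (Math.log(p) / Math.log(2));
        }
        return h;
    }

    public static void main(String[] args) {
        double[] uniform = {0.25, 0.25, 0.25, 0.25};   // adversary has no clue
        double[] skewed  = {0.85, 0.05, 0.05, 0.05};   // adversary is fairly sure
        System.out.printf("uniform: %.2f bits%n", entropyBits(uniform)); // 2.00
        System.out.printf("skewed : %.2f bits%n", entropyBits(skewed));  // ~0.85
    }
}
```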


Understanding the Scheduling Performance in Wireless Networks with Successive Interference Cancellation
Successive interference cancellation (SIC) is an effective way of multipacket reception to combat interference in wireless networks. We focus on link scheduling in wireless networks with SIC, and propose a layered protocol model and a layered physical model to characterize the impact of SIC. 
In both interference models, we show that several existing scheduling schemes achieve the same order of approximation ratios, regardless of whether SIC is available. Moreover, the capacity order in a network with SIC is the same as that without SIC. We then examine the impact of SIC from first principles. 
In both chain and cell topologies, SIC does improve the throughput, with a gain between 20 and 100 percent. However, unless SIC is properly characterized, no scheduling scheme can effectively utilize the new transmission opportunities. 
The results indicate the challenge of designing an SIC-aware scheduling scheme, and suggest that the approximation ratio is insufficient to measure the scheduling performance when SIC is available.


Utility-Privacy Tradeoffs in Databases: An Information-Theoretic Approach
Ensuring the usefulness of electronic data sources while providing necessary privacy guarantees is an important unsolved problem. This problem drives the need for an analytical framework that can quantify the privacy of personally identifiable information while still providing a quantifiable benefit (utility) to multiple legitimate information consumers. 
This paper presents an information-theoretic framework that promises an analytical model with tight bounds on how much utility is possible for a given level of privacy, and vice versa. 
Specific contributions include: 1) stochastic data models for both categorical and numerical data; 2) utility-privacy tradeoff regions and the encoding (sanitization) schemes that achieve them for both classes, along with their practical relevance; and 3) modeling of prior knowledge at the user and/or data source and optimal encoding schemes for both cases.


Vampire Attacks: Draining Life From Wireless Ad-Hoc Sensor Networks
Ad-hoc low-power wireless networks are an exciting research direction in sensing and pervasive computing. Prior security work in this area has focused primarily on denial of communication at the routing or medium access control levels. 
This paper explores resource depletion attacks at the routing protocol layer, which permanently disable networks by quickly draining nodes’ battery power. These “Vampire” attacks are not tied to any particular protocol, but rather exploit properties of many popular classes of routing protocols. 
We find that all examined protocols are susceptible to Vampire attacks, which are devastating, difficult to detect, and easy to carry out using as few as one malicious insider sending only protocol-compliant messages.


Variable-Width Channel Allocation for Access Points: A Game-Theoretic Perspective
Channel allocation is a crucial concern in variable-width wireless local area networks. This work aims to obtain the stable and fair nonoverlapped variable-width channel allocation for selfish access points (APs). In the scenario of single collision domain, the channel allocation problem reduces to a channel-width allocation problem, which can be formulated as a noncooperative game. 
The Nash equilibrium (NE) of the game corresponds to a desired channel-width allocation. A distributed algorithm is developed to achieve the NE channel-width allocation that globally maximizes the network utility. 
A punishment-based cooperation self-enforcement mechanism is further proposed to ensure that the APs obey the proposed scheme. In the scenario of multiple collision domains, the channel allocation problem is formulated as a constrained game. 
Penalty functions are introduced to relax the constraints and the game is converted into a generalized ordinal potential game. Based on the best response and randomized escape, a distributed iterative algorithm is designed to achieve a desired NE channel allocation. 
Finally, computer simulations are conducted to validate the effectiveness and practicality of the proposed schemes.
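A toy best-response loop gives a feel for how such a game converges to a channel-width allocation. The logarithmic utility, the penalty on exceeding the total spectrum, and the discrete width set below are assumptions made only for illustration; they are not the utilities or penalty functions used in the paper.

```java
import java.util.Arrays;

// Toy best-response iteration for channel-width selection in a single
// collision domain. Utility = log(own width) minus a penalty on spectrum
// over-use; all parameters are illustrative assumptions.
public class ChannelWidthBestResponse {

    static final int[] WIDTHS = {5, 10, 20, 40};   // candidate channel widths (MHz)
    static final double TOTAL_SPECTRUM = 80.0;     // MHz available
    static final double PENALTY = 1.0;             // weight on constraint violation

    static double utility(int ownWidth, double othersWidth) {
        double overuse = Math.max(0.0, ownWidth + othersWidth - TOTAL_SPECTRUM);
        return Math.log(ownWidth) - PENALTY * overuse;
    }

    public static void main(String[] args) {
        int[] choice = {5, 5, 5};                   // initial widths of three APs
        boolean changed = true;
        while (changed) {                           // iterate until no AP wants to deviate
            changed = false;
            for (int i = 0; i < choice.length; i++) {
                double othersWidth = 0;
                for (int j = 0; j < choice.length; j++) if (j != i) othersWidth += choice[j];
                int best = choice[i];
                for (int w : WIDTHS) {
                    if (utility(w, othersWidth) > utility(best, othersWidth)) best = w;
                }
                if (best != choice[i]) { choice[i] = best; changed = true; }
            }
        }
        System.out.println("Converged widths: " + Arrays.toString(choice)); // [40, 20, 20]
    }
}
```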




FOR MORE ABSTRACTS, IEEE BASE PAPER / REFERENCE PAPERS AND NON IEEE PROJECT ABSTRACTS

CONTACT US
No.109, 2nd Floor, Bombay Flats, Nungambakkam High Road, Nungambakkam, Chennai - 600 034
Near Ganpat Hotel, Above IOB, Next to ICICI Bank, Opp to Cakes'n'Bakes
044-2823 5816, 98411 93224, 89393 63501
ncctchennai@gmail.com, ncctprojects@gmail.com 


SOFTWARE PROJECTS IN
Java, J2EE, J2ME, JavaFx, DotNET, ASP.NET, VB.NET, C#, PHP, NS2, Matlab, Android
For Software Projects - 044-28235816, 9841193224
ncctchennai@gmail.com, www.ncct.in


Project Support Services
Complete Guidance | 100% Result for all Projects | On time Completion | Excellent Support | Project Completion Experience Certificate | Free Placements Services | Multi Platform Training | Real Time Experience


TO GET ABSTRACTS / PDF Base Paper / Review PPT / Other Details
Mail your requirements / SMS your requirements / Call and get the same / Directly visit our Office


WANT TO RECEIVE FREE PROJECT DVD...
Want to Receive FREE Projects Titles, List / Abstracts  / IEEE Base Papers DVD… Walk in to our Office and Collect the same Or
Send your College ID scan copy, Your Mobile No & Complete Postal Address, Mentioning you are interested to Receive DVD through Courier at Free of Cost


Own Projects
Own Projects ! or New IEEE Paper… Any Projects…
Mail your Requirements to us and Get it Done with us… or Call us / Email us / SMS us or Visit us Directly

We will do any Projects…

