IEEE 2014: A Secure Client Side Deduplication Scheme in Cloud Storage Environments
Project Price: Contact US
Abstract—Recent years have witnessed the trend of leveraging cloud-based services for large-scale content storage, processing, and distribution. Security and privacy are among the top concerns for public cloud environments. Towards these security challenges, we propose and implement, on OpenStack Swift, a new client-side deduplication scheme for securely storing and sharing outsourced data via the public cloud. The originality of our proposal is twofold. First, it ensures better confidentiality towards unauthorized users: every client computes a per-data key to encrypt the data that he intends to store in the cloud, so data access is managed by the data owner. Second, by integrating access rights into the metadata file, an authorized user can decipher an encrypted file only with his private key.
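To make the flow concrete, here is a minimal Python sketch of the general client-side deduplication pattern the abstract describes: the key is derived from the data itself, and the client checks a fingerprint with the store before uploading. The XOR keystream cipher and the in-memory dictionary standing in for Swift are illustrative assumptions, not the authors' actual scheme.

import hashlib

cloud_store = {}   # fingerprint -> ciphertext (toy stand-in for Swift)

def per_data_key(data: bytes) -> bytes:
    # Derive the encryption key from the data itself, so identical files
    # yield identical ciphertexts and can be deduplicated server-side.
    return hashlib.sha256(data).digest()

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Illustrative XOR keystream (SHA-256 in counter mode); a real
    # deployment would use a proper cipher.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

def upload(data: bytes) -> str:
    key = per_data_key(data)
    fp = hashlib.sha256(key).hexdigest()   # fingerprint sent to the cloud
    if fp not in cloud_store:              # client-side deduplication check
        cloud_store[fp] = toy_encrypt(key, data)
    return fp                              # owner keeps (fp, key) as metadata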
IEEE 2014: Adaptive Algorithm for Minimizing Cloud Task Length with Prediction Errors
Abstract—Compared to traditional distributed computing such as Grid systems, it is non-trivial to optimize a cloud task's execution performance due to additional constraints such as the user's payment budget and divisible resource demands. In this paper, we analyze in depth our proposed optimal algorithm for minimizing task execution length under divisible resources and a payment budget: (1) We derive the upper bound of cloud task length, taking into account both workload prediction errors and host load prediction errors. With such state-of-the-art bounds, the worst-case task execution performance is predictable, which in turn can improve the Quality of Service. (2) We design a dynamic version of the algorithm to adapt to load dynamics over the task execution progress, further improving resource utilization. (3) We rigorously build a cloud prototype over a real cluster environment with 56 virtual machines and evaluate our algorithm under different levels of resource contention. Cloud users in our system are able to compose various tasks based on off-the-shelf web services. Experiments show that task execution lengths under our algorithm are always close to their theoretical optimal values, even in a competitive situation with limited available resources. We also observe a high level of fairness in the resource allocation among all tasks.
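As a toy illustration of point (1), a worst-case bound follows from assuming the workload was under-predicted and the available capacity over-predicted by the stated relative errors. This closed form is our own simplification, not the paper's derivation.

def worst_case_task_length(workload_est, capacity_est, workload_err, load_err):
    # Pessimistic bound: inflate the workload and deflate the capacity
    # by their respective relative prediction errors.
    return (workload_est * (1.0 + workload_err)) / (capacity_est * (1.0 - load_err))

# A nominal 100 s task with 10% workload error and 20% host-load error:
# 100 * 1.1 / 0.8 = 137.5 s in the worst case.
print(worst_case_task_length(100.0, 1.0, 0.10, 0.20))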
IEEE 2014: An Efficient Information Retrieval Approach for Collaborative Cloud Computing
Project Price: Contact US
Abstract—Collaborative cloud computing (CCC), which is collaboratively supported by various organizations (Google, IBM, Amazon, Microsoft), offers a promising future for information retrieval. Human beings tend to keep things simple by moving the complex aspects to computing. As a consequence, we prefer to go to one or a limited number of sources for all our information needs. In the contemporary scenario, where information is replicated, modified (value added), and scattered geographically, retrieving information in a suitable form requires much more effort from the user and is thus difficult. For instance, we would like to go directly to the source of information and at the same time not be burdened with additional effort. This is where we can make use of neural-network-based learning systems that can intelligently decide and retrieve the information we need by going directly to its source. This also reduces single points of failure, eliminates bottlenecks in the path of information flow, reduces time delay, and provides a remarkable ability to cope with complicated traffic congestion patterns, making for an efficient information retrieval approach for collaborative cloud computing.
IEEE 2014: Building Confidential and Efficient Query Services in the Cloud with RASP Data Perturbation
Project Price: Contact US
Abstract—With the wide deployment of public cloud computing infrastructures, using clouds to host data query services has become an appealing solution for its advantages in scalability and cost saving. However, some data might be so sensitive that the data owner does not want to move them to the cloud unless data confidentiality and query privacy are guaranteed. On the other hand, a secured query service should still provide efficient query processing and significantly reduce the in-house workload to fully realize the benefits of cloud computing. We propose the RASP data perturbation method to provide secure and efficient range query and kNN query services for protected data in the cloud. The RASP data perturbation method combines order-preserving encryption, dimensionality expansion, random noise injection, and random projection to provide strong resilience to attacks on the perturbed data and queries. It also preserves multidimensional ranges, which allows existing indexing techniques to be applied to speed up range query processing. The kNN-R algorithm is designed to work with the RASP range query algorithm to process kNN queries. We have carefully analyzed the attacks on data and queries under a precisely defined threat model and realistic security assumptions. Extensive experiments have been conducted to show the advantages of this approach in efficiency and security.
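The perturbation step can be sketched in a few lines of NumPy; the order-preserving encryption component is omitted, and the matrix and noise choices are illustrative only.

import numpy as np

rng = np.random.default_rng(7)
d = 2                                    # original dimensionality
A = rng.random((d + 2, d + 2))           # secret matrix; redraw if singular
while abs(np.linalg.det(A)) < 1e-6:
    A = rng.random((d + 2, d + 2))

def perturb(x):
    # Dimensionality expansion: append a fixed 1 and a fresh random noise
    # component, then project everything with the secret matrix A.
    ext = np.concatenate([x, [1.0, rng.random()]])
    return A @ ext

p1 = perturb(np.array([3.0, 4.0]))
p2 = perturb(np.array([3.0, 4.0]))
print(np.allclose(p1, p2))               # False: equal points perturb differently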
IEEE 2014: Compatibility-aware Cloud Service Composition under Fuzzy Preferences of Users
Project Price: Contact US
Abstract—When a single Cloud service (i.e., a software image and a virtual machine), on its own, cannot satisfy all the user requirements, a composition of Cloud services is required. Cloud service composition, which includes several tasks such as discovery, compatibility checking, selection, and deployment, is a complex process, and users find it difficult to select the best composition among the hundreds, if not thousands, available. Service composition in the Cloud also raises new challenges caused by the diversity of users with different expertise who require their applications to be deployed across different geographical locations with distinct legal constraints. The main difficulty lies in selecting a combination of virtual appliances (software images) and infrastructure services that are compatible and satisfy a user with vague preferences. Therefore, we
present a framework and algorithms that simplify Cloud service composition for unskilled users. We develop an ontology-based approach to analyze Cloud service compatibility by applying reasoning to expert knowledge. In addition, to minimize the effort users spend expressing their preferences, we apply a combination of evolutionary algorithms and fuzzy logic for composition optimization. This lets users express their needs in linguistic terms, which is far more comfortable for them than systems that force users to assign exact weights to all preferences.
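A minimal sketch of the fuzzy-preference idea (the evolutionary search is omitted): linguistic terms map to fuzzy weights, and candidate compositions are scored against them. The term names and weights are assumptions for illustration.

FUZZY_WEIGHT = {"unimportant": 0.1, "somewhat important": 0.4,
                "important": 0.7, "very important": 1.0}

def score(composition, preferences):
    # composition: attribute -> normalized quality in [0, 1]
    # preferences: attribute -> linguistic term chosen by the user
    total = sum(FUZZY_WEIGHT[t] for t in preferences.values())
    return sum(FUZZY_WEIGHT[preferences[a]] * q
               for a, q in composition.items() if a in preferences) / total

prefs = {"price": "very important", "latency": "important"}
candidates = [{"price": 0.9, "latency": 0.5}, {"price": 0.6, "latency": 0.9}]
print(max(candidates, key=lambda c: score(c, prefs)))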
IEEE 2014: Consistency as a Service: Auditing Cloud Consistency
Project Price: Contact US
Abstract—Cloud storage services have become commercially popular due to their overwhelming advantages. To provide ubiquitous always-on access, a cloud service provider (CSP) maintains multiple replicas of each piece of data on geographically distributed servers. A key problem with using the replication technique in clouds is that it is very expensive to achieve strong consistency on a worldwide scale. In this paper, we first present a novel consistency as a service (CaaS) model, which consists of a large data cloud and multiple small audit clouds. In the CaaS model, a data cloud is maintained by a CSP, and a group of users that constitute an audit cloud can verify whether the data cloud provides the promised level of consistency or not. We propose a two-level auditing architecture, which only requires a loosely synchronized clock in the audit cloud. Then, we design algorithms to quantify the severity of violations with two metrics: the commonality of violations and the staleness of the value of a read. Finally, we devise a heuristic auditing strategy (HAS) to reveal as many violations as possible. Extensive experiments were performed using a combination of simulations and real cloud deployments to validate HAS.
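A toy version of the audit idea, assuming a log of loosely timestamped operations collected by the audit cloud; the tuple layout and the staleness measure are simplifications of the paper's two-level auditing and metrics.

def audit(log):
    # log entries: (timestamp, "W" or "R", key, value)
    last_write, violations = {}, []
    for t, op, key, value in sorted(log):
        if op == "W":
            last_write[key] = (t, value)
        elif key in last_write:
            wt, wv = last_write[key]
            if value != wv:                       # the read returned a stale value
                violations.append((key, t - wt))  # staleness of that read
    return violations

log = [(1, "W", "x", 1), (3, "W", "x", 2), (5, "R", "x", 1)]
print(audit(log))   # [('x', 2)]: one violation, two time units stale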
IEEE 2014: Data Similarity-Aware Computation Infrastructure for the Cloud
Project Price: Contact US
Abstract—The cloud is emerging as a platform for scalable and efficient services. To meet the needs of handling massive data and decreasing data migration, the computation infrastructure requires efficient data placement and proper management of cached data. In this paper, we propose an efficient and cost-effective multilevel caching scheme, called MERCURY, as the computation infrastructure of the cloud. The idea behind MERCURY is to explore and exploit data similarity and support efficient data placement. To accurately and efficiently capture data similarity, we leverage low-complexity locality-sensitive hashing (LSH). In our design, in addition to the problem of space inefficiency, we identify that a conventional LSH scheme also suffers from the problem of homogeneous data placement. To address these two problems, we design a novel multicore-enabled locality-sensitive hashing (MC-LSH) that accurately captures the differentiated similarity across data. The similarity-aware MERCURY hence partitions data into the L1 cache, L2 cache, and main memory based on their distinct localities, which helps optimize cache utilization and minimize pollution in the last-level cache. Besides extensive evaluation through simulations, we also implemented MERCURY in a system. Experimental results based on real-world applications and data sets demonstrate the efficiency and efficacy of our proposed schemes.
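A minimal sign-random-projection LSH sketch (a stand-in for the more elaborate MC-LSH): similar data items receive similar bit signatures and therefore land in nearby buckets.

import numpy as np

rng = np.random.default_rng(1)
PLANES = rng.standard_normal((8, 16))    # 8 random hyperplanes, 16-dim data

def lsh_signature(x):
    # Each bit records which side of a random hyperplane x falls on.
    return tuple((PLANES @ x > 0).astype(int))

a = rng.standard_normal(16)
b = a + 0.01 * rng.standard_normal(16)   # a near-duplicate data item
print(lsh_signature(a) == lsh_signature(b))   # almost always True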
IEEE 2014: Maximizing Revenue with Dynamic Cloud Pricing: The Infinite Horizon Case
Project Price: Contact US
Abstract—We study the infinite horizon dynamic pricing problem for an infrastructure cloud provider in the emerging cloud computing paradigm. The cloud provider, such as Amazon, provides computing capacity in the form of virtual instances and charges customers a time-varying price for the period they use the instances. The provider's problem is then to find an optimal pricing policy, in the face of stochastic demand arrivals and departures, so that the average expected revenue is maximized in the long run. We adopt a revenue management framework to tackle the problem. Optimality conditions and structural results are obtained for our stochastic formulation, which yield insights into the optimal pricing strategy. Numerical results verify our analysis and reveal additional properties of optimal pricing policies for the infinite horizon case.
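The flavor of the problem can be shown with a tiny discounted Markov decision process solved by value iteration; the paper's formulation targets long-run average revenue, and every number below is an illustrative assumption.

C, PRICES, GAMMA, MU = 4, [1.0, 2.0, 3.0], 0.95, 0.3

def arrival_prob(p):
    # Demand falls as the posted price rises (assumed linear form).
    return max(0.0, 1.0 - p / 4.0)

V = [0.0] * (C + 1)                 # value per number of busy instances
for _ in range(500):                # value iteration
    newV = []
    for n in range(C + 1):
        best = float("-inf")
        for p in PRICES:
            lam = arrival_prob(p) if n < C else 0.0
            dep = MU * n / C
            stay = 1.0 - lam - dep
            q = (lam * (p + GAMMA * V[min(n + 1, C)])
                 + dep * GAMMA * V[max(n - 1, 0)]
                 + stay * GAMMA * V[n])
            best = max(best, q)
        newV.append(best)
    V = newV
print([round(v, 1) for v in V])     # approximate state values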
IEEE 2014: Facilitating Document Annotation using Content and Querying Value
Project Price: Contact US
Abstract—A large number of organizations today generate and share textual descriptions of their products, services, and actions. Such collections of textual data contain a significant amount of structured information, which remains buried in the unstructured text. While information extraction algorithms facilitate the extraction of structured relations, they are often expensive and inaccurate, especially when operating on top of text that does not contain any instances of the targeted structured information. We present a novel alternative approach that facilitates the generation of structured metadata by identifying documents that are likely to contain information of interest, information that will subsequently be useful for querying the database. Our approach relies on the idea that humans are more likely to add the necessary metadata at creation time, if prompted by the interface, and that it is much easier for humans (and/or algorithms) to identify the metadata when such information actually exists in the document, instead of naively prompting users to fill in forms with information that is not available in the document. As a major contribution of this paper, we present algorithms that identify structured attributes that are likely to appear within a document by jointly utilizing the content of the text and the query workload. Our experimental evaluation shows that our approach generates superior results compared to approaches that rely only on the textual content or only on the query workload to identify attributes of interest.
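A minimal sketch of the joint content/query-workload scoring idea; the hint words, frequencies, and the linear blend are illustrative assumptions, not the paper's algorithms.

def rank_attributes(doc_text, content_hints, query_freq, alpha=0.5):
    scores, total_q = {}, sum(query_freq.values()) or 1
    for attr, hint_words in content_hints.items():
        content = sum(w in doc_text.lower() for w in hint_words) / len(hint_words)
        demand = query_freq.get(attr, 0) / total_q
        scores[attr] = alpha * content + (1 - alpha) * demand
    return sorted(scores.items(), key=lambda kv: -kv[1])

hints = {"price": ["$", "price", "cost"], "color": ["color", "red", "blue"]}
freq = {"price": 80, "color": 20}          # how often users query each attribute
print(rank_attributes("A red widget, cost $5.", hints, freq))  # price first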
IEEE 2014: E-MACs: Toward More Secure and More Efficient Constructions of Secure Channels
Project Price: Contact US
Abstract—In cryptography, secure channels enable the confidential and authenticated exchange of messages between authorized users. A generic approach to constructing such channels is to combine an encryption primitive with an authentication primitive (MAC). In this work, we introduce the design of a new cryptographic primitive to be used in the construction of secure channels. Instead of using general-purpose MACs, we propose the deployment of special-purpose MACs, named E-MACs. The main motivation behind this work is the observation that, since the message must be both encrypted and authenticated, there might be some redundancy in the computations performed by the two primitives. Removing such redundancy can therefore improve the efficiency of the overall composition. Moreover, computations performed by the encryption algorithm can be further utilized to improve the security of the authentication algorithm. In particular, we show how E-MACs can be designed to reduce the amount of computation required by standard MACs based on universal hash functions, and how E-MACs can be secured against key-recovery attacks.
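For contrast, here is the generic encrypt-then-MAC composition with a standard HMAC; E-MACs aim to remove redundant work from exactly this kind of pairing. The keystream cipher below is a toy stand-in, not a real encryption primitive.

import hashlib, hmac, os

def seal(enc_key, mac_key, plaintext):
    nonce = os.urandom(16)
    stream = hashlib.sha256(enc_key + nonce).digest()
    ct = bytes(b ^ stream[i % 32] for i, b in enumerate(plaintext))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()  # MAC the ciphertext
    return nonce, ct, tag

def unseal(enc_key, mac_key, nonce, ct, tag):
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):   # authenticate before decrypting
        raise ValueError("authentication failed")
    stream = hashlib.sha256(enc_key + nonce).digest()
    return bytes(b ^ stream[i % 32] for i, b in enumerate(ct))

n, c, t = seal(b"k1", b"k2", b"hello")
print(unseal(b"k1", b"k2", n, c, t))   # b'hello'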
IEEE 2014: Optimal Multicast Capacity and Delay Tradeoffs in MANETs
Project Price: Contact US
Abstract—In this paper, we give a global perspective on multicast capacity and delay analysis in Mobile Ad Hoc Networks (MANETs). Specifically, we consider four node mobility models: (1) two-dimensional i.i.d. mobility, (2) two-dimensional hybrid random walk, (3) one-dimensional i.i.d. mobility, and (4) one-dimensional hybrid random walk. Two mobility time-scales are investigated: (i) fast mobility, where node mobility is at the same time-scale as data transmissions; and (ii) slow mobility, where node mobility is assumed to occur at a much slower time-scale than data transmissions. Given a delay constraint D, we first characterize the optimal multicast capacity for each of the eight resulting mobility models, and then we develop a scheme that can achieve a capacity-delay tradeoff close to the upper bound up to a logarithmic factor. In addition, we study heterogeneous networks with infrastructure support.
IEEE 2014: Reliable Energy Efficient Routing Algorithms in Wireless Ad Hoc Networks
Project Price: Contact US
Abstract—Low Energy Adaptive Reliable Routing (LEARR) finds routes that require the least amount of energy for reliable packet transfer in ad hoc networks. It defines the energy cost of packet forwarding by a node as the fraction of remaining battery energy consumed by the node to forward a packet, including the energy consumed for retransmissions when the packet or its acknowledgment is lost. It is found that LEARR can effectively reduce the energy consumption of nodes and balance the traffic load among them. Furthermore, LEARR is able to find reliable routes whose constituent links require fewer packet retransmissions due to packet loss, which in turn decreases packet delivery latency and saves energy as well. To prolong the network lifetime, power management and energy-efficient routing techniques become necessary; energy-aware routing is an effective way to extend the operational lifetime of wireless ad hoc networks.
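A minimal sketch of the cost metric and route selection, assuming an expected-retransmission form of the link cost and plain Dijkstra; LEARR's exact cost definition may differ.

import heapq

def link_cost(tx_energy, battery_left, loss_prob):
    # Expected transmissions until success is 1/(1 - loss); the cost is the
    # fraction of remaining battery spent on them.
    return (tx_energy / (1.0 - loss_prob)) / battery_left

def min_energy_route(graph, src, dst):
    # graph: node -> list of (neighbor, cost); plain Dijkstra over link costs.
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, c in graph.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v], prev[v] = d + c, u
                heapq.heappush(pq, (d + c, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

g = {"A": [("B", link_cost(1, 100, 0.1)), ("C", link_cost(1, 50, 0.0))],
     "B": [("D", link_cost(1, 80, 0.2))],
     "C": [("D", link_cost(1, 40, 0.5))]}
print(min_energy_route(g, "A", "D"))   # ['A', 'B', 'D']: lossy C-D link avoided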
IEEE 2014: Automating the Integration of Clinical Studies into Medical Ontologies
Project Price: Contact US
Abstract—A popular approach to knowledge extraction from clinical databases is to first define an ontology (a formal specification of how to represent relationships among objects, concepts, and other entities belonging to a particular area of human experience or knowledge) of the concepts one wishes to model and, subsequently, use these concepts to test various hypotheses and make predictions about a person's future health and wellbeing. The challenge for medical experts is the time taken to map between their concepts/hypotheses and information contained within clinical studies. Presently, most of this work is performed manually. We have developed a method to generate links between risk factors in a medical ontology and the questions and result data in longitudinal studies. This can then be exploited to express complex queries based on domain concepts and to extract knowledge from external studies.
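A toy sketch of linking risk factors to study questions via token overlap; a real mapping would use richer lexical and ontological similarity, and all strings here are hypothetical.

def token_overlap(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)     # Jaccard similarity

risk_factors = ["high blood pressure", "tobacco smoking"]
questions = {"q31": "average blood pressure reading",
             "q12": "current tobacco or cigarette use"}

links = {rf: max(questions, key=lambda q: token_overlap(rf, questions[q]))
         for rf in risk_factors}
print(links)   # {'high blood pressure': 'q31', 'tobacco smoking': 'q12'}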
IEEE 2014: Building a Scalable System for Stealthy P2P-Botnet Detection
Project Price: Contact US
Abstract—Peer-to-peer (P2P) botnets have recently been adopted by botmasters for their resiliency against take-down efforts. Besides being harder to take down, modern botnets tend to be stealthier in the way they perform malicious activities, making current detection approaches ineffective. In addition, the rapidly growing volume of network traffic calls for high scalability in detection systems. In this paper, we propose a novel scalable botnet detection system capable of detecting stealthy P2P botnets. Our system first identifies all hosts that are likely engaged in P2P communications. It then derives statistical fingerprints to profile P2P traffic and further distinguish between P2P botnet traffic and legitimate P2P traffic. The parallelized computation with bounded complexity makes scalability a built-in feature of our system. Extensive evaluation has demonstrated both the high detection accuracy and the great scalability of the proposed system.
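One way to picture the statistical fingerprints: summarize each host's flows and flag hosts whose traffic is suspiciously regular. The features and thresholds below are illustrative assumptions, not the paper's fingerprints.

from statistics import mean, pstdev

def fingerprint(flows):
    # flows: list of (bytes_per_flow, inter_flow_gap_seconds) for one host
    sizes, gaps = [b for b, _ in flows], [g for _, g in flows]
    return (mean(sizes), pstdev(sizes), mean(gaps), pstdev(gaps))

def looks_botlike(fp, size_dev_max=50.0, gap_dev_max=5.0):
    # Bots often contact many peers with unusually uniform sizes and timing.
    return fp[1] < size_dev_max and fp[3] < gap_dev_max

host = [(512, 30.0), (515, 29.5), (510, 30.2), (514, 30.1)]
print(looks_botlike(fingerprint(host)))   # True: suspiciously regular traffic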
IEEE 2014: STARS: A Statistical Traffic Pattern Discovery System for Anonymous MANET Communications
Project Price: Contact US
Abstract—Anonymous MANET routing relies on techniques such as per-hop re-encryption to hide end-to-end communication relations. However, passive signal detectors and traffic analyzers can still retrieve sensitive information from the PHY and MAC layers to derive end-to-end communication relations through statistical traffic analysis. In this paper, we propose a Statistical Traffic pattern discovery System (STARS) based on eigenanalysis, which can greatly improve the accuracy of derived traffic patterns in MANETs. STARS aims to find out the sources and destinations of captured packets and to discover the end-to-end communication relations. The proposed approach is purely passive: it requires analyzers neither to be actively involved in MANET transmissions nor to possess encryption keys to decrypt traffic. We present theoretical models as well as extensive simulations to demonstrate our solutions.
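A crude illustration of deriving end-to-end relations from captured one-hop traffic: accumulate a traffic matrix and propagate it over multiple hops. This matrix-power heuristic only hints at the idea; the paper's statistical machinery is far more involved.

import numpy as np

# Captured one-hop packet counts (rows: transmitter, columns: receiver).
M = np.array([[0, 8, 2],
              [1, 0, 9],
              [5, 1, 0]], float)
P = M / M.sum(axis=1, keepdims=True)   # one-hop forwarding probabilities

E = P + P @ P                          # one- and two-hop reachability combined
print(E.round(2))   # row 0 peaks at column 2: node 0 -> node 2 is a likely flow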
IEEE 2014: Enabling Data Integrity Protection in Regenerating-Coding-Based Cloud Storage
Project Price: Contact US
Abstract—To protect outsourced data in cloud storage against corruptions, enabling integrity protection, fault tolerance, and efficient recovery for cloud storage becomes critical. Regenerating codes provide fault tolerance by striping data across multiple servers, while using less repair traffic than traditional erasure codes during failure recovery. We therefore study the problem of remotely checking the integrity of regenerating-coded data against corruptions under a real-life cloud storage setting. We design and implement a practical data integrity protection (DIP) scheme for a specific regenerating code, while preserving its intrinsic properties of fault tolerance and repair traffic saving. Our DIP scheme is designed under a Byzantine adversarial model, and it enables a client to feasibly verify the integrity of random subsets of outsourced data against general or malicious corruptions. It works under the simple assumption of thin-cloud storage and allows different parameters to be fine-tuned for the performance-security trade-off. We implement and evaluate the overhead of our DIP scheme in a real cloud storage test bed under different parameter choices. We demonstrate that remote integrity checking can be feasibly integrated into regenerating codes in practical deployment.
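The spot-checking idea can be sketched with per-block MACs and random sampling; this ignores the regenerating-code structure that the actual DIP scheme is careful to preserve.

import hashlib, hmac, random

SECRET = b"client-side verification key"      # known only to the client

def tag(index, block):
    return hmac.new(SECRET, index.to_bytes(4, "big") + block,
                    hashlib.sha256).digest()

blocks = [bytes([i]) * 64 for i in range(100)]   # data before outsourcing
tags = [tag(i, b) for i, b in enumerate(blocks)] # client retains the tags

def spot_check(server_blocks, sample_size=10):
    # Verifying a random subset catches corruption of a constant fraction
    # of blocks with high probability, without reading everything.
    return all(hmac.compare_digest(tags[i], tag(i, server_blocks[i]))
               for i in random.sample(range(len(tags)), sample_size))

print(spot_check(blocks))   # True while the remote copy is intact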
IEEE 2014: Key-Aggregate Cryptosystem for Scalable Data Sharing in Cloud Storage
Project Price: Contact US
Abstract—Data sharing is an important functionality in cloud storage. In this article, we show how to securely, efficiently, and flexibly share data with others in cloud storage. We describe new public-key cryptosystems that produce constant-size ciphertexts such that efficient delegation of decryption rights for any set of ciphertexts is possible. The novelty is that one can aggregate any set of secret keys and make them as compact as a single key that nonetheless encompasses the power of all the keys being aggregated. In other words, the secret key holder can release a constant-size aggregate key for flexible choices of a ciphertext set in cloud storage, while the other encrypted files outside the set remain confidential. This compact aggregate key can be conveniently sent to others or stored in a smart card with very limited secure storage. We provide a formal security analysis of our schemes in the standard model. We also describe other applications of our schemes. In particular, our schemes give the first public-key patient-controlled encryption for flexible hierarchy, which was yet to be known.
IEEE 2014: Low-Carbon Routing Algorithms for Cloud Computing Services in IP-over-WDM Networks
Project Price: Contact US
Abstract—Energy consumption in telecommunication networks keeps growing rapidly, mainly due to the emergence of new Cloud Computing (CC) services that need to be supported by large data centers, which consume a huge amount of energy and, in turn, cause the emission of enormous quantities of CO2. Given the decreasing availability of fossil fuels and the rising concern about global warming, research is now focusing on novel "low-carbon" telecom solutions. For example, based on today's telecom technologies, data centers can be located near renewable energy plants, and data can then be effectively transferred to these locations via reconfigurable optical networks, based on the principle that data can be moved more efficiently than electricity. This paper focuses on how to dynamically route on-demand optical circuits that are established to transfer energy-intensive data processing towards data centers powered with renewable energy. Our main contribution consists in devising two routing algorithms for connections supporting CC services, aimed at minimizing the CO2 emissions of data centers by following the current availability of renewable energy (sun and wind). The trade-off with the energy consumption of the transport equipment is also considered. The results show that relevant reductions in CO2 emissions, up to about 30%, can be achieved using our approaches compared to baseline shortest-path-based routing strategies, paying only a marginal increase in network blocking probability.
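A toy version of the destination-selection step: weigh each data center's current renewable supply against the transport energy of reaching it. The numbers and the linear trade-off are illustrative assumptions.

datacenters = {"dc_solar": {"green_kw": 120, "hops": 5},
               "dc_wind":  {"green_kw": 80,  "hops": 2},
               "dc_grid":  {"green_kw": 0,   "hops": 1}}
TRANSPORT_KW_PER_HOP = 10   # energy cost of the optical transport path

def choose(dcs):
    # Prefer the data center whose renewable surplus best outweighs the
    # energy spent moving the data to it.
    return max(dcs, key=lambda d: dcs[d]["green_kw"]
               - TRANSPORT_KW_PER_HOP * dcs[d]["hops"])

print(choose(datacenters))   # dc_solar: 120 - 50 beats 80 - 20 and 0 - 10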
IEEE 2014: A
Novel Economic Sharing Model in a Federation of Selfish Cloud Providers
Project Price: Contact US
Abstract—This paper presents a novel economic model to regulate capacity sharing in a federation of hybrid cloud providers (CPs). The proposed work models the interactions among the CPs as a repeated game among selfish players that aim at maximizing their profit by selling their unused capacity in the spot market but are uncertain of future workload fluctuations. The proposed work first establishes that the uncertainty in future revenue can act as a participation incentive for sharing in the repeated game. We then demonstrate how an efficient sharing strategy can be obtained by solving a simple dynamic programming problem. The obtained strategy is a simple update rule that depends only on the current workloads and a single variable summarizing past interactions. In contrast to existing approaches, the model incorporates historical and expected future revenue as part of the virtual machine (VM) sharing decision. Moreover, these decisions are enforced neither by a centralized broker nor by predefined agreements. Rather, the proposed model employs a simple grim trigger strategy, in which a CP is threatened by the elimination of future VM hosting by other CPs. Simulation results demonstrate the performance of the proposed model in terms of the increased profit and the reduction in the variance of spot market VM availability and prices.
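The grim trigger strategy itself is simple to state in code: cooperate until the first observed defection, then punish forever.

def grim_trigger(history):
    # history: one tuple per past round with every provider's cooperation flag.
    # Share spare VMs only if nobody has ever defected.
    return "share" if all(all(round_) for round_ in history) else "refuse"

print(grim_trigger([(True, True), (True, True)]))    # share
print(grim_trigger([(True, True), (True, False)]))   # refuse, forever after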
IEEE 2014: NCCloud: A Network-Coding-Based Storage System in a Cloud-of-Clouds
Project Price: Contact US
Abstract—To provide fault tolerance for cloud storage, recent studies propose striping data across multiple cloud vendors. However, if a cloud suffers a permanent failure and loses all its data, we need to repair the lost data with the help of the other surviving clouds to preserve data redundancy. We present a proxy-based storage system for fault-tolerant multiple-cloud storage called NCCloud, which achieves cost-effective repair after a permanent single-cloud failure. NCCloud is built on top of a network-coding-based storage scheme called functional minimum-storage regenerating (FMSR) codes, which maintain the same fault tolerance and data redundancy as traditional erasure codes (e.g., RAID-6) but use less repair traffic and hence incur less monetary cost for data transfer. One key design feature of our FMSR codes is that we relax the encoding requirement on storage nodes during repair, while preserving the benefits of network coding in repair. We implement a proof-of-concept prototype of NCCloud and deploy it atop both local and commercial clouds. We validate that FMSR codes provide significant monetary cost savings in repair over RAID-6 codes, while having comparable response time performance in normal cloud storage operations such as upload/download.
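For intuition, here is single-parity striping across three clouds, a much simpler cousin of FMSR codes (which tolerate more failures and cut repair traffic via network coding).

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def stripe(chunk):
    half = len(chunk) // 2
    a, b = chunk[:half], chunk[half:]
    return {"cloud0": a, "cloud1": b, "cloud2": xor(a, b)}   # parity share

def repair(shares, lost):
    # Any single lost share is the XOR of the two survivors.
    rest = [v for k, v in shares.items() if k != lost]
    return xor(rest[0], rest[1])

shares = stripe(bytes(range(256)) * 16)        # a 4 KiB chunk
assert repair(shares, "cloud0") == shares["cloud0"]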
IEEE 2014: Integrity Verification in Multi-Cloud Storage Using Cooperative Provable Data Possession
Abstract—Storage outsourcing in cloud computing is a rising trend that prompts a number of interesting security issues. Provable data possession (PDP) is a method for ensuring the integrity of data in storage outsourcing. This research addresses the construction of an efficient PDP mechanism, called Cooperative PDP (CPDP), for distributed cloud storage that supports data migration and scalability of service, considering the existence of multiple cloud service providers that collaboratively store and maintain clients' data. The CPDP mechanism is based on homomorphic verifiable responses, a hash index hierarchy for dynamic scalability, and cryptographic encryption for security. Moreover, the security of the scheme is proved based on a multi-prover zero-knowledge proof system, which satisfies the knowledge soundness, completeness, and zero-knowledge properties. The approach incurs lower computation and communication overheads in comparison with non-cooperative approaches.
IEEE 2014: Optimal Power Allocation and Load Distribution for Multiple Heterogeneous Multicore Server Processors across Clouds and Data Centers
Project Price: Contact US
Abstract—For multiple heterogeneous multicore server processors across clouds and data centers, the aggregated performance of the cloud of clouds can be optimized by load distribution and balancing. Energy efficiency is one of the most important issues for large-scale server systems in current and future data centers. Multicore processor technology provides new levels of performance and energy efficiency. This paper aims to develop power- and performance-constrained load distribution methods for cloud computing in current and future large-scale data centers. In particular, we address the problem of optimal power allocation and load distribution for multiple heterogeneous multicore server processors across clouds and data centers. Our strategy is to formulate optimal power allocation and load distribution for multiple servers in a cloud of clouds as optimization problems, i.e., power-constrained performance optimization and performance-constrained power optimization. Our research problems in large-scale data centers are well-defined multivariable optimization problems, which explore the power-performance tradeoff by fixing one factor and minimizing the other from the perspective of optimal load distribution. Such power and performance optimization is clearly important for a cloud computing provider seeking to efficiently utilize all available resources. We model a multicore server processor as a queuing system with multiple servers. Our optimization problems are solved for two different models of core speed: one assumes that a core runs at zero speed when it is idle, and the other assumes that a core runs at a constant speed. Our results provide new theoretical insights into power management and performance optimization in data centers.
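A toy version of the power-constrained side of the problem: grid-search the power split between two heterogeneous servers under a cube-law speed model and M/M/1 response times. Both modeling choices are illustrative assumptions.

BUDGET, LOAD1, LOAD2 = 10.0, 0.8, 0.5   # watts, arrivals/sec per server

def response_time(power, load, eff):
    speed = eff * power ** (1.0 / 3.0)  # dynamic power grows ~ speed^3
    return float("inf") if speed <= load else 1.0 / (speed - load)

best = min(((p, response_time(p, LOAD1, 1.0)
             + response_time(BUDGET - p, LOAD2, 0.8))
            for p in [i / 10 for i in range(1, 100)]),
           key=lambda t: t[1])
print(f"give server 1 {best[0]:.1f} W of the {BUDGET} W budget")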
IEEE 2014: Oruta: Privacy-Preserving Public Auditing for Shared Data in the Cloud
Project Price: Contact US
Abstract—With cloud storage services, it is commonplace for data to be not only stored in the cloud but also shared across multiple users. However, public auditing of such shared data, while preserving identity privacy, remains an open challenge. In this paper, we propose the first privacy-preserving mechanism that allows public auditing of shared data stored in the cloud. In particular, we exploit ring signatures to compute the verification information needed to audit the integrity of shared data. With our mechanism, the identity of the signer of each block in the shared data is kept private from a third-party auditor (TPA), who is still able to verify the integrity of the shared data without retrieving the entire file. Our experimental results demonstrate the effectiveness and efficiency of our proposed mechanism when auditing shared data.
IEEE 2014: Towards Differential Query Services in Cost-Efficient Clouds
Project Price: Contact US
Abstract—Cloud computing, as an emerging technology trend, is expected to reshape the advances in information technology. In a cost-efficient cloud environment, a user can tolerate a certain degree of delay while retrieving information from the cloud in order to reduce costs. In this paper, we address two fundamental issues in such an environment: privacy and efficiency. We first review a private keyword-based file retrieval scheme originally proposed by Ostrovsky, which allows a user to retrieve files of interest from an untrusted server without leaking any information. Its main drawback is the heavy querying overhead incurred on the cloud, which goes against the original intention of cost efficiency. We therefore present three efficient information retrieval for ranked query (EIRQ) schemes to reduce the querying overhead incurred on the cloud. In EIRQ, queries are classified into multiple ranks, where a higher-ranked query can retrieve a higher percentage of matched files. A user can retrieve files on demand by choosing queries of different ranks. This feature is useful when there are a large number of matched files but the user only needs a small subset of them. Under different parameter settings, extensive evaluations have been conducted on both analytical models and a real cloud environment to examine the effectiveness of our schemes.
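The rank idea reduces to capping the fraction of matched files actually returned; a minimal sketch, with an assumed rank-to-percentage table.

RANK_PERCENT = {0: 1.00, 1: 0.50, 2: 0.25}   # illustrative rank table

def retrieve(matched_files, rank):
    # matched_files: list of (relevance_score, file_id), higher is better.
    matched_files.sort(reverse=True)
    keep = max(1, int(len(matched_files) * RANK_PERCENT[rank]))
    return [fid for _, fid in matched_files[:keep]]

files = [(0.9, "a"), (0.4, "b"), (0.7, "c"), (0.2, "d")]
print(retrieve(files, 1))   # rank 1 returns the top 50%: ['a', 'c']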
IEEE 2014: Scalable Distributed Service Integrity Attestation for Software-as-a-Service Clouds
Project Price: Contact US
Abstract—Software-as-a-Service (SaaS) cloud systems enable application service providers to deliver their applications via massive cloud computing infrastructures. However, due to their sharing nature, SaaS clouds are vulnerable to malicious attacks. In this paper, we present IntTest, a scalable and effective service integrity attestation framework for SaaS clouds. IntTest provides a novel integrated attestation graph analysis scheme that offers stronger attacker pinpointing power than previous schemes. Moreover, IntTest can automatically enhance result quality by replacing bad results produced by malicious attackers with good results produced by benign service providers. We have implemented a prototype of the IntTest system and tested it on a production cloud computing infrastructure using IBM System S stream processing applications. Our experimental results show that IntTest can achieve higher attacker pinpointing accuracy than existing approaches. IntTest does not require any special hardware or secure kernel support and imposes little performance impact on the application, which makes it practical for large-scale cloud systems.
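The simplest form of the pinpointing idea is majority voting over replayed requests; IntTest's attestation-graph analysis generalizes this considerably.

from collections import Counter

def pinpoint(results):
    # results: provider -> answer for the same replayed service request.
    consensus, _ = Counter(results.values()).most_common(1)[0]
    suspects = [p for p, r in results.items() if r != consensus]
    return consensus, suspects

print(pinpoint({"svc1": 42, "svc2": 42, "svc3": 41}))   # (42, ['svc3'])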
IEEE 2014: QoS-Aware Data Replication for Data-Intensive Applications in Cloud Computing Systems
Project Price: Contact US
Abstract—Cloud computing provides scalable computing and storage resources, and more and more data-intensive applications are developed in this computing environment. Different applications have different quality-of-service (QoS) requirements. To continuously support the QoS requirement of an application after data corruption, we propose two QoS-aware data replication (QADR) algorithms for cloud computing systems. The first algorithm adopts the intuitive idea of high-QoS first-replication (HQFR) to perform data replication. However, this greedy algorithm cannot minimize the data replication cost and the number of QoS-violated data replicas. To achieve these two minimum objectives, the second algorithm transforms the QADR problem into the well-known minimum-cost maximum-flow (MCMF) problem. By applying an existing MCMF algorithm, the second algorithm produces the optimal solution to the QADR problem in polynomial time, but it takes more computation time than the first algorithm. Moreover, since a cloud computing system usually has a large number of nodes, we also propose node combination techniques to reduce the possibly large data replication time. Finally, simulation experiments demonstrate the effectiveness of the proposed algorithms in data replication and recovery.
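The MCMF reduction can be tried directly with the networkx library (assumed available): blocks draw flow from a source, nodes forward it to a sink, and edge weights encode QoS penalties. The tiny instance below is illustrative.

import networkx as nx

G = nx.DiGraph()
G.add_edge("s", "blk1", capacity=2)      # the block needs two replicas
for node, penalty, slots in [("n1", 0, 1), ("n2", 1, 1), ("n3", 5, 2)]:
    G.add_edge("blk1", node, capacity=1, weight=penalty)  # QoS penalty
    G.add_edge(node, "t", capacity=slots)                 # node capacity

flow = nx.max_flow_min_cost(G, "s", "t")
print([n for n in ("n1", "n2", "n3") if flow["blk1"][n]])  # ['n1', 'n2']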
IEEE 2014: Public Auditing for Shared Data with Efficient User Revocation in the Cloud
Project Price: Contact US
Abstract—With data services in the cloud, users can easily modify and share data as a group. To ensure that data integrity can be audited publicly, users need to compute signatures on all the blocks in the shared data. Different blocks are signed by different users due to modifications performed by different users. For security reasons, once a user is revoked from the group, the blocks previously signed by this revoked user must be re-signed by an existing user. The straightforward method, which lets an existing user download the corresponding part of the shared data and re-sign it during user revocation, is inefficient due to the large size of the shared data in the cloud. In this paper, we propose a novel public auditing mechanism for the integrity of shared data that supports efficient user revocation. By utilizing proxy re-signatures, we allow the cloud to re-sign blocks on behalf of existing users during user revocation, so that existing users do not need to download and re-sign blocks themselves. In addition, a public verifier is always able to audit the integrity of the shared data without retrieving the entire data from the cloud, even if some part of the shared data has been re-signed by the cloud. Experimental results show that our mechanism can significantly improve the efficiency of user revocation.
IEEE 2014: Ensuring Integrity Proof in Hierarchical Attribute Encryption Scheme using Cloud Computing
Project Price: Contact US
Abstract—It has been widely observed that cloud computing has become one of the major paradigms in the IT industry. Data owners release the burden of storing and maintaining data locally by storing it in the cloud. Cloud storage moves the owner's data to large, remotely located data centers over which the data owner has no control. However, this unique feature of the cloud poses many new security challenges. One of the important concerns that needs to be addressed is access control of outsourced data in the cloud. A number of schemes have been proposed to achieve access control of outsourced data, such as hierarchical attribute-set-based encryption (HASBE), which extends ciphertext-policy attribute-set-based encryption (CP-ABE). Even though the HASBE scheme achieves scalability, flexibility, and fine-grained access control, it fails to prove data integrity in the cloud. The fact that owners no longer have physical possession of their data means they face a potentially formidable risk of missing or corrupted data, because the cloud service provider may modify or delete data in the cloud without the knowledge or permission of the data owner. Hence, to avoid this security risk, in this paper we propose a method that provides a data integrity proof for the HASBE scheme. Data integrity refers to maintaining and assuring the accuracy and consistency of data over its entire life-cycle.
IEEE 2014: Privacy-Preserving Multi-Keyword Ranked Search over Encrypted Cloud Data
Project Price: Contact US
Abstract—With the advent of cloud computing, data owners are motivated to outsource their complex data management systems from local sites to the commercial public cloud for great flexibility and economic savings. But to protect data privacy, sensitive data have to be encrypted before outsourcing, which obsoletes traditional data utilization based on plaintext keyword search. Thus, enabling an encrypted cloud data search service is of paramount importance. Considering the large number of data users and documents in the cloud, it is necessary to allow multiple keywords in a search request and to return documents in the order of their relevance to these keywords. Related works on searchable encryption focus on single-keyword search or Boolean keyword search, and rarely sort the search results. In this paper, for the first time, we define and solve the challenging problem of privacy-preserving multi-keyword ranked search over encrypted data in cloud computing (MRSE). We establish a set of strict privacy requirements for such a secure cloud data utilization system. Among various multi-keyword semantics, we choose the efficient similarity measure of “coordinate matching,” i.e., as many matches as possible, to capture the relevance of data documents to the search query. We further use “inner product similarity” to quantitatively evaluate this similarity measure. We first propose a basic idea for MRSE based on secure inner product computation, and then give two significantly improved MRSE schemes to achieve various stringent privacy requirements in two different threat models. To improve the search experience of the data search service, we further extend these two schemes to support more search semantics. A thorough analysis of the privacy and efficiency guarantees of the proposed schemes is given. Experiments on a real-world data set further show that the proposed schemes indeed introduce low overhead in computation and communication.
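Coordinate matching in the clear is just an inner product of 0/1 keyword vectors; MRSE performs the same computation over encrypted vectors.

KEYWORDS = ["cloud", "privacy", "search", "rank"]   # index vocabulary

def keyword_vector(words):
    return [1 if k in words else 0 for k in KEYWORDS]

def coordinate_match(doc_words, query_words):
    # Number of query keywords the document contains, as an inner product.
    return sum(d * q for d, q in
               zip(keyword_vector(doc_words), keyword_vector(query_words)))

docs = {"d1": {"cloud", "privacy"}, "d2": {"cloud", "search", "rank"}}
query = {"cloud", "rank"}
print(sorted(docs, key=lambda d: -coordinate_match(docs[d], query)))  # d2 first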
IEEE 2014: Proactive
Workload Management in Hybrid Cloud Computing
Project Price: Contact US
Abstract—The hindrances to the adoption of public cloud computing services include service reliability, data security and privacy, regulatory compliance requirements, and so on. To address these concerns, we propose a hybrid cloud computing model that users may adopt as a viable and cost-saving methodology to make the best use of public cloud services along with their privately owned (legacy) data centers. As the core of this hybrid cloud computing model, an intelligent workload factoring service is designed for proactive workload management. It enables federation between on- and off-premise infrastructures for hosting Internet-based applications, and the intelligence lies in the explicit segregation of base workload and flash crowd workload, the two naturally different components composing the application workload. The core technology of the intelligent workload factoring service is a fast frequent data item detection algorithm, which enables factoring incoming requests not only on volume but also on data content, under changing application data popularity. Through analysis and extensive evaluation with real-trace-driven simulations and experiments on a hybrid test bed consisting of a local computing platform and the Amazon cloud service platform, we show that the proactive workload management technology can enable reliable workload prediction in the base workload zone (with simple statistical methods), achieve resource efficiency (e.g., 78% higher server capacity than in the base workload zone) and reduce data cache/replication overhead (by up to two orders of magnitude) in the flash crowd workload zone, and react fast (with an X2 speed-up factor) to changing application data popularity upon the arrival of load spikes.
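The abstract's "fast frequent data item detection algorithm" is not specified here, so the sketch below uses the classic Misra-Gries heavy-hitter summary as a stand-in; the paper's own algorithm may differ. Items whose frequency exceeds n/k are guaranteed to survive in the summary.

# A stand-in sketch of frequent data item detection over a request stream,
# using Misra-Gries; k and the stream are illustrative.
def misra_gries(stream, k):
    """Track up to k-1 candidate heavy hitters over a request stream."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # Decrement all counters; drop those reaching zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

requests = ["a", "b", "a", "c", "a", "a", "d", "a", "b", "a"]
print(misra_gries(requests, k=3))  # "a" dominates the stream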
IEEE 2013: Vampire Attacks: Draining Life from Wireless
Ad Hoc Sensor Networks
Project Price: Contact US
Abstract—Ad hoc low-power wireless networks are an exciting research direction in sensing and pervasive computing. Prior security work in this area has focused primarily on denial of communication at the routing or medium access control levels. This paper explores resource depletion attacks at the routing protocol layer, which permanently disable networks by quickly draining nodes' battery power. These "Vampire" attacks are not specific to any one protocol, but rather rely on the properties of many popular classes of routing protocols. We find that all examined protocols are susceptible to Vampire attacks, which are devastating, difficult to detect, and easy to carry out using as few as one malicious insider sending only protocol-compliant messages. In the worst case, a single Vampire can increase network-wide energy usage by a factor of O(N), where N is the number of network nodes. We discuss methods to mitigate these types of attacks, including a new proof-of-concept protocol that provably bounds the damage caused by Vampires during the packet forwarding phase.
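To make the O(N) claim concrete, here is a toy illustration assuming unit transmission energy per hop: a looping, carousel-style route crafted by a malicious source visits every node, so one packet costs on the order of N transmissions instead of one. The route construction is a hypothetical adversarial example, not a trace of any specific protocol.

# Toy energy accounting for an honest route vs. a looping adversarial route.
def route_energy(route):
    """Total network energy = number of transmissions = hops traversed."""
    return len(route) - 1

n = 10
nodes = list(range(n))
honest = [0, 9]              # direct path: 1 transmission
malicious = nodes + [9]      # packet dragged through all N nodes first
print(route_energy(honest))     # 1
print(route_energy(malicious))  # 10: amplification on the order of N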
IEEE 2013: SPOC: A Secure and Privacy-preserving Opportunistic Computing Framework for Mobile-Healthcare Emergency
Project Price: Contact US
Abstract—With the pervasiveness of smart phones and the advance of wireless body sensor networks (BSNs), mobile healthcare (m-Healthcare), which extends the operation of healthcare providers into a pervasive environment for better health monitoring, has attracted considerable interest recently. However, the flourishing of m-Healthcare still faces many challenges, including information security and privacy preservation. In this paper, we propose a secure and privacy-preserving opportunistic computing framework, called SPOC, for m-Healthcare emergency. With SPOC, smartphone resources, including computing power and energy, can be opportunistically gathered to process the computing-intensive personal health information (PHI) during an m-Healthcare emergency with minimal privacy disclosure. Specifically, to balance PHI privacy disclosure against the high reliability of PHI processing and transmission in an m-Healthcare emergency, we introduce an efficient user-centric privacy access control into the SPOC framework, which is based on attribute-based access control and a new privacy-preserving scalar product computation (PPSPC) technique, and allows a medical user to decide who can participate in the opportunistic computing to assist in processing his overwhelming PHI data. Detailed security analysis shows that the proposed SPOC framework can efficiently achieve user-centric privacy access control in m-Healthcare emergency. In addition, performance evaluations via extensive simulations demonstrate SPOC's effectiveness in terms of providing highly reliable PHI processing and transmission while minimizing privacy disclosure during an m-Healthcare emergency.
IEEE 2013: A Load Balancing Model Based on Cloud Partitioning for the Public Cloud
Project Price: Contact US
Abstract: Load balancing in the cloud computing environment has an important impact on performance. Good load balancing makes cloud computing more efficient and improves user satisfaction. This article introduces a better load balancing model for the public cloud based on the cloud partitioning concept, with a switch mechanism to choose different strategies for different situations. The algorithm applies game theory to the load balancing strategy to improve the efficiency in the public cloud environment.
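The following minimal sketch shows the partition "switch mechanism" idea: each cloud partition picks a dispatch strategy from its load status. The thresholds and the two strategies are illustrative assumptions, not the paper's exact rules.

# Route a job to an idle partition if one exists, otherwise to the
# least-loaded non-overloaded partition. Thresholds are illustrative.
IDLE, NORMAL = 0.2, 0.8

def partition_status(loads):
    avg = sum(loads) / len(loads)
    if avg < IDLE:
        return "idle"
    return "normal" if avg < NORMAL else "overloaded"

def assign(job, partitions):
    for name, loads in partitions.items():
        if partition_status(loads) == "idle":
            return name
    candidates = {n: l for n, l in partitions.items()
                  if partition_status(l) == "normal"}
    if not candidates:               # all overloaded: fall back to least loaded
        candidates = partitions
    return min(candidates, key=lambda n: sum(candidates[n]) / len(candidates[n]))

partitions = {"p1": [0.9, 0.85], "p2": [0.5, 0.4], "p3": [0.1, 0.15]}
print(assign("job-42", partitions))  # "p3" is idle, so it receives the job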
IEEE 2013: Optimal Multicast Capacity and Delay Tradeoffs in MANETs
Project Price: Contact US
In this paper, we give a global perspective of multicast capacity and delay analysis in Mobile Ad Hoc Networks (MANETs). Specifically, we consider four node mobility models: two-dimensional i.i.d. mobility, two-dimensional hybrid random walk, one-dimensional i.i.d. mobility, and one-dimensional hybrid random walk. Two mobility time-scales are investigated in this paper: fast mobility, where node mobility is at the same time-scale as data transmissions; and slow mobility, where node mobility is assumed to occur at a much slower time-scale than data transmissions. Given a delay constraint D, we first characterize the optimal multicast capacity for each of the eight types of mobility models, and then we develop a scheme that can achieve a capacity-delay tradeoff close to the upper bound up to a logarithmic factor. In addition, we also study heterogeneous networks with infrastructure support.
IEEE 2013: EMAP: Expedite Message Authentication Protocol for Vehicular Ad Hoc Networks
Project Price: Contact US
Abstract—Vehicular Ad Hoc Networks (VANETs) adopt the Public Key
Infrastructure (PKI) and Certificate Revocation Lists (CRLs) for their
security. In any PKI system, the authentication of a received message is
performed by checking if the certificate of the sender is included in the
current CRL, and verifying the authenticity of the certificate and signature of
the sender. In this paper, we propose an Expedite Message Authentication
Protocol (EMAP) for VANETs, which replaces the time-consuming CRL checking
process by an efficient revocation checking process. The revocation check
process in EMAP uses a keyed Hash Message Authentication Code (HMAC), where the
key used in calculating the HMAC is shared only between non-revoked On-Board
Units (OBUs). In addition, EMAP uses a novel probabilistic key distribution,
which enables non-revoked OBUs to securely share and update a secret key. EMAP
can significantly decrease the message loss ratio due to the message
verification delay compared with the conventional authentication methods
employing CRL. By conducting security analysis and performance evaluation, EMAP
is demonstrated to be secure and efficient.
Index Terms - Vehicular networks, communication security, message authentication, certificate revocation.
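A minimal sketch of EMAP's HMAC-based revocation check follows, assuming the secret key is already shared among non-revoked OBUs; the message fields and the key distribution step are simplified placeholders, not the paper's full protocol.

# HMAC revocation check: a valid tag implies the sender holds the group
# key, i.e., has not been revoked. Key management is simplified away.
import hmac, hashlib

GROUP_KEY = b"shared-only-by-non-revoked-OBUs"   # illustrative key

def sign_revocation_check(message: bytes) -> str:
    """Sender attaches an HMAC computable only with the non-revoked key."""
    return hmac.new(GROUP_KEY, message, hashlib.sha256).hexdigest()

def verify_revocation_check(message: bytes, tag: str) -> bool:
    """Receiver recomputes the HMAC instead of scanning a CRL."""
    expected = hmac.new(GROUP_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

msg = b"safety beacon: position, speed, timestamp"
tag = sign_revocation_check(msg)
print(verify_revocation_check(msg, tag))        # True for a non-revoked sender
print(verify_revocation_check(b"forged", tag))  # False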
IEEE 2013: Community-Aware Opportunistic Routing in Mobile Social Networks
Project Price: Contact US
Abstract—Mobile
social networks (MSNs) are a kind of delay tolerant network consisting of many mobile nodes with social characteristics. Recently, many social-aware
algorithms have been proposed to address routing problems in MSNs. However, these
algorithms tend to forward messages to the nodes with locally optimal social
characteristics, and thus cannot achieve the optimal performance. In this
paper, we propose a distributed optimal Community-Aware Opportunistic Routing
(CAOR) algorithm. Our main contributions are that we propose a home-aware
community model, whereby we turn an MSN into a network that only includes
community homes. We prove that, in the network of community homes, we still can
compute the minimum expected delivery delays of nodes through a reverse
Dijkstra algorithm and achieve the optimal opportunistic routing performance.
Since the number of communities is far smaller than the number of nodes, the computational cost and the maintenance cost for contact information are greatly reduced. We demonstrate that our algorithm significantly outperforms previous ones through extensive simulations, based on a real MSN trace and a synthetic MSN trace.
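The reverse Dijkstra step the abstract mentions can be sketched as follows: minimum expected delivery delays to a destination community home are computed by relaxing edges backwards from the destination. The edge weights (expected inter-contact delays between homes) and the tiny graph are illustrative.

# Reverse Dijkstra over a graph of community homes.
import heapq

def reverse_dijkstra(graph, dest):
    """graph[u] = [(v, delay), ...]; returns min expected delay from each home to dest."""
    rev = {u: [] for u in graph}          # build the reversed adjacency list
    for u, edges in graph.items():
        for v, w in edges:
            rev[v].append((u, w))
    dist = {u: float("inf") for u in graph}
    dist[dest] = 0.0
    heap = [(0.0, dest)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in rev[u]:               # v can reach u with expected delay w
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

homes = {"H1": [("H2", 2.0)], "H2": [("H3", 1.5)], "H3": []}
print(reverse_dijkstra(homes, "H3"))  # {'H1': 3.5, 'H2': 1.5, 'H3': 0.0}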
IEEE 2013: Redundancy Management of Multipath Routing for Intrusion Tolerance in Heterogeneous Wireless Sensor Networks
Project Price: Contact US
Abstract—In this
paper we propose redundancy management of heterogeneous wireless sensor
networks (HWSNs), utilizing multipath routing to answer user queries in the
presence of unreliable and malicious nodes. The key concept of our redundancy management is to exploit the tradeoff between energy consumption and the gain in reliability, timeliness, and security so as to maximize the system's useful
lifetime. We formulate the tradeoff as an optimization problem for dynamically
determining the best redundancy level to apply to multipath routing for
intrusion tolerance so that the query response success probability is maximized
while prolonging the useful lifetime.
Furthermore, we consider this optimization problem for the case in which
a voting-based distributed intrusion detection algorithm is applied to detect
and evict malicious nodes in a HWSN. We develop a novel probability model to
analyze the best redundancy level in terms of path redundancy and source
redundancy, as well as the best intrusion detection settings in terms of the
number of voters and the intrusion invocation interval under which the lifetime
of a HWSN is maximized. We then apply the analysis results obtained to the
design of a dynamic redundancy management algorithm to identify and apply the
best design parameter settings at runtime in response to environment changes,
to maximize the HWSN lifetime.
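In the spirit of the optimization above, the sketch below picks the best multipath redundancy level by trading energy against query reliability. The per-path success probability, the energy budget, the requirement that at least two path copies arrive (so replies can be cross-checked against a malicious path), and the cost model are all illustrative assumptions, not the paper's probability model.

# Grid search over redundancy levels: more paths raise reliability but
# drain the energy budget faster, so an interior optimum appears.
from math import comb

def p_success(m, p=0.7, k=2):
    """P(at least k of m paths deliver); k copies let the sink cross-check."""
    return sum(comb(m, i) * p**i * (1 - p)**(m - i) for i in range(k, m + 1))

def queries_delivered(m, budget=1000.0, e_per_path=1.0):
    """Expected successful queries before the energy budget is exhausted."""
    return p_success(m) * budget / (m * e_per_path)

best_m = max(range(2, 11), key=queries_delivered)
print(best_m)  # 3 under these illustrative numbers: extra paths stop paying off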
IEEE 2013: Optimizing Cloud Resources
for Delivering IPTV Services through Virtualization
Project Price: Contact US
Abstract: Virtualized cloud-based services can take advantage of statistical multiplexing across
applications to yield significant cost savings to the operator. However,
achieving similar benefits with real-time services can be a challenge. In this
paper, we seek to lower a provider’s costs of real-time IPTV services through a
virtualized IPTV architecture and through intelligent time-shifting of service
delivery. We take advantage of the differences in the deadlines associated with
Live TV versus Video-on-Demand (VoD) to
effectively multiplex these services. We provide a generalized framework for
computing the amount of resources needed to support multiple services, without missing
the deadline for any service. We construct the problem as an optimization
formulation that uses a generic cost function. We consider multiple forms for
the cost function (e.g., maximum, convex and concave functions) to reflect the
different pricing options. The solution to this formulation gives the number of
servers needed at different time instants to support these services. We
implement a simple mechanism for time-shifting scheduled jobs in a simulator
and study the reduction in server load using real traces from an operational
IPTV network. Our results show that we are able to reduce the load by ∼ 24%
(compared to a possible ∼ 31%). We also show that there are interesting open problems
in designing mechanisms that allow time-shifting of load in such environments.
IEEE 2013: A
Lightweight Encryption Scheme for Network-Coded Mobile Ad Hoc Networks
Project Price: Contact US
Energy saving is an important issue in Mobile Ad Hoc Networks (MANETs). Recent studies show that network coding can help reduce the energy consumption in MANETs by using fewer transmissions. However, apart from transmission cost, there are other sources of energy consumption, e.g., data encryption/decryption. In this paper, we study how to leverage network coding to reduce the energy consumed by data encryption in MANETs. Interestingly, network coding has a nice property of intrinsic security, based on which encryption can be done quite efficiently. To this end, we propose P-Coding, a lightweight encryption scheme to provide confidentiality for network-coded MANETs in an energy-efficient way. The basic idea of P-Coding is to let the source randomly permute the symbols of each packet (which is prefixed with its coding vector) before performing network coding operations. Without knowing the permutation, eavesdroppers cannot locate coding vectors for correct decoding, and thus cannot obtain any meaningful information. We demonstrate that, due to its lightweight nature, P-Coding incurs minimal energy consumption compared to other encryption schemes.
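A minimal sketch of the permutation idea follows: the source shuffles the symbols of each packet (coding vector prefix included) under a shared secret key, and the receiver inverts the shuffle. Keying the permutation by seeding a PRNG is an illustrative shortcut; real key handling is out of scope.

# Keyed symbol permutation in the spirit of P-Coding.
import random

def permutation(key, length):
    rng = random.Random(key)            # secret key seeds the permutation
    perm = list(range(length))
    rng.shuffle(perm)
    return perm

def encrypt(packet, key):
    perm = permutation(key, len(packet))
    return [packet[i] for i in perm]

def decrypt(cipher, key):
    perm = permutation(key, len(cipher))
    plain = [0] * len(cipher)
    for pos, i in enumerate(perm):      # undo the shuffle
        plain[i] = cipher[pos]
    return plain

packet = [3, 1, 4, 1, 5, 9, 2, 6]       # coding vector + payload symbols
c = encrypt(packet, key=1234)
assert decrypt(c, key=1234) == packet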
IEEE
2013: Toward Privacy Preserving and Collusion Resistance in a Location Proof
Updating System
Project Price: Contact US
Today's location-sensitive services rely on a user's mobile device to determine the current location. This allows malicious users to access a restricted resource or provide bogus alibis by cheating on their locations. To address this issue, we propose A Privacy-Preserving LocAtion proof Updating System (APPLAUS), in which co-located Bluetooth-enabled mobile devices mutually generate location proofs and send updates to a location proof server. Periodically changed pseudonyms are used by the mobile devices to protect source location privacy from each other and from the untrusted location proof server. We also develop a user-centric location privacy model in which individual users evaluate their location privacy levels and decide whether and when to accept location proof requests. In order to defend against colluding attacks, we also present betweenness ranking-based and correlation clustering-based approaches for outlier detection. APPLAUS can be implemented with existing network infrastructure and can be easily deployed on Bluetooth-enabled mobile devices with little computation or power cost. Extensive experimental results show that APPLAUS can effectively provide location proofs, significantly preserve source location privacy, and effectively detect colluding attacks.
IEEE 2013: Detection and Localization of Multiple Spoofing Attackers in Wireless Networks
Project Price: Contact US
Wireless spoofing attacks are easy to launch and can significantly
impact the performance of networks. Although the identity of a node can be
verified through cryptographic authentication, conventional security approaches
are not always desirable because of their overhead requirements. In this paper,
we propose to use spatial information, a physical property associated with each
node, hard to falsify, and not reliant on cryptography, as the basis for
detecting spoofing attacks; determining the number of attackers when multiple adversaries masquerade as the same node identity; and localizing
multiple adversaries. We propose to use the spatial correlation of received
signal strength (RSS) inherited from wireless nodes to detect the spoofing
attacks. We then formulate the problem of determining the number of attackers
as a multiclass detection problem. Cluster-based mechanisms are developed to
determine the number of attackers. When the training data are available, we
explore using the Support Vector Machines (SVM) method to further improve the
accuracy of determining the number of attackers. In addition, we developed an
integrated detection and localization system that can localize the positions of
multiple attackers. We evaluated our techniques through two test beds using
both an 802.11 (WiFi) network and an 802.15.4 (ZigBee) network in two real
office buildings. Our experimental results show that our proposed methods can
achieve over 90 percent Hit Rate and Precision when determining the number of attackers. Our localization results using a representative set of algorithms provide strong evidence of high accuracy in localizing multiple adversaries.
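The cluster-based idea above can be sketched as follows: RSS vectors from one honest identity form a single spatial cluster, while a spoofed identity yields two well-separated clusters, so the distance between the two centers of a 2-means partition serves as a test statistic. The threshold and the synthetic RSS data are illustrative; in practice the threshold would be calibrated from training data.

# 2-means gap test on received signal strength (RSS) vectors.
import numpy as np

def two_means_gap(rss, iters=20):
    """Distance between the two cluster centers of 2-means on RSS vectors."""
    rng = np.random.default_rng(0)
    centers = rss[rng.choice(len(rss), 2, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(rss[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in (0, 1):
            if (labels == k).any():
                centers[k] = rss[labels == k].mean(axis=0)
    return np.linalg.norm(centers[0] - centers[1])

THRESHOLD = 5.0  # dB; illustrative

honest = np.random.default_rng(1).normal([-50, -60, -70], 1.0, (40, 3))
spoofed = np.vstack([honest,
                     np.random.default_rng(2).normal([-70, -45, -55], 1.0, (40, 3))])

print(two_means_gap(honest) > THRESHOLD)   # False: one physical location
print(two_means_gap(spoofed) > THRESHOLD)  # True: attack detected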
IEEE 2013: EAACK - A Secure Intrusion-Detection System for MANETs
Project Price: Contact US
Abstract—The migration from wired networks to wireless networks has been a global trend in the past few decades. The mobility and scalability brought by wireless networks have made them possible in many applications. Among all the contemporary wireless networks, the Mobile Ad hoc NETwork (MANET) is one of the most important and unique applications. In contrast to traditional network architecture, a MANET does not require a fixed
network infrastructure; every single node works as both a transmitter and a
receiver. Nodes communicate directly with each other when they are both within
the same communication range. Otherwise, they rely on their neighbors to relay
messages. The self-configuring ability of nodes in MANET made it popular among
critical mission applications like military use or emergency recovery. However,
the open medium and wide distribution of nodes make MANET vulnerable to
malicious attackers. In this case, it is crucial to develop efficient
intrusion-detection mechanisms to protect MANET from attacks. With the
improvement of technology and falling hardware costs, we are witnessing a current trend of expanding MANETs into industrial applications. To adapt to this trend, we strongly believe that it is vital to address their potential security
issues. In this paper, we propose and implement a new intrusion-detection
system named Enhanced Adaptive ACKnowledgment (EAACK) specially designed for
MANETs. Compared to contemporary approaches, EAACK demonstrates higher malicious-behavior-detection rates in certain circumstances while not greatly affecting network performance.
IEEE 2013: CPU
Scheduling for Power/Energy Management on Multicore Processors Using Cache
Miss and Context Switch Data
Project Price: Contact US
Abstract—Power and energy have become increasingly important concerns in the design and implementation of today's multicore/manycore chips. In this paper we present two priority-based CPU scheduling algorithms, the Cache Miss Priority CPU Scheduler (CM-PCS) and the Context Switch Priority CPU Scheduler (CS-PCS), which take advantage of often-ignored dynamic performance
data, in order to reduce power consumption by over 20% with a significant
increase in performance. Our algorithms utilize Linux cpusets and cores
operating at different fixed frequencies. Many other techniques, including
dynamic frequency scaling, can lower a core’s frequency during the execution of
a non-CPU intensive task, thus lowering performance. Our algorithms match
processes to cores better suited to execute those processes in an effort to
lower the average completion time of all processes in an entire task, thus
improving performance. They also consider a process’s cache miss/cache
reference ratio, number of context switches and CPU migrations, and system
load. Finally, our algorithms use dynamic process priorities as scheduling
criteria. We have tested our algorithms using a real AMD Opteron 6134 multicore chip and measured results directly using a "Kill A Watt" meter, which samples
power periodically during execution. Our results show not only a power
(energy/execution time) savings of 39 watts (21.43%) and 38 watts (20.88%), but
also a significant improvement in the performance, performance per watt, and
execution time · watt (energy) for a task consisting of twenty-four concurrently
executing benchmarks, when compared to the default Linux scheduler and CPU
frequency scaling governor.
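A minimal sketch of the matching idea above follows: processes with high cache miss ratios or heavy context switching are steered to lower-frequency cores, while CPU-bound processes go to fast cores. The two-tier core split, the thresholds, and the process records are illustrative assumptions, not the paper's exact algorithms.

# Steer memory-bound processes to slow cores and CPU-bound ones to fast cores.
def classify(proc):
    """Return 'slow' for memory-bound processes, 'fast' for CPU-bound ones."""
    miss_ratio = proc["cache_misses"] / max(proc["cache_refs"], 1)
    return "slow" if miss_ratio > 0.05 or proc["context_switches"] > 1000 else "fast"

def schedule(procs, fast_cores, slow_cores):
    placement = {}
    next_slot = {"fast": 0, "slow": 0}
    for p in sorted(procs, key=lambda q: q["priority"]):   # priority order
        tier = classify(p)
        pool = fast_cores if tier == "fast" else slow_cores
        placement[p["pid"]] = pool[next_slot[tier] % len(pool)]
        next_slot[tier] += 1                               # round-robin per tier
    return placement

procs = [
    {"pid": 1, "priority": 0, "cache_misses": 900, "cache_refs": 10000, "context_switches": 50},
    {"pid": 2, "priority": 1, "cache_misses": 10, "cache_refs": 10000, "context_switches": 20},
]
print(schedule(procs, fast_cores=["c0", "c1"], slow_cores=["c2", "c3"]))
# {1: 'c2', 2: 'c0'}: the memory-bound process lands on a slow core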
IEEE 2013: DCIM:
Distributed Cache Invalidation Method for Maintaining Cache Consistency in
Wireless Mobile Networks
Project Price: Contact US
Abstract—This paper proposes distributed cache invalidation mechanism
(DCIM), a client-based cache consistency scheme that is implemented on top of a
previously proposed architecture for caching data items in mobile ad hoc
networks (MANETs), namely COACS, where special nodes cache the queries and the
addresses of the nodes that store the responses to these queries. We have also
previously proposed a server-based consistency scheme named SSUM, whereas in this paper we introduce DCIM, which is totally client-based. DCIM is a
pull-based algorithm that implements adaptive time-to-live (TTL), piggybacking, and prefetching, and provides near-strong consistency capabilities. Cached data
items are assigned adaptive TTL values that correspond to their update rates at
the data source, where items with expired TTL values are grouped in validation
requests to the data source to refresh them, whereas unexpired ones with high request rates are prefetched from the server. In this paper, DCIM is
analyzed to assess the delay and bandwidth gains (or costs) when compared to
polling every time and push-based schemes. DCIM was also implemented using ns2,
and compared against client-based and server-based schemes to assess its
performance experimentally. The consistency ratio, delay, and overhead traffic
are reported versus several variables, where DCIM is shown to be superior when compared to the other systems.
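The adaptive TTL mechanism can be sketched as follows: each cached item's TTL tracks its observed update interval at the source, expired items are grouped for validation, and unexpired but popular items are prefetched. The constants (TTL fraction, smoothing factor, popularity threshold) are illustrative, not DCIM's tuned values.

# Adaptive-TTL cache maintenance in the spirit of DCIM.
import time

class CacheEntry:
    def __init__(self, value, update_interval):
        self.value = value
        self.ttl = 0.5 * update_interval   # fraction of the mean update gap
        self.fetched_at = time.time()
        self.requests = 0

    def expired(self):
        return time.time() - self.fetched_at > self.ttl

    def adapt(self, new_update_interval, alpha=0.3):
        """Exponentially smooth the TTL toward the latest observed gap."""
        self.ttl = (1 - alpha) * self.ttl + alpha * 0.5 * new_update_interval

def maintenance(cache, popular_threshold=10):
    validate, prefetch = [], []
    for key, entry in cache.items():
        if entry.expired():
            validate.append(key)           # group into one validation request
        elif entry.requests > popular_threshold:
            prefetch.append(key)           # refresh hot items proactively
    return validate, prefetch

cache = {"stock-quote": CacheEntry("AAPL:190.1", update_interval=30.0)}
cache["stock-quote"].requests = 25
print(maintenance(cache))   # ([], ['stock-quote']): hot and not yet expired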
Abstract—In this paper, we consider the issue of data broadcasting in mobile social networks (MSNets). The objective is to broadcast data from a superuser to other users in the network. There are two main challenges under this paradigm: how to represent and characterize user mobility in realistic MSNets, and, given the knowledge of regular users' movements, how to design an efficient superuser route to broadcast data actively. We first explore several realistic data sets to reveal both
geographic and social regularities of human mobility, and further introduce the concepts of Geo-community and Geo-centrality into MSNet analysis. Then, we
employ a semi-Markov process to model user mobility based on the Geo-community
structure of the network. Correspondingly, the Geo-centrality indicating the
“dynamic user density” of each Geo-community can be derived from the
semi-Markov model. Finally, considering the Geo-centrality information, we
provide different route algorithms to cater to the superuser that wants to
either minimize total duration or maximize dissemination ratio. To the best of
our knowledge, this work is the first to study data broadcasting in a realistic
MSNet setting. Extensive trace-driven simulations show that our approach consistently
outperforms other existing super user route design algorithms in terms of
dissemination ratio and energy efficiency.
Abstract—This paper introduces cooperative caching policies for
minimizing electronic content provisioning cost in Social Wireless Networks
(SWNET). SWNETs are formed by mobile devices, such as data enabled phones,
electronic book readers etc., sharing common interests in electronic content,
and physically gathering together in public places. Electronic object caching in such SWNETs is shown to be able to reduce the content provisioning cost,
which depends heavily on the service and pricing dependence among various
stakeholders including content providers (CP), network service providers, and
End Consumers (EC). Drawing motivation from Amazon’s Kindle electronic book
delivery business, this paper develops practical network, service, and pricing
models, which are then used for creating two object caching strategies for minimizing content provisioning costs in networks with homogeneous and heterogeneous object
demands. The paper constructs analytical and simulation models for analyzing
the proposed caching strategies in the presence of selfish users that deviate
from network-wide cost-optimal policies. It also reports results from an
Android phone-based prototype SWNET, validating the presented analytical and
simulation results.
Visual cryptography is a secret
sharing scheme which uses images distributed as shares such that, when the
shares are superimposed, a hidden secret image is revealed. In extended visual
cryptography, the share images are constructed to contain meaningful cover
images, thereby providing opportunities for integrating visual cryptography and
biometric security techniques. In this paper, we propose a method for
processing halftone images that improves the quality of the share images and
the recovered secret image in an extended visual cryptography scheme for which the size of the share images and the recovered image is the same as for the original halftone secret image. The resulting scheme maintains the perfect security of the original extended visual cryptography approach.
A novel image encryption algorithm based on the DNA sequence addition operation. The rise and escalating growth of the Internet has made information paperless, shifting distribution from conventional media to electronic digital images. In this paper we propose and implement four phases. In the first phase, the image is converted into a binary matrix, and the matrix is then partitioned into equal blocks. In the second phase, each block is encoded into DNA sequences, and the DNA sequence addition operation is used to add these blocks; the resulting added matrix is obtained using two Logistic maps. At decoding time, the DNA sequence matrix is complemented, and the result is encrypted using DES to produce the encrypted image. This paper thus presents a novel encryption technique, based on a suitable encryption method, for providing security to images.
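The DNA encoding and addition steps can be sketched as follows: binary pairs map to bases, and blocks are added base by base with a modulo-4 rule. The mapping and addition table shown are one common convention, and the logistic-map and DES stages of the scheme are omitted.

# DNA encoding of binary data plus base-wise modulo-4 addition.
ENCODE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASES = "ACGT"

def to_dna(bits):
    """Encode a binary string two bits at a time."""
    return "".join(ENCODE[bits[i:i+2]] for i in range(0, len(bits), 2))

def dna_add(x, y):
    """Base-wise addition: base indices are added modulo 4."""
    return "".join(BASES[(BASES.index(a) + BASES.index(b)) % 4] for a, b in zip(x, y))

block1 = to_dna("0110")   # -> "CG"
block2 = to_dna("1101")   # -> "TC"
print(dna_add(block1, block2))  # "AT": (C+T) mod 4 and (G+C) mod 4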
Image retrieval refers to extracting desired images from a large database. Retrieval may be text based or content based; here, content-based image retrieval (CBIR) is performed. CBIR is a long-standing research topic in the field of multimedia. Features such as texture and shape are analyzed. A Gabor filter is used to extract texture features from images, and a morphological closing operation combined with the Gabor filter gives better retrieval accuracy. The parameters considered are scale and orientation. After applying the Gabor filter to the image, texture features such as the mean and standard deviation are calculated, forming the feature vector. The shape feature is extracted using the Fourier Descriptor and the centroid distance. To improve retrieval performance, combined texture and shape features are utilized, because multiple features provide more information than a single feature. Images are retrieved based on their Euclidean distance, and performance is evaluated using a precision-recall graph.
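A minimal sketch of the texture side of this pipeline follows: a bank of Gabor kernels over a few scales and orientations, with the mean and standard deviation of each response forming the feature vector, and retrieval by Euclidean distance. The kernel parameters and random images are illustrative; the morphological closing and shape (Fourier Descriptor) features are omitted.

# Gabor texture features and Euclidean ranking for a toy CBIR setup.
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma=4.0, size=21):
    """Real part of a Gabor kernel at the given frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def texture_features(image, freqs=(0.1, 0.2),
                     thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Mean and standard deviation of each Gabor response (scale x orientation)."""
    feats = []
    for f in freqs:
        for t in thetas:
            resp = fftconvolve(image, gabor_kernel(f, t), mode="same")
            feats.extend([resp.mean(), resp.std()])
    return np.array(feats)

def rank_by_euclidean(query_feat, db_feats):
    """Return database indices sorted by Euclidean distance to the query."""
    return np.argsort(np.linalg.norm(db_feats - query_feat, axis=1))

rng = np.random.default_rng(0)
db = [rng.random((64, 64)) for _ in range(3)]
feats = np.stack([texture_features(im) for im in db])
print(rank_by_euclidean(texture_features(db[0]), feats))  # index 0 ranks first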
IEEE 2013: Beyond Text QA: Multimedia Answer Generation by
Harvesting Web Information
Community question answering (cQA) services have gained popularity over the past years. They not only allow community members to post and answer questions but also enable general users to seek information from a comprehensive set of well-answered questions. However, existing cQA forums usually provide only textual answers, which are not informative enough for many questions. In this paper, we propose a scheme that is able to enrich textual answers in cQA with appropriate media data. Our scheme consists of three components: answer medium selection, query generation for multimedia search, and multimedia data selection and presentation. This approach automatically determines which type of media information should be added for a textual answer. It then automatically collects data from the web to enrich the answer. By processing a large set of QA pairs and adding them to a pool, our approach can enable a novel multimedia question answering (MMQA) approach, as users can find multimedia answers by matching their questions with those in the pool. Different from many MMQA research efforts that attempt to directly answer questions with image and video data, our approach is built on community-contributed textual answers and thus is able to deal with more complex questions. We have conducted extensive experiments on a multisource QA data set. The results demonstrate the effectiveness of our approach.
A Web Usage Mining Approach Based on a New Technique in Web Path Recommendation Systems
The Internet is one of the fastest growing areas of intelligence gathering. The ranking of web pages for Web search engines is one of the significant problems at present, and it has drawn considerable attention from the research community. Web prefetching is used to reduce the access latency of the Internet. However, if most prefetched Web pages are not visited by the users in their subsequent accesses, the limited network bandwidth and server resources will not be used efficiently and may worsen the access delay problem. Therefore, it is critical that we have an accurate prediction method for prefetching. To provide prediction efficiently, we advance an architecture for prediction in a Web Usage Mining system and propose a novel approach for classifying user navigation patterns to predict users' requests, based on clustering users' browsing behavior knowledge. The experimental results show that the approach can improve the accuracy, precision, recall and F-measure of classification in the architecture.
A Web service often restricts the types of queries that it can answer. For example, a Web service might provide a method that returns the songs of a given singer, but it might not provide a method that returns the singers of a given song. If the user asks for the singer of some specific song, then the Web service cannot be called, even though the underlying database might have the desired piece of information. This asymmetry is particularly problematic if the service is used in a Web service orchestration system. In this paper, we propose to use on-the-fly information extraction to collect values that can be used as parameter bindings for the Web service. We show how this idea can be integrated into a Web service orchestration system. Our approach is fully implemented in a prototype called SUSIE. We present experiments with real-life data and services to demonstrate the practical viability and good performance of our approach.
IEEE 2013:PMSE: A Personalized Mobile Search Engine
We propose a personalized mobile search engine (PMSE) that captures users' preferences in the form of concepts by mining their clickthrough data. Due to the importance of location information in mobile search, PMSE classifies these concepts into content concepts and location concepts. In addition, users' locations (positioned by GPS) are used to supplement the location concepts in PMSE. The user preferences are organized in an ontology-based, multifacet user profile, which is used to adapt a personalized ranking function for rank adaptation of future search results. To characterize the diversity of the concepts associated with a query and their relevance to the user's need, four entropies are introduced to balance the weights between the content and location facets. Based on the client-server model, we also present a detailed architecture and design for implementation of PMSE. In our design, the client collects and stores the clickthrough data locally to protect privacy, whereas heavy tasks such as concept extraction, training, and reranking are performed at the PMSE server. Moreover, we address the privacy issue by restricting the information in the user profile exposed to the PMSE server with two privacy parameters. We prototype PMSE on the Google Android platform. Experimental results show that PMSE significantly improves precision compared to the baseline.
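The entropy-balancing idea can be sketched as follows: the spread (Shannon entropy) of clicked content concepts versus location concepts for a query decides how to weight the two facets. The click counts are illustrative, and PMSE's four specific entropies are reduced here to one per facet.

# Weight the content and location facets by the diversity of their concepts.
from math import log2

def entropy(counts):
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c)

def facet_weights(content_clicks, location_clicks):
    """Weight each facet by the relative entropy of its concept distribution."""
    hc, hl = entropy(content_clicks), entropy(location_clicks)
    if hc + hl == 0:
        return 0.5, 0.5
    return hc / (hc + hl), hl / (hc + hl)

# Clicks spread over many content concepts but only one location concept:
print(facet_weights(content_clicks=[5, 4, 6, 5], location_clicks=[20]))
# (1.0, 0.0): this query is content-driven, so the content facet dominates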
The
relationships between consumer emotions and their buying behaviors have
been well documented. Technology-savvy consumers often use the web to
find information on products and services before they commit to buying.
We propose a semantic web usage mining approach for discovering periodic
web access patterns from annotated web usage logs which incorporates
information on consumer emotions and behaviors through self-reporting
and behavioral tracking. We use fuzzy logic to represent real-life
temporal concepts (e.g., morning) and requested resource attributes
(ontological domain concepts for the requested URLs) of periodic
pattern-based web access activities. These fuzzy temporal and resource
representations, which contain both behavioral and emotional cues, are
incorporated into a Personal Web Usage Lattice that models the user’s
web access activities. From this, we generate a Personal Web Usage
Ontology written in OWL, which enables semantic web applications such as
personalized web resources recommendation. Finally, we demonstrate the
effectiveness of our approach by presenting experimental results in the
context of personalized web resources recommendation with varying
degrees of emotional influence. Emotional influence has been found to
contribute positively to adaptation in personalized recommendation.
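Representing a fuzzy temporal concept such as "morning" is commonly done with a trapezoidal membership function, sketched below; the breakpoints are illustrative assumptions, not values from the paper.

# Trapezoidal fuzzy membership for the temporal concept "morning".
def trapezoid(x, a, b, c, d):
    """Membership rises on [a, b], is 1 on [b, c], and falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def morning(hour):
    return trapezoid(hour, 5, 7, 10, 12)

for h in (6, 8, 11):
    print(h, morning(h))   # 6 -> 0.5, 8 -> 1.0, 11 -> 0.5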
Keyword
search has become a ubiquitous method for users to access text data in
the face of information explosion. Inverted lists are usually used to
index underlying documents to retrieve documents according to a set of
keywords efficiently. Since inverted lists are usually large, many
compression techniques have been proposed to reduce the storage space
and disk I/O time. However, these techniques usually perform
decompression operations on the fly, which increases the CPU time. This
paper presents a more efficient index structure, the Generalized
INverted IndeX (Ginix), which merges consecutive IDs in inverted lists
into intervals to save storage space. With this index structure, more
efficient algorithms can be devised to perform basic keyword search
operations, i.e., union and intersection, by taking advantage of intervals. Specifically, these algorithms do not
require conversions from interval lists back to ID lists. As a result, keyword search using Ginix can be more efficient than search using traditional inverted indices. The performance of Ginix is also improved
by reordering the documents in data sets using two scalable algorithms.
Experiments on the performance and scalability of Ginix on real data
sets show that Ginix not only requires less storage space, but also
improves the keyword search performance, compared with traditional
inverted indexes.
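The interval representation can be sketched as follows: consecutive document IDs are merged into inclusive [lo, hi] intervals, and intersection runs directly on the interval lists with a linear merge, never expanding back to ID lists. The ID lists are illustrative.

# Ginix-style interval lists: construction and intersection.
def to_intervals(ids):
    """Merge a sorted ID list into inclusive [lo, hi] intervals."""
    intervals = []
    for i in ids:
        if intervals and i == intervals[-1][1] + 1:
            intervals[-1][1] = i          # extend the current run
        else:
            intervals.append([i, i])      # start a new run
    return intervals

def intersect(a, b):
    """Intersect two interval lists with a linear merge."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        lo, hi = max(a[i][0], b[j][0]), min(a[i][1], b[j][1])
        if lo <= hi:
            out.append([lo, hi])
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out

cloud = to_intervals([1, 2, 3, 7, 8, 9, 15])
audit = to_intervals([2, 3, 4, 8, 9, 10])
print(intersect(cloud, audit))  # [[2, 3], [8, 9]]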
In a web-based e-learning environment, every learner has a distinct background, learning style, and specific goal when searching for
learning material on the web. The goal of personalization is to tailor
search results to a particular user based on that user’s contextual
information. The effectiveness of accessing learning material involves
two important challenges: identifying the user context and modeling the
user context as ontological profiles. This work describes an ontology-based framework for a context-aware adaptive learning system, with detailed discussions on the categorization and modeling of contextual information, along with the use of an ontology to explicitly specify learner context in an e-learning environment. Finally, we conclude by
showing the applicability of the proposed ontology with appropriate
architectural overview of e-learning system.
As probabilistic data management is becoming one of the main research focuses and keyword search is turning into a more popular query means, it is natural to consider how to support keyword queries on probabilistic XML data. With regard to keyword queries on deterministic XML documents, ELCA (Exclusive Lowest Common Ancestor) semantics allows more relevant fragments rooted at the ELCAs to appear as results and is more popular compared with other keyword query result semantics (such as SLCAs). In this paper, we investigate how to evaluate ELCA results for keyword queries on probabilistic XML documents. After defining probabilistic ELCA semantics in terms of possible world semantics, we propose an approach to compute ELCA probabilities without generating possible worlds. Then we develop an efficient stack-based algorithm that can find all probabilistic ELCA results and their ELCA probabilities for a given keyword query on a probabilistic XML document. Finally, we experimentally evaluate the proposed ELCA algorithm and compare it with its SLCA counterpart in aspects of result effectiveness, time and space efficiency, and scalability.
Crowdsourcing Predictors of Behavioral Outcomes
Generating models from large data sets—and determining which subsets of data to mine—is becoming increasingly automated. However, choosing what data to collect in the first place requires human intuition or experience, usually supplied by a domain expert. This paper describes a new approach to machine science which demonstrates for the first time that non-domain experts can collectively formulate features and provide values for those features such that they are predictive of some behavioral outcome of interest. This was accomplished by building a web platform in which human groups interact to both respond to questions likely to help predict a behavioral outcome and pose new questions to their peers. This results in a dynamically growing online survey, but the result of this cooperative behavior also leads to models that can predict users' outcomes based on their responses to the user-generated survey questions. Here we describe two web-based experiments that instantiate this approach: the first site led to models that can predict users' monthly electric energy consumption; the other led to models that can predict users' body mass index. As exponential increases in content are often observed in successful online collaborative communities, the proposed methodology may, in the future, lead to similar exponential rises in discovery and insight into the causal factors of behavioral outcomes.
A large number of organizations today generate and share textual descriptions of their products, services, and actions. Such collections of textual data contain a significant amount of structured information, which remains buried in the unstructured text. While information extraction algorithms facilitate the extraction of structured relations, they are often expensive and inaccurate, especially when operating on top of text that does not contain any instances of the targeted structured information. We present a novel alternative approach that facilitates the generation of the structured metadata by identifying documents that are likely to contain information of interest, where this information will subsequently be useful for querying the database. Our approach relies on the idea that humans are more likely to add the necessary metadata during creation time, if prompted by the interface, and that it is much easier for humans (and/or algorithms) to identify the metadata when such information actually exists in the document, instead of naively prompting users to fill in forms with information that is not available in the document. As a major contribution of this paper, we present algorithms that identify structured attributes that are likely to appear within the document, by jointly utilizing the content of the text and the query workload. Our experimental evaluation shows that our approach generates superior results compared to approaches that rely only on the textual content or only on the query workload to identify attributes of interest.
Secure
distributed data storage can shift the burden of maintaining a large
number of files from the owner to proxy servers. Proxy servers can
convert encrypted files for the owner to encrypted files for the
receiver without the necessity of knowing the content of the original
files. In practice, the original files will be removed by the owner for
the sake of space efficiency. Hence, the issues on confidentiality and
integrity of the outsourced data must be addressed carefully. In this
paper, we propose two identity-based secure distributed data storage
(IBSDDS) schemes. Our schemes can capture the following properties: The
file owner can decide the access permission independently without the
help of the private key generator (PKG); For one query, a receiver can
only access one file, instead of all files of the owner; our schemes are secure against collusion attacks, namely, even if the receiver can compromise the proxy servers, he cannot obtain the owner's secret key.
Although the first scheme is only secure against chosen-plaintext attacks (CPA), the second scheme is secure against chosen-ciphertext attacks (CCA). To the best of our knowledge, these are the first IBSDDS schemes in which access permission for an exact file is granted by the owner and collusion attacks are resisted in the standard model.
IEEE 2013: Govcloud: Using Cloud Computing in Public Organizations
Abstract: Governments are facing reductions in ICT budgets just as users are increasing demands for electronic services. One solution announced aggressively by vendors is cloud computing. Cloud computing is not a new technology, but, as described by Jackson, it is a new way of offering services, taking into consideration business and economic models for providing and consuming ICT services. Here we explain the impact and benefits of cloud services for public organizations and explore why governments are slow to adopt the cloud. The existing literature does not cover this subject in detail, especially for European organizations.
IEEE 2013: Dynamic Resource Allocation Using Virtual Machines for Cloud Computing Environment
Abstract: Cloud computing allows business customers to scale up and down their resource usage based on needs. Many of the touted gains in the cloud model come from resource multiplexing through virtualization technology. In this paper, we present a system that uses virtualization technology to allocate data center resources dynamically based on application demands and support green computing by optimizing the number of servers in use. We introduce the concept of "skewness" to measure the unevenness in the multi-dimensional resource utilization of a server. By minimizing skewness, we can combine different types of workloads nicely and improve the overall utilization of server resources. We develop a set of heuristics that prevent overload in the system effectively while saving energy used. Trace-driven simulation and experiment results demonstrate that our algorithm achieves good performance.
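The "skewness" metric can be sketched as the deviation of each resource's utilization from the server's average utilization, a common form for this kind of unevenness measure; the utilization numbers below are illustrative.

# Skewness of a server's multi-dimensional resource utilization.
from math import sqrt

def skewness(utilizations):
    """sqrt(sum_i (r_i / r_mean - 1)^2) over the server's resource dimensions."""
    mean = sum(utilizations) / len(utilizations)
    return sqrt(sum((r / mean - 1) ** 2 for r in utilizations))

balanced = [0.6, 0.6, 0.6]      # CPU, memory, network evenly used
skewed = [0.9, 0.2, 0.4]
print(skewness(balanced))        # 0.0: workloads combine well
print(skewness(skewed))          # ~1.02: uneven use strands the scarce resource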
IEEE 2013: Privacy Preserving Delegated Access Control in Public Clouds
Abstract—Current approaches to enforce fine-grained access control on confidential data hosted in the cloud are based on fine-grained encryption of the data. Under such approaches, data owners are in charge of encrypting the data before uploading them to the cloud and re-encrypting the data whenever user credentials or authorization policies change. Data owners thus incur high communication and computation costs. A better approach should delegate the enforcement of fine-grained access control to the cloud, so as to minimize the overhead at the data owners while assuring data confidentiality from the cloud. We propose an approach, based on two layers of encryption, that addresses this requirement. Under our approach, the data owner performs a coarse-grained encryption, whereas the cloud performs a fine-grained encryption on top of the owner-encrypted data. A challenging issue is how to decompose access control policies (ACPs) such that the two-layer encryption can be performed. We show that this problem is NP-complete and propose novel optimization algorithms. We utilize an efficient group key management scheme that supports expressive ACPs. Our system assures the confidentiality of the data and preserves the privacy of users from the cloud while delegating most of the access control enforcement to the cloud.
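A minimal sketch of the two-layer idea follows, using Fernet as a stand-in cipher: the owner applies a coarse-grained layer, and the cloud applies a fine-grained layer keyed per access-control group. The group key management scheme and the policy decomposition, which are the paper's actual contributions, are simplified away.

# Two layers of symmetric encryption: owner layer inside, cloud layer outside.
from cryptography.fernet import Fernet

owner_key = Fernet.generate_key()          # held by the data owner
group_key = Fernet.generate_key()          # managed by the cloud per ACP group

inner = Fernet(owner_key).encrypt(b"confidential record")   # owner layer
outer = Fernet(group_key).encrypt(inner)                    # cloud layer

# An authorized user holds both keys and peels the layers in reverse order;
# the cloud alone (group_key only) never sees the plaintext.
plaintext = Fernet(owner_key).decrypt(Fernet(group_key).decrypt(outer))
assert plaintext == b"confidential record"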
Abstract—Using cloud storage, users can remotely store their data and enjoy on-demand high-quality applications and services from a shared pool of configurable computing resources, without the burden of local data storage and maintenance. However, the fact that users no longer have physical possession of the outsourced data makes data integrity protection in cloud computing a formidable task, especially for users with constrained computing resources. Moreover, users should be able to just use the cloud storage as if it is local, without worrying about the need to verify its integrity. Thus, enabling public auditability for cloud storage is of critical importance so that users can resort to a third-party auditor (TPA) to check the integrity of outsourced data and be worry-free. To securely introduce an effective TPA, the auditing process should bring in no new vulnerabilities towards user data privacy, and introduce no additional online burden to the user. In this paper, we propose a secure cloud storage system supporting privacy-preserving public auditing. We further extend our result to enable the TPA to perform audits for multiple users simultaneously and efficiently. Extensive security and performance analysis show the proposed schemes are provably secure and highly efficient.
Photos with people (e.g., family,
friends, celebrities, etc.) are the major interest of users. Thus, with the
exponentially growing photos, large-scale content-based face image retrieval is
an enabling technology for many emerging applications. In this work, we aim to
utilize automatically detected human attributes that contain semantic cues of
the face photos to improve content-based face retrieval by constructing
semantic code words for efficient large-scale face retrieval. By leveraging
human attributes in a scalable and systematic framework, we propose two
orthogonal methods named attribute-enhanced sparse coding and
attribute-embedded inverted indexing to improve the face retrieval in the
offline and online stages. We investigate the effectiveness of different
attributes and vital factors essential for face retrieval. Experimenting on two public data sets, the results show that the proposed methods can achieve up to 43.5% relative improvement in MAP compared to the existing methods.
IEEE 2013: Reversible Data Hiding in Encrypted Images by Reserving Room Before Encryption
Recently, more and more attention is paid to reversible data hiding (RDH) in encrypted images, since it maintains the excellent property that the original cover can be losslessly recovered after the embedded data are extracted, while protecting the image content's confidentiality. All previous methods embed data by reversibly vacating room from the encrypted images, which may be subject to some errors in data extraction and/or image restoration. In this paper, we propose a novel method that reserves room before encryption with a traditional RDH algorithm, making it easy for the data hider to reversibly embed data in the encrypted image. The proposed method can achieve real reversibility, that is, data extraction and image recovery are free of any error. Experiments show that this novel method can embed more than 10 times as large payloads for the same image quality (PSNR) as the previous methods.
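The following toy sketch shows why reserving room before encryption makes hiding trivially reversible: payload bits go into a pre-cleared LSB region, and keeping the original LSBs aside lets both extraction and recovery be exact. This is a one-dimensional toy of LSB substitution, not the paper's full RDH pipeline; the sizes and data are illustrative.

# LSB embedding into a "reserved room", with exact extraction and recovery.
import numpy as np

def embed(pixels, bits):
    """Write payload bits into the LSBs of the first len(bits) pixels."""
    out = pixels.copy()
    out[:len(bits)] = (out[:len(bits)] & 0xFE) | bits
    return out

def extract(pixels, n):
    """Read back n payload bits from the LSB plane."""
    return pixels[:n] & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=64, dtype=np.uint8)   # flattened toy image
saved_lsbs = image[:8] & 1          # "reserve room": keep original LSBs aside
payload = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

marked = embed(image, payload)
assert (extract(marked, 8) == payload).all()   # data extraction is exact
restored = embed(marked, saved_lsbs)           # write original LSBs back
assert (restored == image).all()               # image recovery is exact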
Graph-based ranking models have been widely applied in the information retrieval area. In this paper, we focus on a well-known graph-based model, the Ranking on Data Manifold model, or Manifold Ranking (MR). In particular, it has been successfully applied to
content-based image retrieval, because of its outstanding ability to discover
underlying geometrical structure of the given image database. However, manifold
ranking is computationally very expensive, which significantly limits its applicability to large databases, especially when the queries are out of the database (new samples). We propose a novel scalable graph-based
ranking model called Efficient Manifold Ranking (EMR), trying to address the
shortcomings of MR from two main perspectives: scalable graph construction and
efficient ranking computation. Specifically, we build an anchor graph on the
database instead of a traditional k-nearest neighbor graph, and design a new
form of adjacency matrix utilized to speed up the ranking. An approximate method
is adopted for efficient out-of-sample retrieval. Experimental results on some
large scale image databases demonstrate that EMR is a promising method for real
world retrieval applications.
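For background, classical Manifold Ranking scores every database item against a query through the closed form f = (I - aS)^(-1) y over a normalized affinity graph. The sketch below implements that standard formula with dense matrices; EMR's anchor-graph construction and fast solver are what make it scale, and they are not reproduced here.

    import numpy as np

    def manifold_ranking(W, y, alpha=0.99):
        """W: symmetric affinity matrix; y: query indicator vector."""
        d = W.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
        S = D_inv_sqrt @ W @ D_inv_sqrt                   # normalized graph
        n = W.shape[0]
        return np.linalg.solve(np.eye(n) - alpha * S, y)  # f = (I - aS)^-1 y

    # Toy graph: items 0-1-2 chained; query on item 0.
    W = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
    y = np.array([1.0, 0.0, 0.0])
    print(manifold_ranking(W, y))  # scores decay with graph distance from the query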
This
paper introduces a new exemplar-based inpainting framework. A coarse
version of the input image is first inpainted by non-parametric patch
sampling. Compared to existing approaches, several improvements have been
made (e.g., the filling-order computation and the combination of K nearest
neighbors). Inpainting a coarse version of the input image reduces the
computational complexity, makes the method less sensitive to noise, and
lets it work with the dominant orientations of image structures. From the
low-resolution inpainted image, single-image super-resolution is then
applied to recover the details of the missing areas. Experimental results on
natural images and texture synthesis demonstrate the effectiveness of
the proposed method.
The Visual Cryptography Scheme is a
secure method that encrypts a secret document or image by breaking it into
shares. A distinctive property of the Visual Cryptography Scheme is that one can
visually decode the secret image by superimposing shares without computation.
Because of this property, a third party can easily retrieve the
secret image if the shares are passed in sequence over the network. The project
presents an approach for encrypting visual-cryptographically generated image
shares using public key encryption. The RSA algorithm is used to provide
double security for the secret document. Thus the secret shares are not available in
their actual form for any alteration by adversaries who try to create fake
shares. The scheme provides more secure secret shares that are robust against a
number of attacks, and the system provides strong security for
handwritten text, images and printed documents over the public network.
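To make the share mechanism concrete, here is a minimal sketch of the classic (2,2) visual cryptography construction with 1x2 pixel expansion; the project's additional RSA encryption of the shares is not shown, and the tiny test image is invented for illustration.

    import random

    # Classic (2,2) visual cryptography: each secret pixel becomes a 1x2 block
    # in each share. White pixel -> identical blocks (stack shows 1 black, 1 white);
    # black pixel -> complementary blocks (stack shows 2 black).
    PATTERNS = [(0, 1), (1, 0)]   # 1 = black subpixel

    def make_shares(secret_rows):
        s1, s2 = [], []
        for row in secret_rows:
            r1, r2 = [], []
            for pixel in row:                    # pixel: 0 = white, 1 = black
                p = random.choice(PATTERNS)
                r1.extend(p)
                r2.extend(p if pixel == 0 else (1 - p[0], 1 - p[1]))
            s1.append(r1)
            s2.append(r2)
        return s1, s2

    def stack(s1, s2):                           # superimposition = OR
        return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]

    secret = [[1, 0], [0, 1]]
    sh1, sh2 = make_shares(secret)
    print(stack(sh1, sh2))   # black pixels come out fully black; white ones half black

Each share alone is a uniformly random pattern, which is why it reveals nothing about the secret.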
A (k, n) Visual Cryptographic Scheme (VCS) encodes
a secret image into n shadow images (printed on transparencies)
distributed among n participants. When any k participants superimpose their
transparencies on an overhead projector (OR operation), the secret image can be
visually revealed by the human visual system without computation. However, the
monotone property of the OR operation degrades the visual quality of the reconstructed
image for OR-based VCS (OVCS). Accordingly,
XOR-based VCS (XVCS), which uses the XOR operation for decoding, was proposed to
enhance the contrast. In this paper, we investigate the relation between OVCS
and XVCS. Our main contribution is to theoretically prove that the basis matrices
of (k, n)-OVCS can be used in (k, n)-XVCS. Meanwhile, the contrast is enhanced
2^(k-1) times.
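A quick numeric check of the contrast claim for the smallest case, k = n = 2: the same shares decoded by OR and by XOR give contrasts 0.5 and 1.0 respectively, i.e. the 2^(k-1) = 2x enhancement. This sketch uses the standard (2,2) construction, not the paper's general basis matrices.

    import random

    # Classic (2,2) shares: white pixel -> identical 1x2 blocks,
    # black pixel -> complementary blocks.
    def share_pixel(pixel):
        p = random.choice([(0, 1), (1, 0)])
        q = p if pixel == 0 else (1 - p[0], 1 - p[1])
        return p, q

    def blackness(block):
        return sum(block) / len(block)

    w1, w2 = share_pixel(0)          # white secret pixel
    b1, b2 = share_pixel(1)          # black secret pixel

    or_white  = blackness([a | b for a, b in zip(w1, w2)])   # 0.5
    or_black  = blackness([a | b for a, b in zip(b1, b2)])   # 1.0
    xor_white = blackness([a ^ b for a, b in zip(w1, w2)])   # 0.0
    xor_black = blackness([a ^ b for a, b in zip(b1, b2)])   # 1.0

    print("OR contrast :", or_black - or_white)    # 0.5
    print("XOR contrast:", xor_black - xor_white)  # 1.0 -> 2x for k = 2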
Steganography using Genetic Algorithm along with Visual Cryptography for
Wireless Network Application
Image Steganography is an emerging field of research for secure data hiding
and transmission over networks. The proposed system provides an approach
for Least Significant Bit (LSB) based Steganography using a Genetic Algorithm
(GA) along with Visual Cryptography (VC). The original message is converted into
cipher text using a secret key and then hidden in the LSBs of the original image.
The Genetic Algorithm and Visual Cryptography are used to enhance the security.
The Genetic Algorithm is used to modify the pixel locations of the stego image,
so that detection of the message becomes complex. Visual Cryptography is used to
encrypt the visual information, which is achieved by breaking the image into two
shares based on a threshold. The performance of the proposed system is
evaluated by performing steganalysis and conducting benchmarking tests that
analyse parameters such as Mean Squared Error (MSE) and Peak Signal-to-Noise
Ratio (PSNR). The main aim of this paper is to design an enhanced secure
algorithm which uses both steganography with a Genetic Algorithm and Visual
Cryptography to ensure improved security and reliability.
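A minimal sketch of the LSB embedding and the MSE/PSNR metrics used in the benchmarking, assuming a flat list of 8-bit pixel values; the GA-based relocation of stego pixels and the VC share split are not reproduced here.

    import math

    def embed_lsb(pixels, bits):
        """Hide a bit string in the LSBs of a flat list of 8-bit pixel values."""
        out = list(pixels)
        for i, b in enumerate(bits):
            out[i] = (out[i] & ~1) | int(b)
        return out

    def extract_lsb(pixels, n_bits):
        return "".join(str(p & 1) for p in pixels[:n_bits])

    def mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

    def psnr(a, b):
        m = mse(a, b)
        return float("inf") if m == 0 else 10 * math.log10(255 ** 2 / m)

    cover = [120, 33, 210, 78, 90, 160, 55, 201]
    msg = "1011"                         # would be ciphertext bits in the scheme
    stego = embed_lsb(cover, msg)
    print(extract_lsb(stego, 4))         # -> 1011
    print(round(psnr(cover, stego), 2))  # very high: LSB changes are imperceptible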
IEEE 2012: EXPERT DISCOVERY AND INTERACTIONS IN MIXED SERVICE-ORIENTED SYSTEMS
IEEE 2012 TRANSACTIONS ON SERVICES COMPUTING

Project Price: Contact US
Abstract — Web-based collaborations and processes have become essential in today’s business environments. Such processes typically span interactions between people and services across globally distributed companies. Web services and SOA are the de facto technology to implement compositions of humans and services. The increasing complexity of compositions and the distribution of people and services require adaptive and context-aware interaction models. To support complex interaction scenarios, we introduce a mixed service-oriented system composed of both human-provided and software-based services interacting to perform joint activities or to solve emerging problems. However, competencies of people evolve over time, thereby requiring approaches for the automated management of actor skills, reputation, and trust. Discovering the right actor in mixed service-oriented systems is challenging due to the scale and temporary nature of collaborations. We present a novel approach addressing the need for flexible involvement of experts and knowledge workers in distributed collaborations. We argue that the automated inference of trust between members is a key factor for successful collaborations. Instead of following a security perspective on trust, we focus on dynamic trust in collaborative networks. We discuss Human-Provided Services (HPS) and an approach for managing user preferences and network structures. HPS allows experts to offer their skills and capabilities as services that can be requested on demand. Our main contributions center on a context-sensitive trust-based algorithm called Expert HITS, inspired by the concept of hubs and authorities in Web-based environments. Expert HITS takes trust-relations and link properties in social networks into account to estimate the reputation of users.
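As an illustration of the hubs-and-authorities idea that Expert HITS builds on, the sketch below runs plain HITS power iteration over a trust-weighted adjacency matrix; the trust values and the exact weighting the paper uses are assumptions here.

    import numpy as np

    def expert_hits(T, iters=50):
        """Hub/authority scores over a trust matrix T, where T[i, j] > 0
        means user i trusts / delegates to user j. This is a trust-weighted
        variant of classic HITS; the paper's ExpertHITS weighting may differ."""
        n = T.shape[0]
        hubs = np.ones(n)
        for _ in range(iters):
            auth = T.T @ hubs                       # trusted by good hubs -> authority
            auth /= np.linalg.norm(auth) or 1.0
            hubs = T @ auth                         # trusting good authorities -> hub
            hubs /= np.linalg.norm(hubs) or 1.0
        return hubs, auth

    # Toy network: user 2 is trusted by both 0 and 1.
    T = np.array([[0.0, 0.0, 0.9],
                  [0.0, 0.0, 0.7],
                  [0.2, 0.0, 0.0]])
    hubs, auth = expert_hits(T)
    print(auth.round(3))   # user 2 gets the highest authority (reputation)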
IEEE 2012 Cooperative Download in Vehicular Environments
Project Price: Contact US

Abstract—We consider a complex (i.e., non-linear) road scenario where users aboard vehicles equipped with communication interfaces are interested in downloading large files from road-side Access Points (APs). We investigate the possibility of exploiting opportunistic encounters among mobile nodes so as to augment the transfer rate experienced by vehicular downloaders. To that end, we devise solutions for the selection of carriers and data chunks at the APs, and evaluate them in real-world road topologies, under different AP deployment strategies. Through extensive simulations, we show that carry & forward transfers can significantly increase the download rate of vehicular users in urban/suburban environments, and that such a result holds throughout diverse mobility scenarios, AP placements and network loads.
IEEE 2012 Privacy and Integrity Preserving Range Queries in Sensor Networks
Project Price: Contact US

Abstract—The architecture of two-tiered sensor networks, where storage nodes serve as an intermediate tier between sensors and a sink for storing data and processing queries, has been widely adopted because of the benefits of power and storage saving for sensors as well as the efficiency of query processing. However, the importance of storage nodes also makes them attractive to attackers. In this paper, we propose SafeQ, a protocol that prevents attackers from gaining information from both sensor-collected data and sink-issued queries. SafeQ also allows a sink to detect compromised storage nodes when they misbehave. To preserve privacy, SafeQ uses a novel technique to encode both data and queries such that a storage node can correctly process encoded queries over encoded data without knowing their values. To preserve integrity, we propose two schemes—one using Merkle hash trees and another using a new data structure called neighborhood chains—to generate integrity verification information so that a sink can use this information to verify whether the result of a query contains exactly the data items that satisfy the query. To improve performance, we propose an optimization technique using Bloom filters to reduce the communication cost between sensors and storage nodes.
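The Bloom-filter optimization mentioned at the end can be sketched in a few lines: a fixed-size bit array answers membership queries with no false negatives, so a sensor can ship one small filter instead of many item identifiers. The sizes and hash count below are arbitrary choices, and SafeQ's query/data encoding itself is far more involved.

    import hashlib

    class BloomFilter:
        """Compact set membership with rare false positives but no false
        negatives; useful when bandwidth between tiers is the bottleneck."""
        def __init__(self, m_bits=1024, k_hashes=4):
            self.m, self.k = m_bits, k_hashes
            self.bits = bytearray(m_bits // 8)

        def _positions(self, item):
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:4], "big") % self.m

        def add(self, item):
            for p in self._positions(item):
                self.bits[p // 8] |= 1 << (p % 8)

        def __contains__(self, item):
            return all(self.bits[p // 8] & (1 << (p % 8))
                       for p in self._positions(item))

    bf = BloomFilter()
    bf.add("reading-17")
    print("reading-17" in bf, "reading-99" in bf)  # True False (with high probability)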
IEEE 2012: Ensuring Distributed Accountability for Data Sharing in the Cloud
IEEE 2012 TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING
Project Price: Contact US

Abstract— Cloud computing enables highly scalable services to be easily consumed over the Internet on an as-needed basis. A major feature of the cloud services is that users’ data are usually processed remotely in unknown machines that users do not own or operate. While enjoying the convenience brought by this new emerging technology, users’ fears of losing control of their own data (particularly, financial and health data) can become a significant barrier to the wide adoption of cloud services. To address this problem, in this paper, we propose a novel highly decentralized information accountability framework to keep track of the actual usage of the users’ data in the cloud. In particular, we propose an object-centered approach that enables enclosing our logging mechanism together with users’ data and policies. We leverage the JAR programmable capabilities to both create a dynamic and traveling object, and to ensure that any access to users’ data will trigger authentication and automated logging local to the JARs. To strengthen users’ control, we also provide distributed auditing mechanisms. We provide extensive experimental studies that demonstrate the efficiency and effectiveness of the proposed approaches.
IEEE 2012: Towards Secure and Dependable Storage Services in Cloud Computing
Project Price: Contact US

Abstract— Cloud storage enables users to remotely store their data and enjoy the on-demand high quality cloud applications without the burden of local hardware and software management. Though the benefits are clear, such a service also relinquishes users’ physical possession of their outsourced data, which inevitably poses new security risks towards the correctness of the data in the cloud. In order to address this new problem and further achieve a secure and dependable cloud storage service, we propose in this paper a flexible distributed storage integrity auditing mechanism, utilizing the homomorphic token and distributed erasure-coded data. The proposed design allows users to audit the cloud storage with very lightweight communication and computation cost. The auditing result not only ensures strong cloud storage correctness guarantee, but also simultaneously achieves fast data error localization, i.e., the identification of the misbehaving server(s). Considering the cloud data are dynamic in nature, the proposed design further supports secure and efficient dynamic operations on outsourced data, including block modification, deletion, and append. Analysis shows the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.
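For intuition about the erasure-coded redundancy, a toy XOR-parity example: k data blocks plus one parity block survive the loss (or corruption) of any single block. The paper's design distributes stronger erasure codes plus homomorphic tokens across servers; this sketch only shows why redundancy enables recovery.

    # Minimal erasure-coding intuition: k data blocks plus one XOR parity block
    # tolerate the loss of any one block.
    def xor_blocks(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    data = [b"blk1blk1", b"blk2blk2", b"blk3blk3"]   # k = 3 data blocks
    parity = data[0]
    for blk in data[1:]:
        parity = xor_blocks(parity, blk)

    # Suppose the server holding data[1] misbehaves and its block is lost:
    recovered = xor_blocks(xor_blocks(data[0], data[2]), parity)
    print(recovered == b"blk2blk2")   # True: the missing block is rebuilt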
IEEE 2012:A Secure Intrusion detection system against DDOS attack in Wireless Mobile Ad-hoc Network

IEEE 2012 INTERNATIONAL JOURNAL OF COMPUTER APPLICATIONS
Project Price: Contact US
Abstract— A wireless mobile ad-hoc network (MANET) is an emerging technology with great potential for critical situations such as battlefields and for commercial applications such as building and traffic surveillance. A MANET has no infrastructure and no centralized controller, and each node has routing capability; each device is free to move independently in any direction and will therefore change its connections to other devices frequently. Security is thus one of the major challenges wireless mobile ad-hoc networks face today, because no central controller exists. MANETs are a kind of wireless ad hoc network that usually has a routable networking environment on top of a link-layer ad hoc network. Since the ad hoc family also includes wireless sensor networks, the problems faced by sensor networks are faced by MANETs as well, and deploying nodes in an unattended environment increases the chances of various attacks. Among the many security attacks on MANETs is DDoS (distributed denial of service). Our main aim is to observe the effect of DDoS on routing load, packet drop rate and end-to-end delay, all of which increase under attack, and, using these and other parameters, to build a secure IDS that detects this kind of attack and blocks it. In this paper we discuss several attacks on MANETs, including DDoS, and provide security against the DDoS attack.
IEEE 2012: HASBE: A Hierarchical Attribute-Based Solution for Flexible and Scalable Access Control in Cloud Computing
IEEE 2012 TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY
Project Price: Contact US

Abstract— Cloud computing has emerged as one of the most influential paradigms in the IT industry in recent years. Since this new computing technology requires users to entrust their valuable data to cloud providers, there have been increasing security and privacy concerns on outsourced data. Several schemes employing attribute-based encryption (ABE) have been proposed for access control of outsourced data in cloud computing; however, most of them suffer from inflexibility in implementing complex access control policies. In order to realize scalable, flexible, and fine-grained access control of outsourced data in cloud computing, in this paper, we propose hierarchical attribute-set-based encryption (HASBE) by extending ciphertext-policy attribute-set-based encryption (ASBE) with a hierarchical structure of users. The proposed scheme not only achieves scalability due to its hierarchical structure, but also inherits flexibility and fine-grained access control in supporting compound attributes of ASBE. In addition, HASBE employs multiple value assignments for access expiration time to deal with user revocation more efficiently than existing schemes. We formally prove the security of HASBE based on the security of the ciphertext-policy attribute-based encryption (CP-ABE) scheme by Bethencourt et al. and analyze its performance and computational complexity. We implement our scheme and show that it is both efficient and flexible in dealing with access control for outsourced data in cloud computing with comprehensive experiments.
IEEE 2012: A Keyless Approach to Image Encryption
IEEE 2012 COMMUNICATION SYSTEMS AND NETWORK TECHNOLOGIES
Project Price: Contact US
Abstract— Maintaining the secrecy and confidentiality of images is a vibrant area of
research, with two different approaches being followed: the first encrypts
the images through encryption algorithms using keys, while the other
divides the image into random shares to maintain secrecy. Unfortunately,
heavy computation cost and key management limit the employment of the
first approach, and the poor quality of the recovered image from the random
shares limits the applications of the second. In this paper we propose a novel
approach without the use of encryption keys. The approach employs
Sieving, Division and Shuffling to generate random shares such that, with
minimal computation, the original secret image can be recovered from
the random shares without any loss of image quality.
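The paper's Sieving, Division and Shuffling steps are specific to its scheme, but the underlying promise, keyless random shares with lossless recovery, can be illustrated with generic additive sharing modulo 256:

    import random

    def split(pixels, n=2):
        """Split 8-bit pixels into n random shares; all n are needed to recover."""
        shares = [[random.randrange(256) for _ in pixels] for _ in range(n - 1)]
        last = [(p - sum(col)) % 256 for p, col in zip(pixels, zip(*shares))]
        return shares + [last]

    def combine(shares):
        return [sum(col) % 256 for col in zip(*shares)]

    img = [12, 240, 77, 0, 255]
    sh = split(img, n=3)
    print(combine(sh) == img)   # True: exact, lossless recovery, no key involved

Each share on its own is uniformly random, so no single share leaks the image.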
IEEE 2012: Separable Reversible Data
Hiding in Encrypted Image
Abstract— This work proposes a novel scheme for separable reversible data hiding in encrypted images. In the first phase, a content owner encrypts the original uncompressed image using an encryption key. Then, a data-hider may compress the least significant bits of the encrypted image using a data-hiding key to create a sparse space to accommodate some additional data. With an encrypted image containing additional data, if a receiver has the data-hiding key, he can extract the additional data though he does not know the image content. If the receiver has the encryption key, he can decrypt the received data to obtain an image similar to the original one, but cannot extract the additional data. If the receiver has both the data-hiding key and the encryption key, he can extract the additional data and recover the original content without any error by exploiting the spatial correlation in natural image when the amount of additional data is not too large.
IEEE 2012:The Future of Cloud-Based
Entertainment
IEEE 2012 JOURNALS
& MAGAZINES
Project Price: Contact US
Abstract— This paper notes some significant trends related to the Internet and
cloud computing that will change the way entertainment is delivered and
experienced. After extrapolating some general conclusions from these trends,
two scenarios are described to illustrate predicted entertainment experiences.
IEEE 2012: AMPLE: An Adaptive Traffic Engineering System Based on Virtual Routing Topologies
IEEE 2012 COMMUNICATIONS MAGAZINE
Project Price: Contact US
Abstract— Handling traffic
dynamics in order to avoid network congestion and subsequent service
disruptions is one of the key tasks performed by contemporary network
management systems. Given the simple but rigid routing and forwarding
functionalities in IP-based environments, efficient resource management and
control solutions against dynamic traffic conditions are still to be
obtained. In this article, we introduce AMPLE — an efficient traffic
engineering and management system that performs adaptive traffic control by
using multiple virtualized routing topologies. The proposed system consists of
two complementary components: offline link weight optimization that takes as
input the physical network topology and tries to produce maximum routing path
diversity across multiple virtual routing topologies for long term operation
through the optimized setting of link weights. Based on these diverse paths,
adaptive traffic control performs intelligent traffic splitting across
individual routing topologies in reaction to the monitored network dynamics at
short timescale. According to our evaluation with real network topologies and
traffic traces, the proposed system is able to cope almost optimally with
unpredicted traffic dynamics and, as such, it constitutes a new proposal for
achieving better quality of service and overall network performance in IP
networks.
IEEE 2012: An Adaptive Opportunistic Routing Scheme for Wireless Ad-hoc Networks
IEEE 2012 NETWORKING
Project Price: Contact US
Abstract— In this paper, a
distributed adaptive opportunistic routing scheme for multi-hop wireless ad-hoc
networks is proposed. The proposed scheme utilizes a reinforcement learning
framework to opportunistically route the packets even in the absence of
reliable knowledge about channel statistics and network model. This scheme is
shown to be optimal with respect to an expected average per packet reward
criterion. The proposed routing scheme jointly addresses the issues of learning
and routing in an opportunistic context, where the network structure is
characterized by the transmission success probabilities. In particular, this
learning framework leads to a stochastic routing scheme which optimally
“explores” and “exploits” the opportunities in the network.
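A toy stand-in for the learning component: epsilon-greedy selection among candidate relays with an incremental estimate of per-packet reward. The relay names and success probabilities are invented, and the paper's scheme is distributed and provably optimal in a way this sketch is not.

    import random

    # q[relay] estimates the expected per-packet reward of forwarding via
    # that relay; the true success probabilities below are unknown to the node.
    p_success = {"A": 0.8, "B": 0.5, "C": 0.2}
    q = {r: 0.0 for r in p_success}
    counts = {r: 0 for r in p_success}

    def choose(eps=0.1):
        if random.random() < eps:            # explore: try a random relay
            return random.choice(list(q))
        return max(q, key=q.get)             # exploit: best estimate so far

    for _ in range(2000):
        r = choose()
        reward = 1.0 if random.random() < p_success[r] else 0.0
        counts[r] += 1
        q[r] += (reward - q[r]) / counts[r]  # incremental mean update

    print({r: round(v, 2) for r, v in q.items()})  # q["A"] approaches 0.8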
IEEE 2012: Topology Control in Mobile Ad Hoc Networks with Cooperative Communications
IEEE 2012 TRANSACTIONS ON WIRELESS COMMUNICATIONS
Project Price: Contact US
Abstract— Cooperative communication has received tremendous interest for wireless networks. Most existing works on cooperative communications are focused on link-level physical layer issues. Consequently, the impacts of cooperative communications on network-level upper layer issues, such as topology control, routing and network capacity, are largely ignored. In this article, we propose a Capacity-Optimized Cooperative (COCO) topology control scheme to improve the network capacity in MANETs by jointly considering both upper layer network capacity and physical layer cooperative communications. Through simulations, we show that physical layer cooperative communications have significant impacts on the network capacity, and the proposed topology control scheme can substantially improve the network capacity in MANETs with cooperative communications.
IEEE 2012:Performance of PCN-Based Admission Control Under Challenging Conditions
IEEE 2012 TRANSACTIONS ON NETWORKING
Project Price: Contact US
Abstract— Pre-congestion notification
(PCN) is a packet-marking technique for IP networks to notify egress nodes of a
so-called PCN domain whether the traffic rate on some links exceeds certain
configurable bounds. This feedback is used by decision points for admission
control (AC) to block new flows when the traffic load is already high.
PCN-based AC is simpler than other AC methods because interior routers do not
need to keep per-flow states. Therefore, it is currently being standardized by
the IETF. We discuss various realization options and analyze their performance
in the presence of flash crowds or with multipath routing by means of
simulation and mathematical modeling. Such situations can be aggravated by
insufficient flow aggregation, long round-trip times, on/off traffic, delayed
media, inappropriate marker configuration, and smoothed feedback.
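A minimal sketch of the metering side of PCN-style marking, using a token bucket so that packets are marked once the traffic rate exceeds a configurable admissible rate; real PCN markers and their standardized parameters are more elaborate than this toy.

    import time

    class TokenBucketMarker:
        """Mark packets that exceed a configured admissible rate (toy meter)."""
        def __init__(self, rate_bps, bucket_bits):
            self.rate, self.cap = rate_bps, bucket_bits
            self.tokens = bucket_bits
            self.last = time.monotonic()

        def on_packet(self, size_bytes):
            now = time.monotonic()
            # Refill tokens in proportion to elapsed time, up to the bucket cap.
            self.tokens = min(self.cap, self.tokens + (now - self.last) * self.rate)
            self.last = now
            need = size_bytes * 8
            if self.tokens >= need:
                self.tokens -= need
                return False          # within the admissible rate: unmarked
            return True               # over the configured bound: mark packet

    m = TokenBucketMarker(rate_bps=8000, bucket_bits=12000)
    marks = [m.on_packet(1500) for _ in range(10)]
    print(sum(marks), "of 10 packets marked")   # a burst gets mostly marked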
IEEE 2012: A Novel Profit Maximizing Metric for Measuring Classification Performance of Customer Churn Prediction Models
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
Project Price: Contact US
Abstract— The interest in data mining techniques has increased tremendously during the past decades, and numerous classification techniques have been applied in a wide range of business applications. Hence, the need for adequate performance measures has become more important than ever. In this paper, a cost-benefit analysis framework is formalized in order to define performance measures which are aligned with the main objectives of the end users, i.e., profit maximization. A new performance measure is defined, the expected maximum profit criterion. This general framework is then applied to the customer churn problem with its particular cost-benefit structure. The advantage of this approach is that it assists companies with selecting the classifier which maximizes the profit. Moreover, it aids with the practical implementation in the sense that it provides guidance about the fraction of the customer base to be included in the retention campaign.
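A toy version of profit-driven evaluation: sort customers by churn score and pick the cutoff that maximizes campaign profit under an assumed benefit per retained churner and cost per contact. The figures are invented; the paper's expected maximum profit criterion formalizes this by integrating over the cost-benefit parameters.

    # Pick the score cutoff (fraction of customers contacted) that maximizes
    # campaign profit. Benefit/cost figures below are illustrative assumptions.
    BENEFIT_TP = 200.0    # value saved by retaining an actual churner
    COST_CONTACT = 10.0   # offer cost per contacted customer

    # (churn score, actually churned?) for a scored customer base:
    scored = [(0.95, True), (0.90, True), (0.80, False), (0.70, True),
              (0.60, False), (0.40, False), (0.30, True), (0.10, False)]
    scored.sort(reverse=True)

    best = (float("-inf"), 0)
    profit = 0.0
    for k, (_, churned) in enumerate(scored, start=1):
        profit += (BENEFIT_TP if churned else 0.0) - COST_CONTACT
        best = max(best, (profit, k))     # best (profit, cutoff) so far
    print(f"contact top {best[1]} customers, profit {best[0]:.0f}")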
IEEE 2012: Prediction of User’s Web-Browsing Behavior: Application of Markov Model
IEEE TRANSACTIONS ON SYSTEMS AUGUST 2012
Project Price: Contact US
Abstract— Web prediction is a classification problem in which
we attempt to predict the next set of Web pages that a user may visit based on
the knowledge of the previously visited pages. Predicting users’ behavior while
surfing the Internet can be applied effectively in various critical applications.
Such application has traditional tradeoffs between modeling complexity and
prediction accuracy. In this paper, we analyze and study Markov model and all-Kth
Markov model in Web prediction. We propose a new modified Markov model to
alleviate the issue of scalability in the number of paths. In addition, we
present a new two-tier prediction framework that creates an example classifier EC,
based on the training examples and the generated classifiers. We show that such
framework can improve the prediction time without compromising prediction
accuracy. We have used standard benchmark data sets to analyze, compare, and
demonstrate the effectiveness of our techniques using variations of Markov
models and association rule mining. Our experiments show the effectiveness of
our modified Markov model in reducing the number of paths without compromising accuracy.
Additionally, the results support our analysis conclusions that accuracy
improves with higher orders of all-Kth model.
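A minimal first-order Markov predictor over page-visit sessions, for orientation; the paper's modified and all-Kth-order models extend and prune this basic idea.

    from collections import Counter, defaultdict

    # Estimate P(next | current) from transition counts, then predict the
    # most likely next pages. Sessions below are invented examples.
    def train(sessions):
        trans = defaultdict(Counter)
        for s in sessions:
            for cur, nxt in zip(s, s[1:]):
                trans[cur][nxt] += 1
        return trans

    def predict(trans, current, top=2):
        return [page for page, _ in trans[current].most_common(top)]

    sessions = [["home", "news", "sports"],
                ["home", "news", "weather"],
                ["home", "shop", "cart"],
                ["news", "sports", "scores"]]
    model = train(sessions)
    print(predict(model, "news"))   # -> ['sports', 'weather']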
IEEE 2012: Efficient
audit service outsourcing for data integrity in clouds
IEEE 2012
Transactions on Cloud Computing
Project Price: Contact US
Abstract — Cloud-based outsourced storage relieves the client’s burden of
storage management and maintenance by providing a comparably low-cost,
scalable, location-independent platform. However, the fact that clients no
longer have physical possession of their data means they face a potentially
formidable risk of missing or corrupted data. To avoid these security risks,
audit services are critical to ensure the integrity and availability of
outsourced data and to achieve digital forensics and credibility in cloud
computing. Provable data possession (PDP), a cryptographic technique that
lets a client verify that a server actually stores its outsourced data without
retrieving it, provides the basis for such audit services.
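A simplified spot-check sketch of the challenge idea behind such audit services: the client keeps a short MAC per block and later audits random blocks. Real PDP schemes use homomorphic tags so the server can prove possession without shipping blocks back; this toy is illustrative only.

    import hmac, hashlib, os

    key = os.urandom(16)
    blocks = [os.urandom(64) for _ in range(8)]          # data to outsource

    def mac(i, blk):
        return hmac.new(key, i.to_bytes(4, "big") + blk, hashlib.sha256).digest()

    local_tags = [mac(i, b) for i, b in enumerate(blocks)]   # small state kept by client
    server_store = list(blocks)                              # data held in the cloud

    def audit(i):
        returned = server_store[i]                           # server's response
        return hmac.compare_digest(mac(i, returned), local_tags[i])

    server_store[3] = os.urandom(64)                         # silent corruption
    print(audit(2), audit(3))                                # True False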
IEEE 2012: Cloud Computing
Security: From Single to Multi-Clouds
IEEE 2012 - 45th
Hawaii International Conference on System Sciences
Project Price: Contact US
Abstract — The use of cloud computing has increased rapidly in many
organizations. Cloud computing provides many benefits in terms of low cost and
accessibility of data. Ensuring the security of cloud computing is a major
factor in the cloud computing environment, as users often store sensitive
information with cloud storage providers but these providers may be untrusted.
Dealing with “single cloud” providers is predicted to become less popular with
customers due to risks of service availability failure and the possibility of
malicious insiders in the single cloud. A movement towards “multi-clouds”, or
in other words, “interclouds” or “cloud-of-clouds”, has emerged recently. This paper surveys recent research related to single
and multi-cloud security and addresses possible solutions. It is found that the
research into the use of multi-cloud providers to maintain security has
received less attention from the research community than has the use of single
clouds. This work aims to promote the use of multi-clouds due to its ability to
reduce security risks that affect the cloud computing user.