No. 4 (2014)
ARTICLES FROM THIS ISSUE
-
Model-Based Availability Evaluation of Composed Web Services
Abstract
Web services composition is an emerging software development paradigm for the implementation of distributed computing systems, whose impact is highly relevant both in research and in industry. When a complex functionality has to be delivered over the Internet, a service integrator can produce added value by delivering more abstract and complex services obtained by composing existing ones. But while the availability of isolated services can be improved by tuning and reconfiguring their hosting servers, with Composed Web Services (CWS) the basic services must be taken as they are. In this case, it is necessary to evaluate the effects of the composition. The authors propose a high-level analysis methodology, supported by a tool, based on the transformation of BPEL descriptions of CWS into models expressed in the fault tree availability evaluation formalism. It enables a modeler unfamiliar with the underlying combinatorial probabilistic mathematics to evaluate the availability of a CWS, given the availability of its components and the expected execution behavior.
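As a minimal illustration of the underlying availability calculus (a sketch only, not the authors' BPEL-to-fault-tree tool), the availability of a composition can be derived from component availabilities: services that are all required multiply their availabilities, while a redundant pair is unavailable only if both replicas are down. The service names and numbers below are hypothetical.

# Minimal sketch: availability of a composed service from component
# availabilities, using the fault-tree view in which the composition fails
# if any required service fails (OR gate over failures) and a redundant
# pair fails only if both replicas fail (AND gate over failures).

def series(*avail):
    """All services required: availability is the product."""
    p = 1.0
    for a in avail:
        p *= a
    return p

def parallel(*avail):
    """Redundant replicas: unavailable only if every replica is down."""
    q = 1.0
    for a in avail:
        q *= (1.0 - a)
    return 1.0 - q

# Hypothetical CWS: invokes service A, then either replica of B, then C.
A, B1, B2, C = 0.999, 0.99, 0.98, 0.995
print(series(A, parallel(B1, B2), C))   # ~0.9938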
-
FIM-SIM: Fault Injection Module for CloudSim Based on Statistical Distributions
Abstract
ICT systems are evolving rapidly in the way data is accessed and used. Cloud computing is an innovative way of using and providing computing resources to businesses and individuals, and it has rapidly gained popularity in recent years. In this context, users' expectations are increasing and cloud providers are facing huge challenges. One of these challenges is fault tolerance, and both researchers and companies have focused on finding and developing strong fault tolerance models. To validate these models, cloud simulation tools are used as an easy, flexible and fast solution. This paper proposes a Fault Injection Module for the CloudSim tool (FIM-SIM) to help cloud developers test and validate their infrastructure. FIM-SIM follows the event-driven model and inserts faults into CloudSim based on statistical distributions. The authors have tested and validated it by conducting several experiments designed to highlight the influence of the statistical distributions on the generated failures and to observe the behavior of CloudSim in its current state and implementation.
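FIM-SIM itself is a module for the Java-based CloudSim simulator; the following Python sketch (with assumed host counts and failure rates) only illustrates the core idea of generating fault events whose inter-arrival times follow chosen statistical distributions and replaying them in time order.

# Illustrative sketch only: generate host-failure events whose
# inter-arrival times follow a chosen statistical distribution, then
# process them in time order as an event-driven fault injector would.
import heapq
import random

def generate_failures(n_hosts, horizon, dist):
    """Return (time, host_id) failure events up to the simulation horizon."""
    events = []
    for host in range(n_hosts):
        t = 0.0
        while True:
            t += dist()                  # sample the next inter-failure time
            if t > horizon:
                break
            heapq.heappush(events, (t, host))
    return events

random.seed(42)
exponential = lambda: random.expovariate(1 / 500.0)   # mean 500 s between faults
weibull     = lambda: random.weibullvariate(500.0, 1.5)

events = generate_failures(n_hosts=4, horizon=3600.0, dist=weibull)
while events:
    t, host = heapq.heappop(events)
    print(f"t={t:8.1f} s  inject fault on host {host}")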
-
Comparative Study of Supervised Learning Methods for Malware Analysis
Abstract
Malware is software designed to disrupt or even damage a computer system, or to perform other unwanted actions. Nowadays, malware is a common threat on the World Wide Web. Anti-malware protection and intrusion detection can be significantly supported by a comprehensive and extensive analysis of data on the Web. The aim of such analysis is the classification of the collected data into two sets, i.e., normal and malicious data. In this paper the authors investigate the use of three supervised learning methods for data mining to support malware detection. The results of applying the Support Vector Machine, Naive Bayes and k-Nearest Neighbors techniques to the classification of data collected from devices located in many units, organizations and monitoring systems serviced by CERT Poland are described. The performance of all methods is compared and discussed. The results of the performed experiments show that supervised learning algorithms can be successfully used for computer data analysis and can support computer emergency response teams in threat detection.
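A minimal scikit-learn sketch of the comparison described above, with synthetic data standing in for the CERT Poland dataset; it only shows how the three classifiers are trained and scored on a binary normal-vs-malicious task.

# Sketch only: compare SVM, Naive Bayes and k-NN on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.7, 0.3],
                           random_state=0)          # 0 = normal, 1 = malicious
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("Naive Bayes", GaussianNB()),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))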
-
Usability Analysis of a Novel Biometric Authentication Approach for Android-Based Mobile Devices
Abstract
Mobile devices are widely replacing standard personal computers thanks to their small size and user-friendly operation. As a consequence, the amount of information, often confidential, exchanged through these devices is rising. This makes them potential targets for malicious network hackers. The use of simple passwords or PINs is not sufficient to provide a suitable security level for applications requiring high protection of data and services. In this paper a biometric authentication system, running as an Android application, has been developed and implemented on a real mobile device. A system test on real users has also been carried out in order to evaluate the quality of the human-machine interaction, the recognition accuracy of the proposed technique, the scheduling latency of the operating system, and the system's degree of acceptance. Several measures, such as system usability, user satisfaction, and tolerable identification speed, have been collected in order to evaluate the performance of the proposed approach.
-
Bayesian Network Based Fault Tolerance in Distributed Sensor Networks
Abstract
A Distributed Sensor Network (DSN) consists of a set of sensors interconnected by a communication network. A DSN is capable of acquiring and processing signals, communicating, and performing simple computational tasks. Such sensors can detect and collect data concerning any sign of node failure, earthquakes, floods and even terrorist attacks. Energy efficiency and fault-tolerant network control are the most important issues in the development of DSNs. In this work, two methods are proposed to achieve fault tolerance: fault detection and fault recovery, both based on Bayesian Networks (BNs). A Bayesian Network is used to aid reasoning and decision making under uncertainty. The main objective of this work is to provide a fault tolerance mechanism that is energy efficient and responsive to the network, using BNs. The BN is also used to detect energy depletion of a node, link failure between nodes, and packet errors in the DSN. The proposed model detects faults at the node, sink and network levels (link failures and packet errors). The proposed fault recovery model achieves fault tolerance by adjusting the network of randomly deployed sensor nodes based on the computed probabilities. Finally, the performance parameters of the proposed scheme are evaluated.
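As a toy illustration of the Bayesian reasoning involved (not the paper's actual network structure, and with assumed conditional probabilities), the posterior probability that a node has failed, given that the sink received no packets from it, follows directly from Bayes' rule:

# Toy illustration: posterior probability that a sensor node has failed,
# given that the sink received no packets from it. All probabilities are
# assumed for the example.

p_fault              = 0.05   # prior probability a node is faulty
p_silent_given_fault = 0.95   # faulty node (energy depleted / link down) stays silent
p_silent_given_ok    = 0.10   # healthy node may still be silent (packet error, sleep)

p_silent = (p_silent_given_fault * p_fault +
            p_silent_given_ok * (1 - p_fault))

p_fault_given_silent = p_silent_given_fault * p_fault / p_silent
print(f"P(fault | no packets) = {p_fault_given_silent:.3f}")   # ~0.333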
-
Partner Selection Using Reputation Information in n-player Cooperative Games
Abstract
To study the evolution of cooperation in populations, it is common to use games to model the interactions between individuals. When these games are n-player, it may be difficult to assign responsibility for defection to any particular individual. In this paper the authors present an agent-based model in which each agent maintains reputation information about other agents. This information is used for partner selection before each game. Every agent collects information from the successive games it plays and updates a private reputation estimate of its candidate partners. This approach is integrated with a variable-sized population model in which agents are born, interact, reproduce and die, thus presenting a possibility of extinction. The results obtained for the evolution of cooperation in a population show an improvement over previous models in which partner selection did not use any reputation information. Populations are able to survive longer by selecting partners based merely on an estimate of the others' reputations.
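A short sketch of one possible reading of this mechanism (with assumed parameters, not the authors' exact model): each agent privately tracks the observed cooperation rate of every candidate partner, updates it after each game, and prefers the partners with the highest estimates.

import random

class Agent:
    def __init__(self, name, coop_prob):
        self.name = name
        self.coop_prob = coop_prob          # true (hidden) tendency to cooperate
        self.reputation = {}                # name -> (cooperations, games)

    def select_partners(self, candidates, k):
        def estimate(other):
            c, g = self.reputation.get(other.name, (0, 0))
            return c / g if g else 0.5      # neutral prior for strangers
        return sorted(candidates, key=estimate, reverse=True)[:k]

    def observe(self, other, cooperated):
        c, g = self.reputation.get(other.name, (0, 0))
        self.reputation[other.name] = (c + cooperated, g + 1)

random.seed(1)
agents = [Agent(f"a{i}", random.uniform(0.2, 0.9)) for i in range(10)]
focal = agents[0]
for _ in range(200):                        # repeated 4-player games
    group = focal.select_partners(agents[1:], k=3)
    for other in group:
        focal.observe(other, random.random() < other.coop_prob)
print({a.name: round(a.coop_prob, 2) for a in focal.select_partners(agents[1:], 3)})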
-
An Agent-Based Collaborative Platform for the Optimized Trading of Renewable Energy within a Community
Abstract
Cities are increasingly recognized for their ability to play a catalytic role in addressing climate and energy challenges through technologically innovative approaches. Since energy used in urban areas accounts for about 40% of total EU energy consumption, a change of direction towards renewable energy is necessary in order to reduce the usage of carbon-intensive electricity and also to save money. A combination of IT and telecommunication technologies is necessary to enable such energy and resource savings. ICT-based solutions can be used to enable energy and money savings not only for a single building, but for the whole community of a neighborhood. In this paper a model for minimizing the energy cost of a neighborhood is presented, together with an agent-based interaction model that reproduces the proposed formal representation. Furthermore, the authors present a prototype implementation of this model and first experimental tests.
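As a back-of-the-envelope illustration of the intended benefit (prices and household balances below are assumed, not taken from the paper): surplus renewable energy traded inside the community at a price below the grid tariff lowers the neighborhood's total cost.

# Illustrative sketch only: greedy intra-community trade in which locally
# produced surplus is sold to neighbours below the grid price before any
# grid energy is bought.

GRID_PRICE, LOCAL_PRICE = 0.30, 0.18          # assumed EUR per kWh

# net energy balance of each household for one period (kWh); + = surplus
balance = {"house_A": +4.0, "house_B": -3.0, "house_C": +1.5, "house_D": -5.0}

surplus = sum(v for v in balance.values() if v > 0)
demand  = -sum(v for v in balance.values() if v < 0)

traded_locally = min(surplus, demand)
from_grid      = demand - traded_locally

cost_without_sharing = demand * GRID_PRICE
cost_with_sharing    = traded_locally * LOCAL_PRICE + from_grid * GRID_PRICE
print(f"community cost: {cost_with_sharing:.2f} EUR "
      f"(vs {cost_without_sharing:.2f} EUR without sharing)")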
-
Data and Task Scheduling in Distributed Computing Environments
Abstract
Data-aware scheduling in today's large-scale heterogeneous environments has become a major research and engineering issue. Data Grids (DGs), Data Clouds (DCs) and Data Centers are designed to support the processing and analysis of massive data, which can be generated by distributed users, devices and computing centers. Data scheduling must be considered jointly with the application scheduling process. This generates a wide family of global optimization problems with new scheduling criteria, including data transmission time, data access and processing times, reliability of the data servers, and security of the data processing and data access processes. In this paper, a new version of the Expected Time to Compute Matrix (ETC Matrix) model is defined for independent batch scheduling in physical networks in DG and DC environments. In this model, the completion times of the computing nodes are estimated based on the standard ETC Matrix and the data transmission times. The proposed model has been empirically evaluated on a static grid scheduling benchmark by using simple genetic-based schedulers. A simple comparison of the achieved results for two basic scheduling metrics, namely makespan and average flowtime, with the results generated when the data scheduling phase is ignored shows the significant impact of the data processing model on the schedule execution times.
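A compact sketch of the evaluation step (a greedy minimum-completion-time assignment stands in for the authors' genetic schedulers, and the ETC and transfer-time values are invented): completion times combine the classic ETC entries with data transmission times, from which makespan and average flowtime follow.

ETC = [  # ETC[task][machine]: expected compute time (s)
    [10, 14,  9],
    [ 7, 11, 12],
    [15,  8, 10],
    [ 6,  9,  7],
]
TRANSFER = [  # TRANSFER[task][machine]: time to stage the task's data (s)
    [3, 1, 4],
    [2, 2, 1],
    [1, 5, 2],
    [4, 1, 1],
]

ready = [0.0, 0.0, 0.0]       # next free time of each machine
flowtimes = []
for task in range(len(ETC)):
    # pick the machine minimising this task's completion time
    best = min(range(3), key=lambda m: ready[m] + TRANSFER[task][m] + ETC[task][m])
    ready[best] += TRANSFER[task][best] + ETC[task][best]
    flowtimes.append(ready[best])

print("makespan        :", max(ready))
print("average flowtime:", sum(flowtimes) / len(flowtimes))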
-
Statistical Analysis of Message Delay in SIP Proxy Server
Abstract
The single-hop delay of SIP messages passing through a SIP proxy server operating in a carrier's backbone network is analyzed. The results indicate that message sojourn times inside the SIP server in most cases do not exceed tens of milliseconds (99% of all SIP-I messages experience less than 21 ms of sojourn delay), but very large delays were also observed which can hardly be attributed to message-specific processing procedures. The delays are observed to be highly variable, and the delay components to be identified are neither exponentially distributed nor nearly constant, even per message type or size. The authors show that the measured waiting time and the minimum transit time through the SIP server can be approximated by acyclic phase-type distributions, but the accuracy of the approximation at very high quantiles depends on the number of outliers in the data. This finding suggests that modeling the SIP server as a G|PH|c queueing system may serve as an adequate solution.
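The following sketch uses synthetic numbers, not the measured traces, to illustrate the kind of comparison involved: a two-branch hyperexponential (the simplest acyclic phase-type distribution) matches moderate quantiles of the "measurements" well, while a handful of large outliers dominates the very high quantiles.

import numpy as np

rng = np.random.default_rng(0)

# assumed model: with prob. 0.97 fast path (mean 2 ms), else slow path (mean 15 ms)
p, mean_fast, mean_slow = 0.97, 2.0, 15.0

def hyperexp_sample(n):
    fast = rng.random(n) < p
    return np.where(fast, rng.exponential(mean_fast, n), rng.exponential(mean_slow, n))

model = hyperexp_sample(100_000)

# synthetic "measurements": same model plus a handful of huge outliers
measured = np.concatenate([hyperexp_sample(10_000), rng.uniform(100, 400, 20)])

for q in (0.99, 0.999, 0.9999):
    print(f"q={q}: model {np.quantile(model, q):6.1f} ms, "
          f"measured {np.quantile(measured, q):6.1f} ms")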
-
FTTH Network Optimization
Abstract
Fiber To The Home (FTTH) is the most ambitious of the optical technologies applied in the access segment of telecommunications networks. The main issues in deploying FTTH are the device price and the installation cost. While the costs of optical devices are gradually decreasing, the cost of optical cable installation remains challenging. In this paper, an optimization problem with practical application to FTTH networks is presented. Because the problem is NP-hard (non-deterministic polynomial-time hard), an approximation algorithm to solve it is proposed. The author has implemented the algorithm in a C# program in order to analyze its performance. The analysis confirms that the algorithm achieves near-optimal results with acceptable time consumption. Therefore, the algorithm is proposed for use in a network design tool for FTTH network planning.
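The abstract does not spell out the optimization problem or the approximation algorithm; as a generic illustration of the kind of technique used for cable-layout problems, the sketch below builds a minimum spanning tree over a hypothetical duct graph with Prim's algorithm, a common building block for such approximations.

import heapq

def mst_cost(n_nodes, edges):
    """Prim's algorithm; edges are (u, v, installation_cost)."""
    adj = {u: [] for u in range(n_nodes)}
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    seen, total = {0}, 0.0
    heap = list(adj[0])
    heapq.heapify(heap)
    while heap and len(seen) < n_nodes:
        w, v = heapq.heappop(heap)
        if v in seen:
            continue
        seen.add(v)
        total += w
        for edge in adj[v]:
            heapq.heappush(heap, edge)
    return total

# node 0 = central office, 1-4 = homes; weights = assumed trenching cost per segment
edges = [(0, 1, 120), (0, 2, 200), (1, 2, 60), (1, 3, 150), (2, 4, 90), (3, 4, 80)]
print("approximate cable cost:", mst_cost(5, edges))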
-
Efficient Performance and Lower Complexity of Error Control Schemes for WPAN Bluetooth Networks
Abstract
This paper presents a new technique for reducing retransmission time by decreasing the number of discarded packets and limiting complexity through error control techniques. The work is based on Bluetooth, one of the most common Wireless Personal Area Network (WPAN) technologies. Its early versions employ an expurgated Hamming code for error correction. In this paper, a new packet format using a different error correction coding scheme, together with new formats for the Enhanced Data Rate (EDR) Bluetooth packets, is presented. A study of the Packet Error Probability of classic and EDR packets is also presented to indicate the performance. The simulation experiments are performed over Additive White Gaussian Noise (AWGN) and Rayleigh flat-fading channels. The experimental results reveal that the proposed coding scheme for EDR packets enhances the power efficiency of the Bluetooth system and reduces the losses of EDR packets.
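As a generic illustration of how packet error probability follows from the bit error rate and the payload protection (the proposed scheme itself is not reproduced here), the sketch compares an uncoded payload with one protected by the (15,10) shortened Hamming code of classic Bluetooth's 2/3-rate FEC, which corrects one bit per block; the payload size is assumed.

def per_uncoded(ber, bits):
    # packet fails if any payload bit is in error
    return 1 - (1 - ber) ** bits

def per_hamming(ber, bits):
    # 10 payload bits per 15-bit codeword; a block survives 0 or 1 bit errors
    blocks = bits // 10
    block_ok = (1 - ber) ** 15 + 15 * ber * (1 - ber) ** 14
    return 1 - block_ok ** blocks

payload_bits = 240                          # assumed payload size
for ber in (1e-4, 1e-3, 1e-2):
    print(f"BER={ber:.0e}  uncoded PER={per_uncoded(ber, payload_bits):.4f}  "
          f"FEC PER={per_hamming(ber, payload_bits):.4f}")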
-
Design and Development of Miniature Dual Antenna GPS-GLONASS Receiver for Uninterrupted and Accurate Navigation
Abstract
Global Positioning System (GPS), Global Navigation Satellite System (GLONASS), and combined GPS-GLONASS receivers are commonly used for navigation. However, there are some applications where a single antenna interface to a GPS or GPS-GLONASS receiver will not suffice. For example, an airborne platform such as an Unmanned Aerial Vehicle (UAV) needs multiple antennas during maneuvering. Also, some applications need redundant antenna connectivity to prevent loss of positioning if a link to a satellite fails. The scope of this work is to design a dual antenna GPS-GLONASS navigation receiver and implement it in a very small form factor to serve multiple needs: provide redundancy when a link fails, provide uninterrupted navigation even while maneuvering, and provide improved performance by combining data from both signal paths. Both the hardware and software architectures are analyzed before implementation. A set of objectives is identified for the receiver, which serve as the benchmarks against which the receiver is validated. Both the analysis and the objectives are highlighted in this paper. The tests conducted on such a dual antenna GPS-GLONASS receiver have given positive results on several counts, which promise a wider target audience for such a solution.
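The abstract does not describe the combining logic; one standard way to merge two signal paths, shown below purely as a hypothetical sketch with invented values, is inverse-variance weighting of the two position fixes, so that the better-conditioned path dominates.

def fuse(fix_a, var_a, fix_b, var_b):
    """fix_*: (lat, lon) in degrees; var_*: assumed position variance (m^2)."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    return tuple((w_a * a + w_b * b) / (w_a + w_b) for a, b in zip(fix_a, fix_b))

# path A partially shadowed during a maneuver (larger variance)
fix_a, var_a = (12.97160, 77.59460), 25.0
fix_b, var_b = (12.97152, 77.59452), 4.0
print(fuse(fix_a, var_a, fix_b, var_b))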
-
Model that Solves the Information Recovery Problems
Abstract
A major part of information assurance in information and communication systems (ICS) is the provision of methods for monitoring, optimization and forecasting of facilities. Accordingly, an important information security issue is the challenge of improving the accuracy of monitoring systems. One way is to recover the information from the primary control sensors. Such sensors may be implemented as technical devices, or as hardware and software systems. This paper reviews and analyzes information recovery models that use data from monitoring systems observing the state of information system objects, and highlights their advantages and disadvantages. The aim of the proposed modeling is to improve the accuracy of monitoring systems.
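None of the surveyed recovery models is reproduced here; the sketch below only illustrates the simplest case of restoring a lost monitoring sample by linear interpolation between neighbouring valid readings (the values are invented, and gaps are assumed to be isolated and not at the ends of the series).

timestamps = [0, 10, 20, 30, 40]            # seconds
readings   = [21.0, 21.4, None, 22.1, 22.4] # None = lost sample

def recover(ts, values):
    out = list(values)
    for i, v in enumerate(values):
        if v is None:
            t0, v0 = ts[i - 1], values[i - 1]
            t1, v1 = ts[i + 1], values[i + 1]
            out[i] = v0 + (v1 - v0) * (ts[i] - t0) / (t1 - t0)
    return out

print(recover(timestamps, readings))        # missing value restored as 21.75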