No. 1 (2025)

Cover page  1/2025

Explore the current issue of the JTIT

The current issue of the Journal of Telecommunications and Information Technology (JTIT) offers high-quality original articles and showcases the results of key research projects conducted by recognized scientists, covering a variety of topics in telecommunications and information technology, with particular emphasis on current literature, theory, research, and practice.
The articles published in this issue are available under the open access (OA), “publish-as-you-go” scheme. Four issues of JTIT are published each year.
The Journal of Telecommunications and Information Technology is the official publication of the National Institute of Telecommunications - the leading government organization focusing on advances in telecommunications technologies.

We encourage you to sign up to receive free email alerts keeping you up to date with all of the latest articles by registering here.

Published: 2025-03-31

Full Issue

ARTICLES FROM THIS ISSUE

  • Enhancing Phishing Detection in Cloud Environments Using RNN-LSTM in a Deep Learning Framework

    Abstract

    Phishing attacks targeting cloud computing services are becoming increasingly sophisticated and require advanced detection mechanisms to address evolving threats. This study introduces a deep learning approach leveraging recurrent neural networks (RNNs) with long short-term memory (LSTM) to enhance phishing detection. The architecture is designed to capture sequential and temporal patterns in cloud interactions, enabling precise identification of phishing attempts. The model was trained and validated using a dataset of 10,000 samples, adapted from the PhishTank repository. This dataset includes a diverse range of attack vectors and legitimate activities, ensuring comprehensive coverage and adaptability to real-world scenarios. The key contribution of this work is the development of a high-performance RNN-LSTM-based detection mechanism, optimized for cloud-specific phishing patterns, that achieves 98.88% accuracy. Additionally, the model incorporates a robust evaluation framework to assess its applicability in dynamic cloud environments. The experimental results demonstrate the effectiveness of the proposed approach, surpassing existing methods in accuracy and adaptability.

    Oussama Senouci, Nadjib Benaouda
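
    The gating mechanism that lets an LSTM capture sequential patterns can be illustrated with a minimal sketch. This is a generic single-cell forward step in NumPy, not the authors' detection model; all dimensions, weights, and the toy input sequence are assumed for illustration.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM forward step; gate pre-activations are stacked
    in z as [input, forget, candidate, output] blocks."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[:n])            # input gate: how much new info to write
    f = sigmoid(z[n:2 * n])       # forget gate: how much old state to keep
    g = np.tanh(z[2 * n:3 * n])   # candidate cell content
    o = sigmoid(z[3 * n:])        # output gate
    c = f * c_prev + i * g        # updated long-term (cell) state
    h = o * np.tanh(c)            # updated hidden state
    return h, c

# Run a toy sequence of 8-dimensional feature vectors through one cell
rng = np.random.default_rng(0)
d, n = 8, 16
W = rng.normal(scale=0.1, size=(4 * n, d))
U = rng.normal(scale=0.1, size=(4 * n, n))
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for _ in range(5):
    h, c = lstm_step(rng.normal(size=d), h, c, W, U, b)
```

    In a full detector, the final hidden state would feed a dense layer with a sigmoid output producing a phishing probability for the observed interaction sequence.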
  • Cat Swarm Optimization with Lévy Flight for Link Load Balancing in SDN

    Abstract

    Efficient network communications with optimal network path selection play a key role in the modern world. Conventional path selection algorithms often face numerous challenges resulting from their limited scope of application. This research proposes a modified swarm intelligence approach, cat swarm optimization (CSO) with Lévy flight, used for network link load balancing and routing optimization. CSO's quick convergence makes it suitable for rapid-response applications; however, the approach is prone to getting stuck in local optima. Lévy flight enhances search efficiency, thus aiding in escaping local optima. CSO with Lévy flight (CSO-LF) outperforms the original CSO and PSO algorithms in terms of solution quality and robustness across various benchmarks. The proposed method has been evaluated in software-defined networks (SDN), with nine benchmark functions assessed. CSO-LF achieved the best scores in both the best and worst positions. When used for link load balancing in SDN, CSO-LF demonstrated lower latency and higher throughput than both CSO and PSO in a fat tree topology.

    Kwaku Kwarteng, Kwame O. Gyasi, Justice O. Agyemang, Kwame Agyekum, Kingsford Kwakye, Ellis M. Sani, Emmanuel A. Ampomah, Kusi A. Bonsu
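
    The Lévy flight component can be sketched independently of the full CSO-LF loop. Below is Mantegna's algorithm, a standard way to draw heavy-tailed Lévy steps; the stability index beta = 1.5 is a common default, not a value taken from the paper.

```python
import math
import numpy as np

def levy_step(dim, beta=1.5, rng=None):
    """Draw a Lévy-distributed step vector via Mantegna's algorithm:
    step = u / |v|^(1/beta), with u ~ N(0, sigma^2) and v ~ N(0, 1)."""
    rng = rng or np.random.default_rng()
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)
```

    A perturbation such as `pos + 0.01 * levy_step(dim) * (pos - best)` (a common usage pattern, assumed here) mostly takes small local steps but occasionally makes a long jump, which is what helps a trapped swarm escape a local optimum.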
  • FPGA-based Low Latency Square Root CORDIC Algorithm

    Abstract

    The coordinate rotation digital computer (CORDIC) algorithm is a popular method used in many fields of science and technology. Unfortunately, it is a time-consuming process for central processing units (CPUs) and graphics processing units (GPUs), and even for specialized digital signal processing (DSP) solutions. The CORDIC algorithm is an alternative to Newton-Raphson numerical calculation and to the resource-expensive FPGA-based look-up table (LUT) method. Various modifications of the CORDIC algorithm make it possible to speed up the operation of hardware in edge computing devices. With that context taken into consideration, this article presents a fast and accurate square root floating point (SQRT FP) CORDIC function which can be implemented in field programmable gate arrays (FPGAs). The proposed algorithm offers low complexity, good accuracy, and speed sufficient for digital signal processing (DSP) applications, such as digital filters, accelerators for neural networks, machine learning and computer vision applications, and intelligent robotic systems.

    Mariusz Węgrzyn, Stepan Voytusik, Nataliia Gavkalova
    21-29
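
    As a sketch of the underlying idea (not the authors' FPGA design), the hyperbolic vectoring mode of CORDIC yields a square root through the identity a = (a + 1/4)^2 - (a - 1/4)^2. The floating-point model below uses the standard repeated iterations at i = 4, 13, 40, ..., and is valid roughly for inputs in [0.03, 2); a hardware version would pre-scale the mantissa into that range.

```python
import math

def cordic_sqrt(a, iters=20):
    """sqrt(a) via hyperbolic vectoring CORDIC: with x0 = a + 1/4 and
    y0 = a - 1/4, driving y to 0 makes x converge to K * sqrt(a),
    where K is the accumulated gain prod sqrt(1 - 2^-2i)."""
    # Iteration indices 1, 2, 3, 4, 4, 5, ..., 13, 13, ...; indices
    # 4, 13, 40, ... are repeated, as hyperbolic CORDIC requires.
    seq, i, rep = [], 1, 4
    while len(seq) < iters:
        seq.append(i)
        if i == rep:
            seq.append(i)
            rep = 3 * rep + 1
        i += 1
    x, y, k = a + 0.25, a - 0.25, 1.0
    for i in seq[:iters]:
        d = 1.0 if y < 0 else -1.0       # vectoring mode: drive y toward 0
        x, y = x + d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        k *= math.sqrt(1.0 - 4.0 ** -i)  # track the per-step gain
    return x / k                         # remove the accumulated gain
```

    In an FPGA the multiplications by 2^-i become wire shifts and the gain correction a single constant multiply, which is what makes the iteration cheap in hardware.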
  • Task Offloading and Scheduling Based on Mobile Edge Computing and Software-defined Networking

    Abstract

    When integrated with mobile edge computing (MEC), software-defined networking (SDN) allows for efficient network management and resource allocation in modern computing environments. The primary challenge addressed in this paper is the optimization of task offloading and scheduling in SDN-MEC environments. The goal is to minimize the total cost of the system, which is a function of task completion lead time and energy consumption, while adhering to task deadline constraints. This multi-objective optimization problem requires balancing the trade-offs between local execution on mobile devices and offloading tasks to edge servers, considering factors such as computation requirements, data size, network conditions, and server capacities. This research focuses on evaluating the performance of particle swarm optimization (PSO) and Q-learning algorithms under full and partial offloading scenarios. Simulation-based comparisons show that for large data quantities, PSO is more cost-efficient than Q-learning, with the cost increase equaling approximately 0.001% per kilobyte, as opposed to 0.002% in the case of Q-learning. As far as energy consumption is concerned, PSO performs 84% and 23% better than Q-learning in the case of full and partial offloading, respectively. The cost of PSO is also less sensitive to network latency conditions than that of the genetic algorithm (GA). Furthermore, the results demonstrate that Q-learning offers better scalability in terms of execution time as the number of tasks increases, and outperforms PSO for task loads of more than 40 tasks. Such observations show that PSO is better suited for large data transfers and energy-critical applications, whereas Q-learning is better suited for highly scalable environments and large numbers of tasks.

    Fatimah Azeez Rawdhan
    30-37
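
    The cost structure described above can be made concrete with a small sketch: a generic global-best PSO minimizing a toy weighted time-plus-energy offloading cost. All task parameters, device constants, and PSO coefficients below are assumed for illustration and are not the paper's models or values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy task set and device constants (all values assumed for illustration)
n_tasks = 8
cycles = rng.uniform(1e8, 5e8, n_tasks)   # CPU cycles required per task
data = rng.uniform(1e5, 1e6, n_tasks)     # bytes uploaded if offloaded
F_LOCAL, F_EDGE = 1e9, 8e9                # device / edge CPU speeds (Hz)
BW = 1e6                                  # uplink rate (bytes/s)
K_LOCAL, P_TX = 1e-27, 0.5                # CPU energy coeff., TX power (W)
ALPHA, BETA = 0.5, 0.5                    # time vs. energy weights

def cost(frac):
    """Weighted completion time + energy for offload fractions in [0, 1]."""
    f = np.clip(frac, 0.0, 1.0)
    t_local = cycles * (1 - f) / F_LOCAL
    t_edge = data * f / BW + cycles * f / F_EDGE
    energy = K_LOCAL * cycles * (1 - f) * F_LOCAL ** 2 + P_TX * data * f / BW
    return ALPHA * np.sum(np.maximum(t_local, t_edge)) + BETA * np.sum(energy)

# Standard global-best PSO over the vector of per-task offload fractions
n_particles, iters = 20, 100
pos = rng.uniform(0, 1, (n_particles, n_tasks))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([cost(p) for p in pos])
init_best = pbest_val.min()
g = pbest[np.argmin(pbest_val)].copy()
for _ in range(iters):
    r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, 0, 1)
    vals = np.array([cost(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    g = pbest[np.argmin(pbest_val)].copy()
```

    The fraction vector makes this a partial-offloading formulation; restricting each entry to {0, 1} would give the full-offloading variant the abstract also evaluates.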
  • A Hole-free Shifted Coprime Array for DOA Estimation

    Abstract

    Coprime arrays have recently become a popular approach to direction of arrival (DOA) estimation in array signal processing, as they increase the degrees of freedom (DOF). Coprime arrays utilize a pair of uniform linear subarrays to create a difference co-array with specific preferable features. In this paper, three structures are proposed, each based on shifting one of the two uniform linear subarrays. The proposed configurations yield a sequence of lags, obtained by filling the holes of the co-array, which spans the array aperture. The displacement value depends on the coprime pair of integers defining the array. The resulting virtual arrays achieved by means of the proposed methods generate MN + 1, MN + N + 1, and MN + 2N - 2 DOFs, respectively. The performance of the proposed configurations is evaluated by experimental simulations aiming to demonstrate the effectiveness of the array's design.

    Fatimah Abdulnabi Salman, Bayan Mahdi Sabbar
    38-46
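
    The effect of shifting one subarray on the difference co-array can be reproduced in a few lines. The sketch below uses a generic coprime pair and shift chosen for illustration (M = 3, N = 4, shift = 2); these are not the paper's three specific configurations, but they show how a shift relocates or fills holes in the lag set.

```python
import numpy as np

def diff_coarray(M, N, shift=0):
    """Sensor positions of a coprime pair of ULAs (the second shifted by
    `shift` unit spacings) and the set of non-negative co-array lags."""
    sub1 = np.arange(M) * N            # M sensors with inter-spacing N
    sub2 = np.arange(N) * M + shift    # N sensors with inter-spacing M
    pos = np.union1d(sub1, sub2)
    lags = np.unique(np.abs(pos[:, None] - pos[None, :]))
    return pos, lags

def holes(lags):
    """Integer lags missing below the co-array aperture."""
    return sorted(set(range(int(lags.max()) + 1))
                  - set(int(l) for l in lags))

_, lags0 = diff_coarray(3, 4, shift=0)   # classic coprime: hole at lag 7
_, lags2 = diff_coarray(3, 4, shift=2)   # shifted: contiguous lags 0..9
```

    The contiguous (hole-free) segment of lags is what determines how many sources a co-array based DOA estimator can resolve, which is why filling holes raises the usable DOF.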
  • Context-Awareness for Device-to-Device Resource Allocation

    Abstract

    The paper investigates a context-aware approach to radio resource allocation for device-to-device (D2D) communication, focusing on solutions that leverage information on user equipment location and environmental features, such as building layouts. A system enabling direct communication by sharing uplink resources with cellular users is considered. Such a system introduces mutual interference between direct and cellular communications, posing challenges related to maintaining adequate performance levels. To address these challenges, various context-based resource allocation methods are analyzed, aiming to optimize spectral efficiency and minimize interference. The study explores the impact that different D2D device densities exert on overall network performance, measured by means of spectral efficiency and the signal-to-interference ratio (SIR).

    Marcin Rodziewicz
    47-55
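
    The interference geometry such an allocator exploits can be sketched with a toy link-budget calculation; the path-loss exponent, powers, and distances below are assumed values, not the paper's system model.

```python
import math

P_D2D, P_CUE = 0.1, 0.2    # D2D and cellular transmit powers (W), assumed
NOISE = 1e-13              # receiver noise power (W), assumed
PL_EXP = 3.5               # path-loss exponent, assumed

def path_gain(d):
    return d ** -PL_EXP

def d2d_sinr(d_link, d_interf):
    """SINR at the D2D receiver when the pair reuses a cellular uplink:
    desired D2D signal vs. interference from the cellular user."""
    s = P_D2D * path_gain(d_link)
    i = P_CUE * path_gain(d_interf)
    return s / (i + NOISE)

def spectral_eff(sinr):
    return math.log2(1 + sinr)   # Shannon spectral efficiency (bit/s/Hz)

# A location-aware allocator would pair the D2D link with a cellular user
# whose position keeps the interferer far away:
se_far = spectral_eff(d2d_sinr(20, 200))   # interfering user 200 m away
se_near = spectral_eff(d2d_sinr(20, 50))   # interfering user 50 m away
```

    Knowing device locations (and, via building layouts, obstructions on the interfering path) lets the controller pick the reuse pairing with the larger effective separation, which is the gain the context-aware methods target.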
  • Semantic Segmentation of Plant Structures with Deep Learning and Channel-wise Attention Mechanism

    Abstract

    Semantic segmentation of plant images is crucial for various agricultural applications, creating the need for more robust models capable of handling images captured under a diverse range of conditions. This paper introduces an extended DeepLabV3+ model with a channel-wise attention mechanism, designed to provide precise semantic segmentation while emphasizing crucial features. It leverages semantic information with global context and is capable of handling object scale variations within the image. The proposed approach aims to provide a well-generalized model that may be adapted to various field conditions by training and testing on multiple datasets, including Eschikon wheat segmentation (EWS), humans in the loop (HIL), computer vision problems in plant phenotyping (CVPPP), and a custom "botanic mixed set" dataset. Incorporating an ensemble training paradigm, the proposed architecture achieved intersection over union (IoU) scores of 0.846, 0.665, and 0.975 on the EWS, HIL plant segmentation, and CVPPP datasets, respectively. The trained model exhibited robustness to variations in lighting, backgrounds, and subject angles, showcasing its adaptability to real-world applications.

    Mukund Kumar Surehli, Naveen Aggarwal, Garima Joshi, Harsh Nayyar
    56-66
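
    The channel-wise attention idea can be illustrated with a squeeze-and-excitation style block: pool each channel to a scalar, pass the channel descriptor through a small bottleneck, and gate the channels with the resulting weights. This is a minimal NumPy sketch with random weights and an assumed reduction ratio, not the paper's trained DeepLabV3+ extension.

```python
import numpy as np

def channel_attention(feat, W1, W2):
    """Squeeze-and-excitation style channel attention on an (H, W, C) map:
    global average pool per channel, a two-layer bottleneck, sigmoid gate."""
    z = feat.mean(axis=(0, 1))               # squeeze: one value per channel
    s = np.maximum(z @ W1, 0.0)              # excitation: ReLU bottleneck
    w = 1.0 / (1.0 + np.exp(-(s @ W2)))      # channel weights in (0, 1)
    return feat * w, w                       # reweighted map + the weights

rng = np.random.default_rng(0)
C, r = 16, 4                                 # channels and reduction ratio
feat = rng.normal(size=(8, 8, C))            # a toy feature map
W1 = rng.normal(scale=0.1, size=(C, C // r))
W2 = rng.normal(scale=0.1, size=(C // r, C))
out, w = channel_attention(feat, W1, W2)
```

    Because the gate is computed from a global pool, every spatial location in a channel is scaled by the same weight: the block emphasizes informative channels rather than locations.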
  • Ultra-wideband Antenna System Design for Future mmWave Applications

    Abstract

    An ultra-wideband planar four-element multiple-input multiple-output (MIMO) antenna array for millimeter wave (mmWave) 5G applications is presented in this article, characterized by a simple structure and diverse performance capabilities. The antenna system operates over an approximately 21 GHz bandwidth (ranging from 42.3 to 63.3 GHz), with a high gain of 7.8 dB. The compact size of 25 × 25 mm makes it suitable for integration into various telecommunication devices used in a number of mmWave applications. The antenna's elements are placed orthogonally, achieving high isolation of over 24 dB. The performance of the proposed antenna was analyzed in terms of its S-parameters, gain, efficiency, radiation patterns, and MIMO diversity characteristics, including the envelope correlation coefficient (ECC), diversity gain (DG), and mean effective gain (MEG).

    Muhannad Y. Muhsin, Zainab S. Muqdad, Asaad H. Sahar, Zainab F. Mohammad, Hussam AL-Saedi
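
    The diversity metrics named above have standard closed forms when computed from S-parameters. The sketch below uses the common lossless-antenna approximation for a two-port case; the sample S-parameter magnitudes are assumed numbers chosen to reflect the reported isolation level, not the measured values from the article.

```python
import numpy as np

def ecc_from_s(s11, s12, s21, s22):
    """Envelope correlation coefficient of a two-port antenna from its
    S-parameters (lossless-antenna approximation)."""
    num = abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = ((1 - abs(s11) ** 2 - abs(s21) ** 2)
           * (1 - abs(s22) ** 2 - abs(s12) ** 2))
    return num / den

def diversity_gain(ecc):
    """Apparent diversity gain of a two-branch system."""
    return 10.0 * np.sqrt(1.0 - ecc ** 2)

# Sample values: well-matched ports, ~26 dB isolation (assumed numbers)
s11 = s22 = 0.1
s12 = s21 = 10 ** (-26 / 20)
ecc = ecc_from_s(s11, s12, s21, s22)
dg = diversity_gain(ecc)
```

    High port isolation keeps the numerator small, which is why ECC stays near zero and DG approaches its two-branch ceiling of 10 for well-isolated MIMO elements.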
  • Compressive Sensing-based Differential Channel Feedback Scheme Using Subspace Matching Pursuit Algorithm for B5G Wireless Systems

    Abstract

    Millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems are a promising technology for next-generation 5G wireless systems and beyond. Sparse signal recovery and channel feedback are challenging and fundamental problems affecting downlink transmission, due to the substantial increase in channel matrix size in mmWave systems. To reduce channel feedback overhead and improve compressive sensing (CS) recovery effectiveness, this article proposes the joint use of the subspace matching pursuit algorithm and a differential operation on the channel impulse response (CIR). Here, the current CIR is converted to a differential CIR using operations between the current and previous CIRs. The differential CIR is then compressed and fed back to the base station. Subsequently, this differential CIR is recovered using the subspace matching pursuit algorithm. Such a scheme leverages effective structural sparsity through a combination of subspace and differential operations. The algorithm adaptively selects relevant subspaces based on the coefficients. The simulation results show that the proposed scheme reduces channel overhead by 36% and 24% at compression ratios of 25% and 45%, respectively, over different time slots in mmWave massive MIMO systems.

    Baranidharan V, Surendar M
    74-80
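
    The differential-feedback idea can be reproduced end to end in a few lines. The recovery step below uses plain orthogonal matching pursuit as a stand-in for the paper's subspace matching pursuit algorithm, and all dimensions, sparsity levels, and tap positions are assumed for illustration.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit (generic stand-in for the paper's
    subspace matching pursuit): greedily pick k atoms, refit by LS."""
    m, n = Phi.shape
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ r))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

rng = np.random.default_rng(7)
n, m, k = 128, 50, 3

# Two consecutive CIRs share most taps, so their difference is much sparser
h_prev = np.zeros(n)
h_prev[[5, 20, 40, 70, 90, 110]] = rng.normal(size=6)
h_curr = h_prev.copy()
h_curr[[20, 70, 111]] += rng.normal(size=3)   # only a few taps change

delta = h_curr - h_prev                       # differential CIR (k-sparse)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)    # compression matrix
y = Phi @ delta                               # compressed feedback (m << n)
delta_hat = omp(Phi, y, k)
h_hat = h_prev + delta_hat                    # base-station reconstruction
```

    Feeding back the m-dimensional measurement of the sparser differential CIR, rather than the full CIR, is where the overhead reduction comes from; the base station adds the recovered difference to its stored previous CIR.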
  • Optimizing Spectral and Energy Efficiency of Massive MIMO Networks Using MVO and API Algorithms

    Abstract

    Wireless communication, especially that relying on 5G technology, plays a crucial role in modern networks. The use of massive multiple-input, multiple-output (MIMO) systems is one of the key advancements in this area, as it improves energy efficiency (EE) and spectral efficiency (SE), making such a technique critical for future communication networks. This article focuses on optimizing EE and SE using a new metaheuristic, the multi-verse optimization (MVO) algorithm, and compares the results obtained with those achieved using the Pachycondyla apicalis (API) algorithm and other methods. Furthermore, the study explores the best values for factors such as coherence time, power amplifier efficiency, and hardware power per user, all of which play a critical role in maximizing EE. The authors also examine the correlation between EE and SE in the downlink direction. The results show that the MVO approach achieves better performance in fewer iterations compared to API and other methods, demonstrating its potential for improving wireless communication systems.

    Hiba Ines Bitat, Fouzia Maamri, Fatima Khelfaoui, Hanane Djellab, Yacine Belhocine, Yacine Messai
    81-91
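
    The EE-SE coupling such optimizers navigate can be seen in a single-link toy model: pushing SE higher demands exponentially more transmit power, so EE peaks at an interior SE value, which is the kind of optimum a metaheuristic like MVO or API searches for. All parameters below are assumed round numbers, not the paper's system model.

```python
import numpy as np

# Assumed single-link parameters (illustrative only)
B = 1e7          # bandwidth (Hz)
N0 = 3.98e-21    # noise power spectral density (W/Hz), about -174 dBm/Hz
G = 1e-10        # channel gain (about 100 dB path loss)
ETA = 0.35       # power-amplifier efficiency
P_HW = 0.2       # hardware power per user (W)

se = np.linspace(0.1, 15.0, 300)          # spectral efficiency (bit/s/Hz)
p_tx = (2.0 ** se - 1.0) * N0 * B / G     # TX power needed to reach each SE
ee = B * se / (p_tx / ETA + P_HW)         # energy efficiency (bit/J)
best = int(np.argmax(ee))                 # interior EE-optimal SE point
```

    Below the peak, hardware power dominates and raising SE improves EE; above it, the exponential power cost takes over, which is the downlink EE-SE trade-off the abstract refers to.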