A printable PDF is at the end of the page.
Program Summary
Monday, October 16
Detailed Program
Monday, October 16
Monday, October 16 9:00 – 10:30
Special Session 1: Delay Doppler Communications and Sensing
- 9:00 Fine Doppler Resolution Channel Estimation and Offset Gradient Descent Equalization for OTFS Transmission over Doubly Selective Channels
-
Orthogonal time frequency space (OTFS) modulation offers attractive performance in coping with doubly selective channels. In this paper, we propose a time-domain training-sequence-aided transmission and an offset gradient descent equalization to reduce channel estimation and equalization complexity and enable better adaptation to channels with either integer or fractional Doppler shift. Our proposed scheme leverages an extended-frame OTFS structure, which consists of multiple original OTFS frames, to achieve finer Doppler resolution and hence alleviate the impact of fractional Doppler when the frame is sufficiently long. We also propose an offset gradient descent equalization method, which exploits the structure of the channel matrix to significantly reduce the complexity. Simulation results validate the scheme and demonstrate that the proposed transmission scheme achieves performance similar to MMSE equalization at significantly reduced complexity.
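For intuition, a minimal sketch of a gradient-descent equalizer for a generic linear model y = Hx + n; the step size, iteration count, and random channel below are illustrative assumptions and do not reproduce the paper's offset formulation or its structure-based complexity reduction:

```python
import numpy as np

def gd_equalize(H, y, step=0.05, iters=200):
    """Gradient-descent equalizer for y = H x + n (illustrative only).

    Minimizes ||y - H x||^2 by iterative updates; the paper's offset
    gradient descent additionally exploits the channel-matrix structure
    to cut complexity, which is not reproduced here."""
    x = np.zeros(H.shape[1], dtype=complex)
    for _ in range(iters):
        grad = H.conj().T @ (H @ x - y)   # gradient of the squared error
        x -= step * grad
    return x

# Toy usage with a random channel matrix standing in for the DD-domain channel
rng = np.random.default_rng(0)
H = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
H /= np.linalg.norm(H, 2)                 # normalize so the step size is stable
x_true = rng.choice([-1.0, 1.0], size=64) + 0j
y = H @ x_true + 0.01 * rng.normal(size=64)
x_hat = gd_equalize(H, y)
```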
- 9:20 Data-Driven OTFS Channel Estimation Based on Gated Recurrent Convolutional Autoencoder
-
Considering traffic environments with high-mobility vehicles, orthogonal time frequency space (OTFS) has become an emerging technology for handling rapidly time-varying channels in vehicular communications. Due to the sparse representation of the channel in the delay-Doppler (DD) domain, the related channel information can be estimated by means of the embedded pilot technique. However, the uncertainties of unknown and burst noise can cause system performance degradation. To tackle this problem, in this paper, we propose a novel gated recurrent convolutional autoencoder (GRCAE) model to denoise the complex noise for channel estimation in OTFS systems. Specifically, the proposed model can distinguish and retain the significant features of the signal during the denoising process through the gated recurrent unit (GRU) network. Meanwhile, the convolutional autoencoder can better capture the local spatial features of the signal and reconstruct them to obtain a denoised signal. The parallel procedure further improves the denoising accuracy and robustness. Our simulation results demonstrate that the proposed GRCAE-based approach presents satisfactory performance with low computational and time complexity in various noise scenarios.
- 9:40 A Compressive Sensing and Denoising RCAN-Based Channel Estimation Scheme for OTFS System
-
Orthogonal time frequency space (OTFS) systems effectively mitigate severe Doppler shifts, providing extensive prospects for high-mobility applications. Due to the sparsity of the delay-Doppler (DD) domain channel, compressive sensing (CS) is the method used to address the majority of current OTFS channel estimation (CE) problems. However, these CS-based methods suffer from the drawback of challenging noise elimination for non-zero elements. In this paper, a denoising network-based CS algorithm is proposed for CE in the OTFS system. Specifically, we utilize the sparsity adaptive matching pursuit (SAMP) algorithm to perform the reconstruction of the DD domain channel and generate an initial channel response. Then, a residual channel attention network (RCAN) is designed for denoising, which can generate an accurate channel response. The proposed RCAN trains a deep network model through the residual in residual (RIR) structure and uses the channel attention (CA) mechanism to assign different weights to channel-wise features. The simulation results demonstrate that the denoising RCAN-based CE scheme exhibits a commendable trade-off between achieving exceptional performance and maintaining low complexity. Furthermore, the scheme’s remarkable robustness to channel mismatch and variations in speed has been well substantiated.
- 10:00 Power Allocation for OTFS-Based AirComp System with Robust Precoding
-
In this work, we consider a high-mobility AirComp scenario, where orthogonal time frequency space (OTFS) modulation is employed to eliminate the effects of high-mobility channels. Specifically, a two-stage transmission scheme is first developed for the considered system. The channel estimated in the first stage is used to design the minimum mean square error precoder applied to data transmission in the second stage for the current frame. To further enhance the computation accuracy, the estimated channel is used for the power allocation between the pilot symbol and data symbols for the next frame. Therein, we derive the normalized mean square error (NMSE) by taking into account the imperfect channel estimation and then find the optimal power allocation which minimizes the computation NMSE under the total power constraint via the interior point method. The simulation results show that, compared to the benchmark schemes (i.e., the non-robust precoder and the robust precoder without power allocation), the proposed scheme can effectively improve the computation accuracy of the AirComp system.
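A minimal sketch of constrained power allocation solved with an interior-point style method, assuming a stand-in NMSE surrogate; the objective shape, system constants, and variable names below are assumptions, not the NMSE expression derived in the paper:

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

P_TOTAL, N_DATA, NOISE = 1.0, 64, 0.01   # illustrative system constants

def nmse_surrogate(p):
    """Stand-in objective: channel-estimation error shrinks with pilot power,
    computation error shrinks with per-symbol data power. The paper derives
    the exact NMSE; this is only a placeholder shape."""
    p_pilot, p_data = p
    est_err = NOISE / (NOISE + p_pilot)      # poorer CE with a weak pilot
    comp_err = NOISE / (NOISE + p_data)      # poorer AirComp with weak data symbols
    return est_err + comp_err

# Total power budget: p_pilot + N_DATA * p_data <= P_TOTAL
cons = LinearConstraint([[1.0, N_DATA]], lb=0.0, ub=P_TOTAL)
res = minimize(nmse_surrogate,
               x0=[P_TOTAL / 2, P_TOTAL / (2 * N_DATA)],
               bounds=[(1e-6, None), (1e-6, None)],
               constraints=[cons], method="trust-constr")
p_pilot_opt, p_data_opt = res.x
```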
Monday, October 16 9:00 – 10:30
Regular session 1: Deep Learning
- 9:00 A Novel Weights-Less Watermark Embedding Method for Neural Network Models
-
Deep learning-based Artificial Intelligence (AI) technology has been extensively used recently. AI model theft is a regular occurrence. As a result, many academics focus their efforts on safeguarding the Intellectual Property (IP) of trained Neural Network (NN) models. The majority of the most recent white-box setting watermark embedding methods rely on modifying model weights. Weights updated for the NN model during training must take into account the initial task as well as the embedding of watermarks. As a result, the accuracy of the initial task will be affected, necessitating more training time. This research proposes a novel weights-less watermark embedding method for deep neural networks to address this issue. Without actually embedding the watermark within the NN model weights, it uses a principle of code matching between the watermark and the weights. The proposed method requires less time than existing white-box setting watermark embedding methods, and the accuracy of the original task is not much diminished. Additionally, since the NN model weights are left alone, their statistical distribution will remain unchanged, giving the model increased resistance to watermark detection. The experiments in this paper demonstrate the effectiveness, efficiency, and robustness of our method.
- 9:18 A Generative Adversarial Networks-Based Integer Overflow Detection Model for Smart Contracts
-
Due to the rapid development of blockchain technology in recent years, smart contracts have been widely applied in critical fields such as finance, insurance, healthcare, and the Internet of Things. However, smart contracts face increasingly serious security issues due to their unique operating environment and programming characteristics. We focus on Ethereum-based smart contracts and propose a high-precision and versatile detection method to address the integer overflow vulnerability, which significantly affects smart contract development and execution. Our method can also mitigate the problem of possible data shortage. Specifically, we utilize code embedding algorithms to convert Solidity-compiled smart contracts into spatial vectors, thereby retaining as much syntax and semantic information as possible. Based on this, we use a Generative Adversarial Network (GAN) trained on a small vector dataset to generate a substantial amount of synthetic data. Our proposed model combines GAN discriminator feedback and vector similarity analysis to identify smart contracts that contain integer overflow vulnerabilities.
- 9:36 Remarks on an Optimal Predictive Control Using a Quaternion Neural Network and a Derivative-Free Optimisation Approach
-
In this study, the possibility of using quaternion neural networks (QNNs) in control systems was explored. The QNN was applied to a prediction model and its effectiveness in achieving optimal predictive control of non-linear systems was investigated. Computational experiments of the optimal predictive control using the QNN for a tracking task of a discrete-time non-linear plant were conducted. Experimental results validate the feasibility and effectiveness of QNN for controlling non-linear systems.
- 9:54 A Comparative Study of Artificial Intelligence-Based Algorithms for Bitwise Decoding of Error Correction Codes
-
The development of computationally efficient algorithms is crucial to support the extremely low latency and ultra-high reliability requirements of the next-generation radio communication systems. Error-correction codes (ECC) are used in communication systems to maintain the reliability of data transmissions. In this paper, several commonly used artificial intelligence (AI) methods are used to design algorithms for bitwise decoding of ECC. The AI-based decoding algorithms are analysed and compared using their error-correction performance, training time, and computational intensity. The efficacy is assessed using benchmark codes, the extended binary Golay code and the Hamming code, for varying signal-to-noise ratios (SNR) over the additive white Gaussian noise (AWGN) channel, and the decoding performance is evaluated using the block error rate (BLER). It is envisaged that the results from this comparative study would help to identify AI models most suitable for developing computationally efficient and practically implementable algorithms for decoding longer ECC.
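For reference, a minimal BLER evaluation of the kind described above, using hard-decision syndrome decoding of the Hamming(7,4) code over an AWGN channel with BPSK; the block count, SNR definition, and decoder choice are illustrative assumptions, not the AI decoders compared in the paper:

```python
import numpy as np

# Hamming(7,4): systematic generator and parity-check matrices.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def bler_awgn(snr_db, n_blocks=20000, rng=np.random.default_rng(1)):
    """Block error rate of hard-decision syndrome decoding over AWGN (BPSK)."""
    snr = 10 ** (snr_db / 10)
    sigma = np.sqrt(1 / (2 * snr))
    errors = 0
    for _ in range(n_blocks):
        msg = rng.integers(0, 2, 4)
        code = msg @ G % 2
        rx = 1 - 2.0 * code + sigma * rng.normal(size=7)   # BPSK plus noise
        hard = (rx < 0).astype(int)
        syndrome = H @ hard % 2
        if syndrome.any():
            # Flip the single bit whose parity-check column matches the syndrome.
            for j in range(7):
                if np.array_equal(H[:, j], syndrome):
                    hard[j] ^= 1
                    break
        errors += int(not np.array_equal(hard[:4], msg))
    return errors / n_blocks

print({s: bler_awgn(s) for s in [0, 2, 4, 6]})
```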
Presenter bio: Dr. Ekta Sharma is a Researcher with the Australian Government’s National Intelligence Community. Her work develops next-generation data security technologies to help solve communication problems in the context of space satellite challenges. She is also working as the UniSQ’s Vice Chancellors’ Postdoctoral Fellow for Women in the STEMM discipline. This work supports UniSQ’s commitment to improving career pathways for women as part of the Science in Australia Gender Equity Athena Swan Action Plan. Dr. Sharma has over a decade of strong technical experience across the broad spectrum of advanced artificial intelligence, statistics, and data science in Australia and Switzerland. She has also served as an academic consultant in India. Her other background in Operations Research and double Master’s (Mathematical Sciences) assist her to accomplish key processes in a feasible, sustainable, and optimum manner. This has awarded her competitive grants including the Australian Mathematical Sciences Institute award and others reserved for top Australian females working on big data.
Monday, October 16 9:00 – 10:30
Tutorial 1 – Towards Unified Understanding of Semantic Communications and Networking
Semantic communication (SC) is an emerging approach to designing the next generation communication systems that goes beyond the current paradigm of transmitting bits. In contrast to traditional communication systems that focus solely on delivering bits at Level A, SC encompasses Levels B and C, aiming to convey the semantics behind the bits and maximize their effectiveness for specific tasks, respectively. Although Shannon and Weaver identified Levels B and C over 70 years ago, these issues were largely overlooked due to the lack of appropriate technical tools. However, recent advances in machine learning (ML) have given substance to initial SC concepts, positioning SC as a key enabler for 6G and beyond.
Despite growing interest and numerous contemporary studies, several significant limitations persist in this nascent field. Key among them is the absence of clear definitions for semantics, leading to disjointed research on the principles and architectures of SC. Furthermore, the role of ML in SC remains ambiguous, complicating the distinction between SC and other existing ML-based communication frameworks. Lastly, most studies concentrate on the physical (PHY) layer in point-to-point scenarios, raising concerns about scalability and compatibility for multiple users, as well as applicability to the medium access control (MAC) and higher layers.
This tutorial aims to consolidate the understanding of SC by presenting a comprehensive definition of semantics and identifying its relationship with ML and communication system architectures. Through this fresh and unified perspective, we will illustrate how ML facilitates PHY-layer SC in point-to-point scenarios and explore the extension of these methodologies to large-scale SC systems. We will also demonstrate their application to MAC-layer SC through selected use cases. Finally, we will introduce non-ML and theoretical approaches for modelling ML-based SC frameworks, paving the way for future research directions. The intended audience includes PhD students, postdocs, and researchers with a general background in machine learning and wireless communications.
Monday, October 16 10:30 – 11:00
Morning Tea
Monday, October 16 11:00 – 12:30
Special Session 2 – Intelligent Non-Terrestrial Communications in 6G
- 11:00 Machine Learning-Based Cyclostationary Spectrum Sensing in Cognitive Dual Satellite Networks
-
Efficient and reliable utilization of the electromagnetic spectrum remains a significant challenge in wireless and satellite communication. Cognitive satellite networks have emerged as a promising solution, but they encounter obstacles like spectrum scarcity, interference, and signal degradation. Spectrum sensing, a vital aspect of these networks, enables the detection and efficient usage of available spectrum. Classic spectrum sensing methods have been developed, with cyclostationary feature detection (CFD) techniques proving robust against noise. However, adapting CFD techniques to cognitive satellite networks is still in its early stages. This paper introduces a machine learning-based cyclostationary spectrum sensing approach for cognitive dual satellite networks, harnessing machine learning to enhance traditional CFD methods in satellite environments. Simulation results demonstrate that the proposed approach surpasses the conventional cyclostationary spectrum sensing method across various signal-to-noise ratio conditions.
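A minimal sketch of the cyclostationary feature on which CFD-style detectors (and the ML classifiers built on top of them) operate: the cyclic autocorrelation of the received signal. The signal model, lag, and cyclic frequencies below are illustrative assumptions, not the paper's detector:

```python
import numpy as np

def cyclic_autocorrelation(x, alpha, tau, fs=1.0):
    """Estimate the cyclic autocorrelation R_x^alpha(tau):
    (1/N) * sum_n x[n] * conj(x[n+tau]) * exp(-j 2 pi alpha n / fs).
    Nonzero values at cyclic frequencies alpha != 0 indicate
    cyclostationarity, the feature a CFD detector exploits."""
    n = np.arange(len(x) - tau)
    prod = x[:len(x) - tau] * np.conj(x[tau:])
    return np.mean(prod * np.exp(-2j * np.pi * alpha * n / fs))

# Toy example: BPSK with 8-sample rectangular pulses shows a cyclic
# feature at alpha = 1/8 (the symbol rate) but not at other alphas.
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=256)
x = np.repeat(symbols, 8) + 0.5 * rng.normal(size=256 * 8)
features = [abs(cyclic_autocorrelation(x, a, tau=4))
            for a in (0.0, 1 / 8, 1 / 4)]   # a feature vector an ML classifier could use
```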
- 11:17 L-Band Spectral Opportunities for Cognitive GEO-LEO Dual Satellite Networks
-
With the increasing congestion in lower frequency bands of satellite spectrum, the integration of cognitive radio technology is expected to become essential for efficient spectrum sharing in both Geostationary and Low Earth Orbit satellite systems. Intelligent spectrum sharing strategies enable more efficient and effective satellite communications, leading to increased capacity and improved performance. In this paper, we provide an overview on incorporating cognitive radio technology into dual satellite communications applications. Moreover, this paper presents a measurement apparatus for identifying spectral opportunities and presents preliminary results for such opportunities in a portion of the L-band used by the Inmarsat Broadband Global Area Network downlink. Numerous opportunities in both the time and frequency domains were identified for the spectrum surveyed. The results revealed that almost half of the spectrum was available for over 99% of the capture duration, with only 8% being available less than 1% of the time.
- 11:35 Computing with the Internet of Flying-Things from Sky to Space
-
The rapid evolution of the Internet of Things (IoT) has extended the boundaries of connectivity and computing beyond terrestrial environments. This paper explores the concept of extending the IoT paradigm to the sky and space, enabling a new frontier of computing services. We review the existing services available in these domains, including connectivity/network services, computational services, and physical services. Then, we outline use-cases that demonstrate the potential of computing services in the sky and space, such as remote asset monitoring, disaster response, and autonomous aerial systems. We also present a comprehensive analysis of the state-of-the-art infrastructure supporting computing services in these environments, including satellite networks, airborne fog computing, and edge computing. This paper provides insights into the future of computing with the Internet of Flying-Things, presenting a roadmap for researchers and practitioners to explore novel applications and advance the field of IoT in the sky and space domains.
Presenter bio: Jinho Choi was born in Seoul, Korea. He received the B.E. degree (magna cum laude) in electronics engineering from Sogang University, Seoul, in 1989 and the M.S.E. and Ph.D. degrees in electrical engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 1991 and 1994, respectively. He is a Professor with Deakin University, Australia. Prior to joining Deakin University in 2018, he was a Professor/Chair with Swansea University, Swansea, U.K., and GIST, Korea. He has authored two books published by Cambridge University Press in 2006 and 2010. His research interests include wireless communications and array/statistical signal processing.
- 11:53 Evaluation of Intelligent Resource Allocation Methods for Interference-Limited Satellite Networks
-
Satellite systems serve as a crucial tool for non-terrestrial networks, offering a solution to areas where terrestrial network coverage is limited. With the advanced capabilities of modern satellite systems that leverage frequency reuse across multiple beams, the provision of efficient 6G services becomes possible worldwide. However, achieving this efficiency requires an optimal resource allocation (RA) strategy that considers the interference-limited environment and the constraints of limited power and bandwidth. This research paper provides a comprehensive review of existing studies on RA problems in interference-limited multi-beam satellite systems, specifically exploring low-complexity methods. The paper thoroughly investigates the operational principles underlying various RA methods that use simple machine learning algorithms. Additionally, it conducts an in-depth analysis of the advantages and disadvantages associated with these methods. To validate the effectiveness of the proposed low-complexity techniques, the paper presents simulation results obtained from a multi-beam satellite system operating in a Low Earth Orbit (LEO). These results serve as evidence for the efficacy of employing simple RA methods based on linear machine learning.
- 12:11 On Ka-Band Utilization Towards Non-Terrestrial Networks
-
In the context of the fifth generation (5G) and beyond wireless systems, non-terrestrial networks (NTNs) are expected to play a crucial role in supporting ubiquitous connectivity and ensuring service availability, reliability, and responsiveness. In the deployed NTNs, the Ka-band is extensively utilized to support the growing demand for high-speed and high-capacity communications. This work aims to illustrate the utilization of the Ka-band for consideration in the design of spectral-efficient NTNs. First, we provide an extensive overview of NTNs, with a specific emphasis on joint spectrum management within these networks. Then, we present the current utilization of the Ka-band in various systems, such as satellite and terrestrial systems. Finally, we discuss technical challenges in deploying NTNs with the Ka-band and explore potential solutions to address these challenges.
Presenter bio: Bohai Li received the B.S. degree in electrical engineering from the University of Sydney and the B.S. degree in electrical engineering from the Harbin Institute of Technology, in 2018. He is currently pursuing the Ph.D. degree in telecommunication with the University of Sydney. He holds a research training program scholarship (international). He was a visiting student at the Chinese University of Hong Kong from Sep 2019 to Feb 2020. His research involves relaying communication, NOMA, Internet of Things (IoT), and Age of Information (AoI).
Monday, October 16 11:00 – 12:30
Regular session 2: Image and Video Processing I
- 11:00 Video Anomaly Detection Using Self-Attention-Enabled Convolutional Spatiotemporal Autoencoder
-
The process of automatically detecting abnormal video patterns in an intelligent surveillance framework is known as video anomaly detection. A self-attention-enabled convolutional spatiotemporal autoencoder is proposed to detect video anomalies efficiently. The proposed Self-Attention-enabled Convolutional Long-Short-Term-Memory Auto-Encoder (SA-ConvLSTM2D-AE)-based video anomaly detector comprises three sequential stages: a spatial encoder to learn spatial (appearance) features of individual frames, a temporal encoder-decoder to learn temporal (motion) features of the encoded spatial features, and a spatial decoder to decode the encoded spatial features for reconstructing the individual frames. Here, the self-attention mechanism is embedded into the convolutional Long Short Term Memory block present in the temporal encoder-decoder section to generate the Spatial-Attention-enabled ConvLSTM block for learning better spatiotemporal features. An efficient threshold selection criterion is implemented, based on finding the optimal geometric mean of sensitivity and specificity from the Receiver Operating Characteristic curve. The model is trained on the video frame sequences corresponding to normal incidents only. However, test frame sequences containing video anomalies are poorly reconstructed by the model, as anomalous samples are never exposed during training. Hence, when the anomaly score of an individual frame exceeds the selected optimum threshold level, an anomaly is said to be detected.
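A minimal sketch of the ROC-based threshold selection step described above, picking the score threshold that maximizes the geometric mean of sensitivity and specificity; the toy scores and labels are illustrative assumptions:

```python
import numpy as np
from sklearn.metrics import roc_curve

def gmean_threshold(labels, anomaly_scores):
    """Pick the score threshold maximizing the geometric mean of
    sensitivity (TPR) and specificity (1 - FPR) on the ROC curve."""
    fpr, tpr, thresholds = roc_curve(labels, anomaly_scores)
    gmeans = np.sqrt(tpr * (1 - fpr))
    return thresholds[np.argmax(gmeans)]

# Toy usage: reconstruction-error style scores, label 1 = anomalous frame.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.2, 0.05, 500),    # normal frames
                         rng.normal(0.6, 0.10, 50)])    # anomalous frames
labels = np.concatenate([np.zeros(500), np.ones(50)])
thr = gmean_threshold(labels, scores)
detections = scores > thr                               # frames flagged as anomalous
```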
Presenter bio: Dr. Umesh C. Pati is a Full Professor at the Department of Electronics and Communication Engineering, National Institute of Technology (NIT), Rourkela. He has obtained his B.Tech. Degree in Electrical Engineering from National Institute of Technology (NIT), Rourkela, Odisha. He received both M.Tech. and Ph.D. degrees in Electrical Engineering with specialization in Instrumentation and Image Processing respectively from the Indian Institute of Technology (IIT), Kharagpur. He has also served as Head, Career Development Center, NIT Rourkela for three years. His current areas of interest are Image/Video Processing, Computer Vision, Artificial Intelligence, Medical Imaging, Internet of Things (IoT), Industrial Automation, and Instrumentation Systems. He has authored/edited two books and published more than 125 articles in the peer-reviewed international journals as well as conference proceedings. He has served as a reviewer in a wide range of reputed international journals and conferences. He has delivered a number of Keynote/Invited Talks at various International/National platforms. He has also guest-edited Special Issues of Cognitive Neurodynamics and the International Journal of Signal and Imaging System Engineering. Dr. Pati has filed 2 Indian patents. Besides other sponsored projects, he is currently associated with a high-value IMPRINT project, “Intelligent Surveillance Data Retriever (ISDR) for Smart City Applications” which is an initiative of the Ministry of Education (formerly the Ministry of Human Resource Development) and Ministry of Housing and Urban Affairs, Govt. of India. He has visited countries like the USA, Italy, Austria, Singapore, Mauritius, Nepal, etc. in connection with research collaboration and paper presentation. He was also an academic visitor to the Department of Electrical and Computer Engineering, San Diego State University, USA, and the Institute for Automation, University of Leoben, Austria. He is a Senior member of IEEE, Fellow of The Institution of Engineers (India), Fellow of The Institution of Electronics and Telecommunication Engineers (IETE), and life member of various professional bodies like MIR Labs (USA), The Indian Society for Technical Education, Instrument Society of India, Computer Society of India, and Odisha Bigyan Academy. His biography has been included in the 32nd edition of MARQUIS Who’s Who in the World 2015. He is also the recipient of Torchbearer of Education Award 2020 by Coding Ninjas.
- 11:18 MCA: A Robust Image Source Identification Algorithm Based on Multi Column Constraint Convolution and Attention Mechanism
-
Although existing image source identification algorithms have matured considerably, their performance remains poor for JPEG recompressed images. This paper proposes an image source identification algorithm, MCA, based on multi-column constrained convolutional layers and an attention mechanism. It fuses the features of multi-column constrained convolutional layers at different scales, which adaptively learn sufficient features for image source identification. Combined with SENet, different feature channels are weighted to enhance the features valuable for the current classification task and suppress the less useful ones. The experimental results show that the MCA algorithm achieves an accuracy of 99.61% for 9 camera models and 98.6% for 17 camera models. The influence of JPEG recompression with different quality factors on several camera source identification algorithms is analyzed and compared. We use four conditions to simulate the real scene: original images and JPEG recompression with quality factors 100, 95, and 90. The average accuracy of the proposed algorithm reaches 86.53% for 17 camera models, which is about 2% higher than other current methods. The MCA algorithm reaches higher accuracy than other convolutional neural network-based methods and shows better robustness to JPEG recompression than other current methods.
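A minimal sketch of the SENet-style channel weighting mentioned above, i.e., a squeeze-and-excitation block that reweights feature channels; the channel count and reduction ratio are assumed hyperparameters, not the paper's configuration:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention used to reweight feature channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global spatial context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                              # per-channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # excitation: reweight channels

# Example: reweight a 64-channel feature map from a constrained-convolution layer.
feat = torch.randn(8, 64, 32, 32)
weighted = SEBlock(64)(feat)
```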
- 11:36 Development of A Virtual Aerial Display Considering Interaction with 3DCG Objects
-
In recent years, aerial displays that enable users to experience three-dimensional images have been attracting attention, because they are expected to be used in various fields that have not been realized so far, such as long-term workplace meetings using stereoscopic images and advertising applications. Therefore, the purpose of this research is to create a system that allows a single user to view 3D objects projected from a PC by actually moving their head, and to develop a system that allows interaction while viewing stereoscopic images via an aerial display.
- 11:54 A Novel GAN-Based Intra Prediction Mode for HEVC
-
The rapid development of the AI-IoT environment has great potential for the improvement of video codec technology. The traditional intra prediction in high efficiency video coding (HEVC) generates linear predictions based on some predefined directions for encoding. However, with the traditional intra prediction method, complex textures need to be encoded using small blocks even when a high-resolution video is encoded. To solve this problem, we propose a new intra prediction mode using a generative adversarial network (GAN). The proposed framework consists of a traditional video encoder and an embedded generator module including a CNN function. The simulation results show that the proposed algorithm can achieve an improvement of 5.0% BD-rate compared to the original HEVC algorithm.
Presenter bio: Takafumi Katayama received his B.E. and M.E. degrees in Electrical Engineering from Tokushima University in 2011. He worked at Renesas Electronics Corporation from 2012 to 2014. He received his Ph.D. degree in Electrical Engineering from Tokushima University in 2019 and joined Tokushima University as an Assistant Professor in the same year. He is a member of IEEE. His current research interests include machine learning, video coding algorithms, and hardware design.
- 12:12 Change Detection in Synthetic Aperture Radar Images Using Attention-Based Siamese Network
-
Synthetic Aperture Radar (SAR) image Change Detection (CD) is an essential task in the field of multi-temporal remote sensing image analysis. However, the CD process is becoming difficult due to the speckle noise. This work employs an attention-based Siamese network to detect the changed regions from multi-temporal SAR images accurately. The network has two branches with shared weights and is used to extract the features from multi-temporal images independently. Also, a Convolution Block Attention Module has been utilized to enhance the salient feature maps. The linear layer at the output performs classification. The effectiveness of the proposed model has been demonstrated by the performance measures obtained for three SAR image datasets.
Presenter bio: Dr. Umesh C. Pati is a Full Professor at the Department of Electronics and Communication Engineering, National Institute of Technology (NIT), Rourkela. He has obtained his B.Tech. Degree in Electrical Engineering from National Institute of Technology (NIT), Rourkela, Odisha. He received both M.Tech. and Ph.D. degrees in Electrical Engineering with specialization in Instrumentation and Image Processing respectively from the Indian Institute of Technology (IIT), Kharagpur. He has also served as Head, Career Development Center, NIT Rourkela for three years. His current areas of interest are Image/Video Processing, Computer Vision, Artificial Intelligence, Medical Imaging, Internet of Things (IoT), Industrial Automation, and Instrumentation Systems. He has authored/edited two books and published more than 125 articles in the peer-reviewed international journals as well as conference proceedings. He has served as a reviewer in a wide range of reputed international journals and conferences. He has delivered a number of Keynote/Invited Talks at various International/National platforms. He has also guest-edited Special Issues of Cognitive Neurodynamics and the International Journal of Signal and Imaging System Engineering. Dr. Pati has filed 2 Indian patents. Besides other sponsored projects, he is currently associated with a high-value IMPRINT project, “Intelligent Surveillance Data Retriever (ISDR) for Smart City Applications” which is an initiative of the Ministry of Education (formerly the Ministry of Human Resource Development) and Ministry of Housing and Urban Affairs, Govt. of India. He has visited countries like the USA, Italy, Austria, Singapore, Mauritius, Nepal, etc. in connection with research collaboration and paper presentation. He was also an academic visitor to the Department of Electrical and Computer Engineering, San Diego State University, USA, and the Institute for Automation, University of Leoben, Austria. He is a Senior member of IEEE, Fellow of The Institution of Engineers (India), Fellow of The Institution of Electronics and Telecommunication Engineers (IETE), and life member of various professional bodies like MIR Labs (USA), The Indian Society for Technical Education, Instrument Society of India, Computer Society of India, and Odisha Bigyan Academy. His biography has been included in the 32nd edition of MARQUIS Who’s Who in the World 2015. He is also the recipient of Torchbearer of Education Award 2020 by Coding Ninjas.
Monday, October 16 11:00 – 12:30
Tutorial 1b – Towards 6G: From THz communications to reconfigurable intelligent surfaces (RIS)
Future wireless communication systems will exploit large antenna arrays and reconfigurable intelligent surfaces (RIS), to achieve a high degree of freedom in the space domain and enhance coverage. RIS have the potential to enable a dynamically changing environment, which allows the transmission channel to be “programmed”. Furthermore, to save the spectrum and hardware resources, Joint Communication and Sensing (JCAS) offers new opportunities by combining communications and radar sensing. AI will be an integral part of the communication system and we will discuss some applications such as an AI-driven neural receiver.
New frequency ranges such as the sub-terahertz and terahertz (THz) bands, extending from 0.1 THz up to 3 THz, fall in the spectral region between microwave and optical waves and promise a plethora of applications yet to be explored, ranging from communication to imaging, spectroscopy, and sensing. The prospect of offering large contiguous frequency bands to meet the demand for the highest data transfer rates, up to the terabit/sec range, makes this a key research area of 6G mobile communication. In light of the approaching ITU World Radio Conference 2023, academic and industrial research is striving to demonstrate the feasibility of this frequency region for communication. To fully utilize this potential, it is crucial to understand the propagation characteristics, and channel measurements are necessary for developing future communication standards. We will discuss the characteristics of channel modeling and propagation in this frequency range and present recent measurement campaign results in the D-band and H-band, including in industrial environments.
Besides using electronic MMICs, alternative methods for generating THz radiation based on photonic technologies will play a key role in the future. Especially with the prospect of miniaturizing today’s lab setups into photonic integrated circuits (PIC), these approaches could become mainstream. R&S is currently coordinating a research project, 6G-ADLANTIK, funded by the German Federal Ministry of Education and Research, with the objective of developing a novel tunable THz system based on ultra-stable photonic sources and optical frequency comb technology for communication and instrumentation.
This tutorial aims to provide a comprehensive overview of the developments in 6G technologies and highlight various research projects dedicated to the different topics.
Monday, October 16 12:30 – 1:30
Lunch
Monday, October 16 1:30 – 1:40
Opening
Monday, October 16 1:40 – 2:20
Keynote 1 – Amanda Hu (Nokia Bell Labs): Networking in 2030 and beyond: extending the scope of human possibilities
With every generation of communications technology, the focus of the network changes. The 2G and 3G eras centered on human-to-human communication through voice and text. 4G heralded a fundamental shift to the massive consumption of data, while the 5G era has turned its focus on connecting the Internet of Things (IoT) and industrial automation systems. In the 6G era of the coming decade, we envision that the digital, physical and human world will become seamlessly fused, augmenting human possibilities. This presentation will uncover some key trends that will influence the evolution and adoption of technologies towards 2030 and redefine the capabilities and evolution of the network as the critical enabler of ecosystem transformation. In the accompanying discussion, Amanda will explore 6G use cases, illustrating their potential benefits. She will also explore the six key technologies that Nokia has identified as vital elements in any future 6G standard. Researching and developing these key technologies has been a key focus of Nokia Bell Labs for the last three years.
Monday, October 16 2:20 – 3:00
Keynote 2 – Dr. Bo Hagerman (Ericsson): On the Journey 5G to 6G – New Behaviours, New Technology, in a Sustainable World
5G is in commercial use on all continents, with more and more users enjoying new and enhanced services. Even so, this is only the start of a long journey in the evolution of the 5G standard and the business carried on top of the networks. Enterprises and the public sector are gradually increasing their use of 5G networks to drive their digitalization efforts, calling for more advanced network capabilities such as 5G slicing and network capacity. In parallel, following the pull from society's needs, efforts are needed to assess and qualify future needs and technological possibilities beyond 5G, including the path to 6G.
In our talk, we will address how the 5G standard will continue to evolve towards 6G. We will also discuss areas where there are needs and possibilities to advance future services. These advances will need to be based on technology judged both on its technical merits and on its suitability for mass-scale production within a decade. 3GPP-based standards have had fantastic success in truly advancing technology that can be made commercially available to a wide audience. We will also discuss technology components, and the related research challenges they must overcome to qualify as contributions to 6G, that may be brought forward to support the requirements of advanced future services.
Monday, October 16 3:00 – 3:30
Afternoon Tea
Monday, October 16 3:30 – 5:00
Regular session 3: Language and Audio-Related AI
- 3:30 The Recent Large Language Models in NLP
-
Over the past few years, Natural Language Processing (NLP) has evolved significantly thanks to the development of large Language Models (LMs). In this paper, we present a survey of four recent language models that we believe have had a significant importance in the NLP field lately: BERT (Google), ELMo (Allen Institute), GPT-3 (OpenAI), and LLaMA (Meta AI). For each model, we analyse its architecture, the dataset on which it was trained, its performance evaluation, as well as the strengths and challenges faced by each. Our paper compares the recent Language Models and their contributions to the field of NLP, and discusses future extensions.
Presenter bio: Dr Hazem El Alfy is an Assistant Professor of Data Science at S P Jain School of Global Management. He is a seasoned academic with over ten years of research and teaching at universities in the US, Egypt, Japan, Kuwait and Australia. His primary area of expertise lies within computer vision, video analysis and machine learning. He has developed a proven track record of designing and implementing innovative techniques to solve complex visual surveillance problems. His diverse skill set includes both analytical and mathematical expertise, a strong academic background encompassing teaching, student mentoring, research development and publishing, as well as practical industry experience. Furthermore, Dr Hazem possesses exceptional multilingual written and oral communication skills, allowing him to effectively communicate and collaborate with colleagues, students, and professionals from diverse backgrounds and cultures. Dr Hazem earned his PhD and MSc in Computer Science from the University of Maryland at College Park. He also has an MSc in Engineering Mathematics and a BSc in Computer Engineering from Alexandria University, Egypt.
- 3:48 Deep Multimodal-Based Number Finger Spelling Recognizer for Thai Sign Language
-
The use of video-based sign language recognition is an important technique for increasing communication and accessibility for the deaf and hard-of-hearing communities. However, developing and maintaining high-quality sign language datasets from video input is difficult, especially in Thai, because of the lack of a standard Thai finger spelling video dataset. To overcome this challenge, this article focuses on accumulating a larger dataset covering the 24 primary numbers in Thai Finger Spelling, collected from 43 signers with various backgrounds, genders, and appearances. We evaluate six deep learning-based architectures: RGB-sequencing-based CNN-LSTM and VGG-LSTM for the video-only modality; LSTM, BiLSTM, and GRU models for sequences of human body joint coordinates; and ST-GCN for the joint-structure modality, as well as their combinations. Our results reveal that combining the RGB-sequencing modality from VGG-LSTM with the joint-structure modality from ST-GCN achieves the best performance on both in-sample and out-of-sample test sets.
- 4:06 Gesture Recognition Machine Vision Video Calling Application Using YOLOv8
-
Gesture recognition refers to the ability of a computer system to recognize and interpret sign language gestures and translate them into spoken language or text. This technology has many potential applications, including in education, communication, and accessibility for individuals with hearing impairments. This paper proposes a vision-based video calling application which can be used for communication between hearing, deaf, and speech-impaired people. For better accuracy, object detection is performed using YOLOv8, a powerful and efficient object detection system that has been shown to perform well on a variety of computer vision tasks, including object detection, instance segmentation, and image classification. The trained model classifies 10 different hand gesture classes, processes each frame during a video call with 98% accuracy, and renders its result at the top of the application with 10 ms latency.
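A minimal sketch of per-frame YOLOv8 inference on a video stream using the Ultralytics API and OpenCV; the weights file name, the use of a webcam in place of the video call feed, and the gesture class set are assumptions, not the authors' artifacts:

```python
import cv2
from ultralytics import YOLO

# Hypothetical custom weights trained on the 10 gesture classes.
model = YOLO("gesture_yolov8n.pt")

cap = cv2.VideoCapture(0)                      # webcam stands in for the call feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)      # per-frame inference
    for box in results[0].boxes:
        cls_name = model.names[int(box.cls)]
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, cls_name, (x1, y1 - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("gesture", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```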
- 4:24 Automatic Bengali Image Captioning Using EfficientNet-Transformer Network
-
The task of image captioning is a complex process that involves generating textual descriptions for images. Much of the research in this domain, especially using transformer models, has focused on the English language, while relatively little research has been dedicated to the Bengali language. This study addresses this gap and proposes a novel approach to automatic image captioning that involves a multi-modal, transformer-based, end-to-end model with an encoder-decoder architecture. Our approach utilizes a pre-trained EfficientNet Transformer Network. To evaluate the effectiveness of our approach, we compare our model with a Vision Transformer that utilizes a non-convolutional encoder pre-trained on ImageNet. The two models were tested on the BanglaLekhaImageCaptions dataset and evaluated using BLEU metrics.
- 4:42 Classification of Plucking Techniques from the Audio and Video of a Classical Guitar Performance
-
In classical guitar performances, various plucking gestures can be employed to evoke different tonal qualities in the produced sound. To investigate these gestural parameters and tonal variations, we classify two main plucking techniques, namely the apoyando and the tirando, along with their corresponding versions, from the audio and the video of a classical guitar performance. From the video signal, we extract right-hand gestural patterns using a hand keypoint detection model. We achieved an average accuracy of 92.5% after building a classification model from our data set containing the images at the plucking onsets. Moreover, the model can accurately classify even on the succeeding frames after the onset, and we can achieve comparable accuracy even under various camera poses provided that the data set falls under the same camera pose. From the audio signal, we achieved an average accuracy of 88.61% using the mel-frequency cepstral coefficients as our feature set. We believe that these models can potentially be used as a pedagogical tool in learning how to properly and correctly perform these various plucking techniques.
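A minimal sketch of the audio branch described above: MFCC features extracted per excerpt and fed to a simple classifier. The file names, feature summarization (mean over frames), and SVM classifier are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def mfcc_features(path, sr=22050, n_mfcc=13):
    """Mean MFCC vector for a short plucking excerpt (default librosa framing)."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                    # summarize frames per excerpt

# Hypothetical file lists for the two plucking techniques.
apoyando_files = ["apoyando_001.wav", "apoyando_002.wav"]
tirando_files = ["tirando_001.wav", "tirando_002.wav"]

X = np.array([mfcc_features(f) for f in apoyando_files + tirando_files])
y = np.array([0] * len(apoyando_files) + [1] * len(tirando_files))
clf = SVC(kernel="rbf")
scores = cross_val_score(clf, X, y, cv=2)       # per-fold classification accuracy
```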
Monday, October 16 3:30 – 5:00
Regular session 4: Circuits and Systems I
- 3:30 Wi-Fi HaLow Internet of Things System on Chip (SoC) in Sub-1 GHz
-
A state-of-the-art IEEE 802.11ah (Sub-1GHz, Wi-Fi HaLow) compliant system on chip (SoC) realized in the CMOS process is presented. The SoC is the industry’s smallest, fastest and lowest power Wi-Fi HaLow SoC, providing up to ~10x the range of traditional Wi-Fi solutions. The device is the first 802.11ah system that supports 32.5 Mbps (using 8 MHz bandwidth). An architecture overview, key performance metrics, and various range comparisons are presented.
- 3:48 On Improving the Critical Path Delay of PathFinder at Smaller Channel Widths
-
PathFinder, a popular FPGA routing tool, employs negotiated congestion routing that reduces the congestion by forcing the nets to detour through uncongested interconnects. However, such detouring often gives less importance to the delay of the interconnects and more to their congestion. This approach may increase the critical path delay (CPD) under tight capacity constraints. In this work, we propose a historical cost function for the negotiated congestion routing that ensures solutions have small CPD values. The proposed historical cost function is integrated into the latest version of PathFinder, and its performance is evaluated using Titan23 FPGA benchmarks. The results indicate that the proposed cost function can enable PathFinder to converge to solutions of smaller CPD, even for small channel widths. Statistical tests are employed to verify the significance of the benefits of the proposed approach.
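For context, a minimal sketch of the classic PathFinder negotiated-congestion cost terms that the abstract refers to; the history update shown is the standard formulation, not the delay-aware historical cost the paper proposes, and the constants are illustrative:

```python
# Minimal sketch of PathFinder-style negotiated congestion cost terms.

def node_cost(base_delay, history, present_usage, capacity, pres_fac):
    """Cost of routing through a node: (base + history) scaled by the
    present-congestion penalty, as in the classic PathFinder recipe."""
    overuse = max(0, present_usage + 1 - capacity)
    present_penalty = 1.0 + pres_fac * overuse
    return (base_delay + history) * present_penalty

def update_history(history, usage, capacity, hist_fac=1.0):
    """After each routing iteration, nodes that remain over-used accumulate
    history cost, discouraging their use in later iterations."""
    overuse = max(0, usage - capacity)
    return history + hist_fac * overuse

# Example: an over-used node becomes progressively more expensive.
h = 0.0
for it in range(3):
    print(node_cost(base_delay=1.0, history=h, present_usage=2, capacity=1,
                    pres_fac=0.5 * (2 ** it)))
    h = update_history(h, usage=2, capacity=1)
```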
Presenter bio: Sadiq M. Sait obtained a Bachelor’s degree in Electronics from Bangalore University in 1981, and Master’s and PhD degrees in Electrical Engineering from King Fahd University of Petroleum & Minerals (KFUPM), Dhahran, Saudi Arabia in 1983 & 1987 respectively. Since 1987 he has been working at the Department of Computer Engineering where he is now a Professor. In 1981 Sait received the best Electronic Engineer award from the Indian Institute of Electrical Engineers, Bangalore (where he was born). In 1990, 1994 & 1999 he was awarded the ‘Distinguished Researcher Award’ by KFUPM. In 1988, 1989, 1990, 1995 & 2000 he was nominated by the Computer Engineering Department for the ‘Best Teacher Award’ which he received in 1995, and 2000. Sait has authored over 200 research papers, contributed chapters to technical books, and lectured in over 25 countries.
- 4:06 Obstacle Avoidance Rectilinear Steiner Minimal Tree Length Estimation Using Deep Learning
-
Obstacle avoidance rectilinear Steiner minimal tree (OARSMT) connects multiple pins belonging to a net using minimal wire length while avoiding the obstacles present on the grid, and is an essential part of the placement/routing phases of VLSI physical design. High-level tasks such as floor-planning and placement use estimators to determine the quality of solutions, and the use of OARSMT can provide better estimations. In this work we propose to use deep learning (DL) to quickly predict the length of the OARSMT of a net with pins located anywhere on the routing grid, where the routing grid’s dimensions and the obstacles remain fixed. The proposed method consists of a data encoder and a DL model with three convolutional layers and an output layer. The encoder generates a low-dimensional representation of the problem data, and the DL model extracts features and predicts the wire length. We used industrial test problems to train and test the proposed system. The experimental results show that the proposed method has a runtime of only 15 ms using a graphics processing unit and produces predictions with average residuals between 56 and 80 across the different test problems.
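A minimal sketch of a regressor with the stated shape (three convolutional layers plus an output layer) operating on an encoded routing grid; the channel layout, grid size, and layer widths are illustrative assumptions, not the paper's encoder or model:

```python
import torch
import torch.nn as nn

class WirelengthNet(nn.Module):
    """Three convolutional layers plus an output layer that regresses the
    OARSMT length from an encoded routing grid."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),   # ch0: pin map, ch1: obstacle map
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)                     # predicted wire length

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(-1)

# Example: a batch of 8 encoded nets on a fixed 64x64 grid with fixed obstacles.
x = torch.zeros(8, 2, 64, 64)
x[:, 0].bernoulli_(0.01)        # sparse pin locations
x[:, 1, 20:30, 20:30] = 1.0     # a fixed rectangular obstacle
pred_len = WirelengthNet()(x)   # trained with an MSE loss against true OARSMT lengths
```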
Presenter bio: Sadiq M. Sait obtained a Bachelor’s degree in Electronics from Bangalore University in 1981, and Master’s and PhD degrees in Electrical Engineering from King Fahd University of Petroleum & Minerals (KFUPM), Dhahran, Saudi Arabia in 1983 & 1987 respectively. Since 1987 he has been working at the Department of Computer Engineering where he is now a Professor. In 1981 Sait received the best Electronic Engineer award from the Indian Institute of Electrical Engineers, Bangalore (where he was born). In 1990, 1994 & 1999 he was awarded the ‘Distinguished Researcher Award’ by KFUPM. In 1988, 1989, 1990, 1995 & 2000 he was nominated by the Computer Engineering Department for the ‘Best Teacher Award’ which he received in 1995, and 2000. Sait has authored over 200 research papers, contributed chapters to technical books, and lectured in over 25 countries.
- 4:24 Power Transmission Line Component Detection Using YOLO V3 on Raspberry Pi
-
To transmit high-voltage electric power efficiently and continuously, Power Transmission Line (PTL) systems require routine inspections for early damage detection and maintenance. The detection and localization of damage across transmission equipment are crucial, as they enable transmission companies to reduce maintenance costs and prevent sudden power disruptions. Traditionally, these inspections have been performed by line crawling or from a helicopter. However, these conventional solutions are slow, expensive, and risky. The recent development of drones, high-resolution cameras, edge computing, and deep learning technology enables the use of drones for PTL inspection. In this paper, we report an initial study on PTL inspection using an autonomous drone and machine learning, focusing on PTL component detection. An original dataset of PTL images in Tokyo was created for this research and annotated with three labels of PTL components: hanger, connector, and insulator. YOLOv3 is evaluated with five different training dataset sizes, and the YOLOv3 model is also evaluated on a Raspberry Pi to assess system performance. The results show that the proposed model can achieve 93.97% detection precision with a 5.32-second detection time on the Raspberry Pi.
Presenter bio: He received his B.Eng. (2007) and M.Eng. (2009) from Bandung Institute of Technology, Indonesia. He received his PhD from Kyushu Institute of Technology, Japan in 2013. Currently, he is a full-time lecturer at Tokyo City University, Japan. His research interests are the intelligent Internet of Things and machine learning. He is a member of IEEE.
- 4:42 Combined Approximate Transforms and Approximate Computing for Low-Complexity Multibeam Arrays
-
The use of approximate transforms in conjunction with approximate adder hardware is explored towards implementing low-complexity multi-beam antenna arrays. The use of approximate adders is motivated by efficient look-up table (LUT) usage on field programmable gate arrays (FPGA), resulting in better area and time complexities in hardware. 8-beam and 16-beam hardware architectures for multi-beamforming are designed using previously reported 8-point and 16-point spatial approximate-DFT (ADFT) algorithms, respectively, albeit with approximate adders. The designs have been implemented on a Xilinx Kintex UltraScale KCU105 FPGA device, verifying a 31.5% reduction in FPGA LUT count and a 58% improvement in critical path delay, with a maximum beam error of 0.6 degrees and a maximum peak side-lobe level deviation of 0.21 dB for the 8-beam case, when compared to the ADFT with accurate adders at the same fixed-point word length. Similarly, for the 16-beam case, the use of approximate adders provides a 25.39% reduction in FPGA LUT count, with a maximum beam error of 0.6 degrees and a peak side-lobe level of 4.5 dB. Operation of the proposed 8-beam and 16-beam hardware designs is also verified with signal-to-interference ratio (SIR) simulations, demonstrating an improvement of better than 27.84 dB in SIR at the beamformed output.
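For intuition, a minimal sketch of one common class of approximate adder that trades accuracy in the low-order bits for a shorter carry chain; the OR-based lower part and the split point k are illustrative assumptions, not the specific adders used in the paper:

```python
# Lower-OR approximate adder: exact on the upper bits, bitwise OR (no carry)
# on the k least-significant bits. The dropped carries keep the error small
# relative to the operands while shortening the carry chain in hardware.

def approx_add(a, b, width=16, k=4):
    low_mask = (1 << k) - 1
    low = (a | b) & low_mask                     # approximate lower part, no carry
    high = ((a >> k) + (b >> k)) << k            # exact upper part
    return (high | low) & ((1 << width) - 1)

# Example: exact sum vs. approximate sum.
a, b = 0x1234, 0x0ACE
print(hex(a + b), hex(approx_add(a, b)))
```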
Monday, October 16 3:30 – 5:00
Regular session 5: Smart Antennas
- 3:30 Sawtooth Corrugated Vivaldi Antenna for Ultra-Wideband Application with Gain Improvement
-
In this paper, a modified Vivaldi antenna was designed to achieve ultra-wideband operation ranging from as low as 1 GHz up to 12 GHz. This was achieved by introducing a proper matching of the radial stub to the 50 Ω impedance line. The simulation results show a -10 dB return loss bandwidth from 1 GHz to 12 GHz, which covers the L, S, C, and X bands of the frequency spectrum. On the other hand, the gain improvement of the antenna was achieved by introducing and etching a sawtooth pattern on the antenna ground plane. Several iterations were made to arrive at the optimized design, which finally obtained a gain of ≥ 5 dBi over the frequency range of 1 GHz to 10 GHz. It was observed that the peak gain of the antenna is 13.2 dB at 5 GHz, with the gain decreasing beyond 10 GHz.
- 3:48 Design of Bow-Tie Antenna Loaded with Parasitic Element for Gain, Directivity and Front-To-Back Ratio Enhancement for Very High Frequency (VHF) Wireless Receiving Applications
-
A 270 MHz modified bow-tie antenna with improved gain, directivity, and F/B ratio is proposed for very high frequency (VHF) wireless receiving applications. This was achieved by adding a parasitic element that acts as a director for the main radiator. An improvement of 9 dB to the F/B ratio was achieved with the proposed design compared to bow-tie antenna without parasitic element. The fabricated antenna achieved a -10 dB bandwidth of 236-271 MHz.
- 4:06 Comparative Study of Multiband Horn and Yagi-Uda Antenna for Spectrum Sensing Applications
-
In this paper, Yagi-Uda and horn antennas are designed to operate at frequencies ranging from 150 MHz to 400 MHz and simulated using Computer Simulation Technology – Microwave Studio (CST-MWS). To achieve multi-band frequency operation, the Yagi-Uda antenna was designed and fabricated so that its physical dimensions could be adjusted for different operating frequencies. For the horn antenna, the design was modified by adding a ridge at the aperture flare, commonly known as the double-ridged horn antenna. The antennas were simulated with a feeding-point resistance of 50 Ω. A gain ranging from 8 dB to 15 dB was achieved over the operating frequency range of 150 MHz to 400 MHz, the VSWR at each operating frequency ranges from 1 to 1.5, and, more importantly, the magnitude of the back lobe of the radiation pattern is low.
- 4:24 Design of a Compact Fractal Dipole Antenna for GPS-GSM-Based Tracking Applications
-
A compact fractal dipole antenna based on fractal theory is presented for GPS-GSM-based tracking applications. The proposed antenna has two arms: one fractal iteration is used on one arm and two iterations on the other to match the dual bands at two different frequencies. The antenna is designed to work at 900 MHz (GSM) and 2.4 GHz (ISM), and its size of 79 mm × 21 mm × 1.6 mm makes it compact. The CST Studio Suite software package was used for the simulations, and the simulated gain, return loss, and radiation characteristics make the designed antenna suitable for tracking applications.
- 4:42 Investigating the Impact of Soil Conditions on a Modified Bow-Tie Antenna’s Radiation Characteristics Operating at 270MHz
-
This study explores the impact of different soil conditions on the radiation pattern of a modified bow-tie antenna that operates at 270 MHz using the Sierpinski pattern. The primary objective is to investigate how soil conditions influence the main lobe and back lobe magnitude of the antenna’s radiation pattern. Prior research has demonstrated that a 270 MHz antenna can penetrate the soil up to 20 meters. In this research, a simulated bow-tie antenna was placed parallel to a simulated soil surface. The distance between the antenna and soil is varied at 10 mm, 20 mm, and 30 mm. The simulated soil setup involves different soil types: dry sandy, dry loamy, wet sandy, and wet loamy soil. The results reveal that the antenna performs better in dry soil conditions, while wet soil configurations negatively affect the radiation pattern. Moist soil causes an increase in the antenna’s back lobe of up to 6 dBi, indicating that most signals reflect back to the surface, making it unsuitable for antenna operation. Based on the findings, this study recommends further research on other soil types and qualities. It also provides valuable insights for antenna engineers and researchers working on improving antenna performance in different soil conditions, specifically in ground-penetrating radar (GPR) technologies.
Monday, October 16 5:00 – 6:00
RF Lab Tour (UTS CB11.10.303)
Add your name to the tour list at the registration desk on Day 1.
Meet at Registration Desk after last session for tour (15 minutes).
We plan to run 3 tours before the Welcome Reception.
Monday, October 16 6:00 – 8:00
Welcome Reception
Tuesday, October 17
Tuesday, October 17 9:00 – 10:30
Special Session 3 – Integrated Sensing and Communications: Advancements and Challenges
- 9:00 Integrated Sensing and Communication for UAV-Borne SAR System
-
Integrated Sensing and Communications (ISAC) is gradually becoming one of the key technologies in B5G/6G systems. Compared with conventional ground cellular systems, unmanned aerial vehicles (UAVs), thanks to their controllable trajectory, have been viewed as a promising technique to provide flexible communication services and synthetic aperture radar (SAR) based sensing. This paper investigates the issue of trajectory optimization for UAVs simultaneously performing SAR and communication functionalities, with the aim of minimizing the propulsion power under the communication and sensing constraints. To solve the non-convex problem, we propose a trajectory planning algorithm, wherein the successive convex approximation and block coordinate descent methods are employed to convexify the problem. Simulation results reveal that the proposed trajectory planning algorithm reduces power consumption. The energy saving achieved by the proposed algorithm can be up to 50%.
- 9:20 Performance Bound of Joint Communication and Sensing System in Time-Varying Channels
-
The current literature on sensing performance bounds for Joint Communications and Sensing (JCAS) systems is primarily focused on the channels wherein the Doppler shift within one Orthogonal Frequency Division Multiplexing (OFDM) block is neglected. This assumption, however, is not applicable in scenarios involving high mobility. In this paper, we aim to establish the sensing performance bound in time-varying channels and optimize preambles. Firstly, we establish input-output relationships for both continuous and discrete models in such channels. Then, we derive the delay and Doppler Cramér-Rao lower bound (CRLB) in time-varying channels. Finally, we optimize preambles based on CRLB minimization. Simulation results unfold the impact of parameters on the CRLB and validate our CRLB optimization methods in time-varying channels.
- 9:40 Fundamental Limits for Dynamic Path Parameter Estimation in Asynchronous ISAC Systems
-
Passive sensing based on the dynamic path signal is a key technology in integrated sensing and communications (ISAC), where clock asynchronism between transceivers poses a crucial challenge. While various passive sensing applications have been realized by analyzing the dynamic channel path signal reflected from the target, how accurately the dynamic path signal may be recovered from the noisy asynchronous sensing signal is not yet known. This work investigates the error bounds for estimating the dynamic path angle-of-arrival and complex gain sequence (CGS). We provide mathematical analyses and numerical simulations to characterize the dependence of the error bounds on system configurations and multipath conditions. Our results demonstrate the increase of the error bounds due to the asynchronism and reveal the blind zone in dynamic path CGS estimation.
- 10:00 Development of an Uplink Sensing Demonstrator for Perceptive Mobile Networks
-
Uplink sensing offers a cost-effective solution to real-time detection and tracking of moving objects in Perceptive Mobile Networks (PMN), which integrate sensing into mobile networks. Currently, there has been extensive research on passive human localization techniques based on millimeter-wave radar and Wi-Fi, whereas there are relatively few reports on passive human localization using mobile networks. This paper introduces the development of a real-time uplink sensing demonstrator for PMN, enabling sub-meter precision tracking in indoor environments. Motion-related parameters, such as Doppler frequency, angle-of-arrival (AoA), and dynamic propagation delay are estimated instantly within a sampling window of around 200 milliseconds. This prototype system achieves a median localization precision of 76 cm in an office setting.
- 10:20 Spectral-Efficient Waveform Design for RIS-Assisted ISAC
-
With integrated sensing and communications (ISAC) and reconfigurable intelligent surface (RIS) emerging as critical enablers for future mobile communications, their combination has attracted increasing attention lately. This work proposes a novel design to effectively utilize RIS for improving ISAC. We pursue a spectrally efficient ISAC design by seeking to maximize the weighted sum rate (WSR) of communications while minimizing the sensing radiation pattern approximation error. A holistic optimization problem is formulated for the proposed design, with practical constraints of RIS considered, including unit modulus and discrete phase shifts. An efficient solution is developed for the non-convex optimization problem by resorting to techniques including weighted minimum mean squared error (WMMSE), fractional programming (FP) and semi-definite relaxation (SDR). Simulation results demonstrate the non-trivial improvements of WSR and sensing radiation patterns achieved by the proposed designs, also highlighting their superiority over conventional methods.
Presenter bio: Kai received a Ph.D. degree from the University of Technology Sydney (UTS) in 2020. He is now a lecturer at the School of Electrical and Data Engineering (SEDE), UTS. His research interests include signal processing in spatial, time and frequency domains and its applications in radar, communications and joint communications and sensing.
Tuesday, October 17 9:00 – 10:30
Regular session 6: Image and Video Processing II
- 9:00 Coordinate Attention-Based Convolution Neural Network for In-Loop Filter of AVS3
-
Employing deep learning is a promising solution for reducing the encoding bit rate in future video encoding systems. This paper proposes a neural network-based in-loop filter for the third generation of the Audio Video Coding Standard (AVS3). The proposed network introduces a coordinate attention mechanism-based convolutional neural network in-loop filter (AMCNNLF) with a flexible attention module and a residual feature aggregation (RFA) module. Specifically, the attention module focuses on salient features to capture the visual structure, while the RFA module takes full advantage of the local refinement features. By leveraging the encoding parameters, we introduce the Quantizer Parameter (QP) values as auxiliary features to make the proposed network suitable for processing encoded videos with multiple QPs. Experimental results indicate that the proposed network reduces the average Bjøntegaard-Delta rate (BD-Rate) of the luma component by about 0.4% under the all-intra configuration compared with the benchmark.
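For readers unfamiliar with QP-conditioned filtering, the sketch below (a minimal PyTorch toy, not the AMCNNLF architecture from the paper) shows the general idea of tiling the QP value into an auxiliary input plane so that one model can handle reconstructions coded at different QPs; the normalisation constant and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class QPConditionedFilter(nn.Module):
    """Toy in-loop filter: the QP value is tiled into an extra input plane
    so a single model can restore reconstructions coded at different QPs."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, recon, qp):
        # recon: (N, 1, H, W) luma reconstruction; qp: (N,) integer QP values
        qp_plane = (qp.float() / 63.0).view(-1, 1, 1, 1).expand_as(recon)  # 63.0 is an illustrative normaliser
        x = torch.cat([recon, qp_plane], dim=1)
        return recon + self.body(x)          # residual restoration

filt = QPConditionedFilter()
out = filt(torch.rand(2, 1, 64, 64), torch.tensor([27, 45]))
print(out.shape)  # torch.Size([2, 1, 64, 64])
```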
- 9:18 MCC: A Low Resource Requirement Camera Source Recognition Model Based on Multi-Scale Feature Fusion
-
Although existing methods for camera source tracing have matured considerably, several problems remain: a large amount of training data is still required to train a model, and it is difficult to distinguish challenging camera models. To address these problems, this paper proposes a camera source identification model based on multi-scale feature fusion (MCC). The feature map output by the shallow network is subtracted pixel by pixel from the feature map obtained after several convolutional layers, in order to eliminate the image-content-related features learned by the deeper, more semantic convolutional layers; multiple constrained convolutional layers are then combined in parallel for feature fusion. A series of experiments was completed on the Dresden image set. The experimental results show that the MCC model identifies the source camera of an image with an accuracy of 99.51% when facing 23 models in the Dresden forensic image library with only 20 images selected per model. Across all camera models in the Dresden forensic image library, including challenging ones, it identifies the camera source with an accuracy of 95.3%, with the Sony DSC-H50, Sony DSC-T77 and Sony DSC-W170 achieving accuracies of 82%, 99.5%, and 90.7%, respectively.
- 9:36 Deep Learning-Based Image Quality Assessment Metric for Quantifying Perceptual Distortions in Transmitted Images
-
An Image Quality Assessment (IQA) metric measures the quality degradation of an image and is used to optimize the parameters of an image processing algorithm. The IQA score can also be an important indicator of the performance of the target downstream application. With the popularity of deep learning (DL)-based applications in resource-constrained domains, most of the DL computations are outsourced to remotely located resources, and the image data transmitted for this purpose are susceptible to distortions caused by the imperfect communication environment. The existing IQA metrics used to evaluate the quality of these images mainly rely on human judgment and do not account for the perceptual distortions responsible for the degradation in DL model performance. To address this issue, we propose a convolutional autoencoder-based IQA metric that compares images in a low-dimensional feature space and can be used to monitor image degradation occurring during data transmission. The simulation analysis shows that the proposed method introduces as little as 0.13% error, and 8% error on average, relative to the application model accuracy. Importantly, the proposed IQA score coincides with the DL model performance on a downstream task and can be used to optimize the parameters of a communication system.
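The idea of scoring quality in a learned feature space rather than in pixel space can be sketched as below; the tiny encoder, its layer sizes and the use of a plain MSE distance are illustrative assumptions, not the paper's trained autoencoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Stand-in for the encoder half of a convolutional autoencoder."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

def feature_space_iqa(encoder, reference, received):
    """Quality score = distance between low-dimensional features, not pixels."""
    with torch.no_grad():
        return F.mse_loss(encoder(received), encoder(reference)).item()

enc = TinyEncoder()
ref = torch.rand(1, 3, 128, 128)
noisy = ref + 0.1 * torch.randn_like(ref)   # stand-in for transmission distortion
print(feature_space_iqa(enc, ref, noisy))
```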
- 9:54 An Improved CTU-Level Rate Control Algorithm Based on Temporal Domain Motion Intensity
-
Rate control (RC) plays a vital role in Versatile Video Coding (VVC) and video transmission. However, the RC algorithm in VVC still has the problem that the process of updating model parameters only considers the content of the current encoding block, without considering the similarity and reference value between different Coding Tree Units (CTUs) in the temporal domain. As a result, the coding performance is not optimal. To address this problem and further optimize the RC of VVC, this paper proposes a CTU-level rate control model parameter updating algorithm based on temporal-domain motion intensity, and we integrate this algorithm into VTM 18.0. Experiments show that the proposed algorithm achieves better coding performance than the anchor, with 1.03% and 0.74% BD-rate savings under the Random Access (RA) and Low Delay B (LDB) configurations, respectively, while obtaining a smaller bitrate error under common test conditions.
- 10:12 Deep Lane Detection Based on Kullback-Leibler Divergence
-
Lane detection is a fundamental perception technology in autonomous driving that uses image or 3D information acquired through sensors attached to the vehicle to recognize lanes in the surrounding area. The versatility of lane detection technology is vast, as it is used in processes such as path planning and constructing high-definition maps for autonomous driving. In this paper, we propose an image-based lane detection method that applies the Kullback-Leibler divergence to the objective function of an existing ordinal-classification-based lane detection technique. By quantitatively defining the neural network's logit distribution and the label distribution, the proposed method induces the lane detection network to extract and represent various global features from the input image. To validate the performance of the proposed method, we conducted experiments on several public benchmarks and demonstrated that our method outperforms existing lane detection methods, achieving high accuracy and precision.
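The snippet below is a minimal PyTorch sketch of a KL-divergence objective between a row-wise label distribution and the network's logit distribution, in the spirit of ordinal-classification lane detection; the tensor shapes and one-hot labels are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def kl_row_loss(logits, labels, eps=1e-8):
    """KL(label distribution || predicted distribution) along the lane-position
    axis, averaged over rows.  logits/labels: (rows, positions)."""
    p = labels / (labels.sum(dim=1, keepdim=True) + eps)   # target distribution
    log_q = F.log_softmax(logits, dim=1)                   # network distribution
    return F.kl_div(log_q, p, reduction="batchmean")

logits = torch.randn(4, 100)                  # e.g. 100 candidate positions per image row
labels = torch.zeros(4, 100)
labels[torch.arange(4), torch.tensor([10, 20, 30, 40])] = 1.0   # one-hot lane position
print(kl_row_loss(logits, labels))
```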
Presenter bio: Taehyeon Kim (Member, IEEE) received the B.S. degree in electronic engineering from Kangwon National University, Chuncheon, South Korea, in 2017 and the Ph.D. degree in electrical and electronic engineering from Yonsei University, Seoul, South Korea, in 2022. He was a senior research engineer at the Chief Technology Office Division, LG Electronics Co., Ltd, Seoul, South Korea, in 2022. He is currently a Senior Researcher at the Contents Convergence Research Center, Korea Electronics Technology Institute, Seoul, South Korea. His research interests include all aspects of computer vision, with a special focus on neural network compression, automated machine learning, and dimensionality reduction.
Tuesday, October 17 9:00 – 10:30
Tutorial 2a – Miniaturised and Passive Inspired Millimetre Wave Integrated Circuits in Silicon Technology
Currently, millimetre wave (mmWave) integrated circuit (IC) design is one of the popular research topics within the IEEE Circuits & Systems Society. As the footprints of on-chip passive devices are inherently getting smaller in the mmWave region, adopting the classical design approach based on distributed elements, such as transmission lines, for on-chip passive components is now fully enabled in standard silicon-based technology, including CMOS and SiGe. Consequently, there are opportunities to reconsider how such passive components can be designed and implemented with active components in a more efficient and effective way. In addition, compared with active component design, it is believed that the full potential of on-chip passive components is still far from being reached.
In this talk, the implementation of miniaturised passive devices, as well as the possibility of using such devices in co-design with active components, will be discussed. The talk will be divided into two parts. In Part I, recent works in miniaturised on-chip passive filter design will be presented. Moreover, how to use the passive components to mitigate device-level limitations of silicon will be discussed in Part II.
Tuesday, October 17 10:30 – 11:00
Morning Tea
Tuesday, October 17 11:00 – 12:30
Special Session 4 – AI Oriented Multi Media Information Systems
- 11:00 Iterative Variable Threshold Method Resistant to Acoustic Reflections for Underwater Acoustic Positioning Systems
-
Underwater robots and drones such as remotely operated vehicles (ROVs) and autonomous underwater vehicles (AUVs) are used for oceanic surveys. Underwater acoustic positioning is an indispensable technology for the autonomous control systems (ACS) of ROVs and AUVs. In a long baseline (LBL) acoustic positioning system, multiple distances from reference points to the positioning target are required, and the cross-correlation function between the received and reference signals is calculated to measure each distance. In this work, we propose an iterative variable threshold method that is resistant to acoustic reflections. The proposed method has been evaluated in simulation and in a field experiment.
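As background, the sketch below illustrates the standard cross-correlation step that underlies LBL ranging (peak lag times sound speed); the iterative variable threshold itself is not reproduced here, and the signal parameters are illustrative assumptions.

```python
import numpy as np

def estimate_distance(received, reference, fs, sound_speed=1500.0):
    """Distance from the lag of the cross-correlation peak between the
    received signal and the known reference (transmitted) signal."""
    corr = np.correlate(received, reference, mode="full")
    lag = np.argmax(np.abs(corr)) - (len(reference) - 1)   # delay in samples
    return lag / fs * sound_speed

fs = 100_000                                  # 100 kHz sampling (illustrative)
t = np.arange(0, 0.01, 1 / fs)
reference = np.sin(2 * np.pi * 20_000 * t)    # 20 kHz ping
delay = int(0.02 * fs)                        # 20 ms propagation, about 30 m
received = np.concatenate([np.zeros(delay), reference]) \
           + 0.05 * np.random.randn(delay + len(reference))
print(f"{estimate_distance(received, reference, fs):.1f} m")
```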
- 11:18 Optimizing Optical Signal Quality with Deep Learning Dispersion Compensation at Various Distances
-
One of the limiting factors in long-distance communication over optical fiber is dispersion. Dispersion causes optical pulses to broaden as they travel through the fiber, leading to inter-symbol interference that ultimately degrades the signal quality. To solve this problem, a dispersion compensator is usually applied after a certain distance to bring the broadened signal back to its original form. However, this conventional approach is costly and difficult to implement, since more compensators must be added as the distance grows. In this paper, we propose a deep learning (DL)-based dispersion compensator as an alternative to the current compensator, and as a supplement to the current approach when the distance is very long. The dataset is created from signals obtained through simulation in OptiSystem. The different DL algorithms proposed in this paper can classify the received signals from various distances into binary values without the help of conventional dispersion compensation.
- 11:36 Power Consumption and Prototype Evaluation of IoT Devices for Environmental Monitoring Systems
-
In this paper, we conduct experiments and evaluations of intermittent operation with the goal of reducing power consumption in IoT devices for environmental monitoring systems. This intermittent operation is realized by alternately transitioning the CPU between its normal operating states and a deep sleep mode with minimal power consumption. While increasing the sensing interval can reduce power consumption, it also increases the risk of data loss, thus presenting a trade-off. This paper evaluates the reduction in power consumption by integrating three methods: (1) disconnecting the power supply to modules not in use, (2) utilizing the CPU's low-power mode, and (3) minimizing power consumption through intermittent operation. From the experimental results, the ratio of static to dynamic power was calculated. Methods (1) and (2) are used to reduce static power consumption to the maximum extent, while for dynamic power we demonstrate the relationship between the interval of intermittent operation, power consumption, and operating time. In this paper, we also show the results of a long-term environmental monitoring experiment with our developed IoT devices.
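The relationship between the intermittent-operation interval and mean power can be illustrated with a simple duty-cycle calculation, sketched below with hypothetical current-draw figures (not the paper's measurements).

```python
def average_power(p_active_mw, t_active_s, p_sleep_mw, interval_s):
    """Mean power of an intermittently operating node: a short active burst
    (sense + transmit) every `interval_s`, deep sleep for the rest of the time."""
    t_sleep = interval_s - t_active_s
    return (p_active_mw * t_active_s + p_sleep_mw * t_sleep) / interval_s

# Hypothetical figures: 120 mW during a 2 s burst, 0.05 mW in deep sleep.
for interval in (10, 60, 300, 900):
    print(f"{interval:>4} s interval -> {average_power(120, 2, 0.05, interval):7.3f} mW average")
```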
- 11:54 Adaptive Drones and Federated Learning: A New Paradigm for Multimedia IoT Networks
-
Federated learning, an efficient method for processing multimedia data on local devices, has been proven as an efficient way of task offloading and privacy protection. This study explores federated learning in multimedia tasks, with a particular emphasis on drones as communication relays in an IoT network. We address the challenges of random location distribution, communication difficulties, and the complexities of cloud server processing. To overcome these, we propose an approach using adaptive drones and a dedicated network. We introduce a Lyapunov drift-based learning optimization (LDLO) and a multi-target Ford-Fulkerson algorithm (MFFA) for task allocation and drone movement optimization. A simulation study validates our methods’ effectiveness in reducing latency and energy consumption.
Presenter bio: Chaofeng Zhang is currently an assistant professor at the Advanced Institute of Industrial Technology (AIIT), Tokyo, Japan. He received the B.Eng. degree from Soochow University, China, in 2011, and the M.Eng. and Ph.D. degrees from Muroran Institute of Technology, Japan, in 2016 and 2019, respectively. His research interests include cloud computing, full-duplex communication, and wireless positioning technology. Dr. Zhang serves as an Associate Editor for IEEJ Transactions on Electronics, Information and Systems, and for the Journal of Frontiers in Space Technologies. He received the IEEE VTS Tokyo Chapter student paper award in 2016 and the best presentation award at the A3 Annual Workshop on Next Generation Internet and Network Security. From March 2017 to April 2017, he was a visiting scholar at Soochow University, China.
- 12:12 Non-Local Technique on Deep Attentive Face Super-Resolution Network
-
Recent Face Super-resolution (FSR) based on iterative collaboration between a facial image recovery network and landmark estimation has succeeded in super-resolving facial images. However, the noise present in the coarse features at the low-level feature extraction stage leads to inaccurate facial priors such as landmarks and component maps, consequently degrading the super-resolved face image at large scales. This paper proposes a Non-local technique for a deep attentive face super-resolution network (NLDA). A Non-local module is placed before the residual channel attention block (RCAB) to effectively eliminate noise degradation in the coarse features. The proposed model optimizes feature extraction and improves facial landmark fusion to yield higher-quality super-resolved images. This approach facilitates more accurate landmark estimation and boosts the performance of our model at large scales and across various face poses. Quantitative and qualitative experiments on the CelebA and Helen face image datasets show that the proposed method outperforms other state-of-the-art FSR methods in recovering high-quality face images in various face poses and at large scales.
Presenter bio: Supavadee Aramvith (IEEE S'95-M'01-SM'06, IEICE M'04) received the B.S. (first class honors) degree in Computer Science from Mahidol University, Bangkok, Thailand, in 1993. She received the M.S. and Ph.D. degrees in Electrical Engineering from the University of Washington, Seattle, USA, in 1996 and 2001, respectively. She joined Chulalongkorn University in June 2001. Currently, she is an Associate Professor and Head of the Digital Signal Processing Laboratory at the Department of Electrical Engineering, Chulalongkorn University, Bangkok, Thailand. She was Associate Head in International Affairs (2007-2016) and Head, Communication Engineering Division (2013-2016). She is a focal point of the Faculty of Engineering, Chulalongkorn University, as an academic member of the International Telecommunication Union (ITU) under the United Nations (2015-present) and is an Expert to ITU-D for the Impact Study of ICT4SG (ICT for Sustainable Development Goals) (2016-2017). She is currently IEEE Region 10 Executive Committee: Educational Activities Coordinator (2011-2014, 2016), IEEE Educational Activities Board (EAB) volunteer: EAB Engineering Projects for Communities Service (EPICS in IEEE) (2012-2016) and HKN Globalization Committee (2016). She was IEEE Region 10 WIE Coordinator (2015) and MGA Representative to EAB (2015). She was Assistant Executive Director of the AUN/SEED-Net Secretariat, JICA Project, from 2007-2009. She also serves on the IEEE Circuits and Systems (CAS) Society Technical Committee on Multimedia Systems and Applications, the IEEE Communications Society Multimedia Communication Committee, as APSIPA Image and Video Technical Committee Chair, as IEEE Signal Processing Society Chapter Chair of Thailand Section and on the IEEE Thailand Section Executive Committee. She also serves as Chair, IEICE Bangkok area representative. She is an Associate Editor of IEICE Transactions on Information and Systems. She is a reviewer of major journals such as IEEE-TSVT, JVCIR, ETRI, and EIT Image Processing. She also serves as General Co-Chair (ISMAC 2009-2015, MMM 2018), Technical Program Co-Chair (IWAIT 2008, IEEE ISCIT (2010, 2012, 2015, 2017), APSIPA (2016-2017)), Special Session Co-Chair (APSIPA (2014, 2018)), International Steering Committee member (IEEE ISPACS), Board member (IWAIT) and Organising Committee member of many well-known conferences such as VCIP and ICME. In addition, she has rich project management experience as project leader and technical advisor to The National Broadcasting and Telecommunications Commission of Thailand and the Ministry of Information and Communication Technology (ICT). Her research interests include computer vision techniques for surveillance applications, rate control for video coding, error-resilient video coding for wireless video transmission, and image/video retrieval techniques. She leads the Video Technology Research Group under the Digital Signal Processing Laboratory.
Tuesday, October 17 11:00 – 12:30
Regular session 7: AI Enabled Health Care and Virtual Reality
- 11:00 Efficient Human Computer Interaction Pipeline for Mobile Devices
-
This paper proposes an efficient human-computer interaction pipeline specializing in real-time action and emotion recognition on mobile devices with limited hardware performance. We propose a new pipeline that removes the face detection model, significantly increasing speed, and extracts the face based on pose estimation, making it ideal for use on mobile devices. We developed an algorithm that calculates the center coordinate and width of the face bounding box using the most orthogonal parts of the extracted face landmarks, and the height using the nose and mouth. We compared the performance of our proposed pipeline with an existing pipeline on three mobile devices with limited hardware performance and found that the proposed pipeline significantly improved the inference speed on all devices, with the largest performance improvement of 41.0% observed on the Galaxy J5, a low-end device. These findings suggest that our proposed pipeline can provide a practical and sustainable solution for real-world scenarios, especially for low-performance devices.
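A minimal sketch of deriving a face box from pose landmarks, assuming hypothetical landmark names and margin factors (not the paper's algorithm or a specific pose-estimation library), is shown below.

```python
import numpy as np

def face_box_from_landmarks(landmarks):
    """Derive a face bounding box from pose landmarks instead of running a
    separate face detector.  `landmarks` maps names to (x, y) image points;
    the names and scale factors here are hypothetical."""
    left, right = np.array(landmarks["left_ear"]), np.array(landmarks["right_ear"])
    nose, mouth = np.array(landmarks["nose"]), np.array(landmarks["mouth"])
    center = (left + right) / 2.0
    width = np.linalg.norm(right - left) * 1.4        # margin factor (tunable)
    height = np.linalg.norm(mouth - nose) * 4.0       # scale nose-mouth span to a full face
    x, y = center - np.array([width, height]) / 2.0
    return int(x), int(y), int(width), int(height)

box = face_box_from_landmarks({"left_ear": (210, 140), "right_ear": (300, 142),
                               "nose": (255, 170), "mouth": (255, 205)})
print(box)
```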
- 11:18 Risky Blooms: Space-Time Chlorophyll-a Analysis and Forecasting
-
The uncontrolled proliferation and widespread dissemination of harmful algal blooms (HABs) have significant consequences for the environment, climate, human health, and socio-economics. Therefore, it is crucial to conduct a comprehensive analysis and evaluate the causes and impacts of both prolonged and sudden formations of HABs in order to develop effective strategies. In addressing ecological challenges, including HABs, computer technologies have emerged as valuable tools. To gain a profound understanding of the dynamics of blooms, it is essential to comprehend their temporal and spatial scales. However, effectively modeling ecological problems has been challenging, particularly in analyzing the influence of multiple variables on a specific variable as it evolves over time. To overcome this challenge, we propose a novel approach that combines transfer entropy (TE) network inference with a graph neural network (GNN). This approach enables the simultaneous consideration of multiple variables to model the occurrences of blooms, facilitating a comprehensive analysis of the issue from both temporal and spatial perspectives and leading to accurate predictions.
Presenter bio: Haojiong Wang obtained her Master of Science degree with distinction from the University of Southampton, UK, in 2019. She is currently pursuing her PhD at the Graduate School of Information Science and Technology, Hokkaido University, Japan. Her research interests include machine learning, image processing and information science. She specializes in applying computer technology to environmental and biological fields to address interdisciplinary research challenges.
- 11:36 Prediction of Heart Disease Using Hybrid Naïve Bayes Technique
-
The domain of medical diagnosis has attracted many researchers, and several cases of early human mortality have been predicted by investigating diseases. Among the many causes, heart disease is a leading one: a set of disorders of the heart that includes blood vessel issues such as irregular heartbeat, weak heart muscle, congenital heart defects, cardiovascular defects and coronary artery disorders. Many researchers have proposed procedures to preserve human life and help health care experts recognize, prevent and manage heart disorders. Machine learning is used for decision making in many domains, and the Naïve Bayes technique is one approach based on conditional probability. In this paper, a data set of patients affected by heart disease, taken from the UCI machine learning repository, is analyzed using the Complement Naïve Bayes probability methodology along with correlated features of the heart disease data set; the prediction is performed using this concept as a hybrid Naïve Bayes probability technique. The advantage of Complement Naïve Bayes prediction is that it can be used for imbalanced data sets. The classification accuracy for patients suffering from heart disease is reported, and the implementation is done in a Python environment.
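A minimal scikit-learn sketch of Complement Naïve Bayes on an imbalanced tabular problem is given below; the synthetic data and preprocessing stand in for the UCI heart-disease table and are not the paper's pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.naive_bayes import ComplementNB
from sklearn.metrics import accuracy_score

# Synthetic, imbalanced stand-in for a heart-disease table (13 features, 2 classes).
X, y = make_classification(n_samples=600, n_features=13, weights=[0.8, 0.2], random_state=0)
X = MinMaxScaler().fit_transform(X)        # ComplementNB expects non-negative features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

clf = ComplementNB().fit(X_tr, y_tr)       # designed with imbalanced classes in mind
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```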
- 11:54 Implementation of Intuitive 3-Dimensional Manipulation for 3DCG Objects Using Monocular Camera
-
The widespread use of smartphones has led to an increase in opportunities for manipulating 3-Dimensional-Computer-Graphics (3DCG) objects in Augmented Reality (AR) and Virtual Reality (VR) applications. However, these interactions usually require touch panel input, which can be difficult for users who lack prior experience. To address this issue, one possible solution is to use depth sensors to infer hand shapes from skeletal data and control 3DCG objects using gesture movements. Nevertheless, these sensors can be challenging to implement due to cost and other limitations. Thus, we will investigate the possibility of manipulating 3DCG objects using only a monocular camera based on gesture movements.
- 12:12 A Multi-Class Graph Convolutional Neural Network for EEG Classification and Representation
-
The classification of motor imagery based on EEG signals is critical for motor rehabilitation with a Brain-Computer Interface (BCI). The majority of the currently available works on this topic call for a step of subject-specific adaptation before they can be applied to a new user. Therefore, research that immediately extends a pre-trained model to new users is especially desirable. Since brain dynamics vary greatly within and across participants, it is difficult to build effective handcrafted features based on existing information. To address the aforementioned limitation, this paper offers a Graph-based Convolutional Neural Network Model (G-CNNM) to investigate EEG features across participants for motor imagery classification. Initially, a graph structure is created to represent the location information of the EEG nodes. A convolutional neural network model then learns EEG features from both the spatial and temporal dimensions, with an emphasis on the most discriminative temporal periods. The proposed methodology was evaluated on a benchmark EEG dataset for motor imagery classification (BCICIV2a). Evidence from this study demonstrates that the G-CNNM outperforms the existing state-of-the-art approaches in terms of accuracy, achieving 72.57%.
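For orientation, the snippet below sketches a single normalized graph-convolution step over an electrode graph with 22 channels (as in BCICIV2a); the adjacency, feature sizes and weights are random illustrative assumptions, not the G-CNNM itself.

```python
import numpy as np

def gcn_layer(adjacency, features, weights):
    """One graph-convolution step H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)
    over an EEG electrode graph (nodes = channels)."""
    a_hat = adjacency + np.eye(adjacency.shape[0])          # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))  # symmetric normalisation
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return np.maximum(norm @ features @ weights, 0.0)

rng = np.random.default_rng(0)
n_channels, n_feats = 22, 16                      # 22 electrodes as in BCICIV2a
adjacency = (rng.random((n_channels, n_channels)) > 0.7).astype(float)
adjacency = np.maximum(adjacency, adjacency.T)    # undirected neighbourhood graph
features = rng.standard_normal((n_channels, n_feats))
weights = rng.standard_normal((n_feats, 8))
print(gcn_layer(adjacency, features, weights).shape)   # (22, 8)
```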
Tuesday, October 17 11:00 – 12:30
Tutorial 2b – Towards the ultimate 6G network leveraging Joint Communications and Sensing (JCAS), AI and non-terrestrial networks (NTN)
A core capability introduced by 6G will be the joint support for mobile communications and mobile sensing. Today, mobile robots and XR applications record their surroundings in 3D using sensors such as radar, combined with localization techniques. Another example may be gesture control of smartphones as an evolution of the established touchscreen operation. At the same time, communication takes place between these devices over the cellular network.
With the evolution of cellular systems to mmWave bands in 5G and potentially sub-THz bands in 6G, more bandwidth will become available and provide an unprecedented opportunity to employ the mobile network for sensing.
In the form of machine learning, artificial intelligence has achieved tremendous success in image and video analysis as well as natural language processing. For 6G, researchers propose applying machine learning to signal processing by replacing individual or multiple blocks in the chain with trained models that can perform channel estimation and equalization, for instance. The ultimate goal is to learn the entire communications system model and train a particular type of neural network (autoencoder) to allow modification of the signal to be transmitted.
The cellular layout in the current network architecture is designed to minimize interference at the borders between cells. However, to achieve ultra-high speed, high capacity (with improvements in particular on the uplink) and very reliable communications, it is ideal to communicate at short distances via a low-loss path and increase the redundancy over multiple communication paths. One possibility for such a spatially distributed topology involves cell-free networks, where base stations distributed over a large area coordinate coherent joint transmission to provide service to each user. This approach will lead to higher signal-to-noise ratio and gain as well as a more consistent quality of experience for users at different locations. This will also impact the processing architecture: information and communications technologies will further merge, i.e. the processing of large amounts of data will take place in distributed systems in the network and not necessarily in the end-user device, leading to challenging data rate and latency requirements.
In order to offer new services to drones, aircraft, ships and space stations/satellites and thus provide coverage in remote areas, maritime locations and in space, it is necessary to extend network coverage three-dimensionally and include the vertical direction in addition to horizontal deployments. Such ubiquitous communications could be realized with non-terrestrial networks (NTN), which would utilize drones (high-altitude platform stations (HAPS) in the stratosphere) and low earth orbit (LEO) satellite constellations acting as mobile base stations in the sky and leading to a unified network architecture.
Tuesday, October 17 12:30 – 1:30
Lunch
Tuesday, October 17 1:30 – 2:10
Keynote 3 – Dr. Astrid Algaba Brazalez (Ericsson): Innovative and Highly Efficient Antenna Solutions for Next Generation Communication Systems
The rise of new use cases such as the Internet of Senses, Machine Type Communication (MTC), remote surgery and holographic communication, as well as the continuous exponential increase of mobile data traffic, creates the need to settle new spectrum allocations for the future scenarios and applications of the upcoming sixth generation (6G) of communication systems, which is expected to be deployed by 2030. It is envisioned that millimeter-wave (mm-wave) and sub-THz frequency bands will be highly relevant for 6G, and this brings technological challenges from the point of view of the hardware used to implement the antenna system. On one side, integrated radio products operating beyond mm-waves require directive antennas to counteract the high path loss at those frequencies. On the other side, in order to ensure the needed link budget and relax the power consumption of the power amplifiers, low-loss hardware technologies and low-loss materials need to be used. Moreover, robust and cost-effective manufacturing techniques must be employed in order to reduce the impact of tolerances on the radio performance, as well as to ensure a viable commercialization of the system.
The right choice of the antenna solution employed in mm-wave and sub-THz systems is critical, especially considering that multibeam antennas with wide scanning capability must be used in order to provide enough coverage in the whole service area. Furthermore, electronic reconfigurability is an essential aspect of future radio systems in order to allow efficient connectivity on demand and to adapt the system to the use case requirements. In this keynote seminar, I will outline and elaborate upon the challenges and opportunities of applying innovative antenna solutions based on lenses, and on lenses combined with traditional phased arrays (also known as dome antennas), to radio access base stations. I will also summarize the research activities performed at Ericsson Research on metasurface-based lenses, geodesic lenses and dielectric dome antennas for 5G and beyond applications.
Tuesday, October 17 2:10 – 2:50
Keynote 4 – Prof. Mona Jarrahi (UCLA): Plasmonic terahertz transceivers for next generation communication systems
Communication at terahertz carrier frequencies is a promising way to satisfy the ever-growing demands for high-speed wireless networks. However, practical feasibility of terahertz communications systems has been limited by the low radiation power of terahertz transmitters and low sensitivity of terahertz receivers. This presentation introduces plasmonic terahertz transceivers, which offer significantly higher terahertz radiation powers and detection sensitivities compared to existing technologies. Thus, they allow potential use of atmospheric transparent bands in the terahertz regime for increasing wireless communication data rates.
Tuesday, October 17 2:50 – 3:00
ISCIT 2024 Organising Committee Presentation
Tuesday, October 17 3:00 – 3:30
Afternoon Tea
Tuesday, October 17 3:30 – 5:00
Regular session 8: Artificial Intelligence for Communications
- 3:30 Statistical Analysis of Least Mean Modulus Algorithm for Non-Gaussian Noise
-
This paper develops a statistical analysis of adaptive filters using the Least Mean Modulus Algorithm (LMMA) for a Gaussian regressor and non-Gaussian observation noise. In the analysis, we propose using the characteristic function of the probability distribution of the non-Gaussian noise to derive difference equations for calculating the theoretical transient behavior of the filter convergence. Numerical experiments are carried out to examine the accuracy of the analysis, where simulations and theoretical calculations of filter convergence for the non-Gaussian noise are compared with those for Gaussian noise. Good agreement is observed between simulation and theory, proving the validity of the proposed method of analysis.
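As a rough illustration of a modulus/sign-driven weight update (one common reading of "least mean modulus", not necessarily the paper's exact recursion), the sketch below adapts a short filter under heavy-tailed, non-Gaussian noise.

```python
import numpy as np

def lmm_update(w, x, d, mu, eps=1e-12):
    """One adaptive-filter step where the error enters only through its sign/modulus;
    this is an illustrative sign-error-type update, not the paper's derivation."""
    e = d - np.dot(w, x)
    return w + mu * (e / (abs(e) + eps)) * x, e

rng = np.random.default_rng(0)
true_w = np.array([0.5, -0.3, 0.1])
w = np.zeros(3)
for _ in range(5000):
    x = rng.standard_normal(3)                       # Gaussian regressor
    d = true_w @ x + 0.3 * rng.standard_t(df=3)      # heavy-tailed (non-Gaussian) noise
    w, _ = lmm_update(w, x, d, mu=0.005)
print(np.round(w, 2))    # should be close to the true weights
```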
Presenter bio: Shin'ichi Koike received the B.S.E.E., M.S.E.E. and Ph.D. degrees from the University of Tokyo, Tokyo, Japan. From 1995 to 2004, he was with NEC Corporation, Japan, as Chief Engineer. He is currently a consultant and senior researcher. His research interests include modulation theory and adaptive filtering.
- 3:48 Stock Price Prediction Using Machine Learning: A Survey of Recent Techniques
-
In this paper, we conduct a survey on recent stock price prediction models and examine the effectiveness and accuracy of using RNN, LSTM, and GRU models for stock price prediction. Since an LSTM model can handle both text and time-series data, it was thought to be the best option for stock market forecasting. The LSTM model was enhanced by combining it with other techniques and developing the GRU model, a variant of the LSTM model. The input gate and forget gate from the LSTM are combined to form the update gate in the GRU model. Those recent techniques prove to be more accurate and outperform other similar and traditional techniques.
Presenter bio: Dr Hazem El Alfy is an Assistant Professor of Data Science at S P Jain School of Global Management. He is a seasoned academic with over ten years of research and teaching at universities in the US, Egypt, Japan, Kuwait and Australia. His primary area of expertise lies within computer vision, video analysis and machine learning. He has developed a proven track record of designing and implementing innovative techniques to solve complex visual surveillance problems. His diverse skill set includes both analytical and mathematical expertise, a strong academic background encompassing teaching, student mentoring, research development and publishing, as well as practical industry experience. Furthermore, Dr Hazem possesses exceptional multilingual written and oral communication skills, allowing him to effectively communicate and collaborate with colleagues, students, and professionals from diverse backgrounds and cultures. Dr Hazem earned his PhD and MSc in Computer Science from the University of Maryland at College Park. He also has an MSc in Engineering Mathematics and a BSc in Computer Engineering from Alexandria University, Egypt.
- 4:06 Consideration on a Single Input Blind Source Separation Method for Pulse Wave Extraction
-
Blind Source Separation (BSS) is known as a method to separate two sources using only the sequences observed by two sensors. Studies of Pulse Wave (PW) extraction using BSS have been progressing. In pulse wave extraction, various types of noise and the PW are used as source sequences, and the PW is extracted using BSS from the observed sequences, which are mixtures of these sources. However, since this method uses two sensors, issues remain for practical use, such as adjusting the distance between the sensors. To solve this problem, we previously proposed a single-input BSS method for PW extraction using a Quadrature Mirror Filter (QMF), which generates two sequences that can be considered mixtures of the pulse wave and other noise. It was shown that this method achieves the same degree of precision as the conventional double-input BSS method. However, problems remain: the computational cost increases due to the filtering, and the filter characteristics need to be adjusted. Therefore, in this study, we propose a method based on oversampling that reduces the computational cost and removes the need to adjust filter characteristics while maintaining accuracy.
- 4:24 A Novel Deep Learning Framework for Efficient Automatic Modulation Recognition of Sub-Nyquist Sampled Signals
-
Automatic modulation recognition (AMR) is a promising technology that can enable intelligent communication receivers to detect signal modulation schemes. Recently, deep learning (DL) research has facilitated high-performance DL-AMR approaches under traditional Nyquist sampling. However, in practical applications, the cost, complexity, and power consumption of analog-to-digital converters (ADCs) pose significant challenges to the implementation of real-time wideband spectrum sensing. Therefore, a sub-Nyquist sampling mechanism has been proposed to solve the challenging problem of meeting the high Nyquist sampling rate required by 6G ultra-wideband technology. Nevertheless, sub-Nyquist sampling may lead to non-linear distortion of the original signal, which can greatly increase the difficulty of signal modulation recognition and result in a significant decrease in the recognition accuracy of traditional DL-based methods. In this paper, we propose a novel deep learning framework for sub-Nyquist signal modulation recognition, which consists of two parts: (1) signal reconstruction based on Simultaneous Orthogonal Matching Pursuit (SOMP), and (2) a time-frequency feature-based modulation recognition network, called STFT-AMCNet, which combines the short-time discrete Fourier transform (STFT) with efficient convolutional neural networks (CNNs). The performance of the proposed approach is evaluated and compared with prior art on a public dataset to demonstrate its efficiency.
- 4:42 Privacy-Preserving Gaussian Process Latent Variable Model for Dimensional Reduction
-
In this paper, we propose a privacy-preserving Gaussian Process Latent Variable Model (GPLVM) for scrambled data generated by a random unitary transform. The GPLVM is a flexible Bayesian non-parametric modeling method that has been extensively studied and applied in many machine learning tasks. The proposed privacy-preserving GPLVM reduces the dimension of high-dimensional data in the scrambled domain, in consideration of its use at the edge/cloud. In addition, we propose a dimensional extension method to improve the security strength. We prove, theoretically, that the proposal has exactly the same estimation performance as the GPLVM applied to non-scrambled data. Finally, we performed numerical demonstrations on multi-phase oil flow data, verifying the effectiveness of the proposed method.
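The core scrambling idea can be sketched as below: a random unitary transform preserves inner products, which is why kernel-based estimation such as the GPLVM is unaffected. The matrix sizes and the Haar construction are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    """Random unitary matrix from the QR decomposition of a complex Gaussian matrix."""
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))    # phase fix for a proper Haar draw

n, m = 12, 200                      # 12-dimensional observations, 200 samples
X = rng.standard_normal((n, m))
Q = random_unitary(n)
X_scrambled = Q @ X                 # what the edge/cloud actually sees

# Inner products (hence Gaussian-process kernels built on them) are preserved:
print(np.allclose(X.conj().T @ X, X_scrambled.conj().T @ X_scrambled))   # True
```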
Presenter bio: Takayuki Nakachi is a Professor at the Information Technology Center, University of the Ryukyus. He received the Ph.D. degree in electrical engineering from Keio University, Tokyo, Japan, in 1997. Since he joined Nippon Telegraph and Telephone Corporation (NTT) in 1997, he has been engaged in research on super-high-definition image/video coding and media transport technologies. From 2006 to 2007, he was a visiting scientist at Stanford University. He also actively participates in MPEG international standardization. His current research interests include sparse modeling, communication science, information theory and signal processing. He received the 26th TELECOM System Technology Award, the 6th Paper Award of the Journal of Signal Processing and the Best Paper Award of IEEE ISPACS 2015. Dr. Nakachi is a member of the Institute of Electrical and Electronics Engineers (IEEE) and the Institute of Electronics, Information and Communication Engineers (IEICE) of Japan.
Tuesday, October 17 3:30 – 5:00
Regular session 9: Next-Generation Networking
- 3:30 A Privacy-Preserved End Terminal Characterization Mechanism by Collaborative Traffic Analysis
-
Network security is always one of the highest-priority missions in an organization, covering intrusion prevention, infection prevention, data protection, and more. Existing security facilities mainly work at the border of an organization network, and false detections remain common. This situation inevitably leaves malware-infected end terminals in place, which can not only spread malware within the organization network, since internal traffic is outside the monitoring of the security facilities, but can also turn the organization into an attacker on the Internet. Meanwhile, the requirement for user privacy protection is increasing with the popularization of Secure Socket Layer (SSL)/Transport Layer Security (TLS) based encrypted communication. This extended abstract proposes an end terminal characterization mechanism based on collaborative traffic analysis in order to detect malware-infected end terminals at an early stage while considering user privacy. In particular, the target threat model for the proposed mechanism, the design and working scenario, and the next steps of this research are presented.
- 3:48 Policy-Based Detection and Blocking System Against Abnormal Applications by Analyzing DNS Traffic
-
We focus on direct outbound application traffic and propose a policy-based detection and blocking system against abnormal applications by analyzing DNS traffic. Specifically, direct outbound application traffic without a corresponding domain name resolution is detected and blocked as abnormal network traffic from bot-infected computers. We implemented a prototype system and conducted a feature evaluation on the SMTP protocol. The results confirmed that the proposed system worked correctly as designed.
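A toy sketch of the underlying policy, flagging outbound flows whose destination was never returned by a prior DNS answer for that client, is shown below; the class and method names are hypothetical, and the real system's packet capture and policy engine are not reproduced.

```python
import time

class DnsPolicy:
    """Flag outbound flows whose destination IP was never returned by a
    recent DNS response observed for that client (simplified illustration)."""
    def __init__(self, ttl=300):
        self.ttl = ttl
        self.resolved = {}                       # (client, dest_ip) -> expiry time

    def record_dns_answer(self, client, dest_ip):
        self.resolved[(client, dest_ip)] = time.time() + self.ttl

    def check_outbound(self, client, dest_ip):
        expiry = self.resolved.get((client, dest_ip), 0)
        return "allow" if expiry > time.time() else "block"   # policy decision

policy = DnsPolicy()
policy.record_dns_answer("10.0.0.5", "93.184.216.34")       # from a sniffed DNS reply
print(policy.check_outbound("10.0.0.5", "93.184.216.34"))   # allow
print(policy.check_outbound("10.0.0.5", "198.51.100.7"))    # block (no prior resolution)
```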
- 4:06 Profile-Based Data-Driven Approach to Analyse Virtualised Network Functions Performance
-
Current Network Function Virtualisation (NFV) orchestration frameworks lack intelligence and handle resources in a reactive manner while neglecting Virtualised Network Function (VNF)-level service performance. This article introduces a novel NFV analysis framework and methodology, which is able to operate in conjunction with already standardised or forthcoming Artificial Intelligence-based VNF management processes. This framework comprises a profile-based data-driven method for the analysis of VNF-level service performance. The novel potential of the proposed method lies in the fact that, instead of providing and working with volatile monitoring metrics for reactive service management, we analyse the impact of the underlying virtualised system's resource configurations and each VNF's input data rate on the performance characteristics of that VNF and its resource utilisation. This will help network operators by providing insights into the resource utilisation and performance behaviour of a VNF, so that they can make proactive and efficient resource management plans to meet the targeted service performance. For the evaluation of our proposed approach, an autonomous profiling method is used to perform benchmarking and monitoring and to generate real profile information of VNFs in a real deployment environment.
- 4:24 Recursive Service Function Chain Orchestration
-
Current orchestration platforms are more than capable of scheduling microservices, but the scheduling of Service Function Chains (SFCs) of network services is not well addressed. In this paper, we focus on SFC scheduling for microservices and aim to achieve the recursive usage of microservices, i.e., a single microservice serving multiple SFCs simultaneously. We propose an extension to the ETSI MANO stack based on a hierarchical Monte Carlo Tree Search algorithm. It allows microservices to serve multiple SFCs simultaneously in an environment similar to the popular container orchestration platform Kubernetes, without interfering with existing horizontal scalers. We developed our simulation based on the popular cloud computing simulation tool CloudSim Plus and benchmarked our algorithm against four simple algorithms inspired by existing works. The results show that our proposed algorithm guarantees the feasibility of the generated schedule. At the cost of higher latency for individual SFCs, the overall completion time and power usage of the host can be reduced by up to 43%.
- 4:42 A Comparative Study of German and Japanese University Homepages
-
In the globalization era, a university’s homepage on the Internet has become more critical since it can give an insight into the technologies valued by the university. In this research, we compared the university homepages/websites in Germany and Japan. We aimed to learn about the differences between these countries’ university websites. We first compiled a list of 492 German and 716 Japanese university websites. We then conducted experiments where users from two countries accessed all the websites. We gathered and analyzed the performance metrics related to technical and loading speed-related data for comparison. The results show that Japanese websites perform better regarding returned HTTP error codes, fewer redirects, and smaller download sizes. Regarding the loading speed, we observed that the Japanese websites could be accessed faster on average, locally and from overseas, than their German counterparts.
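The kind of per-site measurement described above can be sketched with the requests library as below; the two URLs are placeholders rather than entries from the study's list, and the metrics are a simplified subset of what the paper gathers.

```python
import time
import requests

def probe(url, timeout=15):
    """Collect metrics of the kind compared in the study: HTTP status,
    number of redirects, transferred size and total fetch time."""
    start = time.perf_counter()
    try:
        r = requests.get(url, timeout=timeout, allow_redirects=True)
        return {"url": url, "status": r.status_code, "redirects": len(r.history),
                "kilobytes": len(r.content) / 1024,
                "seconds": time.perf_counter() - start}
    except requests.RequestException as exc:
        return {"url": url, "error": type(exc).__name__}

# Placeholder URLs; the study used its own compiled lists of 492 + 716 sites.
for site in ("https://www.u-tokyo.ac.jp/", "https://www.tum.de/"):
    print(probe(site))
```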
Tuesday, October 17 3:30 – 5:00
Invited Session 1
Prof David Huang (University of Western Australia): Detection, tracking and shadow profile retrieval with opportunistic signals from LEO satellites
A/Prof. Yi Gong (Beijing Information Science and Technology University): A Compressive Sensing and Denoising RCAN-Based Channel Estimation Scheme for OTFS System
Dr. Graeme Woodward (University of Canterbury): Radio localisation: tracking insects, busting sport cheats and studying climate change
Tuesday, October 17 6:00 – 9:30
Conference Banquet
Wednesday, October 18
Wednesday, October 18 9:00 – 10:30
Special Session 5 – Energy Harvesting Technologies and Ultra-low power Analogue/RF IC Design for IoT, Radar/Space and 5G/6G Communication Applications
- 9:00 Design of Signal Enhancing Multiband Antenna Using the Second Iteration of Sierpinski-Shaped Fractal for GSM/GPS/RFID Applications
-
This study presents a novel multiband antenna design operating at three frequencies (900 MHz for GSM, 1.57542 GHz for GPS, and 2.45 GHz for RFID). Utilizing the second iteration of a Sierpinski hexagonal-shaped fractal, the antenna was designed and simulated using CST software. The designed antenna was fabricated using two different materials: an FR-4 substrate and a copper sticker with a plastic base. The antenna with the FR-4 substrate was integrated into a tracking module to confirm its functionality in the GSM and GPS bands; the results exhibit lower delays and higher signal strength compared to the module without the antenna (conventional). The sticker antenna was fabricated both with and without a ground plane and tested with different setups at a location with intermittent coverage, demonstrating its signal-enhancing capability for the GSM band when affixed to a mobile phone, and its functionality in the GSM and GPS bands by comparing its performance to the tracking module with the integrated FR-4 antenna. Moreover, the antenna successfully functions as an RFID reader, detecting and communicating with RFID tags within predetermined ranges in multiple directions. Statistical analysis validated the antenna's performance and effectiveness. These findings contribute to the advancement of tracking antenna technology, providing an efficient solution for object tracking.
- 9:18 A Design of High Efficiency Non-Time Division Multiplexing Battery-Less and Self-Powered Multi-Input Single-Inductor Single-Output Using 22nm FDSOI Technology
-
This study focuses on the development and analysis of a non-time-division-multiplexing multi-input single-inductor single-output (MISIMO) energy harvesting unit (EHU) using 22nm FDSOI technology. The EHU is designed to harvest energy from three transducers, namely PV, PZT, and TEG, and the proposed system operates without the need for external power sources or batteries. The proposed EHU achieves a peak extraction efficiency of 87% and a maximum power-point tracking efficiency of 99%, delivering an overall system power output of 3.6mW. It can rapidly reach a peak output voltage of 2V within approximately 27 seconds of charging from 0V. The findings demonstrate the efficacy of the EHU in efficiently harvesting energy from multiple sources, enabling remote and sustainable applications for wearables, wireless sensor nodes (WSNs) and Internet-of-Things (IoT) devices.
- 9:36 Integration of OpenCV and Cyclone V Hybrid ARM and FPGA SoC for Face Detection Application
-
This paper presents a Hybrid ARM and FPGA-based Face Detection System design powered by the OpenCV computer vision library and the SoCKit Altera Cyclone V System-on-Chip FPGA Development Board. An integrated system was designed for compatibility with the Cyclone V SoC SoCKit Development Board using Altera QSys and Altera Quartus. A custom version of the Linux Operating System kernel from GitHub was then developed to support the Development Board's specifications and the system requirements, such as USB Video Class kernel modules for USB webcam support of the integrated Face Detection System, and was compiled using the Linaro toolchain. OpenCV was then compiled within the Linux system, and a face detection program using OpenCV face detection functions was developed to be compatible with the integrated system. The Integrated Face Detection System was compared to a CISC-based setup with an Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz. The results showed that the SoC takes 43% less time than the Intel Core i7 setup to detect a face from the standard Lena.jpg input file.
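A minimal OpenCV face-detection sketch in Python using the bundled Haar cascade is shown below for orientation; the integrated ARM/FPGA system itself and its exact OpenCV program are not reproduced, and the file names are placeholders.

```python
import cv2

# Haar-cascade face detector shipped with the OpenCV Python package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("lena.jpg")                       # placeholder path to the test image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("lena_faces.jpg", img)
print(f"{len(faces)} face(s) detected")
```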
Presenter bio: Harreez M. Villaruz received her B.S. degree in Electronics Engineering from Mindanao State University – Iligan Institute of Technology (MSU-IIT), Philippines, in 2006, and her M.S. degree in Electrical Engineering and her Ph.D. degree in Electrical Engineering and Computer Science from National Taipei University, Taiwan, in 2010 and 2022, respectively. Dr. Villaruz was the recipient of the Phi Tau Phi Scholastic Honor from National Taipei University, Taiwan, in 2022. She has been a member of the Faculty of the College of Engineering, MSU-IIT, Philippines, since 2007. Her current research interests include high-efficiency power management IC design, low-EMI pulse-width modulation IC design, power management ICs, applications of Field Programmable Gate Arrays and Digital Signal Processing.
- 9:54 Design of Charge Pump for Low Power, Wide Range PLL in 65nm CMOS Technology
-
In this paper, a specific circuit topology called the NMOS-Switch Current Steering Charge Pump is presented. The circuit is designed in a 65nm CMOS technology process. The main objective of this design is to minimize the mismatch between the charging and discharging currents in order to reduce undesirable effects such as PLL reference spurs and phase offset. It employs a dual compensation method to address the current mismatch issue. This design achieves a maximum current mismatch of 1.75%, compared to 20% for the conventional architecture. The current mismatch remained below 1% over a range of output voltages from 0.24V to 0.9V, with a supply voltage of 1.2V. The power consumption of 1.48mW provides insight into the energy efficiency of the circuit and can be used for further analysis and comparison with other charge pump designs. This work highlights the design and performance characteristics of a specific charge pump topology using a dual compensation method. The experimental results demonstrate the effectiveness of this approach in reducing current mismatch, minimizing PLL reference spurs, and limiting phase offset. The low current mismatch and moderate power consumption make this charge pump design a promising solution for practical applications in various electronic systems.
Wednesday, October 18 9:00 – 10:30
Regular session 10: Signal Processing
- 9:00 On Sustainability of a Hospital-As-Vertical-Operator Model in the 5G Era
-
Innovative business models and applications have been key to 5G service designs. This paper aims to investigate a mobile network access service provision model of cooperation and resource sharing among a hospital-as-vertical operator (HAVO) and 5G mobile network operators (MNOs). The question to address is: when the existing HAVO is upgraded to 5G network and services, will the HAVO model be sustainable? Using current 5G network and service parameters, this paper examines an existing 4G HAVO model and adopts a cost-benefit analysis (CBA) to perform a worst-case evaluation of its sustainability in the 5G era. The analyses provide the insight that win-win-win conditions will continue to exist for all participants in the HAVO business model.
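A toy cost-benefit sketch of the kind of worst-case sustainability check described above is shown below; the cash-flow figures and discount rate are purely hypothetical and not taken from the paper.

```python
def net_present_value(net_benefits, discount_rate):
    """Discounted sum of (benefit - cost) per year; a positive NPV over the
    planning horizon is taken here as the sustainability criterion."""
    return sum(nb / (1 + discount_rate) ** t for t, nb in enumerate(net_benefits, start=1))

# Hypothetical worst-case cash flows (million $/year) for a hospital operating
# its own 5G access: heavy up-front spectrum/equipment cost, modest yearly benefit.
net_benefits = [-4.0, 0.8, 0.9, 1.0, 1.0, 1.1, 1.1, 1.2]
print(f"NPV = {net_present_value(net_benefits, 0.05):.2f} M$")
```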
- 9:18 Overfitting Problem of Reservoir-Computing-Based Nonlinear Equalizer Trained on PRBS Signals in Optical Communication Systems
-
We investigated the overfitting characteristics of a nonlinear equalizer based on reservoir computing (RC) designed for fiber-optic nonlinear impairment mitigation. The results revealed the risk of the equalizer overfitting pseudo-random binary sequence (PRBS) signals.
- 9:36 Microprocessor Instruction Design Tool for RISC-V Architecture
-
MEIMAT (MEiji Microprocessor Architecture Design Tools), which we are developing, can represent any instruction of various processors as a MEIMAT meta-instruction in two ways: semantic and functional expressions. In this paper, we have verified the RISC-V support in MEIMAT and improved it so that MEIMAT supports RISC-V more fully. First, we have verified that RISC-V instructions can be expressed by the MEIMAT semantic expression. It is also verified that the functional expression is automatically generated from the semantic instruction, and that the corresponding correct circuit diagrams are generated in the MEIMAT instruction visualization tool. Then, we have newly developed a stage configuration mechanism to implement RISC-V instructions, because the RISC-V architecture is implemented with 4 or 5 processing stages in most actual processors. To make the design tool easier to understand, we have also introduced illustration windows in the instruction visualization tool for RISC-V-specific hardware modules such as the register file and, especially, the status register. Finally, through these verifications and improvements, we have confirmed that the 43 instructions in the RV32I instruction set of the RISC-V architecture can be converted into MEIMAT meta-instructions and then represented as circuit diagrams with the RISC-V stage configuration in the MEIMAT instruction visualization tool.
- 9:54 Lossless Audio Compression Using DWT, DCT and Huffman-Based LZW Encoding
-
As digital content and multimedia files increase in quality and size, there is a growing need for audio signal compression to aid in more efficient signal transmission, network management, and data processing. In this study, a lossless compression method was implemented using a combination of the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT)[1][2] to decompose audio signals. Furthermore, a Huffman-based Lempel-Ziv-Welch (LZW) coding algorithm[3] was used for the entropy encoding. The resulting compressed files were stored in MATLAB raw files, producing an average compression ratio (CR) of 4.4 (~78) and a peak signal-to-noise ratio (PSNR) of 62 dB on a small audio sample, and an average CR of 1.63 and PSNR of 39.51 dB on a music/speech dataset.
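The pipeline named in the abstract (DWT decomposition, DCT, then LZW entropy coding) can be sketched as follows. This is a simplified illustration, not the authors' implementation: the 'db4' wavelet, the coefficient scaling, and the plain (non-Huffman) LZW coder are assumptions, and the sketch requires NumPy, PyWavelets, and SciPy.

```python
# Sketch of the DWT -> DCT -> LZW pipeline named in the abstract.
# Assumptions: 'db4' wavelet, integer rounding of coefficients, and a
# plain LZW coder (the paper uses a Huffman-based LZW variant).
import numpy as np
import pywt
from scipy.fft import dct, idct

def lzw_encode(symbols):
    """Basic LZW over a sequence of integer symbols."""
    dictionary = {(s,): i for i, s in enumerate(sorted(set(symbols)))}
    w, codes = (), []
    for s in symbols:
        wc = w + (s,)
        if wc in dictionary:
            w = wc
        else:
            codes.append(dictionary[w])
            dictionary[wc] = len(dictionary)
            w = (s,)
    if w:
        codes.append(dictionary[w])
    return codes, dictionary

# Toy audio signal (stand-in for a real waveform).
fs = 8000
t = np.arange(fs) / fs
audio = 0.6 * np.sin(2 * np.pi * 440 * t) + 0.2 * np.sin(2 * np.pi * 880 * t)

# 1) DWT decomposition, 2) DCT of each subband, 3) integer rounding.
coeffs = pywt.wavedec(audio, "db4", level=3)
quantized = [np.round(dct(c, norm="ortho") * 1024).astype(int) for c in coeffs]

# 4) LZW entropy coding of the concatenated coefficient stream.
stream = np.concatenate(quantized).tolist()
codes, _ = lzw_encode(stream)
print(f"samples: {len(audio)}, LZW codes: {len(codes)}")

# Reconstruction to check distortion (PSNR) after the round trip.
recon = pywt.waverec([idct(q / 1024.0, norm="ortho") for q in quantized], "db4")[:len(audio)]
mse = np.mean((audio - recon) ** 2)
psnr = 10 * np.log10(np.max(np.abs(audio)) ** 2 / mse)
print(f"PSNR after round-trip: {psnr:.1f} dB")
```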
- 10:12 A Non-Uniform Quantization-Based Hardware Architecture for BP Decoding of Polar Codes
-
Belief propagation (BP) decoding of polar codes benefits from the high parallelism in hardware implementation and consequently provides high throughput compared to successive cancellation-based decoding algorithms. As the intermediate log-likelihood ratios (LLRs) in the decoding process have a wide dynamic range, the quantization error can degrade the error correction performance of the BP decoder. To reduce the quantization error in a uniform quantization scheme, we may need to use more bits for each LLR value, which is undesirable due to the required memory space. In this paper, we design an efficient hardware architecture for non-uniformly quantized LLR messages in BP decoding, where the arithmetic operations on logarithmically compressed messages are replaced with mappings to precomputed results in lookup tables. By employing 5-bit non-uniform quantization, the designed BP decoder architecture reduces the required memory space by 37.5 percent compared to 8-bit uniform quantization while additionally improving the block error rate (BLER) by more than 0.1 dB in high SNR regimes. Under a memory space limit for uniform quantization, the BLER improvement of the designed architecture is up to 0.4 dB when both quantization schemes use 5 bits for LLRs.
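To make the idea concrete, the sketch below shows one way non-uniform (logarithmically compressed) 5-bit LLR quantization and a precomputed check-node lookup table could be organized, so that the box-plus combination becomes a table lookup rather than arithmetic on LLRs. The compression law, clipping value, and LUT granularity are assumptions for illustration, not the paper's exact design.

```python
# Sketch of non-uniform (logarithmic) 5-bit LLR quantization with a
# precomputed lookup table replacing arithmetic in BP check-node updates.
# The mu-law-style compression and clipping value are assumptions.
import numpy as np

BITS = 5
LEVELS = 1 << (BITS - 1)          # magnitude levels, 1 bit reserved for sign
LLR_MAX = 20.0
MU = 15.0

def compress(llr_mag):
    """mu-law style compression of an LLR magnitude to [0, 1]."""
    return np.log1p(MU * np.minimum(llr_mag, LLR_MAX) / LLR_MAX) / np.log1p(MU)

def expand(level):
    """Inverse mapping from a quantization level back to an LLR magnitude."""
    return LLR_MAX * np.expm1(level / (LEVELS - 1) * np.log1p(MU)) / MU

def quantize(llr):
    """Map a real LLR to (sign, 4-bit magnitude index)."""
    idx = np.rint(compress(np.abs(llr)) * (LEVELS - 1)).astype(int)
    return np.sign(llr).astype(int), idx

# Precompute the check-node LUT: for each pair of magnitude indices,
# store the quantized index of the box-plus combination of the two
# magnitudes, so the decoder never does arithmetic on decompressed LLRs.
cn_lut = np.empty((LEVELS, LEVELS), dtype=np.int8)
for i in range(LEVELS):
    for j in range(LEVELS):
        m = 2 * np.arctanh(np.tanh(expand(i) / 2) * np.tanh(expand(j) / 2))
        cn_lut[i, j] = int(np.rint(compress(m) * (LEVELS - 1)))

def check_node(sign_a, idx_a, sign_b, idx_b):
    """Check-node update on quantized messages via the LUT."""
    return sign_a * sign_b, cn_lut[idx_a, idx_b]

# Example: combine two quantized LLRs.
s1, i1 = quantize(np.array(3.2))
s2, i2 = quantize(np.array(-0.7))
s_out, i_out = check_node(s1, i1, s2, i2)
print("output sign:", int(s_out), "magnitude ~", float(expand(i_out)))
```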
Wednesday, October 18 9:00 – 10:30
Tutorial 3 – Enabling Joint Communication and Radio Sensing in Mobile Networks: A Tutorial on Advancement and Challenges
Joint communication and radar/radio sensing (JCAS) is emerging as a main technology for future communications and sensing networks and services. To take full advantage of the ubiquitous mobile networks, one can integrate sensing functionalities into future mobile networks to create a perceptive mobile network (PMN). It is envisaged that PMNs have the potential to revolutionize future 5G and beyond networks by offering ubiquitous sensing for numerous smart applications. In the proposed tutorial, we aim to provide a timely overview of the latest development in PMN, including new theories, methodologies, and applications. We will also discuss the challenges that must be overcome in order to achieve widespread adoption of these networks. Through a combination of theoretical concepts and practical examples, this tutorial will provide attendees with a comprehensive understanding of the current state of perceptive mobile networks, as well as insights into future directions for research and development in this exciting field.
Wednesday, October 18 10:30 – 10:50
Morning Tea
Wednesday, October 18 10:50 – 11:45
Special Session 5 – Energy Harvesting Technologies and Ultra-low power Analogue/RF IC Design for IoT, Radar/Space and 5G/6G Communication Applications
- 10:50 4-Phase Interleaved Charge Pump Topologies with Reversion Loss Elimination Techniques for IoT Applications
-
A multistage implementation of an Interleaved Charge Pump (ICP) topology with a 4-phase clocking scheme is used in this paper to optimize the design for scalability, eliminate all reversion losses, and eliminate level shifters for PMOS charge transfer switch (CTS) control to reduce voltage overstress and breakdown concerns. In addition, to address low input power conditions, a startup ICP topology and bulk modulation techniques are applied to the ICP design. The designs support inputs as low as 250 mV and extended load ranges, and are implemented in TSMC's 65 nm process technology. At a low input power of roughly 140 µW, measured data indicate a peak PCE of 90.9%.
- 11:08 Buck Converter with Variable Output Voltage for Dynamic Voltage Scaling (DVS) Applications
-
In this research, a novel buck converter capable of dynamically adjusting its output voltage to meet the demands of dynamic voltage scaling (DVS) applications is presented. The converter employs a four-bit resistor network switch, enabling the output voltage to be varied based on the workload requirements during runtime. To demonstrate the feasibility of the approach, the proposed chip was implemented in a 0.18 µm 1P6M CMOS process. Experimental measurements reveal that the four-bit resistor switch effectively alters the output voltage within the range of 1.0 V to 2.4 V, in increments of 0.1 V, by adjusting the switch settings from 0001 to 1111. This buck converter is intended to provide a switchable output voltage for power electronic systems.
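Assuming the switch code maps linearly onto the reported 1.0 V to 2.4 V range in 0.1 V steps (the abstract does not spell out the mapping, so this is an inference from the stated numbers and not necessarily how the resistor network is wired), the relationship can be expressed as a one-line function:

```python
# Assumed linear mapping of the 4-bit switch setting to output voltage,
# consistent with the reported 1.0 V (code 0001) to 2.4 V (code 1111)
# range in 0.1 V steps; the actual resistor-network design may differ.
def vout_from_code(code: int) -> float:
    if not 1 <= code <= 15:
        raise ValueError("code must be 0001..1111 (1..15)")
    return 1.0 + 0.1 * (code - 1)

for c in (0b0001, 0b1000, 0b1111):
    print(f"code {c:04b} -> {vout_from_code(c):.1f} V")
```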
Presenter bio: Harreez M. Villaruz received her B.S. degree in Electronics Engineering from Mindanao State University – Iligan Institute of Technology (MSU-IIT), Philippines, in 2006, and her M.S. degree in Electrical Engineering and her Ph.D. degree in Electrical Engineering and Computer Science from National Taipei University, Taiwan, in 2010 and 2022, respectively. Dr. Villaruz was the recipient of the Phi Tau Phi Scholastic Honor from National Taipei University, Taiwan, in 2022. She has been a member of the faculty of the College of Engineering, MSU-IIT, Philippines, since 2007. Her current research interests include high-efficiency power management IC design, low-EMI pulse-width modulation IC design, power management ICs, applications of Field Programmable Gate Arrays, and Digital Signal Processing.
- 11:26 83.17% Power Conversion Efficiency, 13.5 dB Power Dynamic Range Rectifier for RF Energy Harvesting Applications in 22nm FDSOI Technology
-
This study proposes a novel RF rectifier with high power conversion efficiency and a wide power dynamic range, suitable for RF energy harvesting applications. The proposed RF rectifier utilizes ultra-low-power (ULP) diodes in place of the diode-connected MOSFETs in a self-biased, cross-coupled differential-drive topology. The ULP diode resolves the limitation of higher leakage currents in diode-connected MOSFETs while maintaining the same performance when forward-biased. The design is implemented in 22 nm FDSOI technology. The proposed RF rectifier achieves a PCE of 77.56% at -20 dBm and 900 MHz, a peak PCE of 83.17% at -23 dBm, and a dynamic range of 13.5 dB with a sensitivity of -16.5 dBm at a 100 kΩ load.
Wednesday, October 18 10:50 – 11:45
Special session 6 – Millimeter-Wave and Terahertz Communications
- 10:50 Demonstration of a 245 GHz Real-Time Wireless Communication Link
-
A wireless communications system with a carrier frequency of 245 GHz and a data rate of 30 gigabits per second (Gbps) over a 1.2 m distance is demonstrated. The system consists of low-complexity, real-time baseband modules that provide high-speed wideband signal processing capability. Multi-channel baseband signals are combined and converted to 15.65 ± 6.25 GHz wideband intermediate frequency (IF) signals. A 50 Gbps wireless communication system is currently under development, and the technical progress will be presented at the conference. This wireless communication technology holds great potential for future high-speed communications beyond 5G, especially for space applications such as inter-satellite communication, where atmospheric attenuation is negligible.
- 11:08 Design of Terahertz All-Dielectric Antenna via Optimisation
-
Broadband high-gain antennas play a crucial role in facilitating high-capacity inter-chip communication. Traditional terahertz antenna designs typically lack flexibility, as they rely on theory-based analytical solutions. In this work, we propose an end-fire antenna fully integrated with an effective-medium-clad waveguide and utilize an inverse design methodology. The maximum gain of the proposed design exceeds 10 dBi over the IEEE terahertz wireless communication band from 252 to 325 GHz, with reflection below -10 dB. The side lobe level of this antenna also remains relatively low. This component not only provides a way to integrate an end-fire antenna with an effective-medium-clad waveguide but also demonstrates the potential application of inverse design to terahertz communication components.
- 11:26 Photonics-Based D-Band Terahertz Wireless Communication System
-
A photonics-based terahertz wireless communication system operating at D-band (110 – 170 GHz) is demonstrated, achieving a data rate of 5 Gbps without error. We analyze and characterize the D-band system performance at different frequencies, transmitter bias voltages, and data rates from 1 Gbps to 5 Gbps. This system provides a platform for the characterization of terahertz devices and interconnects as well as the exploration of photonics components for performance enhancement.
Wednesday, October 18 10:50 – 11:45
Tutorial 3 – Enabling Joint Communication and Radio Sensing in Mobile Networks: A Tutorial on Advancement and Challenges
Joint communication and radar/radio sensing (JCAS) is emerging as a main technology for future communications and sensing networks and services. To take full advantage of the ubiquitous mobile networks, one can integrate sensing functionalities into future mobile networks to create a perceptive mobile network (PMN). It is envisaged that PMNs have the potential to revolutionize future 5G and beyond networks by offering ubiquitous sensing for numerous smart applications. In the proposed tutorial, we aim to provide a timely overview of the latest development in PMN, including new theories, methodologies, and applications. We will also discuss the challenges that must be overcome in order to achieve widespread adoption of these networks. Through a combination of theoretical concepts and practical examples, this tutorial will provide attendees with a comprehensive understanding of the current state of perceptive mobile networks, as well as insights into future directions for research and development in this exciting field.
Wednesday, October 18 11:50 – 12:30
Keynote 5 – Prof. David Skellern: Leveraging device, circuit and advanced packaging technologies for integrated space communications and awareness
The Semiconductor Sector Service Bureau (S3B) was initiated by the New South Wales (NSW) Government Office of the Chief Scientist & Engineer and received funding in 2022 as part of a broader effort to enhance the involvement of NSW and Australia in the global semiconductor industry. S3B serves as an advocate for the semiconductor sector, fostering connections among companies, researchers, and local/global semiconductor service providers. These connections encompass training, computer-aided design, intellectual property, prototyping, tooling, packaging, testing, and production.
In its first year of operation, S3B identified numerous opportunities where advanced packaging could have a profound impact, opening up new horizons for semiconductor system design, performance, cost efficiency, and overall effectiveness. In this presentation, we will delve into the realm of advanced semiconductor packaging in the context of a specific project currently underway at Quasar Satellite Technologies (QuasarSat). QuasarSat is in the process of adapting CSIRO’s radioastronomy multi-beam phased-array technology for use in the rapidly expanding field of satellite communications. Achieving this adaptation at X-band and K-band frequencies is only feasible by using custom semiconductor designs.
Over the 20 months leading up to September 2023, the number of operational satellites orbiting Earth experienced a remarkable 38% surge, reaching a total of 6718. This surge can primarily be attributed to technological breakthroughs that have significantly reduced the cost of satellite construction and launch. Projections indicate that the satellite count is poised to increase with an annual compound growth rate approaching 40%, ultimately resulting in an estimated additional 58,000 satellites launched by 2030. This estimation is based on data from “FCC filings since approximately 2016,” as documented in the US Government Accountability Office’s report on Mitigating Environmental and Other Effects of Large Constellations of Satellites.
The most substantial and rapidly expanding portion of this satellite proliferation involves low Earth orbiting (LEO) satellites, positioned at altitudes below 2,000 kilometres from Earth’s surface. These LEO constellations are central to delivering consumer-level internet services on a global scale. Their operation hinges on a combination of satellite links and ground station internet gateways, serving as a vital bridge between remote users and global internet services. Traditional ground station antennas typically employ single-beam, mechanically steered dishes, enabling communication with one satellite at a time.
QuasarSat’s groundbreaking multibeam phased array ground station design takes on two significant challenges that will become progressively more complex as satellite numbers continue to rise. Firstly, it aims to substantially reduce the cost of ground station communication infrastructure compared to the deployment of traditional ground station antennas. Secondly, it seeks to provide essential space domain awareness to ensure the safe and reliable operation of the satellites in the midst of an increasing presence of numerous satellites and orbital debris.
Against the background of the phased array technology heritage, it is also worth noting that QuasarSat’s system holds the potential to mitigate the effects of satellite radio frequency interference on radioastronomy observations.
Wednesday, October 18 12:30 – 1:30
Lunch
Wednesday, October 18 1:30 – 3:00
Regular session 11: Circuits and Systems II
- 1:30 Investigation of Pilot-Based Compensation Scheme for Signal Distortion with Hexagonal Constellation
-
Quadrature Amplitude Modulation (QAM) is applied as a multi-level modulation scheme to achieve high data rates. In the 16-QAM scheme, a high Peak-to-Average Power Ratio (PAPR) becomes a serious problem, since wireless communication devices are required to be small and highly power efficient. To solve this problem, hexagonal constellation symbol mapping has been proposed to reduce the PAPR. In real propagation environments, Orthogonal Frequency Division Multiplexing (OFDM), which is used in various standards, is very sensitive to signal distortion such as multipath channels and frequency offsets. When pilot signals are used to compensate for the signal distortion, the combination of pilot signals and the symbol mapping needs to be considered. In this paper, a signal compensation scheme with a hexagonal constellation is studied, and the Bit Error Rate (BER) is evaluated by computer simulation.
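A minimal sketch of the pilot-based compensation step with a hexagonal data constellation is given below: least-squares channel estimates at the pilot subcarriers are interpolated across the band and used for one-tap equalization. The particular 16-point hexagonal packing, pilot spacing, and toy channel are assumptions, not the paper's exact mapping or simulation setup.

```python
# Sketch of pilot-based compensation in OFDM: LS channel estimates at
# pilot subcarriers, interpolated to data subcarriers, then one-tap
# equalization. The 16-point hexagonal packing used for data symbols is
# an assumption; the paper's mapping and pilot pattern may differ.
import numpy as np

rng = np.random.default_rng(2)
N_SC, PILOT_STEP = 64, 8

# A 16-point hexagonal constellation (closest triangular-lattice points to the origin).
lattice = np.array([m + n * (0.5 + 1j * np.sqrt(3) / 2)
                    for m in range(-4, 5) for n in range(-4, 5)])
hex16 = lattice[np.argsort(np.abs(lattice))[:16]]
hex16 /= np.sqrt(np.mean(np.abs(hex16) ** 2))

pilot_idx = np.arange(0, N_SC, PILOT_STEP)
data_idx = np.setdiff1d(np.arange(N_SC), pilot_idx)

# Transmit one OFDM symbol: known pilots + hexagonal data.
tx = np.empty(N_SC, dtype=complex)
tx[pilot_idx] = 1.0 + 0j
tx[data_idx] = rng.choice(hex16, size=len(data_idx))

# Frequency-selective toy channel plus noise (frequency-domain model).
h = np.fft.fft(np.array([0.8, 0.5j, 0.2]), N_SC)
rx = h * tx + 0.02 * (rng.standard_normal(N_SC) + 1j * rng.standard_normal(N_SC))

# LS estimate at pilots, linear interpolation over all subcarriers.
h_ls = rx[pilot_idx] / tx[pilot_idx]
h_hat = (np.interp(np.arange(N_SC), pilot_idx, h_ls.real)
         + 1j * np.interp(np.arange(N_SC), pilot_idx, h_ls.imag))

# One-tap equalization and nearest-symbol decision.
eq = rx[data_idx] / h_hat[data_idx]
decisions = hex16[np.argmin(np.abs(eq[:, None] - hex16[None, :]), axis=1)]
ser = np.mean(decisions != tx[data_idx])
print(f"symbol error rate after compensation: {ser:.3f}")
```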
- 1:48 Investigation of Data Transmission for Wireless Power Transfer System in Seawater
-
Wireless power transfer systems have been attracting attention in recent years. This paper investigates the possibility of data transmission over a wireless power transfer link in seawater using solenoid coil antennas. A Vector Network Analyzer (VNA) is used to obtain the transfer function between the antennas. Computer simulations are performed in MATLAB using Orthogonal Frequency Division Multiplexing (OFDM) with Quadrature Phase Shift Keying (QPSK) and 8 Phase Shift Keying (8PSK) modulation. The reliability of data transmission is evaluated by Bit Error Rate (BER) performance.
- 2:06 A Consideration on Higher Convergence Adaptive Equalization Method with Noises Reduction Function Using Total Least Squares Method
-
In communication systems, if received signals do not include noise, blind equalization can accurately regenerate the transmitted signals. However, if the received signals include noise, the equalization performance generally deteriorates. To solve this problem, a regeneration method for transmitted signals based on the Total Least Squares (TLS) method with a noise reduction unit has been proposed. However, its convergence rate is low because it uses a simple gradient method. Noting that the Recursive Least Squares (RLS) method, which is based on LS, converges quickly, we previously proposed the Recursive TLS (RTLS) method, a TLS-based update rule following the RLS method. However, that method has a problem of high computational complexity. Therefore, in this paper, noting that the Normalized Least Mean Square (NLMS) method, which is based on the Mean Squared Error (MSE), can achieve faster convergence than the LMS method without a significant increase in computational complexity, we propose a TLS-based update rule following the NLMS method. The proposed method is compared with conventional methods.
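As background, the NLMS recursion whose form the proposed TLS-based rule follows can be sketched as below. This shows only the standard NLMS adaptive equalizer, not the TLS-based update itself; the channel, filter length, and step size are illustrative assumptions.

```python
# Minimal NLMS adaptive equalizer, the update rule whose form the
# paper's TLS-based method follows. The channel, filter length, and
# step size below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
N, L, mu, eps = 5000, 11, 0.5, 1e-6

# BPSK training data through a toy dispersive channel with noise.
s = rng.choice([-1.0, 1.0], size=N)
r = np.convolve(s, [0.3, 0.9, 0.3], mode="same") + 0.05 * rng.standard_normal(N)

w = np.zeros(L)
errors = np.empty(N - L)
for n in range(L, N):
    u = r[n - L:n][::-1]                     # most recent samples first
    y = w @ u                                # equalizer output
    e = s[n - L // 2] - y                    # error vs. delay-aligned training symbol
    w += mu * e * u / (eps + u @ u)          # NLMS update: normalized by input power
    errors[n - L] = e

print(f"mean squared error, first 500 updates: {np.mean(errors[:500] ** 2):.3f}")
print(f"mean squared error, last 500 updates:  {np.mean(errors[-500:] ** 2):.3f}")
```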
- 2:24 Effect of High Frequency Noise Using DCMs in FPGA on Power Analysis Attack
-
As the IoT society advances, the leakage of cryptographic keys through side-channel attacks has become a realistic threat. This paper proposes a method to improve the resistance of a cryptographic system implemented on an FPGA to power analysis attacks by utilizing a DCM (Digital Clock Manager), an otherwise unused hard macro. The proposed method achieves a high level of tamper resistance at low cost, requiring only a small Xorshift pseudo-random number generator.
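The "small Xorshift pseudo-random number generator" mentioned in the abstract is presumably a member of Marsaglia's xorshift family; a standard 32-bit variant looks like this (the paper's exact shift constants and word width are not specified):

```python
# A standard 32-bit xorshift generator (Marsaglia's classic 13/17/5
# shift constants). The paper's exact variant is not specified, so this
# is only a representative example of the algorithm family it names.
def xorshift32(seed: int):
    state = seed & 0xFFFFFFFF
    if state == 0:
        raise ValueError("seed must be non-zero")
    while True:
        state ^= (state << 13) & 0xFFFFFFFF
        state ^= state >> 17
        state ^= (state << 5) & 0xFFFFFFFF
        yield state

gen = xorshift32(0x1234ABCD)
print([hex(next(gen)) for _ in range(4)])
```

In hardware, the same three shift-XOR steps reduce to a handful of XOR gates on a 32-bit register, which is why such a generator is attractive as a low-cost noise source.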
- 2:42 Deep Learning Detection for Massive MIMO Systems
-
This study explores a technique for addressing detection challenges in large multiple-input multiple-output (MIMO) systems using deep learning (DL) and mathematical principles. The research focuses on enhancing the effectiveness of a detection method called the Fast-Convergence Sparsely Connected Detection Network. To accomplish this, a novel deep neural network is introduced by modifying its structure and incorporating mathematical tools such as eigenvalues and eigenvectors to improve the initial estimation. The numerical results demonstrate that the proposed approach produces superior Bit Error Rate (BER) performance for two QPSK modulation scenarios. At a BER of 10^{-2}, the proposed method performs 1 dB better than the Fast-Convergence Sparsely Connected Detection Network. The improved performance is achieved with only three-quarters of the layers of the Fast-Convergence Sparsely Connected Detection Network.
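One common way eigenvalues of the channel Gram matrix are used to improve the initial estimate in iterative MIMO detection is to scale the matched-filter output by the classical optimal gradient step size 2/(λ_min + λ_max). The sketch below illustrates that idea; it is an assumption about the construction, not necessarily the exact initialization used in the paper.

```python
# Illustrative initial estimate for iterative MIMO detection using the
# eigenvalues of the channel Gram matrix: a single gradient step from
# zero with the classical optimal step size 2/(lambda_min + lambda_max).
# This is one common construction, not necessarily the paper's exact one.
import numpy as np

rng = np.random.default_rng(4)
Nt, Nr = 16, 32

# Random Rayleigh channel, QPSK transmit vector, noisy observation.
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
x = (rng.choice([-1, 1], Nt) + 1j * rng.choice([-1, 1], Nt)) / np.sqrt(2)
y = H @ x + 0.1 * (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr))

G = H.conj().T @ H
eigvals = np.linalg.eigvalsh(G)              # real eigenvalues, sorted ascending
step = 2.0 / (eigvals[0] + eigvals[-1])      # optimal fixed step for gradient descent

x0_matched = H.conj().T @ y                  # plain matched-filter start
x0_eigen = step * x0_matched                 # eigenvalue-scaled start

print("error, matched filter start :", np.linalg.norm(x0_matched - x))
print("error, eigen-scaled start   :", np.linalg.norm(x0_eigen - x))
```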
Presenter bio: S. PourmohammadAzizi is pursuing a Ph.D. in Electrical Engineering at the National Taiwan Ocean University. He holds Master’s and Bachelor’s degrees in Mathematics. He has been honored with the Taiwan Government Scholarship for his outstanding research contributions as a Ph.D. researcher. His research interests encompass diverse fields, including Artificial Intelligence, Deep Learning, Telecommunications, and Dynamical Systems Machine Learning.
Wednesday, October 18 1:30 – 3:00
Regular session 12: Wireless Communication
- 1:30 Robust Semantic Communication Systems Based on Image Transmission
-
Semantic communication, commonly relying on deep neural networks (DNNs), transmits the semantics of data instead of transmitting bits of data accurately or delivering signal waveforms precisely, which removes redundant data and improves communication efficiency. However, DNNs are highly susceptible to semantic noise: the mismatch of semantic information between transmitters and receivers, i.e., interference that affects semantic interpretation. Semantic noise can be introduced both at the source and in the wireless channel. In this paper, we propose a modular approach to remove the semantic noise introduced at these different stages. To eliminate the impact of the semantic noise introduced at the source, we first use a GAN-based denoising approach before the semantic encoder. To eliminate the impact of the semantic noise introduced in the wireless channel, we then use a denoising autoencoder before the channel decoder. Simulation results show that the classification accuracy of our proposed modular denoising approach is higher than that of the joint transmitter-receiver adversarial training approach at low SNR. Therefore, our proposed modular denoising approach significantly improves the robustness of the system under different channel conditions.
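A minimal example of the second module, a denoising autoencoder placed before the channel decoder, might look like the following PyTorch sketch. The layer sizes, noise level, and random stand-in features are illustrative assumptions; the paper's architecture and training data are not reproduced here.

```python
# A minimal convolutional denoising autoencoder of the kind placed
# before the channel decoder in the described pipeline. Layer sizes,
# noise level, and training setup are illustrative assumptions.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Train on (noisy feature, clean feature) pairs; random tensors stand in
# for real semantic feature maps here.
model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    clean = torch.randn(8, 16, 8, 8)
    noisy = clean + 0.3 * torch.randn_like(clean)   # stand-in for channel-induced semantic noise
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    opt.step()

print(f"final reconstruction MSE: {loss.item():.4f}")
```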
- 1:48 Beamwidth Control of Antenna Array by Polarization Mixing for Base Station Application
-
This paper introduces a novel polarization-mixing strategy to significantly widen the beamwidth of dual-linear-polarized antenna arrays without changing the array topology. A much wider beamwidth is achieved compared to traditional pattern synthesis methods based on amplitude and phase weighting or sparse arrays, and the beamwidth can be controlled by simply tuning one phase shift parameter. Meanwhile, the polarization-mixing method leads to spatially variable polarizations (SVP). To obtain the polarization diversity required in cellular communication systems, two SVP arrays are designed to have their polarizations orthogonal to each other in all directions of interest. It is shown that the obtained spatially-variable-orthogonal-polarization (SVOP) arrays have a much broader beam pattern and better polarization orthogonality (PO) than the dual-polarized antenna element.
- 2:06 Performance Analysis of DNN-PCA for DOA Estimation with Three Radio Wave Sources
-
Direction of arrival (DOA) estimation is one of the most important techniques in array signal processing and is thus used in several applications, such as radar systems, source localization, and wireless channel estimation. In this paper, we present a new solution for enhancing the performance of a deep neural network (DNN) specialized in DOA estimation under very noisy environments. After applying principal component analysis (PCA) to the DNN training dataset, whose samples were generated at a high signal-to-noise ratio (SNR), we verified that it is possible to strongly reduce the influence of noise in the test data, especially when the test data were generated at lower SNRs. We also evaluated the effect of 1) different numbers of antenna elements in the array and 2) different numbers of reduced dimensions of the training, validation, and test data on the DNN estimation performance. The results presented here are expected to set a precedent for using PCA prior to training DNNs for DOA estimation.
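The preprocessing step the abstract describes, fitting PCA on features generated at high SNR and applying the same projection to noisier test data before the DNN, can be sketched as follows. The covariance-based feature construction, array size, and number of retained components are assumptions for illustration, not the paper's exact setup.

```python
# Sketch of the described preprocessing: fit PCA on DOA features
# generated at high SNR and apply the same projection to noisier test
# features before they reach the DNN. Feature construction and
# dimensions are assumptions for illustration.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
M, SNAPSHOTS = 8, 64                      # antennas, snapshots per sample

def covariance_feature(doa_deg, snr_db):
    """Vectorized sample covariance of a single far-field source plus noise."""
    a = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(doa_deg)))
    s = (rng.standard_normal(SNAPSHOTS) + 1j * rng.standard_normal(SNAPSHOTS)) / np.sqrt(2)
    sigma = 10 ** (-snr_db / 20)
    n = sigma * (rng.standard_normal((M, SNAPSHOTS))
                 + 1j * rng.standard_normal((M, SNAPSHOTS))) / np.sqrt(2)
    x = np.outer(a, s) + n
    R = x @ x.conj().T / SNAPSHOTS
    return np.concatenate([R.real.ravel(), R.imag.ravel()])

doas = rng.uniform(-60, 60, 2000)
X_train = np.stack([covariance_feature(d, snr_db=20) for d in doas])       # high SNR
X_test = np.stack([covariance_feature(d, snr_db=0) for d in doas[:200]])   # low SNR

pca = PCA(n_components=20)
Z_train = pca.fit_transform(X_train)      # fit on high-SNR training data only
Z_test = pca.transform(X_test)            # same projection applied to noisy test data

print("feature dimension:", X_train.shape[1], "->", Z_train.shape[1])
print("explained variance kept:", pca.explained_variance_ratio_.sum().round(3))
# Z_train / Z_test would then be fed to the DOA-estimation DNN.
```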
Presenter bio: Daniel Akira Ando received the B.E. degree in communication networks engineering from University of Brasilia, Brazil, in 2018 and the M.E. degree in media networks engineering from Hokkaido University, Japan, in 2021. He is currently pursuing the Ph.D. degree at Hokkaido University, Japan. His research interests are in MIMO signal processing for wireless communications. He received the IEICE RCS Young Researcher Award in 2020.
- 2:24 An Optimization Method for Shadow Profile Retrieval with Forward Scatter Shadow Ratio
-
This paper proposes an optimization method that uses the forward scatter shadow ratio (FSSR) to retrieve the shadow profile of a target. It is shown mathematically that discrete observations of the FSSR can be utilized to retrieve the target’s shadow profile, represented by a finite number of rectangular strips. Numerical analysis is conducted on an idealized FSR system, and it is found that using two observation lines instead of one improves the accuracy of target shape retrieval and helps eliminate the ambiguity in locating the target. The proposed method demonstrates the ability to retrieve the shadow profiles of targets with various sizes and shapes, with good retrieval performance identified under two conditions: a moderate distance between the two observation lines, and the target center’s projection on the observation plane being in close proximity to one of the observation lines.
Presenter bio: Dr Xi Shen received the B.E.E.E. degree in electronic engineering from Tsinghua University, Beijing, China, in 2003, and the Ph.D. degree in electronic engineering from Imperial College London, London, U.K., in 2006. He worked at China Unicom as a telecommunication engineer and then senior engineer for 8 years. He is currently a research associate with the School of Electrical, Electronic, and Computer Engineering, The University of Western Australia, Perth, Australia. His research interests include environmental monitoring using microwave communication links, signal processing in satellite communication, and forward scatter radar.
Wednesday, October 18 1:30 – 3:00
Invited Session 2
A/Prof Xiangyun (Sean) Zhou (Australian National University): Splitting Receiver: A New Receiver Design for Wireless Communication Systems
Prof Yonghui Li (University of Sydney): uRLLC for 5G and beyond
A/Prof Zhaoming Lu (Beijing University of Posts and Telecommunications): Dynamic Target Tracking using Wi-Fi Transceivers
Wednesday, October 18 3:00 – 3:30
Afternoon Tea
Wednesday, October 18 3:30 – 5:00
UTS Tech Lab Tour
Add your name to the tour list at the Registration Desk. Meet at the Registration Desk by 3:15 PM on Wednesday, October 18 for the tour. A bus will take you to Tech Lab and drop you back at the conference venue before 5:30 PM.