Search Results (1,885)

Search Parameters:
Keywords = computer architecture and design

19 pages, 535 KiB  
Article
Optimizing Convolutional Neural Network Architectures
by Luis Balderas, Miguel Lastra and José M. Benítez
Mathematics 2024, 12(19), 3032; https://doi.org/10.3390/math12193032 (registering DOI) - 28 Sep 2024
Abstract
Convolutional neural networks (CNNs) are commonly employed for demanding applications such as speech recognition, natural language processing, and computer vision. As CNN architectures become more complex, their computational demands grow, leading to substantial energy consumption and complicating their use on devices with limited resources (e.g., edge devices). Furthermore, a new line of research seeking more sustainable approaches to Artificial Intelligence development is increasingly drawing attention: Green AI. Motivated by the goal of optimizing Machine Learning models, this paper proposes Optimizing Convolutional Neural Network Architectures (OCNNA), a novel CNN optimization and construction method based on pruning that is designed to establish the importance of convolutional layers. The proposal was evaluated through a thorough empirical study covering well-known datasets (CIFAR-10, CIFAR-100, and ImageNet) and CNN architectures (VGG-16, ResNet-50, DenseNet-40, and MobileNet), using accuracy drop and the remaining-parameters ratio as objective metrics to compare OCNNA with other state-of-the-art approaches. Our method was compared with more than 20 CNN simplification algorithms, obtaining outstanding results. As a result, OCNNA is a competitive CNN construction method that could ease the deployment of neural networks on IoT or resource-limited devices. Full article
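
The abstract does not spell out OCNNA's importance metric, so the following is only a minimal NumPy sketch of the general idea behind pruning-based CNN simplification: score each convolutional filter (here, hypothetically, by L1 weight magnitude) and discard the least important fraction.

```python
import numpy as np

def rank_filters(conv_weights: np.ndarray) -> np.ndarray:
    """Rank the output filters of a conv layer by L1 weight magnitude.

    conv_weights has shape (out_channels, in_channels, kH, kW); returns
    filter indices ordered from least to most important.
    """
    scores = np.abs(conv_weights).sum(axis=(1, 2, 3))   # one score per filter
    return np.argsort(scores)

def prune_filters(conv_weights: np.ndarray, prune_ratio: float) -> np.ndarray:
    """Keep only the most important (1 - prune_ratio) fraction of filters."""
    order = rank_filters(conv_weights)
    n_prune = int(prune_ratio * conv_weights.shape[0])
    keep = np.sort(order[n_prune:])                      # retained filter indices
    return conv_weights[keep]

# Example: prune 30% of a hypothetical 64-filter 3x3 layer.
w = np.random.randn(64, 32, 3, 3)
print(prune_filters(w, prune_ratio=0.3).shape)           # (45, 32, 3, 3)
```

In a full pipeline, the pruned layer would be rebuilt with the retained filters and the next layer's input channels adjusted to match.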

22 pages, 12450 KiB  
Article
Research on the Behavior Recognition of Beef Cattle Based on the Improved Lightweight CBR-YOLO Model Based on YOLOv8 in Multi-Scene Weather
by Ye Mu, Jinghuan Hu, Heyang Wang, Shijun Li, Hang Zhu, Lan Luo, Jinfan Wei, Lingyun Ni, Hongli Chao, Tianli Hu, Yu Sun, He Gong and Ying Guo
Animals 2024, 14(19), 2800; https://doi.org/10.3390/ani14192800 - 27 Sep 2024
Abstract
In modern animal husbandry, intelligent digital farming has become key to improving production efficiency. This paper introduces Cattle Behavior Recognition-YOLO (CBR-YOLO), a model based on an improved YOLOv8 that aims to accurately identify cattle behavior. We not only generate a variety of weather conditions but also introduce multi-target detection technology to achieve comprehensive monitoring of cattle and their status. We introduce the Inner-MPDIoU loss and design the Multi-Convolutional Focused Pyramid module to learn in depth the detailed features of cattle in different states. Meanwhile, the Lightweight Multi-Scale Feature Fusion Detection Head module is proposed to take advantage of deep convolution, achieving a lightweight network architecture and effectively reducing redundant information. Experimental results show that our method achieves an average accuracy of 90.2%, an improvement of 7.4%, while reducing computation by 3.9 G floating-point operations, significantly outperforming 12 SOTA object detection models. By deploying our approach on monitoring computers on farms, we expect to advance the development of automated cattle monitoring systems to improve animal welfare and farm management. Full article
(This article belongs to the Section Cattle)
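
As a purely illustrative aside (not the authors' pipeline), "generating a variety of weather conditions" for training data can be approximated with simple photometric transforms; the NumPy sketch below fakes overcast darkening and a fog overlay on an RGB frame.

```python
import numpy as np

def simulate_overcast(img: np.ndarray, factor: float = 0.6) -> np.ndarray:
    """Darken an RGB uint8 image to mimic low-light / overcast conditions."""
    return np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8)

def simulate_fog(img: np.ndarray, density: float = 0.4) -> np.ndarray:
    """Blend the image towards a grey 'atmospheric light' to mimic fog."""
    fog = np.full(img.shape, 220.0, dtype=np.float32)
    out = (1 - density) * img.astype(np.float32) + density * fog
    return np.clip(out, 0, 255).astype(np.uint8)

# Assumed toy frame; in practice these transforms would be applied to barn footage.
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
dark, foggy = simulate_overcast(frame), simulate_fog(frame)
print(dark.shape, foggy.shape)
```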

17 pages, 1579 KiB  
Article
AIDETECT2: A Novel AI-Driven Signal Detection Approach for beyond 5G and 6G Wireless Networks
by Bibin Babu, Muhammad Yunis Daha, Muhammad Ikram Ashraf, Kiran Khurshid and Muhammad Usman Hadi
Electronics 2024, 13(19), 3821; https://doi.org/10.3390/electronics13193821 - 27 Sep 2024
Abstract
Artificial intelligence (AI) is revolutionizing multiple-input-multiple-output (MIMO) technology, making it a promising contender for the coming sixth-generation (6G) and beyond-fifth-generation (B5G) networks. However, the detection process in MIMO systems is highly complex and computationally demanding. To address this challenge, this paper presents an optimized AI-based signal detection method for MIMO systems known as AIDETECT2, which is based on a feed-forward neural network (FFNN). The proposed AIDETECT2 network model demonstrates superior efficiency in signal detection compared with conventional and AI-based MIMO detection methods, particularly in terms of symbol error rate (SER) at various signal-to-noise ratios (SNRs). This paper thoroughly explores various aspects of FFNN-based signal detection, including the system architecture design, data preparation, network training, and performance evaluation. Simulation results show that, at 20 dB SNR for the given MIMO scenarios, the proposed model achieves between 13.75% and 99.995% better SER than the best conventional method and between 56.52% and 97.69% better SER than benchmark AI-based MIMO detectors. We also present a computational complexity analysis of different conventional and AI-based MIMO detectors. We believe that this optimized AI-based network model can serve as a comprehensive guide for deploying deep-learning (DL) neural networks for signal detection in the forthcoming 6G wireless networks. Full article
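
For context on the SER metric used in the comparison, here is a minimal NumPy sketch (an assumption of the general setup, not AIDETECT2 itself) that measures the symbol error rate of a conventional zero-forcing detector on a 2x2 Rayleigh MIMO link with QPSK at a given SNR.

```python
import numpy as np

rng = np.random.default_rng(0)

def qpsk(n):
    """n random unit-energy QPSK symbols."""
    bits = rng.integers(0, 2, size=(n, 2))
    return ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

def zf_ser(n_tx=2, n_rx=2, snr_db=20, n_frames=20_000):
    """Symbol error rate of a zero-forcing detector over a Rayleigh MIMO channel."""
    noise_var = 10 ** (-snr_db / 10)
    errors = total = 0
    for _ in range(n_frames):
        s = qpsk(n_tx)
        H = (rng.standard_normal((n_rx, n_tx)) +
             1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
        n = np.sqrt(noise_var / 2) * (rng.standard_normal(n_rx) +
                                      1j * rng.standard_normal(n_rx))
        y = H @ s + n
        s_hat = np.linalg.pinv(H) @ y                 # zero-forcing equalization
        dec = (np.sign(s_hat.real) + 1j * np.sign(s_hat.imag)) / np.sqrt(2)
        errors += np.count_nonzero(~np.isclose(dec, s))
        total += n_tx
    return errors / total

print(f"zero-forcing SER at 20 dB SNR: {zf_ser():.4f}")
```

A learned detector would replace the zero-forcing step with a trained network mapping the received vector (and channel estimate) to symbol decisions, and its SER would be measured in exactly the same way.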

21 pages, 4470 KiB  
Article
A Reference Modelling Approach for Cost Optimal Maintenance for Offshore Wind Farms
by Rasmus Dovnborg Frederiksen, Grzegorz Bocewicz, Peter Nielsen, Grzegorz Radzki and Zbigniew Banaszak
Sustainability 2024, 16(19), 8352; https://doi.org/10.3390/su16198352 - 25 Sep 2024
Abstract
This paper presents a novel reference model designed to optimize the integration of preventive and predictive maintenance strategies for offshore wind farms (OWFs), enhancing operational decision-making. The model’s flexible and declarative architecture facilitates the incorporation of new constraints while maintaining computational efficiency, distinguishing it from existing methodologies. Unlike previous research that did not explore the intricate cost dynamics between predictive and preventive maintenance, our approach explicitly addresses the balance between maintenance expenses and wind turbine (WT) downtime costs. We quantify the impacts of these maintenance strategies on key operational metrics, including the Levelized Cost of Energy (LCOE). Using a constraint programming framework, the model enables rapid prototyping of alternative maintenance scenarios, incorporating real-time data on maintenance history, costs, and resource availability. This approach supports the scheduling of service logistics, including the optimization of vessel fleets and service teams. Simulations are used to evaluate the model’s effectiveness in real-world scenarios, such as handling the maintenance of up to 11 wind turbines per business day using no more than four service teams and four vessels, achieving a reduction in overall maintenance costs of up to 32% in the simulated cases compared to a solution that aims to prevent all downtime events. The prototype implementation as a task-oriented Decision Support System (DSS) further shows its potential in minimizing downtime and optimizing logistics, providing a robust tool for OWF operators. Full article
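
The cost balance the model optimizes can be pictured with a deliberately tiny toy policy comparison (all figures below are assumed placeholders, not values from the paper): prevent a failure only when the expected downtime cost exceeds the cost of a preventive visit.

```python
# Toy comparison of two maintenance policies for a set of predicted failure events.
events = [  # (predicted downtime hours if not prevented, preventive visit cost in EUR)
    (2, 4000), (10, 4000), (1, 4000), (30, 4000), (5, 4000),
]
DOWNTIME_COST_PER_HOUR = 600  # assumed lost-revenue rate per turbine-hour

def prevent_all_cost(events):
    """Policy A: dispatch a preventive visit for every predicted event."""
    return sum(visit for _, visit in events)

def cost_optimal(events):
    """Policy B: prevent an event only when downtime would cost more than the visit."""
    total = 0
    for hours, visit in events:
        downtime = hours * DOWNTIME_COST_PER_HOUR
        total += min(downtime, visit)
    return total

a, b = prevent_all_cost(events), cost_optimal(events)
print(f"prevent-all: {a} EUR, cost-optimal: {b} EUR, saving: {100 * (a - b) / a:.1f}%")
```

A real scheduler layers vessel, crew, and weather-window constraints on top of this trade-off, which is where the constraint programming framework comes in.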

24 pages, 6042 KiB  
Article
A Methodology Based on Deep Learning for Contact Detection in Radar Images
by Rosa Gonzales Martínez, Valentín Moreno, Pedro Rotta Saavedra, César Chinguel Arrese and Anabel Fraga
Appl. Sci. 2024, 14(19), 8644; https://doi.org/10.3390/app14198644 - 25 Sep 2024
Abstract
Ship detection, a crucial task, relies on the traditional CFAR (Constant False Alarm Rate) algorithm. However, this algorithm is not without its limitations. Noise and clutter in radar images introduce significant variability, hampering the detection of objects on the sea surface. The algorithm’s theoretically Constant False Alarm Rates are not upheld in practice, particularly when conditions change abruptly, such as with Beaufort wind strength. Moreover, the high computational cost of signal processing adversely affects the efficiency of the detection process. In previous work, a four-stage methodology was designed: The first, preprocessing, stage consisted of image enhancement by applying convolutions. Labeling and training were performed in the second stage using the Faster R-CNN architecture. In the third stage, model tuning was accomplished by adjusting the weight initialization and optimizer hyperparameters. Finally, object filtering was performed to retrieve only persistent objects. This work focuses on designing a specific methodology for ship detection along the Peruvian coast using commercial radar images. We introduce two key improvements: automatic cropping and a labeling interface. Using artificial intelligence techniques in automatic cropping leads to more precise edge extraction, improving the accuracy of object cropping. On the other hand, the developed labeling interface facilitates a comparative analysis of persistence in three consecutive rounds, significantly reducing labeling times. These enhancements increase labeling efficiency and enhance the learning of the detection model. A dataset consisting of 60 radar images is used for the experiments. Two classes of objects are considered, and cross-validation is applied to the training and validation models. The results yield a value of 0.0372 for the cost function, a recovery rate of 94.5%, and an accuracy rate of 95.1%. This work demonstrates that the proposed methodology can generate a high-performance model for contact detection in commercial radar images. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
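
The "persistent objects" filtering step can be pictured with a small, assumed sketch (not the authors' implementation): keep only contacts that reappear near the same position in three consecutive radar sweeps.

```python
from math import hypot

def persistent_contacts(sweeps, max_dist=5.0, rounds=3):
    """sweeps: list of sweeps, each a list of (x, y) detections.

    Returns detections from the latest sweep that were also seen (within
    max_dist) in each of the preceding `rounds - 1` sweeps.
    """
    if len(sweeps) < rounds:
        return []
    latest = sweeps[-1]
    persistent = []
    for x, y in latest:
        seen_in_all = all(
            any(hypot(x - px, y - py) <= max_dist for px, py in sweep)
            for sweep in sweeps[-rounds:-1]
        )
        if seen_in_all:
            persistent.append((x, y))
    return persistent

# Toy history of three sweeps: only the contact drifting near (10, 10) persists.
history = [
    [(10, 10), (50, 80)],
    [(11, 9), (200, 30)],
    [(12, 10), (201, 31), (90, 90)],
]
print(persistent_contacts(history))
```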

18 pages, 8451 KiB  
Article
Remote Sensing Image Dehazing via Dual-View Knowledge Transfer
by Lei Yang, Jianzhong Cao, He Bian, Rui Qu, Huinan Guo and Hailong Ning
Appl. Sci. 2024, 14(19), 8633; https://doi.org/10.3390/app14198633 - 25 Sep 2024
Abstract
Remote-sensing image dehazing (RSID) is crucial for applications such as military surveillance and disaster assessment. However, current methods often rely on complex network architectures, compromising computational efficiency and scalability. Furthermore, the scarcity of annotated remote-sensing-dehazing datasets hinders model development. To address these issues, a Dual-View Knowledge Transfer (DVKT) framework is proposed to generate a lightweight and efficient student network by distilling knowledge from a pre-trained teacher network on natural image dehazing datasets. The DVKT framework includes two novel knowledge-transfer modules: Intra-layer Transfer (Intra-KT) and Inter-layer Knowledge Transfer (Inter-KT) modules. Specifically, the Intra-KT module is designed to correct the learning bias of the student network by distilling and transferring knowledge from a well-trained teacher network. The Inter-KT module is devised to distill and transfer knowledge about cross-layer correlations. This enables the student network to learn hierarchical and cross-layer dehazing knowledge from the teacher network, thereby extracting compact and effective features. Evaluation results on benchmark datasets demonstrate that the proposed DVKT framework achieves superior performance for RSID. In particular, the distilled model achieves a significant speedup with less than 6% of the parameters and computational cost of the original model, while maintaining a state-of-the-art dehazing performance. Full article
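
A hedged PyTorch sketch of the two transfer ideas described above, with loss forms chosen by assumption rather than taken from the paper: an intra-layer term that aligns student and teacher features at the same depth, and an inter-layer term that matches the cross-layer correlation structure.

```python
import torch
import torch.nn.functional as F

def intra_kt_loss(student_feat, teacher_feat):
    """Intra-layer transfer: align a student feature map with the teacher's at the
    same depth (assumes channels were already projected to a common size)."""
    return F.mse_loss(student_feat, teacher_feat)

def inter_kt_loss(student_feats, teacher_feats):
    """Inter-layer transfer: match the cross-layer correlation structure."""
    def layer_correlation(feats):
        # Pool each map to one scalar descriptor per sample; one column per layer.
        cols = [F.adaptive_avg_pool2d(f, 1).flatten(1).mean(dim=1, keepdim=True)
                for f in feats]
        v = F.normalize(torch.cat(cols, dim=1), dim=0)   # (batch, n_layers)
        return v.transpose(0, 1) @ v                     # (n_layers, n_layers)
    return F.mse_loss(layer_correlation(student_feats),
                      layer_correlation(teacher_feats))

# Toy usage with random feature maps from three network depths.
shapes = [(16, 64), (32, 32), (64, 16)]
student = [torch.randn(4, c, r, r) for c, r in shapes]
teacher = [torch.randn(4, c, r, r) for c, r in shapes]
print(intra_kt_loss(student[0], teacher[0]) + inter_kt_loss(student, teacher))
```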

16 pages, 1029 KiB  
Article
Statistical Analysis of nnU-Net Models for Lung Nodule Segmentation
by Alejandro Jerónimo, Olga Valenzuela and Ignacio Rojas
J. Pers. Med. 2024, 14(10), 1016; https://doi.org/10.3390/jpm14101016 - 24 Sep 2024
Abstract
This paper aims to conduct a statistical analysis of different components of nnU-Net models to build an optimal pipeline for lung nodule segmentation in computed tomography (CT) images. This study focuses on semantic segmentation of lung nodules, using the UniToChest dataset. Our approach is based on the nnU-Net framework and is designed to configure a whole segmentation pipeline, thereby avoiding many complex design choices, such as data properties and architecture configuration. Although these framework results provide a good starting point, many configurations in this problem can be optimized. In this study, we tested two U-Net-based architectures using different preprocessing techniques, and we modified the existing hyperparameters provided by nnU-Net. To study the impact of different settings on model segmentation accuracy, we conducted an analysis of variance (ANOVA). The factors studied included the dataset (grouped by nodule diameter), model, preprocessing, polynomial learning rate scheduler, and number of epochs. The ANOVA results revealed significant differences among the datasets, models, and preprocessing choices. Full article
(This article belongs to the Special Issue Artificial Intelligence Applications in Precision Oncology)
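
For readers unfamiliar with the test, a one-way ANOVA over one factor can be run in a few lines with SciPy; the Dice scores below are made-up placeholders that only illustrate the mechanics, not results from the paper.

```python
from scipy import stats

# Hypothetical Dice scores per preprocessing configuration (placeholder values).
dice_by_preprocessing = {
    "default":   [0.71, 0.74, 0.69, 0.72, 0.70],
    "clahe":     [0.75, 0.77, 0.74, 0.78, 0.76],
    "resampled": [0.73, 0.72, 0.74, 0.71, 0.73],
}

f_stat, p_value = stats.f_oneway(*dice_by_preprocessing.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Preprocessing has a statistically significant effect on segmentation Dice.")
```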

25 pages, 60939 KiB  
Article
DETR-ORD: An Improved DETR Detector for Oriented Remote Sensing Object Detection with Feature Reconstruction and Dynamic Query
by Xiaohai He, Kaiwen Liang, Weimin Zhang, Fangxing Li, Zhou Jiang, Zhengqing Zuo and Xinyan Tan
Remote Sens. 2024, 16(18), 3516; https://doi.org/10.3390/rs16183516 - 22 Sep 2024
Abstract
Optical remote sensing images often feature high resolution, dense target distribution, and uneven target sizes. While transformer-based detectors like DETR reduce the need for manually designed components, DETR does not support arbitrarily oriented object detection and suffers from high computational costs and slow convergence when handling large sequences of images. Additionally, bipartite graph matching and the limit on the number of queries cause transformer-based detectors to perform poorly in scenarios with many objects and small object sizes. We propose an improved DETR detector for Oriented remote sensing object detection with Feature Reconstruction and Dynamic Query, termed DETR-ORD. It introduces rotation into the transformer architecture for oriented object detection, reduces computational cost with a hybrid encoder, and includes an IFR (image feature reconstruction) module to address the loss of positional information due to the flattening operation. It also uses ATSS to select auxiliary dynamic training queries for the decoder. This improved DETR-based detector enhances detection performance in challenging oriented optical remote sensing scenarios with similar backbone network parameters. Our approach achieves superior results on most optical remote sensing datasets, such as DOTA-v1.5 (72.07% mAP) and DIOR-R (66.60% mAP), surpassing the baseline detector. Full article
(This article belongs to the Section AI Remote Sensing)

14 pages, 2847 KiB  
Article
Waveguide-Enhanced Nanoplasmonic Biosensor for Ultrasensitive and Rapid DNA Detection
by Devesh Barshilia, Akhil Chandrakanth Komaram, Lai-Kwan Chau and Guo-En Chang
Micromachines 2024, 15(9), 1169; https://doi.org/10.3390/mi15091169 - 21 Sep 2024
Abstract
DNA is fundamental for storing and transmitting genetic information. Analyzing DNA or RNA base sequences enables the identification of genetic disorders, monitoring gene expression, and detecting pathogens. Traditional detection techniques like polymerase chain reaction (PCR) and next-generation sequencing (NGS) have limitations, including complexity, high cost, and the need for advanced computational skills. Therefore, there is a significant demand for enzyme-free and amplification-free strategies for rapid, low-cost, and sensitive DNA detection. DNA biosensors, especially those utilizing plasmonic nanomaterials, offer a promising solution. This study introduces a novel DNA-functionalized waveguide-enhanced nanoplasmonic optofluidic biosensor using a nanogold-linked sorbent assay for enzyme-free and amplification-free DNA detection. Integrating plasmonic gold nanoparticles (AuNPs) with a glass planar waveguide (WG) and a microfluidic channel, fabricated through cost-effective, vacuum-free methods, the biosensor achieves specific detection of complementary target DNA sequences. Utilizing a sandwich architecture, AuNPs labeled with detection DNA probes enhance sensitivity by altering evanescent wave distribution and inducing plasmon resonance modes. The biosensor demonstrated exceptional performance in DNA detection, achieving a limit of detection (LOD) of 33.1 fg/mL (4.36 fM) with a rapid response time of approximately 8 min. This ultrasensitive, rapid, and cost-effective biosensor exhibits minimal background nonspecific adsorption, making it highly suitable for clinical applications and early disease diagnosis. The innovative design and fabrication processes offer significant advantages for mass production, presenting a viable tool for precise disease diagnostics and improved clinical outcomes. Full article
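
As a side note on the reported limit of detection: an LOD of this kind is commonly estimated as three times the standard deviation of the blank signal divided by the calibration slope. The sketch below shows that calculation on made-up numbers, not the biosensor's actual calibration data.

```python
import numpy as np

# Hypothetical calibration data: concentration (fg/mL) vs. sensor response (a.u.).
conc = np.array([0.1, 1.0, 10.0, 100.0, 1000.0])
signal = np.array([0.02, 0.11, 0.95, 9.8, 101.0])
blank = np.array([0.004, 0.006, 0.005, 0.007, 0.005])   # repeated blank readings

slope, intercept = np.polyfit(conc, signal, 1)           # linear calibration fit
lod = 3 * blank.std(ddof=1) / slope                      # 3*sigma_blank / sensitivity
print(f"estimated LOD ~= {lod:.3g} fg/mL")
```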

15 pages, 3754 KiB  
Article
A Multi-Task Model for Pulmonary Nodule Segmentation and Classification
by Tiequn Tang and Rongfu Zhang
J. Imaging 2024, 10(9), 234; https://doi.org/10.3390/jimaging10090234 - 20 Sep 2024
Abstract
In the computer-aided diagnosis of lung cancer, the automatic segmentation of pulmonary nodules and the classification of benign and malignant tumors are two fundamental tasks. However, deep learning models often overlook the potential benefits of task correlations in improving their respective performances, as they are typically designed for a single task only. Therefore, we propose a multi-task network (MT-Net) that integrates a shared backbone architecture and a prediction distillation structure for the simultaneous segmentation and classification of pulmonary nodules. The model comprises a coarse segmentation subnetwork (Coarse Seg-net), a cooperative classification subnetwork (Class-net), and a cooperative segmentation subnetwork (Fine Seg-net). Coarse Seg-net and Fine Seg-net share an identical structure, where Coarse Seg-net provides prior location information for the subsequent Fine Seg-net and Class-net, thereby boosting pulmonary nodule segmentation and classification performance. We quantitatively and qualitatively analyzed the performance of the model using the public LIDC-IDRI dataset. Our results show that the model achieves a Dice similarity coefficient (DI) of 83.2% for pulmonary nodule segmentation and an accuracy (ACC) of 91.9% for benign and malignant pulmonary nodule classification, which is competitive with other state-of-the-art methods. The experimental results demonstrate that the performance of pulmonary nodule segmentation and classification can be improved by a unified model that leverages the potential correlation between tasks. Full article
(This article belongs to the Section Medical Imaging)
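
A minimal PyTorch sketch of the kind of joint objective such a multi-task model optimizes (the Dice/cross-entropy combination and the weights are assumptions, not MT-Net's exact losses):

```python
import torch
import torch.nn.functional as F

def multi_task_loss(seg_logits, seg_target, cls_logits, cls_target,
                    w_seg=1.0, w_cls=0.5):
    """Combine a soft-Dice segmentation loss with a classification cross-entropy."""
    probs = torch.sigmoid(seg_logits)
    inter = (probs * seg_target).sum(dim=(1, 2, 3))
    denom = probs.sum(dim=(1, 2, 3)) + seg_target.sum(dim=(1, 2, 3))
    dice = (2 * inter + 1e-5) / (denom + 1e-5)
    seg_loss = 1 - dice.mean()                      # segmentation branch
    cls_loss = F.cross_entropy(cls_logits, cls_target)  # benign/malignant branch
    return w_seg * seg_loss + w_cls * cls_loss

# Toy tensors standing in for network outputs and labels.
seg_logits = torch.randn(2, 1, 64, 64, requires_grad=True)
seg_target = (torch.rand(2, 1, 64, 64) > 0.5).float()
cls_logits = torch.randn(2, 2, requires_grad=True)
cls_target = torch.tensor([0, 1])
print(multi_task_loss(seg_logits, seg_target, cls_logits, cls_target))
```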

16 pages, 15169 KiB  
Technical Note
SfM Photogrammetry for Cost-Effective 3D Documentation and Rock Art Analysis of the Dombate Dolmen (Spain) and the Megalithic Sites of Chã dos Cabanos and Chã da Escusalha (Portugal)
by Simón Peña-Villasenín, Mariluz Gil-Docampo, Juan Ortiz-Sanz, Luciano Vilas Boas, Ana M. S. Bettencourt and Manés F. Cabanas
Remote Sens. 2024, 16(18), 3480; https://doi.org/10.3390/rs16183480 - 19 Sep 2024
Abstract
SfM (structure from motion) photogrammetry is a technique developed in the field of computer vision that enables the generation of three-dimensional (3D) models from a set of overlapping images captured from disparate angles. The application of this technique in the field of cultural heritage, particularly in the context of megalithic monuments, is inherently challenging due to the spatial constraints of these environments and the usual limitations posed by their architectural design, which often results in poor lighting conditions. This article presents an accurate and cost-efficient methodology for the study and documentation of rock art, which has been applied to three megalithic monuments in the Iberian Peninsula: one in Spain and two in Portugal. The three working environments are complex, but the combination of techniques used and improvements such as rendering for the enhancement of engravings and the creation of 3D stop-motion models made it possible to integrate all the information in 3D formats that allow its universal dissemination. This not only preserves the heritage in graphic form but also makes it accessible to the public, both for study and for virtual visits. Full article

12 pages, 2872 KiB  
Article
Low-Light Image Enhancement via Dual Information-Based Networks
by Manlu Liu, Xiangsheng Li and Yi Fang
Electronics 2024, 13(18), 3713; https://doi.org/10.3390/electronics13183713 - 19 Sep 2024
Abstract
Recently, deep-learning-based low-light image enhancement (LLIE) methods have made great progress. Benefiting from elaborately designed model architectures, these methods enjoy considerable performance gains. However, their generalizability may be weak, and they may consequently suffer from overfitting when data are insufficient. At the same time, their complex model designs bring serious computational burdens. To further improve performance, we exploit dual information, including spatial and channel (contextual) information, in the high-dimensional feature space. Specifically, we introduce customized spatial and channel blocks according to the feature differences of different layers. In shallow layers, the feature resolution is close to that of the original input image, and the spatial information is well preserved. Therefore, the spatial restoration block is designed to leverage such precise spatial information to achieve better spatial restoration, e.g., revealing the textures and suppressing the noise in the dark. In deep layers, the features contain abundant contextual information, which is distributed across various channels. Hence, the channel interaction block is incorporated for better feature interaction, resulting in stronger model representation capability. Combining a U-Net-like model with the customized spatial and channel blocks makes up our method, which effectively utilizes dual information for image enhancement. Through extensive experiments, we demonstrate that our method, despite its simplicity of design, provides advanced or competitive performance compared to state-of-the-art deep-learning-based methods. Full article
(This article belongs to the Section Artificial Intelligence)
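
A hedged PyTorch sketch of the two block types described above (the concrete layer choices are assumptions, not the paper's design): a residual spatial block for shallow, high-resolution features and a squeeze-and-excitation-style channel block for deep features.

```python
import torch
import torch.nn as nn

class SpatialBlock(nn.Module):
    """Depthwise conv emphasising local spatial structure in shallow layers."""
    def __init__(self, channels):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.pw = nn.Conv2d(channels, channels, 1)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.pw(self.act(self.dw(x)))   # residual spatial refinement

class ChannelBlock(nn.Module):
    """Squeeze-and-excitation-style interaction across channels in deep layers."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.GELU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)                     # reweight channels

x_shallow, x_deep = torch.randn(1, 16, 128, 128), torch.randn(1, 128, 16, 16)
print(SpatialBlock(16)(x_shallow).shape, ChannelBlock(128)(x_deep).shape)
```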

53 pages, 8811 KiB  
Article
An Evaluation of the Security of Bare Machine Computing (BMC) Systems against Cybersecurity Attacks
by Fahad Alotaibi, Ramesh K. Karne, Alexander L. Wijesinha, Nirmala Soundararajan and Abhishek Rangi
J. Cybersecur. Priv. 2024, 4(3), 678-730; https://doi.org/10.3390/jcp4030033 - 18 Sep 2024
Abstract
The Internet has become the primary vehicle for doing almost everything online, and smartphones have become essential to almost everyone’s daily life. As a result, cybersecurity is a top priority in today’s world. As Internet usage has grown exponentially with billions of users and the proliferation of Internet of Things (IoT) devices, cybersecurity has become a cat-and-mouse game between attackers and defenders. Cyberattacks on systems are commonplace, and defense mechanisms are continually updated to prevent them. Based on a literature review of cybersecurity vulnerabilities, attacks, and preventive measures, we find that cybersecurity problems are rooted in computer system architectures, operating systems, network protocols, design options, heterogeneity, complexity, evolution, open systems, open-source software vulnerabilities, user convenience, ease of Internet access, global users, advertisements, business needs, and the global market. We investigate common cybersecurity vulnerabilities and find that the bare machine computing (BMC) paradigm is a possible solution to address and eliminate their root causes at many levels. We study 22 common cyberattacks, identify their root causes, and investigate preventive mechanisms currently used to address them. We compare conventional and bare machine characteristics and evaluate the BMC paradigm and its applications with respect to these attacks. Our study finds that BMC applications are resilient to most cyberattacks, except for a few physical attacks. We also find that BMC applications have inherent security at all computer and information system levels. Further research is needed to validate the security strengths of BMC systems and applications. Full article

24 pages, 10077 KiB  
Article
Emotion Recognition Using EEG Signals through the Design of a Dry Electrode Based on the Combination of Type 2 Fuzzy Sets and Deep Convolutional Graph Networks
by Shokoufeh Mounesi Rad and Sebelan Danishvar
Biomimetics 2024, 9(9), 562; https://doi.org/10.3390/biomimetics9090562 - 18 Sep 2024
Abstract
Emotion is an intricate cognitive state that, when identified, can serve as a crucial component of the brain–computer interface. This study examines the identification of two categories of positive and negative emotions through the development and implementation of a dry electrode electroencephalogram (EEG). To achieve this objective, a dry EEG electrode is created using the silver-copper sintering technique, which is assessed through Scanning Electron Microscope (SEM) and Energy Dispersive X-ray Analysis (EDXA) evaluations. Subsequently, a database is generated utilizing the designated electrode, which is based on musical stimuli. The collected data are fed into an improved deep network for automatic feature selection/extraction and classification. The deep network architecture is structured by combining type 2 fuzzy sets (FT2) and deep convolutional graph networks. The fabricated electrode demonstrated superior performance, efficiency, and affordability compared to other electrodes (both wet and dry) in this study. Furthermore, the dry EEG electrode was examined in noisy environments and demonstrated robust resistance across a diverse range of signal-to-noise ratios (SNRs). In addition, the proposed model achieved a classification accuracy of 99% for distinguishing between positive and negative emotions, an improvement of approximately 2% over previous studies. The manufactured dry EEG electrode is also very economical in terms of manufacturing costs compared with those used in recent studies. The proposed deep network, combined with the fabricated dry EEG electrode, can be used in real-time applications for long-term recordings that do not require gel. Full article

24 pages, 5436 KiB  
Article
An Efficient SM9 Aggregate Signature Scheme for IoV Based on FPGA
by Bolin Zhang, Bin Li, Jiaxin Zhang, Yuanxin Wei, Yunfei Yan, Heru Han and Qinglei Zhou
Sensors 2024, 24(18), 6011; https://doi.org/10.3390/s24186011 - 17 Sep 2024
Abstract
With the rapid development of the Internet of Vehicles (IoV), the demand for secure and efficient signature verification is becoming increasingly urgent. To meet this need, we propose an efficient SM9 aggregate signature scheme implemented on a Field-Programmable Gate Array (FPGA). The scheme includes both fault-tolerant and non-fault-tolerant aggregate signature modes, which are designed to address challenges in various network environments. We provide security proofs for these two signature verification modes based on the K-ary Computational Additive Diffie–Hellman (K-CAA) hard problem. To handle the numerous parallelizable elliptic curve point multiplication operations required during verification, we utilize the FPGA’s parallel processing capabilities to design an efficient parallel point multiplication architecture. Using the Montgomery point multiplication algorithm and the Barrett modular reduction algorithm, we optimize the single-point multiplication computation unit, achieving 70,776 point multiplications per second. Finally, the overall scheme was simulated and analyzed on an FPGA platform. The experimental results and analysis indicate that, under error-free conditions, the proposed non-fault-tolerant aggregate mode reduces the verification time by up to 97.1% compared to other schemes. Under fault-tolerant conditions, the proposed fault-tolerant aggregate mode reduces the verification time by up to 77.2% compared to other schemes. Compared to other fault-tolerant aggregate schemes, its verification time is only 28.9% of theirs, and even in the non-fault-tolerant aggregate mode, the verification time is reduced by at least 39.1%. Therefore, the proposed scheme demonstrates significant advantages in both error-free and fault-tolerant scenarios. Full article
(This article belongs to the Section Vehicular Sensing)
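
To illustrate what "Montgomery point multiplication" refers to, here is a toy, pure-Python sketch of the Montgomery ladder over a small short-Weierstrass curve. The curve parameters and base point are made up for illustration; this is neither the SM9 curve nor an FPGA design.

```python
# Toy curve y^2 = x^3 + A*x + B over GF(P_MOD); parameters are illustrative only.
P_MOD, A, B = 97, 2, 3
G = (3, 6)                                   # on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)

def inv(x):
    """Modular inverse via Fermat's little theorem (P_MOD is prime)."""
    return pow(x, P_MOD - 2, P_MOD)

def point_add(P, Q):
    """Elliptic-curve group law; None represents the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD
    else:
        lam = (y2 - y1) * inv((x2 - x1) % P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def montgomery_ladder(k, P):
    """Scalar multiplication k*P with one add and one double per key bit."""
    R0, R1 = None, P
    for bit in bin(k)[2:]:
        if bit == "1":
            R0, R1 = point_add(R0, R1), point_add(R1, R1)
        else:
            R0, R1 = point_add(R0, R0), point_add(R0, R1)
    return R0

print(montgomery_ladder(12345, G))           # same result as plain double-and-add
```

The appeal of the ladder in hardware is its fixed add-and-double pattern per bit, which maps naturally onto parallel FPGA datapaths.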