Search Results (99)

Search Parameters:
Keywords = deep leaning

20 pages, 4389 KiB  
Article
Fault Classification of 3D-Printing Operations Using Different Types of Machine and Deep Learning Techniques
by Satish Kumar, Sameer Sayyad and Arunkumar Bongale
AI 2024, 5(4), 1759-1778; https://doi.org/10.3390/ai5040087 - 27 Sep 2024
Viewed by 325
Abstract
Fused deposition modeling (FDM), a method of additive manufacturing (AM), comprises the extrusion of materials via a nozzle and the subsequent combining of the layers to create 3D-printed objects. FDM is a widely used method for 3D-printing objects since it is affordable, effective, and easy to use. Defects such as poor infill, elephant foot, layer shift, and poor surface finish arise in FDM components at the printing stage due to variations in printing parameters such as printing speed or changes in nozzle or bed temperature. Proper fault classification is required to identify the cause of faulty products. In this work, multi-sensory data are gathered using vibration, current, temperature, and sound sensors. Data acquisition is performed using a National Instruments (NI) Data Acquisition System (DAQ), which provides synchronous multi-sensory data for model training. To induce faults, the data are captured under different conditions, such as variations in printing speed, temperature, and jerk during printing. The collected data are used to train machine learning (ML) and deep learning (DL) classification models to classify the variations in printing parameters. Four ML models, namely k-nearest neighbor (KNN), decision tree (DT), extra trees (ET), and random forest (RF), and a convolutional neural network (CNN) as the DL model are used to classify the variable printing parameters. Among the ML models, the RF classifier shows a classification accuracy of around 91%, whereas the CNN model shows good classification performance, with accuracy ranging from 92% to 94% under variable operating conditions.
(This article belongs to the Special Issue Intelligent Systems for Industry 4.0)
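As a rough, hedged illustration of the ML side of such a pipeline, the sketch below trains a random forest on windowed multi-sensor features. The file name, feature columns, and class labels are assumptions made for illustration only, not the authors' dataset or code.

```python
# Hypothetical sketch: fault classification from windowed multi-sensor features.
# Column names and the CSV file are illustrative assumptions, not the authors' data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Each row: summary features extracted from one acquisition window
# (e.g., RMS vibration, mean current, temperature, sound level).
df = pd.read_csv("fdm_sensor_windows.csv")
X = df[["vib_rms", "current_mean", "temp_mean", "sound_rms"]]
y = df["print_condition"]  # e.g., "normal", "speed_var", "temp_var", "jerk_var"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```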
18 pages, 8879 KiB  
Article
Introducing Cement-Enhanced Clay-Sand Columns under Footings Placed on Expansive Soils
by Abdullah A. Shaker and Muawia Dafalla
Appl. Sci. 2024, 14(18), 8152; https://doi.org/10.3390/app14188152 - 11 Sep 2024
Viewed by 373
Abstract
The risk posed by expansive soils can be lessened by placing foundations at a deeper level below the surface. Structures are able to withstand uplift forces because overburden pressure partially suppresses swelling pressure. In order to transfer the forces to a sufficient depth, this study suggests introducing shafts of a low-expansion overburden material. Soil improved with cement is chosen for this purpose. This study suggests using sand with added excavated natural clay and cement. The expansive clay is added to sand in ratios of 10, 20, 30, 40, and 60%. The clay–sand mixture is then enhanced with cement at 1, 2, 4, and 8% by weight of the mixture under four curing periods of 1, 7, 28, and 90 days. This material is recommended for use under lean concrete to transfer the loads to lower levels below the foundation depth. The thickness of this material depends on the stresses exerted and on the type and properties of the subsurface soils. The cement-enhanced clay–sand shaft’s properties are examined in this work with regard to swelling potential, compressibility, and unconfined compressive strength for different clay contents and curing conditions. Stiff shafts were formed and found to support stresses from 600 to 3500 kPa at cement additions in the range of 1% to 8%. Clay content above 30% is found to be unsuitable for Al-Qatif clay due to the compressibility and low strength of the mixture. When two percent or more of cement is added, the swelling potential is significantly reduced. This depends on the pozzolanic interactions of soil and cement as well as the clay mineralogy. Determining how cement affects clay–sand combinations in regions with expansive soils would facilitate the introduction of a novel, inexpensive technology to support loads applied by the superstructure.
(This article belongs to the Special Issue Foundation Treatment in Civil Engineering)
13 pages, 10032 KiB  
Article
Releaf: An Efficient Method for Real-Time Occlusion Handling by Game Theory
by Hamid Osooli, Nakul Joshi, Pranav Khurana, Amirhossein Nikoofard, Zahra Shirmohammadi and Reza Azadeh
Sensors 2024, 24(17), 5727; https://doi.org/10.3390/s24175727 - 3 Sep 2024
Viewed by 363
Abstract
Receiving uninterrupted videos from a scene with multiple cameras is a challenging task. One of the issues that significantly affects this task is occlusion. In this paper, we propose an algorithm for occlusion handling in multi-camera systems. The proposed algorithm, called Real-time leader finder (Releaf), leverages mechanism design to assign leader and follower roles to each of the cameras in a multi-camera setup. Leader and follower roles are assigned to the cameras, and the motion is led by the camera with the least occluded view using the Stackelberg equilibrium. The proposed approach is evaluated on our previously open-sourced tendon-driven 3D-printed robotic eye that tracks the face of a human subject. Experimental results demonstrate the superiority of the proposed algorithm over the Q-learning and Deep Q-Network (DQN) baselines, achieving improvements of 20% and 18% for horizontal errors and an enhancement of 81% for vertical errors, as measured by the root mean squared error metric. Furthermore, Releaf offers real-time performance and removes the need for training, making it a promising approach for occlusion handling in multi-camera systems.
(This article belongs to the Special Issue Feature Papers in Intelligent Sensors 2024)
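A highly simplified sketch of the leader/follower idea described above is given below: the camera with the least occluded view is chosen as leader and the others adopt its target estimate. The occlusion scores, camera names, and follower update are placeholder assumptions, not the Releaf algorithm itself.

```python
# Minimal sketch of the leader/follower idea: the least-occluded camera leads and
# the others follow its target estimate. Occlusion scoring is a placeholder assumption.
from dataclasses import dataclass

@dataclass
class Camera:
    name: str
    occlusion: float      # fraction of the target occluded in this view, 0..1
    target_xy: tuple      # target position estimated from this camera's view

def assign_leader(cameras):
    """Return (leader, followers): the camera with the least occluded view leads."""
    leader = min(cameras, key=lambda c: c.occlusion)
    followers = [c for c in cameras if c is not leader]
    return leader, followers

cams = [Camera("left", 0.42, (0.31, 0.55)),
        Camera("center", 0.05, (0.29, 0.52)),
        Camera("right", 0.77, (0.35, 0.60))]
leader, followers = assign_leader(cams)
# Followers track the leader's estimate instead of their own occluded one.
for cam in followers:
    cam.target_xy = leader.target_xy
print("leader:", leader.name)
```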
19 pages, 6922 KiB  
Article
A Study of Classroom Behavior Recognition Incorporating Super-Resolution and Target Detection
by Xiaoli Zhang, Jialei Nie, Shoulin Wei, Guifu Zhu, Wei Dai and Can Yang
Sensors 2024, 24(17), 5640; https://doi.org/10.3390/s24175640 - 30 Aug 2024
Viewed by 371
Abstract
With the development of educational technology, machine learning and deep learning provide technical support for traditional classroom observation assessment. However, in real classroom scenarios, the technique faces challenges such as the lack of clarity of raw images, the complexity of datasets, multi-target detection errors, and the complexity of character interactions. To address these problems, a student classroom behavior recognition network incorporating super-resolution and target detection is proposed. To cope with unclear original images in the classroom scenario, SRGAN (Super-Resolution Generative Adversarial Network) is used to improve the image resolution and thus the recognition accuracy. To address the dataset complexity and multi-target problems, feature extraction is optimized, and multi-scale feature recognition is enhanced by introducing the AKConv and LASK attention mechanisms into the Backbone module of the YOLOv8s algorithm. To handle the complexity of character interactions, the CBAM attention mechanism is integrated to enhance the recognition of important feature channels and spatial regions. Experiments show that the network can detect six student behaviors (raising their hands, reading, writing, playing on their cell phones, looking down, and leaning on the table) in high-definition images, and its accuracy and robustness are verified. Compared with small-object detection algorithms such as Faster R-CNN, YOLOv5, and YOLOv8s, this network demonstrates good detection performance on low-resolution small objects, complex datasets with numerous targets, occlusion, and overlapping students.
(This article belongs to the Section Sensing and Imaging)
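The sketch below is a hedged illustration of the two-stage idea (super-resolve, then detect). The SRGAN generator checkpoint and file names are assumptions, and the detector weights shown are the stock YOLOv8s model, not the authors' AKConv/LASK/CBAM-modified network.

```python
# Hedged sketch of the two-stage idea: super-resolve a classroom frame, then run a
# detector on it. The SRGAN generator checkpoint and behavior classes are assumptions;
# the YOLO weights file is the stock "yolov8s.pt", not the authors' modified network.
import cv2
import torch
from ultralytics import YOLO

def super_resolve(frame_bgr, generator):
    """Upscale a low-resolution frame with a (hypothetical) pretrained SRGAN generator."""
    x = torch.from_numpy(frame_bgr).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        y = generator(x).clamp(0, 1)
    return (y.squeeze(0).permute(1, 2, 0).numpy() * 255).astype("uint8")

generator = torch.load("srgan_generator.pt")  # assumed checkpoint, not provided by the paper
detector = YOLO("yolov8s.pt")

frame = cv2.imread("classroom_frame.jpg")
hr_frame = super_resolve(frame, generator)
results = detector(hr_frame)          # behavior classes would come from a fine-tuned model
print(results[0].boxes)
```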
20 pages, 31926 KiB  
Article
Deep Learning and Histogram-Based Grain Size Analysis of Images
by Wei Wei, Xiaohong Xu, Guangming Hu, Yanlin Shao and Qing Wang
Sensors 2024, 24(15), 4923; https://doi.org/10.3390/s24154923 - 30 Jul 2024
Viewed by 497
Abstract
Grain size analysis is used to study grain size and distribution. It is a critical indicator in sedimentary simulation experiments (SSEs), which aids in understanding hydrodynamic conditions and identifying the features of sedimentary environments. Existing methods for image-based grain size analysis primarily focus on scenarios where grain edges are distinct or grain arrangements are regular. However, these methods are not suitable for images from SSEs. We proposed a deep learning model incorporating histogram layers for the analysis of SSE images with fuzzy grain edges and irregular arrangements. Firstly, ResNet18 was used to extract features from SSE images. These features were then input into the histogram layer to obtain local histogram features, which were concatenated to form comprehensive histogram features for the entire image. Finally, the histogram features were connected to a fully connected layer to estimate the grain size corresponding to the cumulative volume percentage. In addition, an applied workflow was developed. The results demonstrate that the proposed method achieved higher accuracy than eight other models and was highly consistent with manual results in practice. The proposed method enhances the efficiency and accuracy of grain size analysis for images with irregular grain distribution and improves the quantification and automation of grain size analysis in SSEs. It can also be applied to grain size analysis in fields such as soil and geotechnical engineering.
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
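Below is a simplified sketch of a histogram-style head on ResNet18 features: an RBF soft-histogram layer pools the backbone's feature map into bin memberships before a fully connected regressor. The bin count, value range, and output size are assumptions, not the authors' exact architecture.

```python
# Simplified sketch of a histogram-style head on ResNet18 features. The RBF soft-histogram
# layer, bin count, and output size are assumptions, not the authors' exact architecture.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SoftHistogram(nn.Module):
    """Differentiable histogram: RBF membership of each feature value to fixed bin centers."""
    def __init__(self, bins=16, vmin=0.0, vmax=6.0, sigma=0.25):
        super().__init__()
        self.register_buffer("centers", torch.linspace(vmin, vmax, bins))
        self.sigma = sigma

    def forward(self, feat):                       # feat: (B, C, H, W)
        x = feat.flatten(2).unsqueeze(-1)          # (B, C, HW, 1)
        w = torch.exp(-(x - self.centers) ** 2 / (2 * self.sigma ** 2))
        return w.mean(dim=2).flatten(1)            # (B, C * bins)

class GrainSizeNet(nn.Module):
    def __init__(self, bins=16, n_outputs=9):      # e.g., grain sizes at 9 cumulative percentages
        super().__init__()
        backbone = resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # keep the spatial map
        self.hist = SoftHistogram(bins=bins)
        self.head = nn.Linear(512 * bins, n_outputs)

    def forward(self, x):
        return self.head(self.hist(self.features(x)))

model = GrainSizeNet()
print(model(torch.randn(2, 3, 224, 224)).shape)    # torch.Size([2, 9])
```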
19 pages, 4438 KiB  
Article
Automatic Quality Assessment of Pork Belly via Deep Learning and Ultrasound Imaging
by Tianshuo Wang, Huan Yang, Chunlei Zhang, Xiaohuan Chao, Mingzheng Liu, Jiahao Chen, Shuhan Liu and Bo Zhou
Animals 2024, 14(15), 2189; https://doi.org/10.3390/ani14152189 - 27 Jul 2024
Viewed by 647
Abstract
Pork belly, prized for its unique flavor and texture, is often overlooked in breeding programs that prioritize lean meat production. The quality of pork belly is determined by the number and distribution of muscle and fat layers. This study aimed to assess the number of pork belly layers using deep learning techniques. Initially, semantic segmentation was considered, but the intersection over union (IoU) scores for the segmented parts were below 70%, which is insufficient for practical application. Consequently, the focus shifted to image classification methods. Based on the number of fat and muscle layers, the dataset was categorized into three groups: three layers (n = 1811), five layers (n = 1294), and seven layers (n = 879). Drawing upon established model architectures, the initial model was refined for the task of learning and predicting layer traits from B-ultrasound images of pork belly. After a thorough evaluation of various performance metrics, the ResNet18 model emerged as the most effective, achieving a training set accuracy of 99.99% and a validation set accuracy of 96.22%, with corresponding loss values of 0.1478 and 0.1976. The robustness of the model was confirmed through three interpretability analysis methods, including Grad-CAM, ensuring its reliability. Furthermore, the model was successfully deployed in a local setting to process B-ultrasound video frames in real time, consistently identifying the pork belly layer count with a confidence level exceeding 70%. By employing a scoring system with 100 points as the threshold, the number of pork belly layers in vivo was categorized into superior and inferior grades. This innovative system offers immediate decision-making support for breeding determinations and presents a highly efficient and precise method for the assessment of pork belly layers.
(This article belongs to the Special Issue Genetic Improvement in Pigs)
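The following is a hedged sketch of real-time, frame-by-frame layer-count classification from B-ultrasound video with a 70% confidence gate. The checkpoint name, preprocessing, and class order are assumptions, not the authors' deployed system.

```python
# Hedged sketch of real-time, frame-by-frame layer-count classification from B-ultrasound
# video. The checkpoint name, preprocessing, and class order are assumptions.
import cv2
import torch
import torch.nn.functional as F
from torchvision import transforms
from torchvision.models import resnet18

classes = ["3_layers", "5_layers", "7_layers"]
model = resnet18(num_classes=len(classes))
model.load_state_dict(torch.load("belly_resnet18.pt", map_location="cpu"))  # assumed file
model.eval()

prep = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
])

cap = cv2.VideoCapture("belly_ultrasound.mp4")
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    x = prep(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)).unsqueeze(0)
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1).squeeze(0)
    conf, idx = probs.max(dim=0)
    if conf.item() > 0.70:                     # only report confident predictions
        print(classes[idx.item()], f"{conf.item():.2f}")
cap.release()
```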
5 pages, 174 KiB  
Comment
Comment on Martínez-Delgado et al. Using Absorption Models for Insulin and Carbohydrates and Deep Leaning to Improve Glucose Level Predictions. Sensors 2021, 21, 5273
by Josiah Z. R. Misplon, Varun Saini, Brianna P. Sloves, Sarah H. Meerts and David R. Musicant
Sensors 2024, 24(13), 4361; https://doi.org/10.3390/s24134361 - 5 Jul 2024
Viewed by 575
Abstract
The paper “Using Absorption Models for Insulin and Carbohydrates and Deep Leaning to Improve Glucose Level Predictions” (Sensors 2021, 21, 5273) proposes a novel approach to predicting blood glucose levels for people with type 1 diabetes mellitus (T1DM). By building exponential models from raw carbohydrate and insulin data to simulate absorption in the body, the authors reported a reduction in their model’s root-mean-square error (RMSE) from 15.5 mg/dL (raw) to 9.2 mg/dL (exponential) when predicting blood glucose levels one hour into the future. In this comment, we demonstrate that the experimental techniques used in that paper are flawed, which invalidates its results and conclusions. Specifically, after reviewing the authors’ code, we found that the model validation scheme was malformed: training and test data from the same time intervals were mixed. This means that the reported RMSE numbers in the referenced paper did not accurately measure the predictive capabilities of the approaches that were presented. We repaired the measurement technique by appropriately isolating the training and test data, and we discovered that their models actually performed dramatically worse than was reported in the paper. In fact, the models presented in that paper do not appear to perform any better than a naive model that predicts future glucose levels to be the same as the current ones.
(This article belongs to the Special Issue Sensors, Systems, and AI for Healthcare II)
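To make the validation point concrete, the sketch below splits a synthetic glucose series chronologically, so that no test interval overlaps a training interval, and evaluates the naive persistence baseline mentioned above. The data and prediction horizon are illustrative, not the referenced study's dataset.

```python
# Illustration of the validation issue: split glucose time series chronologically (no
# mixing of time intervals between train and test) and compare against a naive
# persistence baseline. Data generation here is synthetic for demonstration only.
import numpy as np

rng = np.random.default_rng(0)
glucose = 120 + np.cumsum(rng.normal(0, 2, size=2000))   # synthetic CGM-like series (mg/dL)
horizon = 12                                              # 12 x 5-min samples = 1 hour ahead

# Chronological split: everything after the cutoff is test, so no test interval
# ever overlaps an interval seen during training.
cutoff = int(0.8 * len(glucose))
test_now = glucose[cutoff:-horizon]
test_future = glucose[cutoff + horizon:]

# Naive model: predict that glucose one hour from now equals the current value.
naive_pred = test_now
rmse = np.sqrt(np.mean((naive_pred - test_future) ** 2))
print(f"Persistence-baseline RMSE: {rmse:.1f} mg/dL")
```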
20 pages, 2633 KiB  
Article
Deep Low-Carbon Economic Optimization Using CCUS and Two-Stage P2G with Multiple Hydrogen Utilizations for an Integrated Energy System with a High Penetration Level of Renewables
by Junqiu Fan, Jing Zhang, Long Yuan, Rujing Yan, Yu He, Weixing Zhao and Nang Nin
Sustainability 2024, 16(13), 5722; https://doi.org/10.3390/su16135722 - 4 Jul 2024
Cited by 1 | Viewed by 815
Abstract
Integrating carbon capture and storage (CCS) technology into an integrated energy system (IES) can reduce its carbon emissions and enhance its low-carbon performance. However, full CCS of flue gas involves a strong coupling between the lean and rich liquor used as liquid carbon dioxide absorbents, and its integration into IESs with a high penetration level of renewables results in insufficient flexibility and renewable curtailment. In addition, integrating split-flow CCS of flue gas leads to a short capture time, since priority is given to renewable energy. To address these limitations, this paper develops a carbon capture, utilization, and storage (CCUS) method, into which storage tanks for lean and rich liquor and a two-stage power-to-gas (P2G) system with multiple utilizations of hydrogen, including a fuel cell and a hydrogen-blended CHP unit, are introduced. The CCUS is integrated into an IES to build an electricity–heat–hydrogen–gas IES. Accordingly, a deep low-carbon economic optimization strategy for this IES, which considers stepwise carbon trading, coal consumption, renewable curtailment penalties, and gas purchasing costs, is proposed. The effects of CCUS, the two-stage P2G system, and stepwise carbon trading on the performance of this IES are analyzed through a case-comparative analysis. The results show that the proposed method allows for a significant reduction in both carbon emissions and total operational costs. It outperforms the IES without CCUS, with an 8.8% cost reduction and a 70.11% reduction in carbon emissions. Compared to the IES integrating full CCS, the proposed method yields reductions of 6.5% in costs and 24.7% in emissions. Furthermore, the addition of a two-stage P2G system with multiple utilizations of hydrogen further amplifies these benefits, cutting costs by 13.97% and emissions by 12.32%. In addition, integrating CCUS into IESs enables the full consumption of renewables and expands hydrogen utilization, and the renewable consumption proportion in the IES can reach 69.23%.
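As a small, hedged illustration of one ingredient of the cost model, the function below computes a stepwise (tiered) carbon-trading cost in which the unit price rises for each successive emission tier above the free quota. The tier width, base price, and growth rate are illustrative assumptions, not the paper's parameters.

```python
# Sketch of a stepwise (tiered) carbon-trading cost, the general idea behind the
# "stepwise carbon trading" term above. Tier width, base price, and growth rate are
# illustrative assumptions, not the paper's parameter values.
def stepwise_carbon_cost(emissions, quota, base_price=250.0, tier=2000.0, growth=0.25):
    """Cost of emissions above the free quota; the unit price rises in each successive tier."""
    excess = max(emissions - quota, 0.0)
    cost, price = 0.0, base_price
    while excess > 0:
        step = min(excess, tier)
        cost += step * price
        excess -= step
        price *= 1.0 + growth      # each tier is traded at a higher price
    return cost

# Example: 7500 t of emissions against a 2000 t free quota.
print(stepwise_carbon_cost(7500.0, 2000.0))
```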
15 pages, 4722 KiB  
Article
Adversarial Robustness Enhancement for Deep Learning-Based Soft Sensors: An Adversarial Training Strategy Using Historical Gradients and Domain Adaptation
by Runyuan Guo, Qingyuan Chen, Han Liu and Wenqing Wang
Sensors 2024, 24(12), 3909; https://doi.org/10.3390/s24123909 - 17 Jun 2024
Cited by 3 | Viewed by 605
Abstract
Despite their high prediction accuracy, deep learning-based soft sensor (DLSS) models face challenges related to adversarial robustness against malicious adversarial attacks, which hinder their widespread deployment and safe application. Although adversarial training is the primary method for enhancing adversarial robustness, existing adversarial-training-based defense methods often struggle with accurately estimating transfer gradients and with avoiding adversarial robust overfitting. To address these issues, we propose a novel adversarial training approach, namely domain-adaptive adversarial training (DAAT). DAAT comprises two stages: historical gradient-based adversarial attack (HGAA) and domain-adaptive training. In the first stage, HGAA incorporates historical gradient information into the iterative process of generating adversarial samples. It considers the gradient similarity between iterative steps to stabilize the update direction, resulting in improved transfer gradient estimation and stronger adversarial samples. In the second stage, a soft sensor domain-adaptive training model is developed to learn common features from adversarial and original samples through domain-adaptive training, thereby avoiding excessive leaning toward either side and enhancing the adversarial robustness of DLSS without robust overfitting. To demonstrate the effectiveness of DAAT, a DLSS model for crystal quality variables in silicon single-crystal growth manufacturing processes is used as a case study. Through DAAT, the DLSS achieves, to some extent, a balance between defense against adversarial samples and prediction accuracy on normal samples, offering an effective approach for enhancing the adversarial robustness of DLSS.
(This article belongs to the Section Intelligent Sensors)
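For orientation, the sketch below shows a generic momentum-style iterative attack that reuses gradient history when crafting adversarial samples. It conveys the flavor of a historical-gradient attack but is an MI-FGSM-like loop under stated assumptions, not the authors' HGAA.

```python
# Sketch of iterative adversarial-sample generation that reuses gradient history via a
# momentum term, the general flavor of "historical gradient" attacks; this is a generic
# MI-FGSM-style loop, not the authors' HGAA algorithm.
import torch

def momentum_attack(model, x, y, loss_fn, eps=0.05, steps=10, mu=0.9):
    alpha = eps / steps
    x_adv = x.clone().detach()
    g_hist = torch.zeros_like(x)               # accumulated (historical) gradient direction
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        g_hist = mu * g_hist + grad / grad.abs().mean().clamp_min(1e-12)
        x_adv = x_adv.detach() + alpha * g_hist.sign()
        x_adv = torch.clamp(x_adv, x - eps, x + eps)   # stay inside the eps-ball
    return x_adv.detach()
```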
11 pages, 2761 KiB  
Article
A Deep Learning Approach for the Fast Generation of Synthetic Computed Tomography from Low-Dose Cone Beam Computed Tomography Images on a Linear Accelerator Equipped with Artificial Intelligence
by Luca Vellini, Sergio Zucca, Jacopo Lenkowicz, Sebastiano Menna, Francesco Catucci, Flaviovincenzo Quaranta, Elisa Pilloni, Andrea D'Aviero, Michele Aquilano, Carmela Di Dio, Martina Iezzi, Alessia Re, Francesco Preziosi, Antonio Piras, Althea Boschetti, Danila Piccari, Gian Carlo Mattiucci and Davide Cusumano
Appl. Sci. 2024, 14(11), 4844; https://doi.org/10.3390/app14114844 - 3 Jun 2024
Viewed by 667
Abstract
Artificial Intelligence (AI) is revolutionising many aspects of radiotherapy (RT), opening scenarios that were unimaginable just a few years ago. The aim of this study is to propose a Deep Learning (DL) approach able to quickly generate synthetic Computed Tomography (CT) images from low-dose Cone Beam CT (CBCT) acquired on a modern linear accelerator integrating AI. Methods: A total of 53 patients treated in the pelvic region were enrolled and split into training (30), validation (9), and testing (14) sets. A Generative Adversarial Network (GAN) was trained for 200 epochs. Image accuracy was evaluated by calculating the mean error and mean absolute error (ME and MAE) between sCT and CT. RT treatment plans were calculated on CT and sCT images, and dose accuracy was evaluated considering Dose Volume Histogram (DVH) and gamma analysis. Results: A total of 4507 images were selected for training. The MAE and ME values in the test set were 36 ± 6 HU and 7 ± 6 HU, respectively. Mean gamma passing rates for the 1%/1 mm, 2%/2 mm, and 3%/3 mm tolerance criteria were 93.5 ± 3.4%, 98.0 ± 1.3%, and 99.2 ± 0.7%, respectively, with no difference between curative and palliative cases. For all the DVH parameters analysed, the difference between sCT and CT was within 1 Gy. Conclusion: This study demonstrated that sCT generation using the DL approach is feasible on low-dose CBCT images. The proposed approach can represent a valid tool to speed up the online adaptive procedure and remove CT simulation from the RT workflow.
(This article belongs to the Special Issue Developments of Diagnostic Imaging Applied in Radiotherapy)
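As a minimal illustration of the image-accuracy metrics named above, the snippet below computes ME and MAE (in HU) between a synthetic CT and the reference CT over a body mask. The arrays are random stand-ins, not patient data.

```python
# Minimal sketch of the HU-difference metrics mentioned above: mean error (ME) and mean
# absolute error (MAE) between a synthetic CT and the reference CT, restricted to a body
# mask. Array names and the mask are assumptions for illustration.
import numpy as np

def me_mae(sct_hu: np.ndarray, ct_hu: np.ndarray, mask: np.ndarray):
    """ME keeps the sign (systematic over/underestimation); MAE measures overall deviation."""
    diff = (sct_hu - ct_hu)[mask]
    return diff.mean(), np.abs(diff).mean()

# Toy example with random volumes standing in for registered sCT/CT image pairs.
rng = np.random.default_rng(1)
ct = rng.normal(0, 300, size=(64, 64, 64))
sct = ct + rng.normal(5, 30, size=ct.shape)       # small systematic offset plus noise
body = np.ones_like(ct, dtype=bool)
me, mae = me_mae(sct, ct, body)
print(f"ME = {me:.1f} HU, MAE = {mae:.1f} HU")
```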
19 pages, 1791 KiB  
Article
A Deep Learning Approach to Predict Supply Chain Delivery Delay Risk Based on Macroeconomic Indicators: A Case Study in the Automotive Sector
by Matteo Gabellini, Lorenzo Civolani, Francesca Calabrese and Marco Bortolini
Appl. Sci. 2024, 14(11), 4688; https://doi.org/10.3390/app14114688 - 29 May 2024
Cited by 1 | Viewed by 831
Abstract
The development of predictive approaches to estimate supplier delivery risks has become vital for companies that rely heavily on outsourcing practices and lean management strategies in the era of the shortage economy. However, the literature proposing such approaches is still in its infancy, and several gaps remain. In particular, most current studies present approaches that can only estimate whether suppliers will be late or not. Moreover, even though autocorrelation in data has been widely considered in demand forecasting, it has been neglected in supplier delivery risk predictions. Finally, current approaches struggle to consider macroeconomic data as input and rely mostly on machine learning models, while deep learning models have rarely been investigated. The main contribution of this study is thus to propose a new approach that, for the first time, simultaneously adopts a deep learning model able to capture autocorrelation in data and integrates several macroeconomic indicators as input. Furthermore, as a second contribution, the performance of the proposed approach has been investigated in a real automotive case study and compared with the results of approaches that adopt traditional statistical models and of models that do not consider macroeconomic indicators as additional inputs. The results highlight the capability of the proposed approach to provide good forecasts and outperform the benchmarks for most of the considered predictions. Furthermore, the results provide evidence of the importance of considering macroeconomic indicators as additional input.
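A hedged sketch of the modelling idea follows: an LSTM consumes a window of past delivery-delay observations concatenated with macroeconomic indicators and predicts the next-period delay risk. The feature set, window length, and layer sizes are assumptions, not the authors' architecture.

```python
# Hedged sketch of the modelling idea: a recurrent (LSTM) network fed with a window of
# past delivery-delay observations concatenated with macroeconomic indicators, predicting
# the next-period delay risk. Feature names, window length, and sizes are assumptions.
import torch
import torch.nn as nn

class DelayRiskLSTM(nn.Module):
    def __init__(self, n_features=5, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predicted delay (or delay risk) for the next period

# Toy batch: 8 suppliers, 12 past months, 5 features per month
# (e.g., past delay, industrial production index, PMI, inflation, exchange rate).
x = torch.randn(8, 12, 5)
model = DelayRiskLSTM()
print(model(x).shape)                 # torch.Size([8, 1])
```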
14 pages, 8617 KiB  
Article
Water Pipeline Leakage Detection Based on Coherent φ-OTDR and Deep Learning Technology
by Shuo Zhang, Zijian Xiong, Boyuan Ji, Nan Li, Zhangwei Yu, Shengnan Wu and Sailing He
Appl. Sci. 2024, 14(9), 3814; https://doi.org/10.3390/app14093814 - 29 Apr 2024
Viewed by 1164
Abstract
Leakage in water supply pipelines remains a significant challenge, leading to resource and economic waste. Researchers have developed several leak detection methods, including the use of embedded sensors and pressure prediction. The former approach involves pre-installing detectors inside pipelines to detect leaks and allows for precise localization of leak points, but its stability is compromised by limited wireless signal strength. The latter approach, which relies on pressure measurements to predict leak events, does not achieve precise leak point localization. To address these challenges, in this paper a coherent optical time-domain reflectometry (φ-OTDR) system is employed to capture vibration signal phase information. Subsequently, two pre-trained neural network models, based on a CNN and ResNet18, process this information to accurately identify vibration events. In an experimental setup simulating water pipelines, phase information from both leaking and non-leaking pipe segments is collected. Using this dataset, classical CNN and ResNet18 models are trained, achieving accuracy rates of 99.7% and 99.5%, respectively. The multi-leakage-point experiment results indicate that the ResNet18 model generalizes better than the CNN model. The proposed solution enables precise leak-point localization over long water pipelines and accurate vibration event identification.
(This article belongs to the Special Issue Advanced Optical-Fiber-Related Technologies)
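As a rough sketch of the classification stage, the snippet below defines a small 1D CNN over fixed-length segments of φ-OTDR phase signal, labelled leak versus no-leak. The segment length, channel sizes, and class count are assumptions rather than the trained models from the paper.

```python
# Minimal sketch of a 1D-CNN classifier for fixed-length segments of phi-OTDR phase signal,
# labelled leak vs. no-leak. Segment length, channel sizes, and class count are assumptions.
import torch
import torch.nn as nn

class PhaseCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):            # x: (batch, 1, samples)
        return self.net(x)

segments = torch.randn(4, 1, 4096)   # four 4096-sample phase segments
print(PhaseCNN()(segments).shape)     # torch.Size([4, 2])
```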
21 pages, 497 KiB  
Article
Large Language Model-Informed X-ray Photoelectron Spectroscopy Data Analysis
by J. de Curtò, I. de Zarzà, Gemma Roig and Carlos T. Calafate
Signals 2024, 5(2), 181-201; https://doi.org/10.3390/signals5020010 - 27 Mar 2024
Viewed by 1181
Abstract
X-ray photoelectron spectroscopy (XPS) remains a fundamental technique in materials science, offering invaluable insights into the chemical states and electronic structure of a material. However, the interpretation of XPS spectra can be complex, requiring deep expertise and often sophisticated curve-fitting methods. In this study, we present a novel approach to the analysis of XPS data, integrating the utilization of large language models (LLMs), specifically OpenAI’s GPT-3.5/4 Turbo, to provide insightful guidance during the data analysis process. Working in the framework of the CIRCE-NAPP beamline at the CELLS ALBA Synchrotron facility, where data are obtained using ambient pressure X-ray photoelectron spectroscopy (APXPS), we implement robust curve-fitting techniques on APXPS spectra, highlighting complex cases including overlapping peaks, diverse chemical states, and the presence of noise. After curve fitting, we engage the LLM to facilitate the interpretation of the fitted parameters, leaning on its extensive training data to simulate an interaction corresponding to expert consultation. The manuscript also presents a real use case utilizing GPT-4 and Meta’s LLaMA-2 and describes the integration of the functionality into the TANGO control system. Our methodology not only offers a fresh perspective on XPS data analysis but also introduces a new dimension of artificial intelligence (AI) integration into scientific research. It showcases the power of LLMs in enhancing the interpretative process, particularly in scenarios wherein expert knowledge may not be immediately available. Despite the inherent limitations of LLMs, their potential in the realm of materials science research is promising, opening doors to a future wherein AI assists in the transformation of raw data into meaningful scientific knowledge.
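A hedged sketch of the two-step workflow is shown below: fit overlapping peaks with a conventional least-squares optimizer, then assemble the fitted parameters into a textual prompt for an LLM. The peak model, synthetic spectrum, and prompt wording are assumptions; the actual LLM call is deliberately left out.

```python
# Hedged sketch of the two-step idea: fit XPS peaks with a conventional optimizer, then
# hand the fitted parameters to an LLM as a textual prompt for interpretation. The peak
# model, data, and prompt wording are assumptions; the LLM call itself is left out.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(e, a1, c1, w1, a2, c2, w2):
    """Simple two-peak model on a binding-energy axis (no background, for brevity)."""
    return (a1 * np.exp(-((e - c1) ** 2) / (2 * w1 ** 2))
            + a2 * np.exp(-((e - c2) ** 2) / (2 * w2 ** 2)))

# Synthetic C 1s-like spectrum with two overlapping components plus noise.
energy = np.linspace(282, 292, 400)
truth = two_gaussians(energy, 1.0, 284.8, 0.6, 0.4, 286.4, 0.7)
spectrum = truth + np.random.default_rng(0).normal(0, 0.02, energy.size)

p0 = [1.0, 285.0, 0.5, 0.5, 286.5, 0.5]
params, _ = curve_fit(two_gaussians, energy, spectrum, p0=p0)

prompt = (
    "Fitted XPS components (amplitude, centre eV, width eV): "
    f"{params[:3].round(2).tolist()} and {params[3:].round(2).tolist()}. "
    "Suggest plausible chemical-state assignments and caveats."
)
print(prompt)  # this string would then be sent to the chosen LLM API
```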
17 pages, 854 KiB  
Review
Surveying Quality Management Methodologies in Wooden Furniture Production
by Ewa Skorupińska, Miloš Hitka and Maciej Sydor
Systems 2024, 12(2), 51; https://doi.org/10.3390/systems12020051 - 3 Feb 2024
Cited by 5 | Viewed by 3745
Abstract
Furniture production is a specific industrial sector with a high demand for human labor, a wide range of materials processed, and short production runs caused by high customization of end products. The difficulty of measuring the aesthetic requirements of customers is also specific to furniture. This review of academic papers identifies and explains effective quality management strategies in furniture production. The reviewed literature highlights a range of quality management methodologies, including concurrent engineering (CE), total quality management (TQM), lean manufacturing, Lean Six Sigma, and Kaizen. These strategies encompass a variety of pro-quality tools, such as 5S, statistical process control (SPC), quality function deployment (QFD), and failure mode and effects analysis (FMEA). The strengths of these quality management strategies lie in their ability to enhance efficiency, reduce waste, increase product diversity, and improve product quality. However, the weaknesses concern implementation challenges and the need for culture change within organizations. Successful quality management in furniture production requires tailoring strategies to the specific context of the furniture industry. Additionally, the importance of sustainability in the furniture industry is emphasized, which entails incorporating circular economy principles and resource-efficient practices. The most important finding from the literature analysis is that early detection and correction of poor quality yields the most beneficial outcomes for the manufacturer. Therefore, it is essential to strengthen the rigor of quality testing and analysis during the early stages of product development. Consequently, a deep understanding of consumer perspectives on required furniture quality is crucial. The review identified two research gaps: (1) the impact of unnecessary product over-quality on the efficiency of furniture production and (2) the influence of replacing CAD drawings with a model-based definition (MBD) format on quality management in furniture production.
14 pages, 5073 KiB  
Article
A Neural Network-Based Flame Structure Feature Extraction Method for the Lean Blowout Recognition
by Puti Yan, Zhen Cao, Jiangbo Peng, Chaobo Yang, Xin Yu, Penghua Qiu, Shanchun Zhang, Minghong Han, Wenbei Liu and Zuo Jiang
Aerospace 2024, 11(1), 57; https://doi.org/10.3390/aerospace11010057 - 7 Jan 2024
Viewed by 1213
Abstract
A flame’s structural features are crucial parameters required to comprehensively understand the interaction between turbulence and flames. The generation and evolution processes of these structural features have rarely been investigated in lean blowout (LBO) flame instability states. Hence, to understand the precursor features of the LBO flame, this work employed high-speed OH-PLIF measurements to acquire time-series LBO flame images and developed a novel feature extraction method based on a deep neural network to quantify the LBO features in real time. Meanwhile, we proposed a tri-map-based deep neural network segmentation method, called Fire-MatteFormer, and conducted a statistical analysis of flame surface features, primarily holes. The statistical analysis determined the relationship between the life cycle of holes (from generation to disappearance) and their area, perimeter, and total number. The trained Fire-MatteFormer model was found to be a viable method for determining flame features in the detection of incipient LBO instability conditions. Overall, the model shows significant promise in ascertaining local flame structure features.
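As a small illustration of the hole-statistics step, the snippet below labels the holes in a synthetic binary flame mask and measures their area and an approximate perimeter. The mask is fabricated for demonstration, and the segmentation network itself is not shown.

```python
# Simple sketch of the hole-statistics step: given a binary flame mask from a segmentation
# model, label the holes (background regions enclosed by flame) and measure their area and
# perimeter. The mask here is synthetic; the segmentation network itself is not shown.
import numpy as np
from scipy import ndimage

mask = np.ones((64, 64), dtype=bool)          # stand-in for a segmented flame region
mask[20:30, 20:35] = False                    # a "hole" inside the flame surface
mask[45:50, 10:14] = False

holes = ~mask
labels, n_holes = ndimage.label(holes)
for i in range(1, n_holes + 1):
    hole = labels == i
    area = hole.sum()
    # crude perimeter estimate: hole pixels that touch a non-hole pixel
    eroded = ndimage.binary_erosion(hole)
    perimeter = (hole & ~eroded).sum()
    print(f"hole {i}: area={area} px, perimeter ~ {perimeter} px")
print("total holes:", n_holes)
```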