List of accepted submissions

Practical Evaluation and Performance Analysis for Deepfake Detection using Advanced AI Models

Introduction: In the era of digital technology, deepfakes are becoming a serious cause for concern. Deepfake technology, which can generate extremely realistic fake images and videos, can be used for both creative and harmful purposes, and it is now very difficult to determine whether a given image, video, or other media item is original or fake.

Objective: The objective of this paper is to create a robust and reliable model that recognizes deepfake media using advanced artificial intelligence techniques such as machine learning and deep learning classifiers.

Materials/methods: In our research work, we used digital tools and devices such as cameras and microphones to capture real-time images and videos for monitoring. The data were collected from the Kaggle repository as well as from a real-time environment for training and testing purposes. Edge devices were used for video processing and analysis. Deep learning models such as CNN, RNN, VGG16, MTCNN, InceptionResnetV1, and facenet_pytorch were used to identify whether media are real or fake. Feature selection methods (Recursive Feature Elimination, PCA, and correlation analysis) were used to improve the effectiveness of the model.

Results: To assess the effectiveness of our model, we compare the training and testing accuracy of the algorithms, and performance metrics (accuracy, precision, recall, and F1-score) are computed on unseen environment data. Our experiments gave excellent results, with accuracies of 95% for MTCNN, 98% for InceptionResnetV1, 98% for facenet_pytorch, and 92% for CNN.
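
As an illustration of how such a pipeline can be assembled, the sketch below uses the facenet_pytorch library's MTCNN face detector together with an InceptionResnetV1 backbone carrying a two-class head. The checkpoint, file name, and label order are illustrative assumptions rather than the authors' exact configuration, and the two-class head would still need to be fine-tuned on the deepfake dataset before the predictions are meaningful.

```python
# Minimal sketch (not the authors' exact pipeline): detect a face with MTCNN,
# then score it with an InceptionResnetV1 whose 2-class head is assumed to have
# been fine-tuned on real/fake labels. Paths and label order are illustrative.
import torch
from PIL import Image
from facenet_pytorch import MTCNN, InceptionResnetV1

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

mtcnn = MTCNN(image_size=160, margin=0, device=device)        # face detector/cropper
model = InceptionResnetV1(pretrained='vggface2',              # face-pretrained backbone
                          classify=True, num_classes=2).to(device).eval()

def predict(image_path: str) -> str:
    """Return 'real' or 'fake' for a single image (hypothetical label order)."""
    face = mtcnn(Image.open(image_path).convert('RGB'))
    if face is None:                                           # no face detected
        return 'no face detected'
    with torch.no_grad():
        logits = model(face.unsqueeze(0).to(device))
    return ['real', 'fake'][logits.argmax(dim=1).item()]

print(predict('sample_frame.jpg'))                             # assumed example file
```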

Application of quantum computing algorithms in the synthesis of control systems for dynamic objects

Currently, the main focus in the automation of technological processes is on developing control systems that enhance the quality of the control process. Because the systems being controlled are often complex, multidimensional, and nonlinear, quantum computing algorithms offer an effective solution. Although there are several intelligent control methods available to improve the quality of technological processes, each has certain drawbacks. Quantum algorithms, which rely on the principles of quantum correlation and superposition, are designed to optimize control while minimizing energy and resource consumption. This article discusses the diesel fuel hydrotreating process, a critical step in oil refining. The primary goal of hydrotreating is to enhance fuel quality by removing sulfur, nitrogen, and oxygen compounds. To accurately model this process, it is essential to consider not only the external factors affecting it but also its physical characteristics. By doing so, the mathematical model becomes more precise. Based on this approach, a quantum fuzzy control system for the diesel fuel hydrotreating process was developed using quantum algorithms. These algorithms can rapidly analyze large amounts of data and make decisions. At the same time, a computer model of a fuzzy quantum control system for the process of hydrotreating diesel fuel was constructed, and a number of computational experiments were carried out. As a result, a 1.8% reduction in energy costs for the diesel fuel hydrotreating process was achieved.

Monitoring of agri-environmental variables in a coffee farm through an experimental IoT network to optimize decision-making by applying deep learning models

Industry 4.0, automation, and data processing are transforming business models across various sectors, including agriculture. This work focuses on the coffee sector in Colombia, analyzing the current situation and proposing 4.0 technologies as tools to improve processes such as production and the detection of nutritional deficiencies in crops. Trends are explored, and coffee farms in the department of Quindío, Colombia, are visited. Interviews with coffee growers are also conducted to gather information about their work and needs. Additionally, an experimental IoT network model is proposed to collect data on certain agro-environmental variables, which employs the LoRaWAN protocol to send and receive data between sensor nodes and the base station. The term “Digital Coffee Grower” is also defined as an artificial intelligence model that replicates or emulates the decision-making of an expert coffee grower. The implementation of technology in the coffee-growing area is reflected upon, where empirical processes are still evident, but without undermining the experience and knowledge of local coffee growers. Preliminary results are evaluated through an MLP (multilayer perceptron) neural network model. Despite initially having few data sets, the concept of “Digital Coffee Grower” promises to substantially improve the decision-making process in coffee plantations. Finally, the importance of continuing data collection and cleaning, as well as experimenting with artificial intelligence models to generate significant advances in this field, is emphasized.
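
For readers unfamiliar with the modelling step, the following minimal sketch shows how a small MLP could map agro-environmental readings to a grower-style decision. The feature set, the synthetic data, and the "irrigate" rule are assumptions for illustration only, not the study's dataset or labels.

```python
# Illustrative sketch only: a small MLP trained on hypothetical agro-environmental
# readings (temperature, humidity, soil moisture) to emulate a grower's decision.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
X = rng.uniform([15, 40, 0.1], [35, 95, 0.6], size=(500, 3))   # temp °C, RH %, soil moisture
y = (X[:, 2] < 0.25).astype(int)                               # toy rule: 1 = "irrigate"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```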

Missing data imputation using machine learning techniques applied to IoT air quality sensors: A case study in Amazonia

The problem of poor air quality in the Amazon is a serious issue, as air pollution in the region negatively affects public health, resulting in thousands of premature deaths and severe damage to the environment. Monitoring emissions is crucial for enforcing laws that restrict these emissions and for preventing fires and their devastating consequences. For this reason, an air quality monitoring network has been implemented in the Amazon region, currently with several sensors distributed throughout the state of Acre/Brazil. However, many sensors have significant data gaps, in some cases with more than 80% loss. This is due to power failures, internet connection problems and device defects, thus compromising the consistency and accuracy of air quality measurements. This paper investigates the use of imputation techniques applied to estimate missing data from Amazon sensors collected from January 1, 2020 to December 31, 2023. Simple imputation techniques (Mean, Median) and those based on machine learning (MICE, KNN and MissForest) were selected. In the experiments, missing data was randomly introduced into the complete dataset (from 10% to 50%), and the techniques were compared using the following evaluation metrics: Mean Square Error (MSE), Root Mean Square Error (RMSE) and coefficient of determination (R²). The results showed that advanced techniques such as KNN and MICE are superior to simpler techniques, with lower MSE and RMSE, as well as a higher R². Even for the most critical case (50% missing data), KNN achieved an MSE of 0.0013 and an R² of 0.85, and MICE presented an MSE of 0.0013 and an R² of 0.93, standing out as effective methods for data imputation.
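
A minimal sketch of this evaluation protocol, assuming scikit-learn and a synthetic stand-in for the sensor series, is shown below: values are masked at random, imputed with KNN and an iterative (MICE-style) imputer, and scored with MSE, RMSE, and R² on the masked positions only.

```python
# Sketch of the comparison described above: mask values at random, impute,
# and score against the original values. The synthetic data is a stand-in
# for the air quality series, not the actual Acre sensor dataset.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import KNNImputer, IterativeImputer
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
X_true = rng.normal(size=(1000, 4))                 # stand-in for hourly readings
mask = rng.random(X_true.shape) < 0.30              # 30% missing (within the 10-50% sweep)
X_missing = X_true.copy()
X_missing[mask] = np.nan

for name, imputer in [("KNN", KNNImputer(n_neighbors=5)),
                      ("MICE", IterativeImputer(max_iter=10, random_state=0))]:
    X_hat = imputer.fit_transform(X_missing)
    mse = mean_squared_error(X_true[mask], X_hat[mask])
    r2 = r2_score(X_true[mask], X_hat[mask])
    print(f"{name}: MSE={mse:.4f}, RMSE={np.sqrt(mse):.4f}, R²={r2:.3f}")
```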

Accurate Classification of Acute Lymphoblastic Leukemia Subtypes Using Stacked Ensemble Learning on Peripheral Blood Smear Images

In this study, we leveraged a publicly available dataset containing 3256 peripheral blood smear (PBS) images, prepared in the bone marrow laboratory of Taleqani Hospital in Tehran, Iran. This dataset consists of blood samples from 89 patients suspected of Acute Lymphoblastic Leukemia (ALL). The images were captured using a Zeiss camera at 100x magnification and stored as JPG files. The dataset is divided into two primary classes: benign hematogones and malignant lymphoblasts. The malignant lymphoblasts are further categorized into three subtypes: Early Pre-B, Pre-B, and Pro-B ALL. The definitive classification of these cell types and subtypes was performed by a specialist using flow cytometry tools.

To classify these images into four distinct categories, we employed a stacked ensemble learning approach. Our model stack included three base models, DenseNet121, VGG16, and VGG19, with a K-Nearest Neighbors (KNN) classifier acting as the meta-model. This ensemble method capitalizes on the strengths of each individual model to improve overall classification performance. Our approach achieved a high accuracy of 94%, demonstrating its robustness and reliability in distinguishing between the various cell types and subtypes within the dataset.
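
One way to realize such a stack, sketched below under the assumption of ImageNet-pretrained Keras backbones used as frozen feature extractors, is to concatenate the pooled features of the three base models and fit a KNN meta-classifier on top. Image loading, input size, and label names are illustrative and not the authors' exact training setup.

```python
# Hedged sketch of the stacking idea (not the authors' exact training code):
# extract features from each pretrained base model, concatenate them, and fit
# a KNN meta-classifier on the combined feature vectors.
import numpy as np
from tensorflow.keras.applications import DenseNet121, VGG16, VGG19
from tensorflow.keras.applications.densenet import preprocess_input as dn_prep
from tensorflow.keras.applications.vgg16 import preprocess_input as v16_prep
from tensorflow.keras.applications.vgg19 import preprocess_input as v19_prep
from sklearn.neighbors import KNeighborsClassifier

bases = [
    (DenseNet121(weights="imagenet", include_top=False, pooling="avg"), dn_prep),
    (VGG16(weights="imagenet", include_top=False, pooling="avg"), v16_prep),
    (VGG19(weights="imagenet", include_top=False, pooling="avg"), v19_prep),
]

def stacked_features(images):
    """images: float array of shape (n, 224, 224, 3) with pixel values in [0, 255]."""
    feats = [model.predict(prep(images.copy()), verbose=0) for model, prep in bases]
    return np.concatenate(feats, axis=1)        # one combined feature vector per image

# X_train/X_test: PBS image batches; y_train: labels such as
# {Benign, Early Pre-B, Pre-B, Pro-B} (names assumed from the abstract).
# meta = KNeighborsClassifier(n_neighbors=5).fit(stacked_features(X_train), y_train)
# preds = meta.predict(stacked_features(X_test))
```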

The significant accuracy attained underscores the potential of advanced machine learning techniques in medical image analysis, particularly in the context of hematological malignancies. Our findings suggest that such methodologies could greatly enhance diagnostic precision and efficiency, leading to better patient outcomes. This study illustrates the promising application of deep learning models in the automated classification of ALL subtypes, paving the way for future advancements in the field.

Control of the spread of infectious diseases in cows on farms

Controlling infectious diseases in animals on farming holdings plays an important role in ensuring livestock health and productivity. Infectious diseases can spread rapidly among animals, leading to severe consequences such as reduced productivity, increased mortality, and substantial economic losses. Therefore, implementing effective disease control measures is crucial for safeguarding animal welfare and farmers' livelihoods. When animals are kept outdoors in large areas, identifying diseases and their sources of contamination can be particularly challenging. The vastness of these environments makes it difficult to monitor every animal closely and detect early signs of illness. Additionally, the mingling of animals from different areas can facilitate the spread of diseases, making it harder to pinpoint and control outbreaks. This paper presents an architecture designed to mitigate these challenges. In essence, the solution uses a set of IoT sensors that detect when healthy animals come into proximity with diseased animals, so that transmission can be stopped. The IoT sensors will provide farmers with real-time data, enabling them to swiftly isolate infected animals and implement targeted interventions. By improving disease detection and monitoring, this technology will help farmers maintain healthier herds and reduce the risk of widespread outbreaks, thereby significantly enhancing both animal welfare and agricultural productivity.
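
A minimal sketch of the proximity rule this architecture relies on is given below; the collar data format, the planar coordinates, and the 5 m threshold are assumptions for illustration, not the paper's specification.

```python
# Illustrative sketch, assuming each collar reports an (id, position, status) record:
# flag any healthy animal that comes within a threshold distance of a diseased one.
from math import dist

PROXIMITY_THRESHOLD_M = 5.0   # assumed threshold, not from the paper

def proximity_alerts(readings):
    """readings: list of dicts like {'id': 'cow-12', 'pos': (x, y), 'infected': False}."""
    infected = [r for r in readings if r['infected']]
    healthy = [r for r in readings if not r['infected']]
    alerts = []
    for h in healthy:
        for i in infected:
            if dist(h['pos'], i['pos']) <= PROXIMITY_THRESHOLD_M:
                alerts.append((h['id'], i['id']))
    return alerts

sample = [{'id': 'cow-1', 'pos': (0, 0), 'infected': True},
          {'id': 'cow-2', 'pos': (3, 4), 'infected': False},    # 5 m away -> alert
          {'id': 'cow-3', 'pos': (40, 40), 'infected': False}]
print(proximity_alerts(sample))   # [('cow-2', 'cow-1')]
```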

Self-diagnosis of Applications – Architectural Solution and Ontology

Software package management tools have become common and are available for practically all SDKs. They allow for the definition of dependencies between packages, ensuring consistent use of their respective versions, especially during installation, updating, configuration, and removal. These tools are primarily used in the software development phase by programmers. While the utility of software package managers and the added value they provide to programmers during the development stage are unquestionable, there are still many gaps concerning the remaining phases of the software lifecycle—commonly referred to as the maintenance stage. The need for maintenance arises from the outdatedness of packages, resulting from incompatibilities with other packages, the introduction of improvements and optimizations, the correction of errors, the elimination of vulnerabilities, and so on. Although it is usually possible to identify packages that are deprecated or obsolete, updating is still a manual process initiated by the programmer. In this paper, the authors propose a solution, still in its prototype stage, aimed at equipping applications with the means to report their status concerning update needs, particularly for critical updates. The solution consists of a background service that processes technical reports published by various sources, an ontology used to standardize information and concepts from responsible disclosure reports, a REST service used by applications to obtain a self-diagnosis of their condition, and a REST client that is automatically installed in the application.
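
To make the client side concrete, the sketch below shows how an application might request its self-diagnosis from such a REST service; the endpoint URL, payload fields, and response shape are hypothetical, since the actual contract is defined by the authors' prototype and ontology.

```python
# Hypothetical client-side sketch: report the installed package versions and
# receive a self-diagnosis. Endpoint and payload format are assumptions.
import requests

DIAGNOSIS_ENDPOINT = "https://example.org/self-diagnosis"   # placeholder URL

def request_self_diagnosis(app_name, installed_packages):
    """installed_packages: dict mapping package name -> installed version."""
    payload = {"application": app_name, "packages": installed_packages}
    response = requests.post(DIAGNOSIS_ENDPOINT, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()   # e.g. packages flagged as needing critical updates

# report = request_self_diagnosis("billing-service",
#                                 {"requests": "2.31.0", "flask": "2.3.2"})
```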

Parametric Middleware Routing and Management Services Platform Model for Smart Cities

Smart cities are designed to provide services, including software services, to citizens. There are many services that these cities provide, and the number of active users and connected devices may vary. Traditional approaches to software design and development do not take into account both the high-level and low-level management of complex services, which include IoT devices, real-time applications, and AI-related processing frameworks. Some of the most important components of software system management are internal task routing and services management. Smart city systems are developed and implemented for real-world cities using real-world data. For example, Kyiv city is the largest city in Ukraine in terms of population and total area. In this study, data on Kyiv city are used as a foundation of the proposed smart city system model. We provide a generalized software service model that is based on control parameters, benchmarks, and weights. These services are provided in an algorithmic and software model version that allows for their implementation in already existing services or when developing new smart city solutions. The smart city platform is based on the integration of various components; in turn, each individual component consists of its own set of software and hardware services and components. The tasks of sub-services and module management are as crucial as they are complex. The system manager module is a middleware/core-layer software system. Event handling, routing, and service/process activation are determined by the appropriate mathematical calculation mechanism. This role can be filled by special routing and management services, each being platform- and deployment-agnostic. While routing services are designed following standard protocols, APIs, and middleware-layer services, sub-service management systems are low-layer systems that prioritize data/process processing and computation.
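
As a toy illustration of a weight-and-benchmark-based routing rule of the kind described above, the sketch below scores eligible services by a weighted sum of normalized benchmark values; the parameter names, weights, and scoring formula are assumptions, not the paper's actual mechanism.

```python
# Hedged sketch of one possible weighted-score routing rule consistent with the
# description above (control parameters, benchmarks, weights). All names and
# numbers are illustrative assumptions.
def route_task(task, services, weights):
    """Pick the service with the best weighted score over normalized benchmark values."""
    def score(svc):
        return sum(weights[p] * svc['benchmarks'][p] for p in weights)
    eligible = [s for s in services if task['type'] in s['supported_types']]
    return max(eligible, key=score) if eligible else None

services = [
    {'name': 'edge-ai', 'supported_types': {'inference'},
     'benchmarks': {'latency': 0.9, 'throughput': 0.6, 'availability': 0.95}},
    {'name': 'cloud-ai', 'supported_types': {'inference', 'training'},
     'benchmarks': {'latency': 0.5, 'throughput': 0.9, 'availability': 0.99}},
]
weights = {'latency': 0.5, 'throughput': 0.3, 'availability': 0.2}
print(route_task({'type': 'inference'}, services, weights)['name'])
```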

Conflict management of users' comfort preferences in a smart environment—a case study

Managing user comfort preferences in a smart environment presents unique challenges due to conflicting requirements and expectations. This paper explores innovative strategies to harmonize diverse user preferences within shared smart spaces. As smart environments become increasingly prevalent in homes, offices, and public buildings, the need to accommodate individual comfort settings for temperature, lighting, and noise while minimizing conflicts among users becomes critical.

This study investigates a specific case within a multi-occupant smart building, analyzing how conflicts in comfort preferences are identified, addressed, and resolved. By implementing a dynamic preference management system, which utilizes machine learning algorithms and real-time data analytics, the proposed solution aims to balance and optimize individual comfort levels. The system considers historical data, context-aware adjustments, and predictive modeling to preemptively address potential conflicts.
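
As a simple illustration of preference reconciliation (only one ingredient of the dynamic system described above), the sketch below combines occupants' temperature preferences using a weighted average; the occupants, weights, and setpoints are hypothetical.

```python
# Illustrative sketch only: reconcile conflicting temperature preferences in a
# shared room by a weighted average (weights might reflect occupancy time or
# priority). Numbers and weighting scheme are assumptions.
def reconcile_setpoint(preferences):
    """preferences: list of (desired_temp_c, weight) tuples for current occupants."""
    total_weight = sum(w for _, w in preferences)
    return sum(t * w for t, w in preferences) / total_weight

occupants = [(21.0, 0.5), (24.0, 0.3), (22.5, 0.2)]   # hypothetical occupants
print(f"shared setpoint: {reconcile_setpoint(occupants):.1f} °C")
```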

The findings demonstrate that integrating advanced computational techniques with user feedback mechanisms significantly enhances the overall comfort experience. The research highlights the importance of adaptive systems that can learn and evolve with user preferences, ultimately leading to more harmonious coexistence in shared smart environments.

This paper contributes to the field of smart environment management by providing a comprehensive framework for conflict resolution and offering practical insights into the deployment of user-centric comfort management systems. The case study underscores the potential of technology to create more responsive and personalized smart environments that cater to the diverse needs of their occupants.

A low-cost solution to improve video projector management and connectivity using virtualization

The video projector is an essential tool for various activities such as teaching, organizational tasks, and conferences. This article introduces a low-cost and effective architecture designed to enhance video projection resources, including older models, through the incorporation of virtualization for improved management and sharing capabilities. The proposed solution addresses prevalent connectivity issues caused by differing connector types, transmission protocols, and configuration incompatibilities (such as frequency and resolution) between video projectors and computers. These incompatibilities frequently lead to delays and challenges in effectively utilizing video projection resources.

The innovative architecture utilizes a Raspberry Pi combined with three virtualized applications to create a user-friendly system. This system not only facilitates the efficient management of video projection resources but also allows for seamless sharing and connectivity across various devices. By leveraging virtualization, the architecture ensures compatibility and adaptability, reducing the downtime typically associated with setup and configuration issues.

The implementation of this solution is aimed at enhancing the functionality and accessibility of video projectors, enabling new paradigms in working and teaching environments. The approach provides a low-cost and practical method to upgrade existing video projection infrastructure, thereby extending the lifespan and utility of older projectors while introducing modern capabilities and improving overall user experience.
