October 19 ~ 20, 2024, Sydney, Australia
Tadashi Ogino, Department of Information Science, Meisei University, Tokyo, Japan
SHONAN, an advanced system harmonizing human capabilities and information technology (IT), was introduced in light of the COVID-19 pandemic, which prompted the shift of office workers and school students to online platforms. A specific application of SHONAN, the narrow area communication system (NAMI), was previously implemented to share text-based information exclusively. NAMI uses Bluetooth Low Energy (BLE) to exchange messages, which makes exchanging large data difficult; we have since confirmed that large data can be exchanged using Wi-Fi. Thus far, however, all experimental systems have been designed on paper in advance, which is insufficient for real, dynamic systems. In this paper, we consider a method that allows NAMI functions to remain usable even when devices and edges move and the network configuration changes dynamically, and we implement and confirm these functions in a prototype.
IoT, Sustainable System, Multimedia Data, Autonomous Configuration, Ad Hoc Network.
Maryam Solaiman1, Theodore Mui1, Qi Wang2, Phil Mui3, 1Aspiring Scholars Directed Research Program, Fremont, USA, 2University of Texas at Austin, Austin, Texas, USA, 3Salesforce, San Francisco, USA
We model unlearning by simulating a Q-agent (using the reinforcement learning Q-learning algorithm), representing a real-world learner, playing the game of Nim against different adversarial agents to learn the optimal Nim strategy. When the Q-agent plays against sub-optimal agents, its percentage of optimal moves is decreased, analogous to a person forgetting (“unlearning”) what they have learned previously. To mitigate the effect of this “unlearning”, we experimented with modulating the Q-learning so that minimal learning occurs with untrusted opponents. This trust-based modulation is modeled by observing opponent moves that are different from those that a Q-agent has learned. This model parallels human trust which tends to increase with those whom one agrees with. With this modulated learning, we observe that a Q-agent with a baseline optimal strategy is able to robustly retain previously learned strategy. We then ran a three-phase simulation where the Q-agent played against optimal agents in the first phase, sub-optimal agents in the second “unlearning” phase, and optimal or random agents in the third phase. We found that even after unlearning, the Q-agent was quickly able to relearn most of its knowledge about the optimal strategy for Nim.
Reinforcement learning, Q-learning, Nim Game, Unlearning, Learned Memory, Misinformation.
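The trust-modulated Q-update described in the abstract above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the three-move action set (taking 1-3 objects in Nim), and the agreement-based trust heuristic are assumptions.

```python
# Illustrative sketch of a trust-modulated Q-update (names and the
# trust heuristic are assumptions, not the authors' exact formulation).

def q_update(Q, state, action, reward, next_state, alpha, gamma, trust):
    """Standard tabular Q-learning update with the learning rate scaled by trust."""
    best_next = max(Q.get((next_state, a), 0.0) for a in (1, 2, 3))
    td_target = reward + gamma * best_next
    td_error = td_target - Q.get((state, action), 0.0)
    # Low trust in the opponent shrinks the effective learning rate,
    # so little "unlearning" occurs against untrusted opponents.
    Q[(state, action)] = Q.get((state, action), 0.0) + alpha * trust * td_error
    return Q

def update_trust(trust, opponent_move, learned_move, step=0.1):
    """Trust rises when the opponent's move agrees with the agent's learned policy."""
    delta = step if opponent_move == learned_move else -step
    return min(1.0, max(0.0, trust + delta))

Q = {}
Q = q_update(Q, state=7, action=3, reward=1.0, next_state=4,
             alpha=0.5, gamma=0.9, trust=1.0)
print(Q[(7, 3)])  # full-trust update applies the whole TD step: 0.5
```

Scaling the learning rate by trust means updates earned against untrusted (disagreeing) opponents barely move the Q-table, which is what protects the previously learned strategy from being overwritten.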
Asmaa EL Harat, Jihad Kilani, Hicham Toumi, Youssef Baddi, STIC Lab, FSJ, UCD, EL Jadida, 24000, Morocco.
This survey delves into the complex realm of Internet of Things (IoT) security, highlighting the urgent need for effective cybersecurity measures as IoT devices become increasingly common. It explores a wide array of cyber threats targeting IoT devices and focuses on mitigating these attacks through the combined use of deep learning and machine learning algorithms, as well as edge and cloud computing paradigms. The survey starts with an overview of the IoT landscape and the various types of attacks that IoT devices face. It then reviews key machine learning and deep learning algorithms employed in IoT cybersecurity, providing a detailed comparison to assist in selecting the most suitable algorithms. Finally, the survey provides valuable insights for cybersecurity professionals and researchers aiming to enhance security in the intricate world of IoT.
Internet of Things (IoT), cybersecurity, machine learning, deep learning.
Maryam Solaiman1 and GM Solaiman2, 1Aspiring Scholars Directed Research Program, Fremont, USA, 2Cisco Systems, Inc., San Jose, USA
Since their conception in the early 1970s, microprocessors have been put to a multitude of uses through various different designs. While there are many academic papers on the implementation of a microprocessor, only a few are devoted to verification. Design and verification go hand in hand at every stage of a digital circuit implementation. In this paper, we propose a RISC pipelined processor with hazard detection, automatic hazard resolution, and automatic stall insertion, taking a modular approach to the design. We develop a constrained random verification environment to fully verify the design with coverage-based verification. Finally, we implement the processor in real hardware to demonstrate operational ability. Our approach can easily be scaled up to the design, verification, and implementation of a large-scale system on chip.
MIPS32, microprocessor, hazard detection, Verilog HDL, verification, coverage, computer architecture.
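The load-use hazard detection and automatic stall insertion named in the abstract above can be illustrated with a small simulation. The instruction encoding here is a simplified assumption for exposition, not the paper's Verilog design.

```python
# Sketch of load-use hazard detection with automatic stall insertion,
# in the spirit of a classic 5-stage MIPS pipeline. The dict-based
# instruction encoding is an illustrative assumption.

def needs_stall(id_ex, if_id):
    """A load in EX whose destination register feeds the instruction in ID
    forces a one-cycle stall (bubble)."""
    return (id_ex["op"] == "lw" and
            id_ex["rt"] in (if_id.get("rs"), if_id.get("rt")))

def schedule(instrs):
    """Return the instruction stream with nop bubbles inserted for hazards."""
    out = []
    for prev, cur in zip(instrs, instrs[1:]):
        out.append(prev)
        if needs_stall(prev, cur):
            out.append({"op": "nop"})  # stall bubble
    out.append(instrs[-1])
    return out

prog = [
    {"op": "lw",  "rt": "r2", "rs": "r1"},              # r2 <- mem[r1]
    {"op": "add", "rd": "r3", "rs": "r2", "rt": "r4"},  # uses r2 immediately
]
print([i["op"] for i in schedule(prog)])  # ['lw', 'nop', 'add']
```

In hardware this comparison happens combinationally in the hazard unit each cycle; the sketch just makes the decision rule explicit.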
Sonjoy Ranjon Das, Department of Computer Engineering, Northumbria University, London, UK
Lung cancer (LC) presents a critical global health challenge, requiring rapid and accurate diagnosis for effective treatment. Traditional diagnostic methods often fall short in precision, leading to delays. This study evaluates the performance of six transfer learning models (MobileNetV3, DenseNet201, EfficientNetB7, VGG16, VGG19, and InceptionV3) in predicting lung cancer using a dataset of 15,000 histopathology images. The models classify lung cancer types, including adenocarcinoma, benign, and squamous cell carcinoma. MobileNetV3 emerges as the most efficient, achieving 99.70% accuracy, outperforming InceptionV3 (78%), DenseNet201 (93%), VGG16 (99%), VGG19 (98%), and EfficientNetB7 (99.50%). Evaluation metrics such as accuracy, precision, recall, and F1-score indicate that MobileNetV3 and EfficientNetB7 offer superior performance. The study suggests these two models as the best options for lung cancer classification.
Deep Learning, Lung Cancer Prediction, MobileNetV3, VGG16, VGG19, InceptionV3, DenseNet201, EfficientNetB7, SoftMax layer, CT images.
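The evaluation metrics named in the abstract above (accuracy, per-class precision, recall, and F1-score) can be computed as sketched below. The toy labels are illustrative only, not the paper's 15,000-image dataset.

```python
# Per-class precision/recall/F1 plus overall accuracy for the three
# lung-tissue classes; labels below are an illustrative assumption.

CLASSES = ["adenocarcinoma", "benign", "squamous"]

def metrics(y_true, y_pred):
    report = {}
    for c in CLASSES:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        report[c] = {"precision": prec, "recall": rec, "f1": f1}
    report["accuracy"] = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return report

y_true = ["benign", "benign", "adenocarcinoma", "squamous"]
y_pred = ["benign", "adenocarcinoma", "adenocarcinoma", "squamous"]
r = metrics(y_true, y_pred)
print(round(r["accuracy"], 2))  # 0.75
```

In practice these values come straight from the confusion matrix; libraries such as scikit-learn compute the same quantities.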
Aeshna Kapoor, Lead Data Scientist, BNY Mellon, New York, USA
The rapid evolution of data-driven technologies has led to the proliferation of big data systems capable of managing and analyzing vast amounts of data. However, many organizations continue to rely on legacy systems that are deeply entrenched in their operations. The challenge lies in integrating these legacy systems with new, AI-driven platforms to create a cohesive, hybrid infrastructure that leverages the strengths of both. This paper presents a comprehensive approach to designing and implementing a hybrid big data infrastructure that combines legacy systems with advanced AI technologies. We explore the challenges, architectural considerations, and the potential benefits of such an integration, aiming to provide a roadmap for organizations seeking to modernize their data infrastructure without completely abandoning their existing investments.
Big Data, Hybrid Infrastructure, Legacy Systems, AI Integration, Data Platforms.
Swapna Krishnakumar Radha, Andrey Kuehlkamp, and Jarek Nabrzyski, Center for Research Computing, University of Notre Dame, Notre Dame, Indiana 46556, USA
Attestation of documents like legal papers, professional qualifications, medical records, and commercial documents is crucial in global transactions, ensuring their authenticity, integrity, and trustworthiness. Companies expanding operations internationally need to submit attested financial statements and incorporation documents to foreign governments or business partners to prove their businesses and operations’ authenticity, legal validity, and regulatory compliance. Attestation also plays a critical role in education, overseas employment, and authentication of legal documents such as testaments and medical records. The traditional attestation process is plagued by several challenges, including time-consuming procedures, the circulation of counterfeit documents, and concerns over data privacy in the attested records. The COVID-19 pandemic brought into light another challenge: ensuring physical presence for attestation, which caused a significant delay in the attestation process. Traditional methods also lack real-time tracking capabilities for attesting entities and requesters. This paper aims to propose a new strategy using decentralized technologies such as blockchain and self-sovereign identity to overcome the identified hurdles and provide an efficient, secure, and user-friendly attestation ecosystem.
Attestation, Blockchain technology, Self-sovereign Identity technology.
Olalekan M. Durojaiye, Ramanjit K. Sahi, Department of Mathematics & Statistics, Austin Peay State University, TN, USA
The loan data simulated with Monte Carlo approach and analyzed in the research work provides valuable insights into the borrowers’ financial positions and loan performance. By calculating the debt-to-income ratio (DTI), we identified 122 (50.8%) loans that were at high risk of default. We also used risk-based pricing (RBP) to assign higher interest rates to riskier loans, helping to mitigate the risk of default. The data analysis showed that a higher DTI is associated with a higher risk of default, and a higher RBP is associated with a higher interest rate. Therefore, it is essential to use these metrics when assessing loan applications to ensure a healthy loan portfolio. This analysis can be used to inform loan officers, risk analysts, and other stakeholders involved in the lending process.
Loan Simulation, Interest rate, Risk Mitigation, Debt-To-Income Ratio, Risk-Based Pricing.
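The two screening metrics from the abstract above, the debt-to-income ratio (DTI) and a risk-based pricing (RBP) rate add-on, can be sketched as follows. The thresholds and rate schedule are illustrative assumptions, not the study's calibrated values.

```python
# Sketch of DTI screening and risk-based pricing; the 36%/43% cutoffs
# and the rate add-ons are illustrative assumptions.

def debt_to_income(monthly_debt, monthly_income):
    """DTI = monthly debt obligations divided by gross monthly income."""
    return monthly_debt / monthly_income

def risk_based_rate(dti, base_rate=0.06):
    """Riskier borrowers (higher DTI) are assigned a higher interest rate."""
    if dti < 0.36:
        return base_rate
    if dti < 0.43:
        return base_rate + 0.02
    return base_rate + 0.05  # high default risk

dti = debt_to_income(monthly_debt=2150, monthly_income=5000)
print(round(dti, 2), round(risk_based_rate(dti), 2))  # 0.43 0.11
```

This mirrors the paper's finding in miniature: a higher DTI flags a higher default risk, and RBP prices that risk into the interest rate.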
Houssam Hamici, Hani Ahmad and Hamido Hourani, Department of Electrical Engineering, PSUT, Amman, Jordan
This work presents a brain tumor classification study utilizing Convolutional Neural Network and Vision Transformer methods. The classification is based on a brain tumor dataset comprising 7023 MRI images with four classes: no tumor, pituitary, glioma, and meningioma. The models used for the classification are ResNet101V2, VGG19, MobileNetV2, InceptionV3, Xception, and ViT-B16. Two types of experiments were conducted, with and without pre-trained weights, to classify the dataset with the intended models. Because the dataset is relatively small, the CNNs perform better than the ViT, which relies on large-scale pre-training to perform well. The best results were obtained by the InceptionV3 and Xception architectures, both achieving an accuracy of around 98.6%.
Brain Tumor (BT), Classification, Convolutional Neural Networks (CNN), Vision Transformers (ViT).
Arijit Das, Tanmoy Nandi, Diganta Saha, Department of Computer Science and Engineering, Jadavpur University, Kolkata, West Bengal, India
This paper presents a novel approach to predicting financial market trends by integrating deep learning models with natural language processing (NLP) techniques applied to Twitter data from influential leaders. Unlike traditional models reliant solely on historical financial data, our method leverages real-time social media information to enhance predictive accuracy. Key contributions include the development of a versatile algorithm capable of generating models for any Twitter handle and financial component, as well as predicting the temporal window during which tweets affect stock prices. We also explore the combined effects of multiple Twitter handles on trend prediction. Through a comprehensive survey, we identify research gaps, collect necessary data, and propose a state-of-the-art algorithm with a complete implementation environment. Our results demonstrate significant improvements facilitated by NLP analysis of Twitter data on financial market components. We focus on the Indian and USA financial markets, with potential for extension to other markets. In conclusion, we discuss the socio-economic implications and utility of our approach in informing decision-making processes within financial markets.
Deep Learning, Financial Market Prediction, Twitter Analysis.
Xiangning Lu1, Yu Sun2, 1BASIS International School Nanjing, No. 18 Lingshan North Road, Qixia District, Nanjing, Jiangsu, China, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768
This paper presents the design and evaluation of a mobile application aimed at optimizing food inventory management and reducing food waste [1][4]. The application integrates several key features, including a food classification engine powered by the Gemini Image Processing Engine, a waste index calculation, and personalized recipe suggestions [2]. Experiments were conducted to assess the accuracy of the image processing engine under various conditions and the reliability of the waste index calculation based on user input data. The results showed that while the application performs well under optimal conditions, its accuracy and effectiveness can be affected by poor image quality and incomplete data. The paper discusses these findings and proposes improvements to address the identified limitations. Ultimately, the application provides a comprehensive and user-friendly tool for managing food resources, with the potential to significantly reduce waste and promote sustainability in households [3].
Food Inventory Management, Waste Reduction, AI Image Processing, Gemini Engine, Mobile Application.
Bryan Chuang1, Yu Sun2, 1Taipei American School, 800 Zhongshan North Road, Taipei, Taiwan, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768
This paper presents the development of a mobile application that assists users in making healthier food choices through AI-driven food identification and personalized recommendations [1]. The app uses a machine learning model to accurately recognize foods from photos and leverages the ChatGPT API to provide concise, relevant dietary advice [2]. Key challenges included enhancing the accuracy of the food detection model and improving the relevance and clarity of AI-generated suggestions. These challenges were addressed by training the model with an extensive dataset and refining the AI's prompts for better user engagement. Experiments showed high accuracy in food recognition and consistent quality in dietary recommendations [3]. The app's intuitive design and real-time insights make it a practical tool for users aiming to improve their eating habits. With future updates, such as adding multilingual support, the app aims to increase its accessibility and effectiveness, making it a valuable resource for promoting healthier lifestyles.
AI, Track Calories, Food, Machine Learning.
Julio Toribio, Frank Huarcaya and Alejandrina Huarcaya, Universidad Peruana de Ciencias Aplicadas, Lima, Perú
A 2023 TomTom study identifies Lima as the most congested city in Latin America and the fifth worldwide, causing significant impacts such as increased stress, long travel times, and lower productivity. Various methods have been analysed and tested with promising results, but not all can meet the set of requirements for developing cities like Lima. To address this, the proposed system uses the Max Pressure (MP) algorithm for traffic signal control and a Bidirectional Long Short-Term Memory (Bi-LSTM) neural network for traffic prediction. The MP algorithm dynamically adjusts signal timing to optimize traffic flow, while the Bi-LSTM predicts future traffic patterns. Applied to a simulation of the Javier Prado Avenue, the system shows promising results. Both the traffic control algorithm and prediction model demonstrate effectiveness, and the developed web app presents a practical tool for easing traffic in Lima’s busiest areas.
Traffic prediction, Traffic Light Control, Intelligent Transportation Systems, Bi-LSTM, Max Pressure.
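The Max Pressure control step described in the abstract above can be sketched as follows: each signal phase is scored by the upstream-minus-downstream queue lengths of the movements it serves, and the highest-pressure phase receives the green. The intersection layout below is an illustrative assumption.

```python
# Sketch of one Max Pressure decision step; the phase/queue layout
# is an illustrative assumption, not the Javier Prado simulation.

def phase_pressure(phase, queues):
    """Pressure = sum of (upstream queue - downstream queue) per movement."""
    return sum(queues[up] - queues[down] for up, down in phase["movements"])

def max_pressure_control(phases, queues):
    """Give the green to the phase with the highest pressure."""
    return max(phases, key=lambda ph: phase_pressure(ph, queues))["name"]

queues = {"N_in": 12, "S_in": 9, "E_in": 4, "W_in": 3,
          "N_out": 2, "S_out": 1, "E_out": 0, "W_out": 5}
phases = [
    {"name": "NS_green", "movements": [("N_in", "S_out"), ("S_in", "N_out")]},
    {"name": "EW_green", "movements": [("E_in", "W_out"), ("W_in", "E_out")]},
]
print(max_pressure_control(phases, queues))  # NS_green
```

In the proposed system, the Bi-LSTM's traffic forecasts would feed predicted queue lengths into this same decision rule rather than only the currently observed ones.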
Jesús María Velásquez-Bermúdez, Founder and Chief Scientific Officer, HYPOTHALAMUS Artificial Intelligence Inc., USA
The document explores the advanced integration of Artificial Intelligence within Enterprise Optimization Systems. The core innovation presented is the transition from traditional Decision Support Systems (DSS) to Enterprise-Wide Optimization Systems (EWOS), which are designed to optimize organizational decision-making processes holistically and autonomously. The Enterprise Artificial Brain concept, inspired by the human brain's structure, incorporates artificial components like the neocortex, hypothalamus, and hippocampus to manage, produce, and store knowledge. This integration allows for Autonomous Real-Time Distributed Optimization, significantly enhancing the efficiency and effectiveness of business operations. The document further discusses the application of these principles in various industrial contexts, particularly in the oil and gas sector. HAI's research underscores the evolution from mental planning models to sophisticated mathematical optimization models, facilitating integrated business planning/scheduling and decision-making. By employing technologies such as OPTEX (Optimization Expert System), a generative AI system, HAI demonstrates how artificial brains can autonomously manage complex industrial processes, thereby reducing development time and increasing decision-making accuracy. This approach aims to emulate human cognitive functions through artificial mathematical systems, providing organizations with robust tools for navigating dynamic and uncertain environments.
Bringing these components together enables HAI to start thinking about a revolutionary idea: the Artificial Brain.
Ravi Kumar and Ayushi Kumari, Department of Computer Science and Engineering, Arya College of Engineering and Research Center, Jaipur, Rajasthan India
The integration of Natural Language Processing (NLP) in chatbot technology has revolutionized the healthcare sector, offering innovative solutions for patient care and management. This paper explores the diverse applications of advanced NLP-based chatbots in smart healthcare. These applications include providing medical information, assisting in disease diagnosis, supporting mental health, managing chronic diseases, and enhancing patient engagement. We discuss the underlying technologies, benefits, challenges, and future directions for NLP-based healthcare chatbots.
Natural Language Processing, Smart Healthcare, BERT, Chatbots, Mental Health, Sentiment Analysis.
Zeyu Zhang1, Yu Sun2, 1Santa Margarita Catholic High School, 22062 Antonio Pkwy, Rancho Santa Margarita, CA 92688, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768
This research paper presents the development and evaluation of a personalized mental health support application that leverages AI-driven features for real-time user interaction [1]. The application includes components for facial expression classification and real-time image generation, both of which were subjected to rigorous testing through targeted experiments [2]. The first experiment evaluated the accuracy of the emotion recognition system, revealing strong performance with distinct emotions but highlighting challenges with subtle expressions. The second experiment tested the responsiveness of the image generation component, showing effective performance with simple inputs but identifying delays with more complex tasks. While the application demonstrates significant potential, especially in its ability to provide tailored emotional feedback and support, further refinement is needed to enhance accuracy, performance, and data security. The findings suggest that with continued development, this application could become a valuable tool in the field of mental health and emotional well-being.
Facial Expression Classification, AI-Driven Mental Health, Real-Time Image Generation, Emotion Recognition, Emotional Well-Being.
Jiasheng Wang1, Yu Sun2, 1Santa Margarita Catholic High School, 22062 Antonio Pkwy, Rancho Santa Margarita, CA 92688, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768
Lumigen is an innovative air quality monitoring system designed to enhance indoor environmental awareness using real-time data visualization [1]. The system combines an air quality sensor connected to a Raspberry Pi with a set of Philips Hue lights that change color based on detected air quality levels [2]. This setup provides immediate visual feedback, alerting users to air quality changes without requiring them to check a separate device. Users can interact with Lumigen through a mobile app that facilitates real-time monitoring, historical data analysis, and customization of air quality alerts and light settings [3]. Experimental evaluations demonstrate that Lumigen effectively detects and responds to variations in air quality, with a rapid response time and high accuracy. Unlike other solutions that may require separate displays or offer limited data insights, Lumigen seamlessly integrates into everyday life, providing both visual and data-driven cues about air quality. Future developments could enhance its portability, integrate automated responses with air purifiers, and offer advanced data analytics to further empower users to manage their indoor environments proactively [4].
Indoor Air Quality, Real-Time Data Visualization, Environmental Sensing, Smart Home Automation.
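Lumigen's core mapping from a sensor reading to a light color can be sketched as below; the thresholds and color names are illustrative assumptions, not the authors' calibration.

```python
# Sketch of an air-quality-to-Hue-color mapping for at-a-glance
# feedback; the AQI breakpoints here are illustrative assumptions.

def aqi_to_color(aqi):
    """Map an air-quality index to a light color."""
    if aqi <= 50:
        return "green"    # good
    if aqi <= 100:
        return "yellow"   # moderate
    if aqi <= 150:
        return "orange"   # unhealthy for sensitive groups
    return "red"          # unhealthy

for reading in (32, 85, 160):
    print(reading, aqi_to_color(reading))
```

On the real device, the returned color would be translated into a Philips Hue light command by the Raspberry Pi each time the sensor reports a new reading.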
David Z. Zhang1, Rodrigo Onate2, 1University High School, 4771 Campus Drive, Irvine, CA 92612, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768
The BrainBow platform is designed to raise awareness about neurodiversity by analyzing real-time sentiment data from news articles and social media [1]. The system collects data from various sources and applies sentiment analysis to create an inclusivity index, helping families, educators, and communities understand public sentiment on neurodiversity. However, experiments show that while the platform performs well in explicit sentiment analysis, it struggles with nuanced topics such as gender and disability. To improve its accuracy, the platform could benefit from advanced NLP models and more comprehensive datasets [2]. Despite these challenges, BrainBow is a valuable tool for promoting inclusivity and understanding neurodiverse issues.
Neurodiversity, Sentiment Analysis, Inclusivity Index, Real-Time Data, Natural Language Processing (NLP).
Ndidi Anyakora (Ph.D. Candidate, Member, IEEE) and Cajetan M. Akujuobi, The Centre of Excellence for Communication Systems Technology Research (CECSTR), Roy G. Perry College of Engineering, Prairie View A&M University
With the proliferation of 5G networks, evaluating security vulnerabilities is crucial. This paper presents an implemented 5G standalone testbed operating in the mmWave frequency range for research and analysis. Over-the-air testing validates expected throughputs of up to 5 Gbps downlink and 1 Gbps uplink, low latency, and robust connectivity. Detailed examination of captured network traffic provides insights into protocol distribution and signalling flows. The comparative evaluation shows only 0.45% packet loss on the testbed versus 2.7% in prior simulations, demonstrating improved reliability. The results highlight the efficacy of the testbed for security assessments, performance benchmarking, and progression towards 6G systems. This paper demonstrates a robust platform to facilitate innovation in 5G and beyond through practical experimentation.
5G Networks, Firecell Labkit, Standalone, mmWave, Security Vulnerabilities.
Ndidi Anyakora and Cajetan M. Akujuobi, The Centre of Excellence for Communication Systems Technology Research (CECSTR), Electrical & Computer Engineering Department, Roy G. Perry College of Engineering, Prairie View A & M University
This study investigated signal interference in radio and television broadcasting in Nigeria. To achieve this, field experiments were carried out to investigate the existence of interference and the cause and effect of such interference in the six geopolitical zones of the country to reflect a national spread. These experiments were conducted over eleven months to cover the different climatic conditions of the year in Nigeria. The values obtained were simulated using the existing ITU mathematical models and computer simulations (MATLAB). After an extensive data analysis, the results proved the existence of interference and the interference signal level for each station. It also explored several options for preventing and reducing this interference in transmitting and receiving radio and television broadcasts.
Radio Wave Interference, VHF and UHF Bands, Radio and Television Broadcasts.
Blessing C. Dike and Cajetan M. Akujuobi, Center of Excellence for Communication Systems Technology Research, ECE Dept, Prairie View A&M University, Prairie View, Texas, USA
The advent of 5G technologies has ushered in unprecedented demands for efficient spectrum utilization to accommodate a surge in data traffic and diverse communication services. In this context, accurate and reliable spectrum sensing is crucial. We investigated wideband spectrum sensing strategies by comparing non-cooperative cognitive radio (CR) approaches with cooperative methods across multiple sub-bands. Our research led to the development of a sophisticated cooperative wideband spectrum sensing framework that incorporates a K-out-of-N fusion rule at the fusion center to make optimal decisions, selecting an appropriate K for a given number of cooperating CRs. This method aims to combat the noise uncertainty typically affecting traditional non-cooperative energy detection methods in 5G environments under Additive White Gaussian Noise (AWGN) conditions, assumed to be identically and independently distributed (i.i.d.). However, our findings indicate that while cooperative sensing significantly improves detection in scenarios with poor signal-to-noise ratios (SNRs) and higher false alarm rates (between 0.5 and 1), it does not consistently outperform non-cooperative methods at very low false alarm rates (0.01 and 0.1). This finding suggests the limited effectiveness of the cooperative sensing method under certain conditions, underscoring the need for further research to optimize these strategies for diverse operational environments.
Cooperative Wideband Spectrum Sensing, Non-Cooperative Wideband Spectrum Sensing, Energy Detection, Additive White Gaussian Noise, K-out-of-N Fusion Rule.
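The K-out-of-N fusion rule described in the abstract above can be sketched as follows: the fusion center declares the band occupied when at least K of the N cooperating CRs report a local detection. The decision vector is illustrative.

```python
# Sketch of the K-out-of-N fusion rule at the fusion center; the
# local-decision vector below is an illustrative assumption.

def k_out_of_n_fusion(local_decisions, k):
    """Global decision: band occupied iff at least k CRs detected the signal.

    local_decisions: list of 0/1 local energy-detector outputs, one per CR.
    """
    return sum(local_decisions) >= k

# Five cooperating CRs, three of which report the primary user present.
decisions = [1, 0, 1, 1, 0]
print(k_out_of_n_fusion(decisions, k=3))  # True  (majority rule: K=3, N=5)
print(k_out_of_n_fusion(decisions, k=4))  # False
```

K=1 recovers the OR rule and K=N the AND rule; the framework's task of "selecting an appropriate K" trades detection probability against false-alarm probability between those extremes.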
Chuqiao Peng1, Yu Sun2, 1Portola High School, 1001 Cadence, Irvine, CA 92618, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768
This paper presents the development and evaluation of CAIR, an intelligent mobile application paired with an electronic necklace designed to monitor carbon dioxide (CO2) levels for personal health. Utilizing CO2 sensors, microcontrollers, and secure mobile communication, the device provides real-time air quality data and personalized health advice. Experiments were conducted to assess sensor accuracy and data privacy, showing high reliability and secure data handling. The application was tested across various environments, demonstrating effective monitoring capabilities. CAIR's innovative approach offers significant benefits in promoting air quality awareness and proactive health management, making it a valuable tool for enhancing personal well-being.
Electrocardiogram (ECG or EKG), Substance Abuse.
Dave Steck, Numeric Pictures, New York, USA
This paper examines whether a piece made by Artificial Intelligence, being more of a computational process than a creative one, can be considered "art" by our historical definitions, or whether we need to evolve our concept of art to keep up with technology.
Artificial Intelligence, art, creativity, philosophy, technology.
Zixiu Qiao1, Alan Xu-Zhang2, 1St. Margaret's Episcopal School, 31641 La Novia Ave, San Juan Capistrano, CA 92675, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768
The fish we eat every day may well contain heavy metals, which are detrimental to one's health [1]. There are many solutions out there, such as machines and sensors that can detect the amounts of metals in a fish, but for the average person these solutions are not the most convenient. My app, however, is an easy and convenient way to figure out whether your fish contains metals. All you need to do is take a picture of your fish, and the app will use an AI system to inform you about the levels of metals in the fish, along with tips and facts on how to cook it. I experimented with the AI's accuracy and the accessibility of my app. I can say that the AI is quite accurate and my app is accessible on all iOS devices [2]. Ultimately, my app should be used by everyone because it is quick, easy to use and access, and doesn't require any professional knowledge about fish.
Machine Learning, Data Science, Object Detection, Mobile Application.
Zicheng Lin1, Yu Sun2, 1Loomis Chaffee School, 4 Batchelder Rd, Windsor, CT 06095, 2Computer Science Department, California State Polytechnic University, Pomona
This paper addresses the challenges in English language learning, specifically the need for personalized, effective curriculum generation [1]. To solve this, we propose a Smart English Learning Curriculum Generation Mobile Platform using word root extension, leveraging Artificial Intelligence (AI) [2]. This platform tailors lessons based on the learner's proficiency and progress, using AI algorithms and IoT devices to optimize content delivery. Key technologies include machine learning for content adaptation and IoT for real-time feedback [3]. Challenges such as data privacy and interface complexity were resolved through secure data protocols and user-friendly design. Experimentation across various scenarios showed increased engagement and improved retention rates. The results indicate that the platform significantly enhances learning by adapting to individual needs. This innovative solution offers a dynamic and personalized approach to language education, making it a valuable tool for diverse learners [4].
LLM, Artificial Intelligence, English.
Weichuan Chen1, Yu Sun2, 1Taipei European School, No. 31, JianYe Road, ShiLin District, Taipei City, Taiwan, 2Computer Science Department, California State Polytechnic University, Pomona
This paper presents the development and evaluation of Dataflexor, a streamlined data management application designed to enhance efficiency in professional environments [1]. The application integrates Firebase services for user authentication and real-time data synchronization, as well as advanced language models like ChatGPT and Gemini for enhanced functionality [2]. Several challenges were addressed during development, including API integration, latency issues, and user data privacy [3]. Experiments were conducted to evaluate the performance of Firebase Cloud Functions and the response times of integrated APIs under varying traffic conditions [4]. The results revealed that while the application performs well under normal usage, significant performance drops occur during peak loads, indicating areas for further optimization. The study concludes that, with improvements in customization options, workflow optimization, and backend scalability, Dataflexor has the potential to become a powerful tool for professionals, offering both efficiency and flexibility in data management tasks [5].
Data Management, AI Integration, Cloud Functions, Real-Time Data Synchronization, Workflow Optimization.