I am currently working on a research paper intended for publication with IEEE, focusing on the analysis of data surveys and the development of methods for easier, more efficient computations. This project combines my technical skills in data handling with my interest in simplifying complex processes, ensuring that survey data can be interpreted quickly and accurately. The goal is to contribute practical solutions that make data-driven decision-making more accessible across a variety of applications.
Data Survey Research Paper With IEEE
This research paper is being prepared for submission to the Institute of Electrical and Electronics Engineers (IEEE), one of the world’s largest professional organizations dedicated to advancing technology for humanity. IEEE publishes leading journals, conferences, and standards that shape the future of engineering and innovation. Having my work published with IEEE will provide the opportunity to share my findings on data surveys and computational methods with a global community of researchers and professionals. It also represents an important step in my academic and professional journey, as it allows me to contribute to ongoing conversations in the engineering field while building credibility as a researcher.
My current research explores quantization and approximate floating-point (AFP) techniques as methods for optimizing machine learning models for real-world deployment. Quantization reduces numerical precision by converting floating-point values into lower-bit formats such as FP16, INT8, INT4, or even INT2, improving memory efficiency, computational speed, and power consumption. This process is essential for running ML models on edge devices, mobile AI platforms, and cloud inference systems. The main approaches each balance a trade-off between performance and precision: Post-Training Quantization is fast but may reduce accuracy, Quantization-Aware Training retains more accuracy at the cost of additional training, and Dynamic Quantization quantizes activations at runtime. For example, INT8 models can use up to 75% less memory than FP32 equivalents and achieve 4x–10x speedups on AI accelerators. Similarly, AFP methods approximate full floating-point representations with simpler ones to reduce computational complexity while preserving accuracy, enabling more efficient computations. Together, quantization and AFP provide powerful strategies for making AI models smaller, faster, and more practical to deploy across diverse hardware systems.
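To illustrate the precision-versus-memory trade-off described above, here is a minimal sketch of symmetric per-tensor INT8 quantization in plain Python. This is only an illustration of the general technique, not the specific method from the paper; the function names and the five sample weight values are hypothetical.

```python
def quantize_int8(values):
    """Map floats to INT8 codes in [-127, 127] using a single scale factor.

    Symmetric per-tensor quantization: the scale is chosen so that the
    largest-magnitude value maps to +/-127.
    """
    scale = max(abs(v) for v in values) / 127.0
    codes = [max(-127, min(127, round(v / scale))) for v in values]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float values from INT8 codes."""
    return [c * scale for c in codes]

# Hypothetical FP32 weights (illustrative values only).
weights = [0.82, -1.54, 0.03, 2.71, -0.96]

codes, scale = quantize_int8(weights)
restored = dequantize(codes, scale)

# Each INT8 code needs 1 byte instead of 4 for FP32: a 75% memory reduction.
# The rounding error per value is bounded by half the scale step.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(codes, scale, max_err)
```

The key design point is that the error introduced is at most half of one quantization step (scale / 2), which is why accuracy often survives INT8 conversion; Quantization-Aware Training goes further by simulating this rounding during training so the model learns to compensate for it.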
For this project, I will be running simulations and collecting data using a combination of tools, including Tableau, R, and PyTorch. Tableau will allow me to visualize and interpret survey data effectively, R will be used for statistical analysis and data modeling, and PyTorch will support the development and testing of machine learning models. Together, these platforms provide a strong framework for managing data, performing computations, and generating insights that support the goals of my research.