Jing Su earned his Ph.D. in computer science from Southern Methodist University in 2024. His academic and professional journey is defined by a relentless pursuit of solutions that harmonize cutting-edge theoretical research with tangible, real-world impact. The central theme driving his work, “high performance, refined algorithms, practical implementations,” reflects his aim to bridge the gap between theoretical developments and real-world applications. Dr. Su’s work transcends disciplinary boundaries to address pressing challenges in artificial intelligence (AI) and computing systems.
Dr. Su’s research interests span reinforcement learning, fault-tolerant computing, and large language models (LLMs). In reinforcement learning, he focuses on developing algorithms that enable AI systems to make robust, adaptive decisions in dynamic environments. His work in fault-tolerant computing emphasizes creating resilient architectures that sustain performance under hardware or software failures, a critical need for industries that rely on dependable computational infrastructure. Meanwhile, his exploration of LLMs seeks to improve their efficiency and scalability, ensuring these models can be deployed effectively across diverse applications, from natural language processing to decision-support systems.
As an IEEE Senior Member, Dr. Su actively contributes to the global computer science community, advocating for innovations that balance technical rigor with societal relevance. His deep expertise in programming and system design allows him to navigate diverse technical landscapes, from low-level hardware interactions to high-level AI model training. This versatility has enabled collaborations with academia and industry, where he applies his knowledge to optimize workflows, enhance system robustness, and pioneer next-generation AI tools.
Dr. Su envisions a future where AI systems are not only intellectually powerful but also pragmatically aligned with human needs. By integrating fault tolerance into AI architectures, refining reinforcement learning algorithms for adaptability, and streamlining LLMs for accessibility, he aims to democratize advanced technologies while ensuring their ethical and efficient use. His ongoing projects continue to push the boundaries of what AI can achieve, cementing his role as a catalyst for innovation in an era defined by rapid technological evolution.
The evolution of the Network Function Virtualization (NFV) paradigm has revolutionized the way network services are deployed, managed, and scaled. Within this transformative landscape, Virtual Network Function (VNF) resource prediction emerges as a cornerstone for optimizing network resource allocation and ensuring service reliability and efficiency. Traditional resource forecasting methods often struggle to adapt to the dynamic and non-linear resource consumption patterns of modern telecommunication networks. We address this challenge by leveraging the inherent pattern recognition and next-token prediction capabilities of Large Language Models (LLMs) without any domain-specific fine-tuning. Our study uses Llama2 as the foundation model and evaluates its performance against widely used probability-based models on a public VNF dataset containing real-world resource consumption data from a variety of VNFs. Our findings suggest that LLMs offer a highly effective alternative for VNF resource forecasting, demonstrating significant potential for enhancing network resource management.
@inproceedings{su2024leveraging,
  title     = {Leveraging {{Large Language Models}} for {{VNF Resource Forecasting}}},
  booktitle = {2024 {{IEEE}} 10th {{International Conference}} on {{Network Softwarization}} ({{NetSoft}})},
  author    = {Su, Jing and Nair, Suku and Popokh, Leo},
  year      = {2024},
  month     = jun,
  pages     = {258--262},
  publisher = {IEEE},
  doi       = {10.1109/NetSoft60951.2024.10588943},
}
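The forecasting idea above can be pictured with a minimal sketch: serialize a window of VNF utilization readings as text and let a causal LLM continue the sequence through next-token prediction, with no fine-tuning. The prompt format, the meta-llama/Llama-2-7b-hf checkpoint, the decoding settings, and the forecast_next helper are illustrative assumptions, not the exact setup reported in the paper.

# Hypothetical sketch: zero-shot VNF resource forecasting by serializing a
# utilization history into text and letting a causal LLM continue the series.
# Prompt format, checkpoint, and decoding settings are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; any causal LM works

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

def forecast_next(history, horizon=4):
    """Ask the LLM to extend a comma-separated series of CPU-usage readings."""
    prompt = ",".join(f"{v:.1f}" for v in history) + ","
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(
        **inputs,
        max_new_tokens=8 * horizon,   # rough budget: a few tokens per value
        do_sample=False,              # greedy next-token prediction
    )
    continuation = tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    # Parse the first `horizon` numbers the model emits; stop at anything malformed.
    values = []
    for tok in continuation.split(","):
        try:
            values.append(float(tok.strip()))
        except ValueError:
            break
    return values[:horizon]

# Example: forecast the next four readings from a short CPU-utilization window.
print(forecast_next([41.2, 43.5, 47.1, 52.8, 58.0, 61.3]))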
Optimizing resource allocation in Network Functions Virtualization (NFV) deployments remains a challenging problem due to the complex interactions between network functions and the limited resources available at the network edge. Deep reinforcement learning (DRL) has achieved impressive results in a variety of domains. This paper presents EdgeGym, a reinforcement learning environment that simulates edge network contexts and constraints for NFV resource allocation. EdgeGym allows researchers and practitioners to evaluate and compare reinforcement learning algorithms for optimizing resource allocation in NFV environments while taking into account constraints such as affinity policies and maximum latency. We demonstrate the effectiveness of EdgeGym through extensive experiments on training and action masking efficiency. EdgeGym provides a reliable framework for advancing DRL agent performance in NFV resource allocation and paves the way for further research in this area.
@inproceedings{su2023edgegym,
  title      = {{{EdgeGym}}: {{A Reinforcement Learning Environment}} for {{Constraint-Aware NFV Resource Allocation}}},
  shorttitle = {{{EdgeGym}}},
  booktitle  = {2023 {{IEEE}} 2nd {{International Conference}} on {{AI}} in {{Cybersecurity}} ({{ICAIC}})},
  author     = {Su, Jing and Nair, Suku and Popokh, Leo},
  year       = {2023},
  month      = feb,
  pages      = {1--7},
  doi        = {10.1109/ICAIC57335.2023.10044182},
}
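To make the environment idea concrete, here is a minimal Gymnasium-style sketch of a constraint-aware placement environment that exposes an action mask. The ToyEdgeEnv class, its reward shaping, and the action_masks() convention (as used by maskable policy-gradient implementations such as sb3-contrib's MaskablePPO) are illustrative assumptions rather than the published EdgeGym API.

# Hypothetical sketch of an EdgeGym-style environment: the agent assigns each
# incoming VNF to an edge node, and an action mask hides placements that would
# violate node capacity. Names and numbers are placeholders.
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class ToyEdgeEnv(gym.Env):
    def __init__(self, n_nodes=4, node_capacity=10, episode_len=20):
        super().__init__()
        self.n_nodes = n_nodes
        self.node_capacity = node_capacity
        self.episode_len = episode_len
        self.action_space = spaces.Discrete(n_nodes)  # which node hosts the VNF
        self.observation_space = spaces.Box(
            low=0.0, high=1.0, shape=(n_nodes + 1,), dtype=np.float32
        )  # per-node load plus the pending VNF demand, normalized by capacity

    def _obs(self):
        return np.append(self.load / self.node_capacity,
                         self.demand / self.node_capacity).astype(np.float32)

    def action_masks(self):
        # True where the node still has room for the pending VNF (constraint-aware).
        return self.load + self.demand <= self.node_capacity

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.load = np.zeros(self.n_nodes)
        self.demand = self.np_random.integers(1, 4)
        self.t = 0
        return self._obs(), {}

    def step(self, action):
        if self.action_masks()[action]:
            self.load[action] += self.demand
            reward = 1.0 - self.load[action] / self.node_capacity  # prefer lighter nodes
        else:
            reward = -1.0  # placement rejected: capacity constraint violated
        self.t += 1
        self.demand = self.np_random.integers(1, 4)
        terminated = self.t >= self.episode_len
        return self._obs(), reward, terminated, False, {}

A mask-aware agent would query action_masks() each step and sample only among feasible placements, which is the behaviour the paper's action masking experiments evaluate.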
Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) are two emerging paradigms that enable the feasible and scalable deployment of Virtual Network Functions (VNFs) on commercial off-the-shelf (COTS) devices, delivering a range of network services at reduced cost. Deploying these services requires efficient resource allocation that fulfills Quality of Service (QoS) and Service-Level Agreement (SLA) requirements while respecting the constraints of the underlying infrastructure, such as maximum latency tolerance and affinity policies. To address this issue, we study the resource allocation problem in SDN/NFV-enabled networks, which involves numerous optimization variables arising from the multidimensional space of system component parameters and states. Using deep reinforcement learning, we propose a policy gradient-based algorithm with an invalid action masking approach to efficiently tackle the resource allocation problem while handling system constraints in industrial settings. The simulation results clearly show the effectiveness and performance of the proposed learning approach for this class of problems.
@inproceedings{su2022optimal,
  title     = {Optimal {{Resource Allocation}} in {{SDN}}/{{NFV-Enabled Networks}} via {{Deep Reinforcement Learning}}},
  booktitle = {2022 {{IEEE Ninth International Conference}} on {{Communications}} and {{Networking}} ({{ComNet}})},
  author    = {Su, Jing and Nair, Suku and Popokh, Leo},
  year      = {2022},
  month     = nov,
  pages     = {1--7},
  issn      = {2473-7585},
  doi       = {10.1109/ComNet55492.2022.9998475},
}
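The invalid action masking idea can be sketched in a few lines: the logits of infeasible actions are set to negative infinity before the softmax, so they receive zero probability and contribute no gradient to the policy update. The network sizes, the REINFORCE-style loss, and the random data below are placeholders, not the configuration used in the paper.

# Minimal sketch of invalid action masking in a policy-gradient update (PyTorch).
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 5))  # 5 actions

def masked_policy_loss(obs, mask, actions, returns):
    """REINFORCE-style loss with infeasible actions masked out.

    obs:     (batch, 8) float tensor of states
    mask:    (batch, 5) bool tensor, True where the action is feasible
    actions: (batch,) long tensor of actions that were taken
    returns: (batch,) float tensor of (discounted) returns
    """
    logits = policy(obs)
    masked_logits = logits.masked_fill(~mask, float("-inf"))  # zero probability
    dist = torch.distributions.Categorical(logits=masked_logits)
    log_prob = dist.log_prob(actions)
    return -(log_prob * returns).mean()

# One illustrative update on random data.
obs = torch.randn(32, 8)
mask = torch.rand(32, 5) > 0.3      # pretend roughly 70% of actions are feasible
mask[:, 0] = True                   # keep at least one feasible action per state
actions = torch.distributions.Categorical(
    logits=policy(obs).masked_fill(~mask, float("-inf"))
).sample()
returns = torch.randn(32)

optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)
loss = masked_policy_loss(obs, mask, actions, returns)
optimizer.zero_grad()
loss.backward()
optimizer.step()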
Network Function Virtualization (NFV) enables telecommunication operators to place and allocate resources dynamically and efficiently to meet the needs of Internet of Things (IoT), Intelligent Edge Computing (IEC), and emerging 5G services. Efficient Virtual Network Function (VNF) placement and deployment depend largely on optimizing Virtual Machine (VM) compute, storage, and network resource allocation on cloud-based platforms and their physical hosts. This research extends our previously defined Information Model of mapped NFV Infrastructure (NFVI) and Virtualized Infrastructure Management (VIM) resources to derive VNF placement and optimal resource allocation. Our optimization solution, IllumiCore, derives the optimal VNF placement and minimizes the communication latency among the VMs that make up the VNF and across the entire communication network. The results demonstrate optimal and improved VNF placement and resource management.
@inproceedings{popokh2021illumicore,
  title      = {{{IllumiCore}}: {{Optimization Modeling}} and {{Implementation}} for {{Efficient VNF Placement}}},
  shorttitle = {{{IllumiCore}}},
  booktitle  = {2021 {{International Conference}} on {{Software}}, {{Telecommunications}} and {{Computer Networks}} ({{SoftCOM}})},
  author     = {Popokh, Leo and Su, Jing and Nair, Suku and Olinick, Eli},
  year       = {2021},
  month      = sep,
  pages      = {1--7},
  issn       = {1847-358X},
  doi        = {10.23919/SoftCOM52868.2021.9559076},
}
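A toy model in the spirit of this kind of formulation can be written as a small integer program: place the VMs of a VNF onto physical hosts, respect CPU capacity, and minimize traffic-weighted latency between the hosts of communicating VM pairs, linearizing the placement product with an auxiliary binary per pair and host pair. The data, variable names, and solver choice below (PuLP with CBC) are made-up placeholders, not the IllumiCore model or its datasets.

# Toy VNF placement model: binary assignment of VMs to hosts, CPU capacity
# constraints, and a latency objective over communicating VM pairs.
import pulp

vms = ["vm1", "vm2", "vm3"]
hosts = ["h1", "h2"]
cpu_demand = {"vm1": 4, "vm2": 2, "vm3": 2}
cpu_capacity = {"h1": 6, "h2": 6}
latency = {("h1", "h1"): 0, ("h1", "h2"): 5, ("h2", "h1"): 5, ("h2", "h2"): 0}
traffic = {("vm1", "vm2"): 1.0, ("vm2", "vm3"): 0.5}   # communicating VM pairs
pairs = list(traffic)

prob = pulp.LpProblem("vnf_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (vms, hosts), cat="Binary")  # x[v][h]: VM v on host h
y = {(i, h, k): pulp.LpVariable(f"y_{i}_{h}_{k}", cat="Binary")
     for i, _ in enumerate(pairs) for h in hosts for k in hosts}

# Objective: traffic-weighted latency between the hosts chosen for each pair.
prob += pulp.lpSum(traffic[pairs[i]] * latency[(h, k)] * y[(i, h, k)]
                   for i, _ in enumerate(pairs) for h in hosts for k in hosts)

for v in vms:       # each VM is placed exactly once
    prob += pulp.lpSum(x[v][h] for h in hosts) == 1
for h in hosts:     # host CPU capacity
    prob += pulp.lpSum(cpu_demand[v] * x[v][h] for v in vms) <= cpu_capacity[h]
for i, (a, b) in enumerate(pairs):   # force y[i,h,k] = 1 when a is on h and b is on k
    for h in hosts:
        for k in hosts:
            prob += y[(i, h, k)] >= x[a][h] + x[b][k] - 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for v in vms:
    print(v, "->", next(h for h in hosts if pulp.value(x[v][h]) > 0.5))

In this toy instance the solver co-locates the heaviest-communicating pair on one host and places the remaining VM on the other, which is the behaviour a latency-minimizing placement objective is meant to produce.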