Abstract: |
The immutable nature of blockchain technology, while revolutionary, introduces significant security challenges, particularly in smart contracts, where vulnerabilities can lead to substantial financial losses. Current tools and approaches often focus on specific types of vulnerabilities, and a comprehensive tool capable of detecting a wide range of vulnerabilities with high accuracy is still lacking. This talk presents our work on utilizing the advanced capabilities of Large Language Models (LLMs) to detect and analyze vulnerabilities in smart contracts. We adopt a multi-agent conversational approach, in which a collaborative system of specialized agents enhances the audit process. In addition, we explored fine-tuning LLMs to the problem domain through a pipeline that streamlines data preparation, training, evaluation, and continuous learning. To evaluate the effectiveness of the proposed solution, we compiled two distinct datasets: a labeled dataset for benchmarking against traditional tools and a real-world dataset for assessing practical applications. Experimental results indicate that our solution outperforms traditional smart contract auditing tools, offering higher accuracy and greater efficiency. Furthermore, our framework can detect complex logic vulnerabilities that traditional tools have previously overlooked. These findings demonstrate that leveraging LLMs provides a highly effective method for automated smart contract auditing. |
Bio: |
Dr. Jing Sun earned his PhD in Computer Science from the National University of Singapore in 2004. After completing his doctorate, he joined the University of Auckland as a Lecturer in Computer Science and is now an Associate Professor. His research centers on AI-driven software engineering, with a strong focus on secure software development. Recently, he has applied generative AI and large language models (LLMs) to enhance the security and quality of automated software systems. Dr. Sun’s work spans several key areas, including machine learning for automated formal design model repair and LLM-based code generation. He has explored advanced AI techniques, such as GPT and other LLMs, for smart contract auditing, a critical component of cybersecurity that addresses vulnerabilities in blockchain systems. In addition, he is investigating verification methods to ensure the accuracy and reliability of AI-generated outputs, thereby bolstering the integrity of complex software systems. To date, Dr. Sun has published 130 research papers in leading venues, including ACM Transactions on Software Engineering and Methodology, Automated Software Engineering, ACM Computing Surveys, Information Sciences, Information and Software Technology, Expert Systems with Applications, and IEEE Transactions on Reliability. He has played active leadership roles in the international research community, serving as a conference chair, program chair, and steering committee chair. More details can be found on his university homepage at https://www.cs.auckland.ac.nz/~jingsun/. |