The article discusses experiences gathered over a year of using Large Language Models (LLMs) for application security. The authors share insights into how effective these models are at identifying and remediating security vulnerabilities, noting that LLMs can significantly speed up code analysis and thereby improve overall application security. A critical point raised is the quality of the data the models are trained on: unsuitable training data can lead to incorrect conclusions. The article emphasizes that human review remains essential to ensure accurate interpretation of LLM-generated results. In conclusion, the authors discuss the ongoing development and future of these tools in application protection and encourage readers to keep experimenting with LLMs.