Large language models are now capable of automating attacks that were previously possible only for human adversaries. In this talk, I discuss several ways adversaries could misuse current models to cause harm at a larger scale and a lower cost than they can today. For example, we find that recent state-of-the-art models can now find 0-day vulnerabilities in large software projects that humans have tested extensively for decades. These new capabilities will alter the threat landscape and require us to rethink security in the coming years.



This speaker, whose job expertise is cybersecurity and LLMs, is very, very worried. The models are improving exponentially and are now finding very advanced vulnerabilities. This is a serious problem.
“To black hat” = to hack maliciously
I’m just some hobo and I too am very very worried.