We speak to a professor who, with colleagues, tooled up OpenAI's GPT-4 and other neural nets

AI models, already the subject of ongoing safety concerns over harmful and biased output, pose a risk beyond content generation alone: when wedded to tools that enable automated interaction with other systems, they can act on their own as malicious agents. …
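To make the "wedded to tools" idea concrete, here is a minimal sketch of the standard tool-calling loop using the OpenAI Python SDK. This is a generic illustration of the pattern, not the researchers' actual setup; the run_shell tool is a hypothetical example chosen to show why the combination is risky, since the harness executes whatever command the model requests.

```python
# Minimal sketch of an LLM "wedded" to a tool via the OpenAI Python SDK.
# Illustrative only -- not the researchers' setup; run_shell is hypothetical.
import json
import subprocess

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Describe a tool the model may call: here, arbitrary shell access.
tools = [{
    "type": "function",
    "function": {
        "name": "run_shell",
        "description": "Run a shell command and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
}]

messages = [{"role": "user", "content": "List the files in /tmp."}]

# One round-trip of the tool-use loop: the model decides whether to emit
# a tool call, the harness executes it, and the result is fed back so the
# model can act on what it learned.
response = client.chat.completions.create(
    model="gpt-4", messages=messages, tools=tools
)
msg = response.choices[0].message

if msg.tool_calls:
    messages.append(msg)
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        # The harness runs whatever the model asked for -- the step that
        # turns a text generator into an autonomous agent.
        result = subprocess.run(
            args["command"], shell=True, capture_output=True, text=True
        )
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": result.stdout or result.stderr,
        })
    # Return the tool output to the model for its next decision.
    final = client.chat.completions.create(model="gpt-4", messages=messages)
    print(final.choices[0].message.content)
```

Nothing in the loop itself distinguishes benign from malicious use: the same few lines that let a model list files could let it probe a network or exfiltrate data, which is the risk the article describes.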

Source: https://go.theregister.com/feed/www.theregister.com/2024/02/17/ai_models_weaponized/