Science

Why curbing chatbots’ worst exploits is a game of whack-a-mole

Robert Hyrons/Alamy Stock Photo

It has become common for artificial intelligence companies to claim that the worst things their chatbots can be used for can be mitigated by adding “safety guardrails”. These can range from seemingly simple solutions, like warning the chatbots to look out for certain requests, to more complex software fixes – but none is foolproof. And almost on a weekly basis, researchers find new ways to get around these measures, called jailbreaks.

You might be wondering why this is an issue – what’s the worst that could happen? One bleak scenario might be an AI being used to fabricate a lethal bioweapon,…


Source link

Related Articles

Back to top button