A ChatGPT jailbreak flaw, dubbed "Time Bandit," lets attackers bypass OpenAI's safety guidelines to obtain detailed instructions on sensitive topics, including weapons creation, nuclear material, and malware development.
A former safety researcher at OpenAI says he is "pretty terrified" about the pace of development in artificial intelligence, warning that the industry is taking a "very risky gamble" on the technology.