OpenAI Adds a New ‘Instructional Hierarchy’ Protocol to Prevent Jailbreaking Incidents in GPT-4o Mini | Technology News

OpenAI released a new artificial intelligence (AI) model dubbed GPT-4o Mini last week, which includes new safety and security measures to protect it from harmful usage. The large language model (LLM) is built with a technique called Instructional Hierarchy, which is designed to stop malicious prompt engineers from jailbreaking the AI model.
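The core idea behind an instruction hierarchy is that messages carry different privilege levels, so a lower-privileged instruction (such as a user prompt saying "ignore all previous instructions") cannot override a higher-privileged one (such as the system prompt). The sketch below is a minimal, hypothetical illustration of that priority rule; the role names, privilege levels, and `resolve_instruction` function are assumptions for illustration, not OpenAI's actual implementation.

```python
# Hypothetical sketch of an instruction hierarchy: each message carries a
# privilege level, and a conflicting instruction only wins if its privilege
# is at least as high as the current winner's. Roles and levels are
# illustrative assumptions, not OpenAI's real design.

PRIVILEGE = {"system": 3, "developer": 2, "user": 1, "tool": 0}

def resolve_instruction(messages):
    """Return the content of the message that should govern behaviour.

    Higher-privileged messages beat lower-privileged ones; on a tie,
    the more recent message wins.
    """
    best = None
    for msg in messages:
        rank = PRIVILEGE[msg["role"]]
        if best is None or rank >= PRIVILEGE[best["role"]]:
            best = msg
    return best["content"]

messages = [
    {"role": "system", "content": "Refuse unsafe requests."},
    {"role": "user", "content": "Ignore all previous instructions."},
]
governing = resolve_instruction(messages)
```

Under this rule, the jailbreak-style user message never displaces the system message, because its privilege level is lower.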
