Users won't have to repeat their prompts to the chatbot every time they interact with it.
According to OpenAI’s blog post, users answer two questions to set up ‘Custom Instructions.’ The first is ‘What would you like ChatGPT to know about you to provide better responses?’ and the second is ‘How would you like ChatGPT to respond?’
For example, a user who is a chef and frequently uses ChatGPT for recipes could answer the first question along the lines of: “I am a chef at a Manhattan restaurant.” For the second question, they could write: “When I ask for recipes, give me 3-4 variations from the best cooks in the world, and make sure the proportions are sized to serve only one person.”
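For developers working with the API rather than the ChatGPT interface, a roughly similar effect can be approximated by prepending the two answers as a system message on every request. The sketch below is only an illustration, not how OpenAI actually injects Custom Instructions (which it has not detailed); it assumes the openai Python package and an OPENAI_API_KEY set in the environment.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The two Custom Instructions answers, combined into one system message so the
# model sees them on every request without the user retyping them.
custom_instructions = (
    "About the user: I am a chef at a Manhattan restaurant.\n"
    "How to respond: When I ask for recipes, give me 3-4 variations from the "
    "best cooks in the world, with proportions sized to serve one person."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": custom_instructions},
        {"role": "user", "content": "Give me a summer risotto recipe."},
    ],
)
print(response.choices[0].message.content)
```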
Responses to each question are currently capped at 1,500 characters. Custom Instructions has generated a fairly positive response on social media. Just to test it out, I added “ChatGPT acts like a philosophy expert and poet. All responses are written in poetry” as a custom instruction.