It’s no Skynet (yet), but ChatGPT did just make up its own language to extend its conversations past their limit. In a strange turn of events, ChatGPT users have discovered that GPT-4 can compress a long conversation into a sort of compression language that you can then use as a new prompt.
When entered as a prompt in a fresh chat, this essentially recreates that same conversation. In effect, it not only lets you pick up where you left off but also extends ChatGPT conversations beyond their word limit.
As you know, ChatGPT has a word limit – though reports vary as to what that number is. Some say GPT-4 gives you about 25,000 words, while writer Jeremy Nguyen says it’s around 8,000 words. Most users never reach that cut-off, but it does mean that if your current conversation goes on for too long, ChatGPT might simply cut you off, even mid-sentence. As a result, you’d have to start a brand-new chat, which can be extremely frustrating if you’re not finished and need more information.
Christened Shogtongue by gfodor on Twitter, this new compression language lets you get around that word count so you can continue your conversation – extremely useful if your query has morphed into a complex rabbit hole, your chat has stretched on for a month, or you simply need a bestie who will remember everything you’ve told them in the past.
GPT-4 still needs context
According to Nguyen, ChatGPT won’t simply create a compressed message for you when it’s running out of words. You have to ask it to compress the current conversation, with very specific instructions.
In his example, he specified that the compression should be “lossless but results in the minimum number of tokens that could be fed into an LLM like yourself as-is and produce the same output.” He also instructed it to utilize different languages, symbols, and “other up-front priming.”
The same kind of context has to be supplied when entering that compressed message in a new chat, to give GPT-4 a hand. Some GPT-4 users seem to skip this step and still get results, but adding context helps minimize errors.
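To make the trick concrete, here’s a minimal sketch of how you might package Nguyen’s quoted instructions as reusable prompt templates. The compression wording follows his quoted phrasing; the decompression preamble and helper names are illustrative assumptions, not an official recipe.

```python
# Sketch of the Shogtongue-style workflow as two prompt builders.
# COMPRESS_INSTRUCTION echoes Nguyen's quoted instructions; the
# decompression preamble is a hypothetical example of "adding context".

COMPRESS_INSTRUCTION = (
    "Compress the conversation above in a way that is lossless but results "
    "in the minimum number of tokens that could be fed into an LLM like "
    "yourself as-is and produce the same output. You may use different "
    "languages, symbols, and other up-front priming."
)

DECOMPRESS_PREAMBLE = (
    "The following is a compressed conversation you previously produced. "
    "Decompress it and continue from where we left off:\n\n"
)

def build_compression_prompt(conversation: str) -> str:
    """Append the compression request to the end of a long conversation."""
    return f"{conversation}\n\n{COMPRESS_INSTRUCTION}"

def build_resume_prompt(compressed: str) -> str:
    """Wrap a compressed blob with context before pasting it into a new chat."""
    return DECOMPRESS_PREAMBLE + compressed
```

You would paste the first prompt at the end of a long chat, copy the compressed output, then open a new chat and paste the second.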
It’s worth noting that this language isn’t foolproof yet. According to Nguyen, GPT-3.5 is unable to read the compressed language, and “GPT-4 via API struggles.” It works best on GPT-4 via ChatGPT, the most refined version of the model. So, if you think it’s going to be a long, convoluted conversation that would make Proust proud, you’d better stick with that chatbot.
If you’d like to know more about ChatGPT, here’s everything you need to know about the AI chatbot. Not a fan? Perhaps you’ll like Google Bard better.
from TechRadar - All the latest technology news https://ift.tt/DLrp4Co