Very large language models are most useful when they ‘fail’ at being useful
A conversation with ChatGPT
Below I share a conversation with ChatGPT, quoting its answers verbatim; my prompts and its replies alternate, starting with mine. My conclusion follows the conversation.
Conversation
Very large language models like GPT use statistics to generate the most likely text in a given context. Consequently, they are good at stating what is "needless to say" but are unlikely to generate entirely new points or to recall ideas that are relevant to the context but were not previously connected in the training data. For this reason, I hypothesize that they are most valuable for testing our points (e.g., in our writing).
If the models can elaborate properly on something we write, it may be obvious and best omitted, as it is often found in the training data.
If the models struggle to process what we write, it may be hard to follow for our readers, and we may need to add additional details, explanations, background, or help in other ways.
Anything in between may be just right.
Finally, if the models disagree with us, we may be on to something important.
If there is a better use for these models, persuasively explain why it is better than my hypothesis.
Your hypothesis is a reasonable use of large language models like GPT, as they can help test the clarity and coherence of a piece of writing. However, there are likely other ways in which these models can be useful.
One potential use of large language models is as a writing assistant. For example, a user could input a few sentences or ideas, and the model could generate suggestions for how to expand upon or connect those ideas in a coherent way. This could be especially useful for writers who are looking for inspiration or ideas on how to develop their writing.
Another potential use of large language models is in summarization. The models could be trained on a large corpus of text, and then be used to generate summaries of longer documents or articles. This could be helpful for readers who want a quick overview of a topic, or for researchers who need to quickly scan a large number of documents for relevant information.
Overall, there are likely many other potential uses for large language models beyond the ones you mentioned in your hypothesis. These models have the ability to generate text based on context and patterns in large amounts of training data, and as a result, they can be applied in a variety of ways depending on the needs and goals of the user.
I understand those are feasible uses, but the models are not very good at them. I still think that my approach is the most useful.
While your approach is certainly a valid way to use large language models, it is not necessarily the only or the best way. As I mentioned earlier, large language models can be useful for generating writing suggestions or summaries, which can be helpful for writers and readers, respectively.
Additionally, large language models can be used in a variety of other applications, such as translation, question answering, and sentiment analysis. In these cases, the models are trained on large amounts of data and are able to generate responses that are relevant to a given context.
In conclusion, while your approach is a valid use of large language models, there are many other potential uses for these models that may be just as, if not more, valuable. It ultimately depends on the specific goals and needs of the user.
Not really. For example, if the user wants suggestions about how to expand or connect ideas in a coherent way, very large language models will only succeed at expanding or connecting ideas in the most usual ways according to the training data, i.e., the most uninteresting ways. I still think that the best use of very large language models is testing them on connecting ideas, and only writing about the kinds of connections they fail at. If they succeed, it is not interesting. We can summarize this as “very large language models are most useful when they fail at being useful.” How would you explain that?
Your perspective is a valid one, and it is true that very large language models like GPT are most likely to generate text that is similar to what they have seen in their training data. However, this does not necessarily mean that they are not useful for other purposes.
As I mentioned earlier, large language models can be useful for generating writing suggestions or summaries. In these cases, the models may not generate entirely novel ideas, but they can still provide valuable assistance to writers or readers. For example, a writing suggestion generated by a large language model may not be completely original, but it could still help the writer to organize their ideas or find the right words to express their thoughts.
Additionally, large language models can be used in a variety of other applications, such as translation or question answering, where their ability to generate relevant responses based on context can be very useful. In these cases, the models may not always generate entirely novel ideas, but they can still provide valuable assistance to users.
Overall, while it is true that very large language models are most likely to generate text that is similar to what they have seen in their training data, this does not mean that they are not useful for other purposes. These models can still provide valuable assistance in a variety of applications, even if they do not always generate entirely novel ideas.
Conclusion
For me, the paradoxical conclusion is that “very large language models are most useful when they ‘fail’ at being useful.”
Antagonizing this kind of chatbot may be good for finding the most prevalent misconceptions, which arguably are the most useful topics to write about.
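For the curious, the testing workflow from my opening message is easy to automate. The sketch below is only a minimal illustration: it assumes the OpenAI Python SDK rather than the ChatGPT web interface I actually used, and the model name and prompt wording are placeholders I made up. The verdict on each reply stays with the writer; the model only supplies the probe.

```python
# A minimal sketch of the "test your points" workflow, assuming the
# OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY set in
# the environment. Model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

def probe(point: str) -> str:
    """Ask the model to elaborate on a point from a draft."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[
            {
                "role": "system",
                "content": "Elaborate on the following point and "
                           "note anything you disagree with.",
            },
            {"role": "user", "content": point},
        ],
    )
    return response.choices[0].message.content

reply = probe("Very large language models are most useful "
              "when they fail at being useful.")
print(reply)
# The judgment is human: a fluent elaboration suggests the point is
# obvious and can be cut; a confused reply suggests readers will need
# more background; a disagreement may mark something worth writing about.
```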
It may be that paradoxes are the ultimate clickbait, and are seriously underutilized. That hypothesis is yet to be tested.
Let me know if you would like to see more conversations with ChatGPT. It is not in my plans, but this kind of post literally writes itself.