Honestly, my eyes glommed onto the capital letters first. I brought to mind images from the words, and Homer Simpson is clearer and brighter, and somehow that's the internal representation of coherence or something. That aspect of using brightness to indicate the match/answer/solution/better bet might be an instruction I gave my brain at some point too. I'm autistic and I've built a lot of my shit like code. It's kinda like the Iron Man mask in here, to be honest. But much more elaborate. I often wish I could project it onto a screen. It's like K'Nex models doing transformer jiu-jitsu and me flicking those little battles off into the darkness to run on their own. I'm afraid I might not be a good candidate for questions of how human cognition normally works. Though I've done a lot of Zen and drugs, and I enjoy watching it and analyzing it too.
I’m curious, why do you ask? What does that tell you?
I will admit this is almost entirely gibberish to me, but I don't really have to understand it. What's important here is that you had any process at all by which you determined which answer was correct before writing an answer. The LLM cannot do any version of that.
You find a way to answer a question and then provide the answer you arrive at; the LLM never saw the prompt as a question or its own text as an answer in the first place.
An LLM is only ever guessing which word probably comes next in a sequence. When the sequence was the prompt you gave it, it determined that Homer was the most likely next word. And then it ran again. When the sequence was your prompt plus the word Homer, it determined that Simpson was the most likely next word. And then it ran again. When the sequence was your prompt plus Homer plus Simpson, it determined that the most likely next word was nothing at all. That triggered it to stop running.
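If it helps to see that loop written out, here's a toy sketch in Python. Everything in it is made up for illustration: the lookup function stands in for the model, and a real LLM actually predicts probability distributions over subword tokens rather than whole words. But the shape of the loop is the same.

```python
# Toy sketch of the loop described above. The hard-coded lookup is a
# stand-in for the model's next-word prediction; a real LLM scores
# probabilities over subword tokens, but the loop is the same shape:
# look at the sequence so far, emit one word, run again.

def most_likely_next_word(sequence):
    # Pretend these are the model's top predictions for this sequence.
    if sequence.endswith("?"):
        return "Homer"
    if sequence.endswith("Homer"):
        return "Simpson"
    return None  # "nothing at all" -- the signal to stop running

def generate(prompt):
    sequence = prompt
    while True:
        word = most_likely_next_word(sequence)  # one word per run
        if word is None:
            break                               # stop running
        sequence = sequence + " " + word        # then it runs again
    return sequence

# A made-up stand-in prompt; any question ending in "?" works here.
print(generate("What is the famous cartoon dad's full name?"))
# -> "What is the famous cartoon dad's full name? Homer Simpson"
```

Note that nothing in the loop ever looks at the answer as a whole; each run only ever produces one word and then forgets it was running.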
It did not assign any sort of meaning or significance to the words before it began answering, and it did not have a complete idea in mind before it started. It had no intent to continue past the word Homer when writing the word Homer, because it only works one word at a time. ChatGPT is a very well-made version of hitting the predictive-text suggestions on your phone over and over. You have ideas. It guesses words.