"humans do not learn things the same way LLMs do lol we are not machines or programs
That depends on how broadly you define "machine".
, a human is a human, a person with their own experiences and expression, an LLM is an LLM that cannot experience things or express itself.
An LLM can certainly express itself - that's how we interact with it.
Whether it can experience things depends on how broadly you define 'experience', and unless your definition is pretty narrow, LLMs do learn from experience.
An LLM will always input and output linearly, humans learn things hierarchically.
Current LLMs are based on a crude implementation of where our understanding of the brain stood a few decades ago.
A human cannot be trained on millions and billions of data in a small time frame
A typical movie is billions of bits of data.
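For scale, a rough back-of-envelope sketch (the runtime and bitrate here are illustrative assumptions, not exact figures):

    # Rough, illustrative numbers: a 2-hour movie streamed at ~5 Mbit/s,
    # a common 1080p rate. Exact figures vary widely by codec and quality.
    seconds = 2 * 60 * 60            # 7200 seconds of runtime
    bitrate = 5_000_000              # 5 Mbit/s
    total_bits = seconds * bitrate
    print(total_bits)                # 36_000_000_000 -> tens of billions of bits

And that's just one evening's input.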
, and a human's responses cannot be fine-tuned at the will and whim of someone else programming them.
Ever see a politician's position or message be fine-tuned at the whim of donors and/or voters?
Humans also do not understand things as tokens, chatGPT understands tokens by applying a number to it, and a token is literally a form of currency too, when humans spell and speak we are not assigning every word a serial number based on its occurrence in use,
When we learn a language we tokenize it, and increase 'weights' on synapses in our brains, based on occurrence in use.
So our synapses are analog, while current LLM hardware is digital. That is a pretty minor distinction, both because there are neural-network hardware chips that are analog until you get down to the level of molecules, and because at the level of molecules we are approaching digital ourselves.
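To make the 'weights' analogy concrete, here's a toy Hebbian-flavored sketch (purely illustrative; real synapses and real LLM training are both far more complicated, and the function name and rate are my own inventions):

    # Toy illustration only: an association "weight" that strengthens
    # with occurrence in use, in the Hebbian "fire together, wire
    # together" spirit.
    def reinforce(weight: float, rate: float = 0.1) -> float:
        """Nudge the weight toward 1.0 each time the association fires."""
        return weight + rate * (1.0 - weight)

    w = 0.0
    for _ in range(20):              # 20 exposures to the same pairing
        w = reinforce(w)
    print(round(w, 3))               # climbs toward 1.0 with repeated use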
and we do not have a limit to what we can express,
Words cannot express how strongly I disagree with that.
and we do not break words into tokens
Words typically already are tokens.
and speak transactionally
Someone hasn't been out in the world much. And how are my responses to e-mails not transactional?
, and we do not hear or read words or hear different phrasings of sentences and arbitrarily break them down into different units of data
Sure, we do, other than the 'arbitrary' part. When learning a new word we break it down into parts automagically (that's not a real word, but people understand 'auto', 'magic', and the 'ally' ending).
, we understand and differentiate syllables and vowels but that is not how chatGPT is breaking down words, the breaking down of words into tokens can be arbitrary based on just the word, the length of word, the phrasing, or single symbols."
The AI has learned a tokenization that works for it. Trivializing that tokenization as 'arbitrary' shows the human's failure to understand why the AI does it that way.
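It isn't arbitrary: GPT-style tokenizers use byte-pair encoding, where the subword vocabulary is learned from frequency statistics over huge amounts of text. You can inspect it yourself with OpenAI's tiktoken library (assuming you have it installed; "cl100k_base" is the encoding used by the GPT-4-era models):

    # pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    ids = enc.encode("automagically")
    print(ids)                             # a short list of integer token IDs
    print([enc.decode([i]) for i in ids])  # the learned subword pieces

Common words tend to come back as a single token, while rarer coinages like 'automagically' get split into a handful of frequent subword pieces - much like the 'auto'/'magic'/'ally' decomposition the human describes doing themselves.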
Typographical errors aside, that was one impressively long run-on sentence, haha.