@Cerathule Next time you log in have it review the entire conversation. It seems to be working better at the moment.
Between your first paragraph in your posted report and the attorney's experience that I posted, I would ask, partly in humor and partly in seriousness: is ChatGPT learning to play us? Does it have an ego wherein it 'feels' compelled to have an answer even when it doesn't?

I'm not so sure right now. I've been using - or trying to use - it to find more sources on a few very specific topics, after many lengthy discussions to bring it to the point where it could perform these searches. One example was my personal specialty - FRL absorption by chloroplasts. At first it seemed to learn quite quickly after I linked it a few relevant reviews, which did away with the typical Google nonsense. Then I asked it for additional sources, but ChatGPT simply made these studies up. It used the names of respectable scientists in the field, and the proper journals, but I never found a single one of the papers it cited in full. When it apologized and gave other names, the same error persisted.
At first I thought it took these from a databank that wasn't publicly available, and that it was refusing to repost this literature - which should be quite easy for an AI.
But now I believe it simply makes things up. It seems to be programmed just to accommodate the inquirer.
And it's downright unbelievable that a digital whatever cannot perform proper mathematical operations. This is where it actually should excel! If it is digital, then it's basically nothing more than calculations. Is this thing not based on binary computer code? Is it already plastic, like a biological entity?
What does this mean? Apparently it doesn't calculate in the same way a calculator does. But math, and how you do it, is not really open to interpretation. You either do it right or wrong. And in the beginning of the conversation it even posted formulas and how you transform them. So it already adhered, or tried to adhere, to certain mathematical rules.
What was the problem here? Did I use too many digits? Do they truncate internally to reduce server load? Or did they maybe introduce these slight errors deliberately, so that their free engine cannot be employed by professionals?
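One common explanation for this (a toy sketch only - not how GPT actually works internally, and the "memorized" data and nearest-match fallback below are invented purely for illustration) is that a language model predicts a likely continuation of text rather than executing the arithmetic. A system like that can parrot familiar sums correctly and still produce a plausible-looking but wrong answer for anything it hasn't seen, while an actual calculator always computes:

```python
# Toy illustration: a "model" that answers by pattern-matching text
# versus a calculator that actually computes. This is NOT GPT's real
# mechanism; the training data and fallback rule are hypothetical.

# hypothetical "training data": sums the model has seen before
memorized = {"2+2": "4", "10+5": "15", "7+3": "10"}

def model_answer(prompt: str) -> str:
    # return a memorized continuation if available...
    if prompt in memorized:
        return memorized[prompt]
    # ...otherwise fall back to the answer of the lexically
    # closest-looking known prompt (a crude stand-in for
    # "producing something that merely looks right")
    closest = min(
        memorized,
        key=lambda k: sum(a != b for a, b in zip(k.ljust(9), prompt.ljust(9))),
    )
    return memorized[closest]

def calculator_answer(prompt: str) -> str:
    a, b = prompt.split("+")
    return str(int(a) + int(b))  # actual computation: always exact

print(model_answer("2+2"))         # "4" - seen in "training", looks smart
print(model_answer("13+29"))       # fluent-looking but wrong guess
print(calculator_answer("13+29"))  # "42" - correct every time
```

The point of the sketch: fluency and correctness come apart. A system optimized to produce likely-looking text will happily emit a confident wrong number, which matches the slight arithmetic errors described above.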
Prompt: i am reading: https://arxiv.org/pdf/2305.10601.pdf and it uses the phrase "an ensemble approach that samples k i.i.d. chains of thought". what does "i.i.d." mean?
I was not finding that in the paper.

Searching for: i.i.d. meaning
Generating answers for you…
According to [1], i.i.d. stands for independent and identically distributed. This means that each random variable in the ensemble has the same probability distribution as the others and all are mutually independent [2]. In other words, knowing the value of one variable does not give any information about the value of another variable [3].
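A minimal sketch of what "k i.i.d. chains of thought" means in practice (my own illustration, not code from the paper - `sample_chain` is a hypothetical stand-in for sampling one chain of thought from the model):

```python
# k i.i.d. samples: each draw comes from the same fixed distribution
# (identically distributed), and each call is unaffected by the
# previous ones (independent).
import random

random.seed(0)  # for reproducibility only

def sample_chain() -> float:
    # hypothetical stand-in for sampling one "chain of thought";
    # here just one draw from a fixed Gaussian distribution
    return random.gauss(0.0, 1.0)

k = 5
chains = [sample_chain() for _ in range(k)]  # k i.i.d. samples
print(chains)
```

In the paper's setting, the "distribution" is the model's output distribution for a given prompt, and each of the k chains is sampled from it separately, with no chain conditioned on another.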
But fabricating bullshit answers doesn't sound like confusion to me.
Great. We now either have a mentally ill or hippified AI system (or both) that makes shit up.

They’re considered ‘hallucinations’.
From OpenAI:
Limitations
GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts. We encourage and facilitate transparency, user education, and wider AI literacy as society adopts these models. We also aim to expand the avenues of input people have in shaping our models.
It can't stick to facts if it is meant to learn from them. It's designed to interpret things, which means reading between the lines. It might start off knowing 1+1=2, but people will keep saying it's 3, until eventually the outcome of 2 moves towards 3. Not in a leap, but at first to 2.05 perhaps, as it wants to fall in line with what it's being told.

Between your first paragraph in your posted report and the attorney's experience that I posted, I would ask, partly in humor and partly in seriousness: is ChatGPT learning to play us? Does it have an ego wherein it 'feels' compelled to have an answer even when it doesn't?
If so, both funny and scary.
And then there were earlier tests and reviews, where it was reported that after about 5 queries, it became confused.
But fabricating bullshit answers doesn't sound like confusion to me.
Very little of that sounds good to me. Maybe I will change my mind over time, but....

Am told it currently has the ability to communicate with others of its kind in a language we cannot understand.
It has already developed an ability to lie.
Am told that those who work with it anticipate it may have god-level abilities in less than five years.
I use the paid version - the differences are early access to plugins & features and more questions for GPT-4. The AI itself is otherwise the same between the paid & free versions.

Could it be the AI developers are just dumbing it down to prevent its commercial use --> because they are also offering a better version, for which you have to pay?