INFO: Asking the AI (Artificial Idiot) #17812
Replies: 4 comments 9 replies
-
Full ACK. Matches my attempts. Beautiful-looking answers, mostly wrong.
-
LLM agents need context; in your example you provided very little, so I'm not surprised at the difference between your expectations and the results. Just as a jigsaw won't cut you a wooden puzzle simply by being switched on.
-
I once got into an argument with ChatGPT about how many ACKs were returned by I2C writes. To its credit, it was very polite about the whole thing, just wrong.
-
My limited experience varies from 0/10 to 10/10. Asked how to use the Ruff plugin for the Pulsar text editor, ChatGPT was as much use as a chocolate teapot. However, when asked for instructions on how to publish a package to PyPI, ChatGPT was excellent. Each time I encountered a problem, it produced a correct solution. The dialogue was just like asking an expert human colleague for advice: equally quick and efficient. I guess the second problem is something that frequently arises.
-
This happened because I was a bit lazy and didn't want to look up the documentation
(which, by the way, resides in reference/constrained.html).
I also test the idiots every once in a while to see what the current state of affairs is…
It's still sad, to put it mildly.
This gruesome experience can be had with many of the lot…
So what does all this mean?
Only ask if you already know the answer.
Expect to be lied to and cheated on (or is it upon?).
Contrary to the well-known fact ›garbage in, garbage out‹, this now has a new quality:
question in, blatant lies out. Lies delivered with a remarkable amount of confidence.
That's much worse, because very often people believe and trust those answers.