When working through my retirement planning as a federal employee, ChatGPT gave me so many wrong answers about how the benefits work that it was practically worthless. When you're making an important decision based on facts, you have to double-check everything it says.
Saying this is not being an LLM denialist. If you put ChatGPT in charge of important decisions it will make important mistakes.
So you, the user, need to be smart enough to know when ChatGPT is wrong. A lot of users aren't that smart, so they can become entranced by what appears to be omniscience. That's dangerously naive.
Have I sworn off LLMs? No. But I know to double-check their results. They're like a research assistant who is overly confident and doesn't check their work before turning it in. For $20/month, it is what it is.