One thing I’ve noticed about ChatGPT is that it’s really bad at anything that allows multiple interpretations. It holds fairly unmovable positions on more scientific topics like vaccines, but on politics/news it will pretty much support whatever conclusion I ask it to, and will abandon those positions if I make counterarguments, even about the factual elements or how they should be interpreted. It’s become a hugely noticeable problem in trusting it for anything like, say, the news.
"I don't use AI Chatbots, I go to reputable sources like TrueAmericanPatriotNewsDaily.truth"
[articles generated by ChatGPT]