David Gerard @awful.systems (mod) to TechTakes@awful.systems, English · 2 months ago

Don’t use AI to summarize documents — it’s worse than humans in every way (pivot-to-ai.com)

104 comments · 253 upvotes
queermunist she/her @lemmy.ml · 2 months ago

Unless it doesn’t accurately represent the topic, which happens, and then a researcher chooses not to read the text based on the chatbot’s summary.

Nirvana fallacy. All these chatbots do is guess. I’m just saying a researcher might as well cut out the hallucinating middleman.