“Admit,” as if this were something they denied? Everyone has always known that AI text detectors don’t work, and OpenAI never claimed otherwise.
The issue is that schools have been using detectors to flag AI-written essays. When students are wrongly flagged, they get penalized even though they never used any AI to help them write the essay in question. It’s sort of like a plagiarism filter falsely flagging a paragraph as plagiarized when the student didn’t plagiarize anything.
Oh yeah, it’s an issue, but none of that is on OpenAI. There’s no admission here. It’s a statement from an authority to shut the idiots up, like a map maker saying the Earth is a sphere: something we already know, but somehow still doubted by many.
This isn’t an admission, as that would imply fault. This is a statement of fact. Of course AI writing detectors don’t work: any human can write in any style, and an AI can replicate any writing style.
> an AI can replicate any writing style.
This is false, mostly because AI outputs nonsense that almost looks like real writing. It’s all firmly in the uncanny valley of gibberish.
It’s true that an AI cannot spot AI writing, but for anything longer than a paragraph or two, a human can spot AI output most of the time.
This feels a little like people who think they can always spot plastic surgery, when really they can only spot the bad-to-okay cases and completely miss the good outcomes.