Grok got no chill
ByteOnBikes@slrpnk.net to People Twitter@sh.itjust.works · English · 4 days ago (1.17K points, 91 comments)
Whoops, sorry mods. Reposting with links:
https://nitter.space/thelillygaddis/status/1904852790460965206#m
https://archive.is/vuiMj
https://x.com/thelillygaddis/status/1904852790460965206#m
peoplebeproblems · 4 days ago
Any AI model is technically a black box. There isn’t a “human-readable” interpretation of the function. The data going in, the training algorithm, the encode/decode: all of that is available. But the model itself is nonsensical.
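To illustrate the point above, here is a minimal PyTorch sketch (the toy architecture and layer sizes are made up purely for illustration): every parameter of a network can be listed and printed, but the raw numbers themselves carry no human-readable meaning, and a trained model is no different in that respect.

```python
import torch
import torch.nn as nn

# Toy network: the architecture and every weight are fully inspectable.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))

# List every parameter tensor and its shape.
for name, param in model.named_parameters():
    print(name, tuple(param.shape))

# ...but the values are just floats with no readable semantics,
# whether randomly initialized (as here) or learned from data.
print(model[0].weight[0, :4])
```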
Pieisawesome@lemmy.dbzer0.com · 4 days ago
That’s not true; there are a ton of observability tools for the internal workings. The top post on HN is literally a new white paper about this: https://news.ycombinator.com/item?id=43495617
daddy32@lemmy.world · 3 days ago
Some simpler “AI models” are also directly explainable or readable by humans.
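As a rough illustration of that point (using scikit-learn on the iris dataset, chosen here only as an example), a shallow decision tree can be dumped as plain if/else rules that a person can follow end to end:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()

# A shallow tree stays small enough to read in full.
clf = DecisionTreeClassifier(max_depth=2).fit(iris.data, iris.target)

# export_text prints the learned model as nested if/else rules.
print(export_text(clf, feature_names=list(iris.feature_names)))
```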
Wilshire@sopuli.xyz · 4 days ago
They also made a video: https://youtu.be/Bj9BD2D3DzA
neatchee@lemmy.world · 4 days ago
In almost exactly the same sense as our own brains’ neural networks are nonsensical :D
aeshna_cyanea@lemm.ee · 3 days ago (edited)
Yeah, despite the very different evolutionary paths, there are remarkable similarities between, idk, octopus/crow/dolphin cognition.
Thank you, that’s amazing.