stabby_cicada@slrpnk.net to solarpunk memes@slrpnk.net · 2 months ago
one more shitty techbro fad down (slrpnk.net)
NigelFrobisher@aussie.zone · 2 months ago
What’s actually going to kill LLMs is when the sweet VC money runs out and the vendors have to start charging what it actually costs to run.
PriorityMotif@lemmy.world · 2 months ago
You can run it on your own machine. It won’t work on a phone right now, but I guarantee chip manufacturers are working on a custom SoC right now that will be able to run a rudimentary local model.
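(A minimal sketch of what “run it on your own machine” can look like, assuming the llama-cpp-python bindings and a quantized GGUF model you have already downloaded; the filename below is hypothetical.)

```python
# Local CPU inference sketch: assumes llama-cpp-python is installed and a
# small quantized GGUF model file is present. The filename is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-3.2-3b-instruct-q4_k_m.gguf",  # hypothetical file
    n_ctx=2048,    # context window
    n_threads=4,   # CPU threads; no GPU required
)

out = llm("Q: What does running an LLM locally cost? A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```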
TʜᴇʀᴀᴘʏGⒶʀʏ@lemmy.blahaj.zone · 2 months ago
You can already run 3B LLMs on cheap phones using MLCChat; it’s just hella slow.
MystikIncarnate@lemmy.ca · 2 months ago
Both Apple and Google have integrated machine learning optimisations, specifically for running ML algorithms, into their processors. As long as you have something optimized to run the model, it will work fairly well. They don’t want independent ML chips; they want it baked into every processor.
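(A sketch of what “something optimized to run the model” might mean in practice, assuming ONNX Runtime; “model.onnx” is a placeholder, and the accelerator-backed execution providers only exist in the platform builds that ship them.)

```python
# Sketch: ask ONNX Runtime to use the on-device ML block if one is exposed,
# falling back to plain CPU otherwise. "model.onnx" is a placeholder file.
import onnxruntime as ort

wanted = [
    "NnapiExecutionProvider",   # Android NNAPI (Google/Qualcomm accelerators)
    "CoreMLExecutionProvider",  # Apple Neural Engine via Core ML
    "CPUExecutionProvider",     # fallback when no accelerator is available
]
providers = [p for p in wanted if p in ort.get_available_providers()]

session = ort.InferenceSession("model.onnx", providers=providers)
print(session.get_providers())  # shows which provider was actually selected
```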
PriorityMotif@lemmy.world · 2 months ago
Joke’s on them because I can’t afford their overpriced phones 😎
MystikIncarnate@lemmy.ca · 2 months ago
That’s fine; Qualcomm has followed suit, and Samsung is doing the same. I’m sure Intel and AMD are not far behind; they may already be doing this, I just haven’t kept up on the latest information from them. Eventually all processors will have it, whether you want it or not. I’m not saying this is a good thing; I’m saying it as a matter of fact.
TriflingToad@lemmy.world · edited · 1 month ago
It will run on a phone right now: Llama 3.2 on a Pixel 8.
Victor Gnarly@lemmy.world · edited · 2 months ago
This isn’t the case. Midjourney doesn’t receive any VC money since it has no investors, and this ignores generated imagery made locally on your own rig.
Match!!@pawb.social · 2 months ago
Yeah, but that’s pretty alright all told; the tech bros don’t have the basic competency to do that, and they can’t sell it to dollar-sign-eyed CEOs.