It’s arguably not good that we’re normalizing the use of this technology when its training relied on other creators who were not compensated.
My programming training relied on other creators who were not compensated.
I imagine creators who… released their work for free, and/or open source?
Were they in public forums and sites like Stack Overflow and GitHub, where they wanted people to use and share their code?
Stable Diffusion was trained on a dataset built from Common Crawl, which pulled art from public websites that allowed it to do so. DeviantArt and ArtStation allowed this, without exception, until recently.
Where did the AI companies get their code from? It’s scraped from the likes of Stack Overflow and GitHub.
They don’t have the proprietary code that companies actually run on, because it’s proprietary and has never been on a public forum available for download.
Humans using past work to improve, iterate, and further contribute themselves is not the same as a program throwing any and all art into the machine learning blender to regurgitate “art” whenever its button is pushed. Not only does it not add anything to the progress of art, it erases the identity of the past it consumed, all for the blind pursuit of profit.
Oh yeah, tell me who invented the word ‘regurgitate’ without googling it. Cause its historical identity is important, right?
Or how bout who first created the internet?
It’s ok if you don’t know; this is how humans work, on the backs of giants.
Me not knowing everything doesn’t mean it isn’t known or knowable. Also, there’s a difference between things naturally falling into obscurity over time and context being removed forcefully.
And then there are times when it’s too difficult to maintain them, exactly like how you can’t know everything.
We probably ain’t gonna stop innovation, so we might as well roll with it (especially when it’s doing a great job redistributing previously expensive assets).
If it’s “too difficult” to manage, that may be a sign it shouldn’t just be let loose without critique. Also, innovation is not inherently good and “rolling with it” is just negligent.
If it’s too difficult to manage, we should manage it? 🤨
It’s too difficult for you to manage, not for it to manage. Keep up.
Does meandering into others’ conversations and arbitrarily insulting people make you feel better about yourself?
Devil’s advocate. It means that only large companies will have AI, as they would be the only ones capable of paying such a large number of people. AI is going to come anyway except now the playing field is even more unfair since you’ve removed the ability for an individual to use the technology.
Instituting these laws would just be the equivalent of companies pulling the ladder up behind them after taking the average artist’s work to use as training data.
How would you even go about determining what percentage belongs to the AI vs the training data? You could argue all of the royalties should go to the creators of the training data, meaning no one could afford to do it.
How would you identify text or images generated by AI after they have been edited by a human? Even after that, how would you know what was used as the source for training data? People would simply avoid revealing any information, and even if you did pass a law and solved all of those issues, it would still only affect the country in question.
Then we shouldn’t have artists because they looked at other art without paying.
Oonga boonga wants his royalty checks for having first drawn a circle 25,000 years ago.
As distinct from human artists who pay dividends for every image they’ve seen, every idea they’ve heard, and every trend they’ve followed.
The more this technology shovels into the big fat network of What Is Art, the less any single influence will show through.
Literally the definition of greed. They don’t deserve royalties for being an inspiration and moving a weight a fraction of a percentage in one direction…
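For a rough sense of scale, here is a minimal back-of-the-envelope sketch of that last point. Every number below is a hypothetical order-of-magnitude assumption, not taken from any real model, just to show how little one training image tends to move any single weight under plain batch-averaged SGD:

```python
# Minimal sketch with hypothetical numbers: one training example's share of a
# single SGD step on one weight, when gradients are averaged over a batch.

learning_rate = 1e-4        # assumed; a typical order of magnitude for large models
per_example_grad = 0.05     # assumed gradient of the loss w.r.t. this weight for one image
batch_size = 2048           # assumed; the batch's gradients are averaged together

weight = 0.73               # some arbitrary weight in the network

# This example's contribution to the update of this one weight.
nudge = learning_rate * per_example_grad / batch_size
weight -= nudge

print(f"nudge from one example: {nudge:.2e}")                 # ~2.4e-09
print(f"relative change to the weight: {nudge / 0.73:.2e}")   # a few parts per billion
```

Under those assumed numbers the nudge works out to a few parts per billion of the weight’s value, which is the scale the comment above is gesturing at, even before millions of other examples wash it out further.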