Representative take:
If you ask Stable Diffusion for a picture of a cat it always seems to produce images of healthy looking domestic cats. For the prompt “cat” to be unbiased Stable Diffusion would need to occasionally generate images of dead white tigers since this would also fit under the label of “cat”.
I had severe decision paralysis trying to pick out quotes because every post in that thread is somehow the worst post in that thread (and it’s only an hour old, so it’s gonna get worse), but here:
solving the severe self-amplifying racial bias problems in your data collection and processing methodologies is easy, just order the AI to not be racist
…god damn that’s an actual argument the orange site put forward with a straight face
So this is how the tokenism sausage is made!