The paperclip maximizer argument would suggest the AI will purposely kill "undesirables" to "save" others, because no option to avoid harm in that situation was presented in the training data.
But we have the resources to build public transit for everyone today, and delaying it is foolish: we lose a lot of lives to cars, and they are very much NOT environmentally friendly either. I know it's not directly related to the issue in the post, but it felt worth pointing out that making a fuss about this while ignoring all the deaths caused by human drivers so far is shortsighted. Not saying it isn't an issue, it absolutely is, but it's nowhere near as big a problem as the current problems with cars.
Edit: I missed the last part of your comment lol
Absolutely, it's pretty much proven that automated transport will be leaps safer than human-operated vehicles. A big hurdle is our moral culture, which since the distant past has dealt with loss by pointing at someone or something to blame, assessing what should have happened, and deciding who should compensate whom. Until we solve that problem, it matters less that we can technically build it than that we simply don't want to yet.
Similarly, we technically have the food production to end world hunger and enough homes to end homelessness, but because of primitive social fears, society collectively just doesn't want to commit to fixing these problems.
My hope is in social (online) education powered by AI, but I'll have to wait and see if we can even create that.