Seems to me there are two categories of danger: 1) where AI functions as intended; 2) where AI makes mistakes and doesn't work as intended.
We think of computer programs as being objective and coldly analytical when run correctly, but consider the subject of genetic racial differences. We know there are mean genetic differences between races. An AI program to assign work opportunities developed by Democrats (prioritizing equality) would be very different from one developed by populists (prioritizing assignment by merit)—at least until the populists realized the consequences of their approach.
The use of AI allows leaders to offload difficult decisions, and it likely sets results in stone. Once the original developers have died off, nobody knows how to change whatever program everyone is relying on, however dysfunctional it becomes.
Then of course, if you develop an AI program able to detect any dissidence before it gets a chance to spread, there is no chance at all of correcting mistakes or abuses.
And imagine if they allow the AI to write its own code based on everything it is trained on, but without a conscience, constrained only by the limitations written by the original programmers. What could go wrong? 😒
And this discussion is based only on our own country. Given the number of viruses created by bad actors, it's hard to imagine a good future.