
I'm concerned about the prospect of AI becoming a self-motivated entity. What are we doing to prevent this from happening, or to limit its powers or guide its intent if it does? Isaac Asimov famously proposed three "laws of robotics" to guide intelligent machines, but I don't see any such rules being considered or imposed now that the prospect is imminent.