I think “Fondly Fahrenheit” perfectly captured what humans are most afraid of: our creations turning against us. Whether that creation is an android or a child, we are scared that after we care for it, we might eventually lose control over it. We are scared that our future children will develop their own ways of thinking and rebel against what we taught them. We are scared that the machines we sought out will start plotting against us. The biggest difference between a child and an android, however, is their degree of free will.

Our children can rebel because they become capable of thinking for themselves. They are “programmed” by the DNA we gave them and bound by their biology, but they are equally shaped by their environment. They learn from their surroundings and make decisions for themselves; we can never fully control them.

Everything from here on out reflects my crude and minimal understanding of AI:

On the other hand, we program androids with very specific code, and that code is only as grand and vast as our own knowledge. When we code androids, we give them specific instructions, and while they can “learn,” that learning can’t go beyond their code. Take a chess-playing AI like DeepMind’s “self-learning” AlphaZero: it learned by playing millions of games against itself, eventually becoming strong enough to beat any grandmaster. It “learns” in the sense that it recognizes patterns across positions and refines its evaluations with every game, until it surpasses the best human players. But this program is not built for anything beyond the games it was trained on. I would argue that it isn’t even thinking in the same sense that we do; it’s just pattern recognition and running through hundreds or thousands of probabilistic outcomes. It wins because it can evaluate far more positions than we ever could.
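To make that concrete, here is a toy sketch in Python of the kind of brute evaluation I mean. It is my own illustration, not AlphaZero’s actual method (which trains a neural network through self-play and uses it to guide a tree search): for tic-tac-toe, it scores each legal move by playing out thousands of random games from that move and picks the one with the best average result. The program never “understands” the game; it just counts outcomes.

```python
# Toy illustration of evaluation-by-simulation (not AlphaZero's real algorithm):
# for each legal move, play many random games to the end and keep the move
# with the best average outcome. The "intelligence" here is nothing but
# running through thousands of probabilistic outcomes.
import random

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has won, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_playout(board, player):
    """Finish the game with purely random moves; return the winner (None = draw)."""
    board = board[:]
    while True:
        w = winner(board)
        empties = [i for i, cell in enumerate(board) if cell == ' ']
        if w or not empties:
            return w
        board[random.choice(empties)] = player
        player = 'O' if player == 'X' else 'X'

def best_move(board, player, simulations=1000):
    """Score every legal move by simulating many random games from it."""
    scores = {}
    for move in (i for i, cell in enumerate(board) if cell == ' '):
        total = 0
        for _ in range(simulations):
            trial = board[:]
            trial[move] = player
            w = random_playout(trial, 'O' if player == 'X' else 'X')
            total += 1 if w == player else -1 if w else 0
        scores[move] = total / simulations
    return max(scores, key=scores.get)

# 'X' to move: the simulations reliably find the winning square (index 2).
print(best_move(['X', 'X', ' ',
                 'O', 'O', ' ',
                 ' ', ' ', ' '], 'X'))
```

Even this crude version finds winning moves reliably, and scaling the same idea up, with more simulations and smarter evaluations, is essentially how machines came to beat us at chess.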

That “self-learning” AI was created only a few years ago, so my biggest question is how far we have come since. Are we able to create AI that truly learns and thinks for itself? And if our own knowledge is so limited, can the AI we code ever create knowledge beyond it? AIs are a reflection of us: they carry out the goals we set for them, and we even code our own flaws into them (racism, sexism, etc.). Will there come a time when AIs are better at being human than we are?