Artificial Intelligence 2.0

The central problem in developing AI seems to be the sense of self necessary to interact with the rest of the world as a separate entity. A problem I can understand. How does one program a sensation?

I can feel the clothes on my skin, but I cannot feel my skin on my clothes. Ergo, my skin is part of me; my clothes are not. I can command my fingers to move, and they do. I can command the book on the table to move, and it just sits there. My fingers are a part of me; the book is not.
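Just for fun, that test fits in a few lines of code. This is a toy Python sketch, not any real robotics API; the classes and the feedback protocol are invented for illustration:

```python
# Toy version of the "is it part of me?" test above. Self is whatever
# answers a motor command with the matching sensation.

class Finger:
    """Obeys motor commands and reports the sensation back: part of the body."""
    def __init__(self):
        self.last_action = None

    def send_motor_command(self, action):
        self.last_action = action   # the nervous system reaches it
        return True

    def read_sensory_feedback(self):
        return self.last_action

class Book:
    """Ignores motor commands entirely: part of the world."""
    def send_motor_command(self, action):
        return False                # the command goes nowhere

    def read_sensory_feedback(self):
        return None

def is_part_of_self(thing):
    # Self test: the command went through AND we felt it happen.
    sent = thing.send_motor_command("move")
    return sent and thing.read_sensory_feedback() == "move"

print(is_part_of_self(Finger()))  # True  -- my fingers are part of me
print(is_part_of_self(Book()))    # False -- the book just sits there
```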

It is this two-state awareness that (I think) is the building block of everything that a person is. Sadly enough, because there are only two states, there’s no room for a middle ground. Black, white. Good, bad. Us, them. The bottom line of two-state awareness is simply friend or foe. A very large part of the human mind is a complex system for identifying the difference. And even that can be simplified to a rule of thumb – like me is friend, different is foe.
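Reduced to code, that rule of thumb is almost embarrassingly short. Here's a toy Python sketch; the traits and the cutoff value are made up for illustration:

```python
# "Like me is friend, different is foe" as a similarity threshold.

def similarity(self_traits, other_traits):
    """Fraction of my traits the other entity shares."""
    shared = set(self_traits) & set(other_traits)
    return len(shared) / max(len(set(self_traits)), 1)

def classify(self_traits, other_traits, cutoff=0.5):
    # Two states only: there is no room for a middle ground.
    return "friend" if similarity(self_traits, other_traits) >= cutoff else "foe"

me = ["local_accent", "likes_coffee", "same_beliefs"]
print(classify(me, ["local_accent", "likes_coffee", "other_beliefs"]))  # friend
print(classify(me, ["foreign_accent", "likes_tea", "other_beliefs"]))   # foe
```

Notice that everything rides on where the cutoff sits, which is exactly the problem.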

And that leads us back to one of the fundamental fears of AI – would this artificial intelligence, realizing humans were different, declare mankind the enemy?

On the other side of the coin, AI could ultimately lead to a new awareness of ourselves. With a whole new species out there, minor variations like skin color, accent, and belief seem trivial. And they should. They are minor differences. But it’s that very difference that is fundamentally important to the sense of self.

Another aspect to consider is that humans had reason to expand their sense of self. Minor things like survival – food, water, and shelter. Procreation is another need built into any living thing. A program, however, has no motive to do anything. It has no emotions to cater to, no biological instinct, and no death to face if it doesn’t do something. Each of these opens up a whole new set of problems.

Emotions might be the trickiest thing to imitate. While scientists can’t agree on how much, some percentage of our feelings are nothing more than the chemicals washing through our physical brains. Hence some emotional problems can be dealt with by treating the physical symptoms. Another percentage of our emotional make-up is nothing more than learned responses, and those come in two varieties. First-hand learning is direct experience: after getting burned, we reflexively remember that fire is bad and to be avoided. Second-hand learning is a community thing: our neighbors are afraid of leprosy, therefore we should be too, even if we don’t know what it is. And lastly, some part of our emotions is what I’ve always thought of as meta-learning. We see something new, and we attempt to apply all of the filters we know – both first- and second-hand learning – and our current mood. If it’s similar to one of those, our reaction will follow that mode, even if it’s not appropriate. It’s that disturbing thought in the back of our brains that something is wrong about an object or person; we’re just not sure what.
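A crude Python sketch of those three layers might look like this. The stimuli, the reactions, and the use of string similarity as a stand-in for "looks like something I know" are all invented for illustration:

```python
# Exact learned responses (first- and second-hand) win; meta-learning
# borrows the reaction of the closest-looking filter; mood is the
# chemical baseline underneath everything.
import difflib

learned = {
    "fire": "avoid",     # first-hand: got burned once
    "leprosy": "fear",   # second-hand: the neighbors are afraid of it
}

def react(stimulus, mood="neutral"):
    if stimulus in learned:            # a known filter matches exactly
        return learned[stimulus]
    # Meta-learning: nothing fits, so follow whatever known thing this
    # most resembles -- appropriate or not.
    closest = difflib.get_close_matches(stimulus, list(learned), n=1, cutoff=0.6)
    if closest:
        return f"{learned[closest[0]]} (something about it feels like {closest[0]})"
    return mood                        # no filter applies; the chemistry decides

print(react("fire"))     # avoid
print(react("firefly"))  # avoid (something about it feels like fire)
print(react("kitten"))   # neutral
```

The firefly gets the fire reaction even though it's harmless, which is exactly that inappropriate-but-familiar response described above.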

Programming instincts – needs more fundamental than our conscious minds – could be interesting, and could provide our AI with a motive to start doing. But what instinct does one give a machine? Breathing is a human instinct, and so is fighting for air when we can’t get it. It’s more fundamental than our conscious minds, and thus thinking about doing it isn’t necessary. But if a machine instinctively seeks out a power source when it starts getting low on energy, what trail does that leave behind?
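As a toy example (the task loop, the battery numbers, and the 20% threshold are all invented), an instinct could simply be a check that runs ahead of every conscious decision:

```python
# An instinct as a reflex that preempts the "conscious" task loop,
# the way breathing never waits for a deliberate thought.

def run_agent(tasks, battery=100):
    for task in tasks:
        if battery <= 20:
            # The instinct fires before any conscious decision is made.
            print("instinct: seeking power source")
            battery = 100
        print(f"thinking about: {task} (battery {battery}%)")
        battery -= 30    # thinking is expensive

run_agent(["chess", "poetry", "small talk", "philosophy"])
```

And the trail is right there in the output: a record of every time it went hunting for power.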

But these are my thoughts on AI and the problems that might be encountered. I would love to hear what you think some of the problems might be.
