AlexKnight wrote:
In the end, it's the humans that designed the computer who created the problems. Vulnerabilities exist because humans missed something. A computer can't patch a vulnerability on its own; it still needs a human to do the work.

Actually, it can. It's called "metaprogramming". A program can be created to look for areas of vulnerability; however, the program that looks for those vulnerabilities is subject to human error as well.
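Programs that scan code for known weaknesses are routine already, in a crude form. Here's a toy sketch of the idea (the pattern list and function names are mine, invented for illustration, not any real tool):

```python
import re

# Illustrative list of C library calls commonly flagged as unsafe.
RISKY_CALLS = ["gets", "strcpy", "sprintf"]

def scan_source(source: str) -> list[str]:
    """Return the names of risky calls found in a source string."""
    found = []
    for name in RISKY_CALLS:
        # Match the function name followed by an opening parenthesis.
        if re.search(r"\b" + name + r"\s*\(", source):
            found.append(name)
    return found

code = 'char buf[8]; gets(buf); strcpy(buf, input);'
print(scan_source(code))  # → ['gets', 'strcpy']
```

Note the catch, which is exactly the human-error point: the scanner only knows the patterns its author thought of. A vulnerability class the author never anticipated sails right through.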
AlexKnight wrote:
Also, a computer doesn't do things it isn't programmed to do, not unless true AI has developed without my knowing it. So far, we only have VIs (virtual intelligence), even though we call them AI. As soon as a computer becomes self-aware, we're in trouble.

I don't believe machines will ever truly be self-aware. I think we will be able to mimic this behavior; however, consider what would need to be done to build a machine that questions why it exists, or one that reflects on its own actions. Exactly which emotions would trigger, and when?
We as human beings have no control over what emotions are triggered by certain events. For example, if someone did something terrible to someone you care about, you cannot choose to feel good about it, and to be happy for the person who committed the act. You could try to pretend to be happy about it, and try to smile about it, but in such a case, you are making a choice - just as a machine would. Obviously, it would be the wrong choice, but I think my point stands.
Programs behave based on logical choices which are legal in their instruction set, or on randomization via a generated or input seed value. There is no spontaneous reaction in machines. The closest thing to "spontaneous" would be "randomization", which is very different.
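The gap between "random" and "spontaneous" can be shown in a few lines: a seeded generator is completely reproducible, so nothing unscripted ever happens. A minimal sketch using Python's standard library:

```python
import random

# Two generators given the same seed produce identical "random" sequences.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.randint(0, 9) for _ in range(5)]
seq_b = [b.randint(0, 9) for _ in range(5)]

# Same seed, same output, every single time.
print(seq_a == seq_b)  # → True
```

Change the seed and you get a different sequence, but the behavior is still fully determined by the seed plus the algorithm; the machine is never surprising itself.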
People like Stephen Hawking and Michio Kaku are bringing this kind of nonsense up in order to try to remain relevant in a world of science and technology which has left them behind. Don't believe the panicked hype. It's just silly.
I've been a computer geek since I was 7 years of age. I've built and repaired countless computer systems - some off the shelf, some proprietary. I've written several programs for my own use. I've used so many different software packages for so many highly varied purposes that I've lost count. I know computers, and electronics. I know operating systems on levels most would never care to. I don't believe true sentience in machines will ever be a reality. I believe it will always be a form of mimicry.
Xephyr wrote:
When we reach the point where we can build a computer that can think for itself and do as it wishes, we will have created the beginning of our extinction. Just as there are good and bad people, there can be good and bad robots. Even so, we as humans will need a way to terminate these robots, via some means they will never know about. Done any other way, a robot will realize that if you try to disable it because it "bugged out" or "misbehaved", you mean it harm, and it will fight you to save its life.

Before that's possible, the machine will need to have a will to survive. In order to have a will to survive, the machine will have to fear death. Exactly how would a machine have a fearful reaction without being told when to have one? Emotions are triggered by what we have learned. The key word here is TRIGGERED, not CHOSEN. Exactly how does a programmer go about writing software where emotion will accurately trigger based on a perceived event? Through a series of boolean-like statements? Case statements? All of it would still be mimicry and choices, not emotion. Emotions are not a choice. A lot of this speculation about truly sentient machines is futurist fantasy. A pleasant fantasy, but still a fantasy.
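The boolean/case-statement mimicry described above really does reduce to a lookup table. A toy sketch (the event names and the event→"emotion" mapping are invented here purely for illustration):

```python
# A hypothetical event→"emotion" mapping: pure lookup, no feeling involved.
EMOTION_RULES = {
    "threat_detected": "fear",
    "goal_achieved": "joy",
    "loss_of_resource": "sadness",
}

def react(event: str) -> str:
    """Select a scripted 'emotional' response for a perceived event."""
    # An unrecognized event falls through to indifference -- the machine
    # has no trigger for anything its programmer never anticipated.
    return EMOTION_RULES.get(event, "neutral")

print(react("threat_detected"))  # → fear
print(react("alien_invasion"))   # → neutral
```

The response is chosen by the program, from a menu its programmer wrote in advance; nothing is triggered in the sense the paragraph above means.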
The only way machines will have even the slightest chance at true sentience is if we can replicate the human brain in its entirety - all of its functions, all of its characteristics, all of its storage capacity - with synthetic materials. That is thousands of years away, at a time when atomic construction has long since replaced traditional manufacturing and become an everyday, boring thing. True machine sentience would have to result from a spontaneous function of nature, and said function would have to allow nature to interact directly with the machine. This is something over which we have ZERO control.
The pathway we are more likely to take is to incorporate intelligent machines into our own genetic makeup, where the human and the fabricated being become one, and nature eventually finds a way to incorporate the two into its complicated matrix at the atomic level. Powerful cyborgs, complemented by intelligent robots - very very sexy intelligent robots.