Thanks for your input, Island Man. I appreciate different approaches to the matter; that's what makes it intriguing. We are uncertain about how much control we will have in the future, and we're talking about huge things that will certainly influence our lives. Therefore I also agree with ttdtt, because we develop AI to help us and perform certain tasks more efficiently, not because scientists want to become digital alchemists and deliberately open Pandora's box.
It learns how to do its task by being fed lots and lots of data. It finds patterns in the data and creates and tweaks its own rules for processing data based on those patterns. It is like a child that learns to talk and learns the meanings of words by listening to others talk and the context in which specific words are used. It's learning the way we learn, albeit in a far more rudimentary way - for now.
How did it "learn" to learn in the first place, by itself or by intelligent human input?
How does it "find" without acting within the laws of programming? Does it "want" otherwise? No, it has no will or desire.
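To make the point concrete, here is a toy sketch (my own illustration, not any real AI system) of what "learning from data" amounts to in practice: a human-written update rule that nudges a number until the program's output fits the examples. Every rule in it was typed by a person.

```python
# Toy illustration: "learning" as a human-written update rule.
# The program tweaks one number (the "rule") to fit example data.

# Training data: inputs paired with desired outputs (the hidden rule is y = 3x).
data = [(1, 3), (2, 6), (3, 9), (4, 12)]

weight = 0.0          # the single parameter the program adjusts
learning_rate = 0.01  # chosen by the programmer, not by the machine

# Repeatedly nudge the weight to shrink the prediction error.
for _ in range(1000):
    for x, target in data:
        prediction = weight * x
        error = prediction - target
        weight -= learning_rate * error * x  # this update rule is human-written

print(round(weight, 2))  # converges near 3.0
```

The machine "finds" the pattern only because a human wrote the loop and the update rule that make it do so; nothing in it wants anything.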
I believe we can only reason and try to predict from experience, and so far our experience is that code does not write itself; it is written by us. A defect or virus will not improve a program's operation; a program may even be shut down because of one. So why expect such a huge positive "mutation" in non-living matter/digital machinery?
A lot of people get hung up on the issue of consciousness and self-awareness. But these aren't needed for AI to be a danger.
True, and that's why I mentioned the millennium bug. There was potential danger (so we were told), and no one will claim it had anything to do with some AI being self-aware.
All that's needed is superintelligence. AI has already proven itself superior to us in the basic strategic reasoning used in games.
I disagree; it is not superior to the sum of our collective intelligence in any way. Algorithms are not capable of learning; they can solve problems because they are programmed to perform certain tasks, such as calculation and data processing, and darn, they do it superfast! But let's take chess. The machine has no freedom to create new rules; it can only search all the options, no matter how many. It can even detect patterns in the human player's play, because it has access to all the past and current data stored during its gameplay history. None of those options is unknown to our collective knowledge of chess. It is not intelligent in the sense that it can create something "new" in the world of chess.
I know that people like Stephen Hawking fear a 'rise of the robots' in the future, but so far every smart machine that humans have invented has been improved or replaced because it became outdated. So why would a machine or computer suddenly be "miraculously blessed" with the will and intention to jump ahead of its own purpose, become smarter, and do its own thing?
Furthermore, wouldn't any attempt to create, test, and develop AI take place only under strict and controlled circumstances/protocols? How could any artificial attempt to think for itself go unnoticed by the scientists involved? Wouldn't they rigorously study everything that is happening in detail before going any further?
As was said in the OP, I also don't worry too much, but for my own reasons. I do NOT fear human extinction because of AI.