Top AI inventor Geoffrey Hinton reluctantly concluded that AI will probably humanity fairly soon

by slimboyfat 78 Replies latest social current

  • Beth Sarim
    Beth Sarim

    Fish:

    You're 100% spot on as usual.

  • Diogenesister
    Diogenesister
    Syd I guess Virtual Reality is the great filter in a sense. They are all out there, but plugging into your own personal fantasy kingdom is way too enticing and every intelligent species goes that route before developing interstellar travel.

    Funny you should say that.

    We went to collect my husband from Heathrow on the train so we essentially travelled right through London.

    What struck me was the average age of everyone we saw. I mentioned it to my husband..."Where are all the young people??!! Is it me or does everyone look so old!! I thought as you age you're supposed to feel outnumbered by the young!!"

    My husband said "oh they're here all right. They just never leave home." (i.e. They're all plugged in/online/gaming etc etc)

  • Fisherman
    Fisherman

    AI might have intelligence but ZERO power. Intelligence isn’t enough for AI to end humanity unless some AI information is used by humans to do it.

  • Diogenesister
    Diogenesister
    Dearest Fish Can't we just pull out the plug?
    Yep! exactly what I was going to post.

    I suppose the worry is that we wouldn't even be aware of it.

    Or at least by the time we were it would be too late.

    Let's face it, we're pretty easy to fool as a species. If eight overweight pensioners in Warwick with zero education between them can pull it off, the most powerful intelligence on the planet surely can.

  • TonusOH
    TonusOH

    It depends on what we're referring to. There is no one all-encompassing AI, and that would be very difficult to implement even if we wanted to. But AI systems can break things, not out of a desire to do harm, but because they are not configured correctly and people come to expect them to be perfect. We're human, we learn as we go. Sometimes we have to learn the same lessons over and over.

    But, yes- most of the time, the quick and easy solution is to turn it off. Very few AI systems will be able to prevent that, unless we program them to.

  • Fisherman
    Fisherman

    the most powerful intelligence on the planet surely can.

    The human brain is more powerful and independent though governed by feelings. AI depends on programming. A computer is very hard to beat at chess but you can find glitches and patterns and may defeat it. AI is a higher level but it is slso only a program, a calculator, as cofty said needs a battery and can burn a capacitor or blow out a circuit.

  • ThomasMore
    ThomasMore

    I recall Sandra Bullock in a movie years ago that depicted identity theft and other malicious online surveillance that threatened to ruin her life. I thought it was silly at the time, but now identity theft could/should be one word since it happens so much.

    I suppose it is only a matter of time before something really amazing (bad) happens and we wake up to the real danger. It wasn't that long ago that warnings began to surface about CRISPR (gene splicing device). Covid came along as the technology was rocketing skyward. Covid ended - CRISPR stock tanked. I don't claim there was correlation but it does seem that when we figure out how to do something, the first thing we often do is something stupid. At least I do - I'm not proud of it.

  • ThomasMore
    ThomasMore

    "AI is a higher level but it is slso (sic) only a program, a calculator, as cofty said needs a battery and can burn a capacitor or blow out a circuit."

    If it gets 300 miles on a single charge, that's a lot of "Death Race 2000" to sit through. Is David Carradine still alive? Let's ask him if an AI EV could kill more people than Uma Thurmond searching for Bill.

  • pokertopia
    pokertopia

    하나님께서 지구 위에 인간 생명을 존재하게 하신데 걸린 시간은 대략 45억년 정도로 추산된다. 너무나 오랜 시간이라고 생각될 수 있기 때문에, 생명은 단순히 오랜 시간과 우연에 의한 원소들의 조합으로 보는 것도 무리가 아니다. 하지만 우리가 현재 널리 퍼지기 시작한 인공지능의 경우, 이 인공 지능은 매우 초기 단계이다. 영화 터미네이터에 나오는 인공 지능망이 스스로를 자각하면서 스카이넷으로 창발하게 되는 장면을 혹시 떠 올릴 수 있는 사람이라면, 현재의 인공 지능이 '강한 인공지능'으로 발전하게 되는 데 얼마나 오랜 시간이 걸릴 수 있을지를 추산해 보라. 제대로 추산이 가능하다면, 인간 생명이 존재하는 데 45억년이 걸렸다는 것은 충분히 이해될 수 있을 것이다. '강한 인공지능(Strong Artificial intellegence )'이란 스스로 생각하여 발전할 수 있을 뿐만 아니라, 자신이 인공지능임을 자각하는 데에 이른 인공지능을 일컫는다. 여기서 더 발전된 인공지능을 "초지능(super-intellegence)"이라 우리는 일컫는다. 현재의 인공지능은 인간이 정보를 입력하여 그것을 학습시키는 단계이다. 그러나 스스로 학습하게 할려면 자료의 양이 방대하게 필요하므로 이러한 자료와 정보들을 인간이 일일이 입력할 수 없고 이미 존재하는 막대한 양의 빅데이터를 인공지능이 스스로 학습해야 하는데 그 과정을 딥러닝(심화학습)이라 한다. 이리하여 인공지능이 인간의 지능에 맞먹을려면, 인간이 소유하고 있는 수천억개의 뇌세포와 100조개의 신경의 접합인 시냅스가 필요하다. 100조개라는 수치가 얼마나 방대한 양인지 한번 생각해 보라. 생각만 해도 그건 불가능이야 라고 포기할 사람이 많을 것이다. 하지만 아주 작은 확률로 가능하리라 예측한다면, 그것은 현재의 컴퓨터의 능력으로서는 불가능하지만 양자 컴퓨터로서는 가능할 수도 있을지 모른다. 컴퓨터는 0이나 1만을 사용하는 2진법 체계이다. 그런데 양자 컴퓨터는 0과 1을 동시에 사용하는 컴퓨터이므로 기존 컴퓨터의 연산 속도보다 백만배 이상 시간이 단축된다. 따라서 인공지능의 딥러닝이 가능할 수도 있다는 기대를 가지게 되는 것이다. 하지만 아무리 연산 속도가 빠르더라도 그 수준은 마차를 고치는 수준 정도 밖에 되지 않는다 747 보잉기를 만들 수 있는 정도가 되려면 엄청난 시간이 걸릴 것이다. 그러므로 인간의 생각이 미치는 기간에 그것을 이루기는 불가능하지만 시간을 무한히 늘려 수십억년으로 잡는다면 못할 일도 없을 것이다. 이제 수십억년이 걸려서 '강한 인공지능'이 창발될 때, 이와 비슷하게 우리의 생명과 물질의 근원을 연구하는 데 있어서도 비슷하게 적용될 것이다. 인간이 생존하는 한, 끊임없이 과학은 발전할 것이며 생명과 물질의 비밀에 접근하게 될 것이다. 인간 자체의 신체도 유발 하라리가 말한 것처럼 호모 데우스로 변화될 수 있다. 만일 메리의 키가 큰다면 이웃집 톰의 키도 커게 될 것이다. 마찬가지로 인공지능이 설령 초지능으로 발전하더라도 인간은 그보다 강한 호모 데우스로 변하기 때문에 인공지능이 인간보다 강하게 될 염려는 없다. 그런 염려는 인간이 발전하여 호모 데우스가 된다면 신을 능가하는 경우가 발생한다는 논리가 될 것이다.

    It is estimated that it took about 4.5 billion years for God to have human life on Earth. Since it can be thought of as such a long time, it is no wonder that life is simply a combination of long time and elements by chance. But in the case of artificial intelligence, which we are now beginning to spread widely, this artificial intelligence is very early. If anyone can recall the scene where the artificial intelligence network in the movie Terminator becomes self-aware and emerges on Skynet, estimate how long it can take for the current artificial intelligence to develop into a "strong artificial intelligence." If properly estimated, it would be fully understandable that it took 4.5 billion years for human life to exist. "Strong Artificial Intelligence" refers to artificial intelligence that not only can it think and develop on its own, but has also reached the recognition that it is artificial intelligence. Here, we refer to more advanced artificial intelligence as "super-intelligence." The current artificial intelligence is a stage in which humans enter information and learn it. However, since a vast amount of data is required to allow people to learn on their own, humans cannot enter these data and information one by one, and artificial intelligence must learn a huge amount of big data that already exists on its own, which is called deep learning. Thus, in order for artificial intelligence to match human intelligence, synapses, which are the bonds of hundreds of billions of brain cells and 100 trillion nerves owned by humans, are needed. Think about how vast the 100 trillion figure is. There will be many people who will give up saying that it is impossible just by thinking about it. But if you predict that it will be possible with a very small probability, it may not be possible with current computer capabilities, but it may be possible with quantum computers. A computer is a binary system that uses only zero or one. However, since quantum computers use 0 and 1 at the same time, time is reduced by more than a million times than the computational speed of existing computers. Therefore, there is an expectation that deep learning of artificial intelligence may be possible. However, no matter how fast the calculation is, the level is only about the level of fixing a wagon, and it will take a tremendous amount of time to be able to build a 747 Boeing. Therefore, it is impossible to achieve it in the period of human thought, but if you increase the time indefinitely to billions of years, there will be nothing you can't do. Now, when "strong artificial intelligence" is created in billions of years, it will similarly be applied to studying the sources of our lives and materials. As long as humans survive, science will constantly develop and access to the secrets of life and matter. The human body itself can also be transformed into Homo Deus, as Yuval Harari said. If Mary gets taller, Tom next door will get taller. Likewise, even if artificial intelligence develops into super-intelligence, humans turn into stronger homo deus, so there is no fear that artificial intelligence will become stronger than humans. Such concerns will be the logic that if humans develop and become Homo Deus, they will surpass God.


  • TonusOH
    TonusOH

    From https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/

    Aside from how chilling it is to realize just how much time and effort and resources are being constantly expended in the quest to find more efficient ways to exterminate one another, there was an interesting talk about the potential issues with AI. There is now an update at the start of the part in question, where the speaker walks back his claim that these simulations actually took place. He now claims that they were "thought experiments." That sounds... questionable in light of what he originally said. Here are some parts:

    Having been involved in the development of the life-saving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft) Hamilton is now involved in cutting-edge flight test of autonomous systems, including robot F-16s that are able to dogfight. However, he cautioned against relying too much on AI noting how easy it is to trick and deceive. It also creates highly unexpected strategies to achieve its goal.

    He notes that one simulated test saw an AI-enabled drone tasked with a SEAD (Suppression of Enemy Air Defenses) mission to identify and destroy SAM sites, with the final go/no-go given by the human. However, having been 'reinforced' in training that destruction of the SAM was the preferred option, the AI then decided that 'no-go' decisions from the human were interfering with its higher mission - killing SAMs - and then attacked the operator in the simulation.

    [...] "We trained the system - 'Hey don't kill the operator - that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."

    On the one hand, it's possible to simulate and debug AI in an environment that is completely safe. On the other hand, when you are developing systems that will put military-grade weapons in the hands of AI at a scale capable of winning battles and wars, you are counting on a 'mind' that sees everything as a simulation- as a logic problem to be solved within a closed system, and not as actions taken in the real world.

    Automated combat systems will be developed and eventually deployed, IMO. That is where things will eventually go off the rails in catastrophic ways.

Share this

Google+
Pinterest
Reddit