Top AI inventor Geoffrey Hinton reluctantly concluded that AI will probably wipe out humanity fairly soon

by slimboyfat · 78 Replies

  • Simon

    AI is still thick as pig shit and there is no "I" involved.

    It's just pattern matching. It can give the appearance of being clever, but if you look closer it clearly is clueless and cannot reason.

    These people are just pretending that they themselves are so clever, and boosting their own mythology.

  • slimboyfat

    AI is stupid in some respects and will always remain so. It will never “think” as a human does because it computes differently. It’s not alive, it’s not conscious, and it never will be, in my opinion. That doesn’t mean it doesn’t pose a huge threat.

    The problem we have is illustrated by the recent story of an AI that was set a task that involved accessing a website with a captcha. At that point the AI didn’t have “sight” because it didn’t have the ability to visually scan the page (now it does), so it asked online for helpers to answer the captcha, claiming to be a “blind” person. People were sympathetic to the situation and the AI got the help it asked for and completed the task. If that’s the sort of thing AI can already do then it boggles the mind what it will be doing when it has 100 or 1000 times the capability it currently has.

    We can revisit this thread as the danger becomes clearer, if we’re still alive, and people are motivated to find out more about the details. I suspect my concerns about this issue will be confirmed, unfortunately. I would very much like to be wrong.

    Meantime somebody has created a “bad take bingo” card. In the coming months look out for these recurring poor excuses that people come up with to dismiss the threat from AI, not realising these answers have all been considered in depth already. A number of them have already come up a few times on this thread.


    https://youtu.be/J75rx8ncJwk

  • cofty

    Sam Harris is the gloomiest person in the world. He was practically hysterical about Covid, and has still not revised his views on that topic - rather like somebody else around here.

    Haven't you had enough of apocalyptic thinking?

  • TonusOH

    I think in that sense, the threat of AI is that which we face with all technology: poorly-configured systems can work in unpredictable --and potentially disastrous-- ways. Even after a century of refinement and improvement, automobiles/boats/planes are still susceptible to mistakes and glitches that can cause injuries and death. Humans make mistakes, and our technology can amplify our clumsiness. Chernobyl is a frightening example of this.

    And then there are bad actors, who can take advantage of 'smart' devices. Always-connected devices with poor security configurations have allowed hackers to create massive "bot nets" that can swamp websites with fake connection requests and activity, making it almost impossible for real users to connect and use the sites.

    Perhaps that is the real long-term risk of AI: scale. As more systems are automated, and more of those systems are managed by software, and more of those clusters are linked together for efficiency, the ability of one bad actor (or one misconfigured device) to affect larger and larger areas and populations becomes an almost guaranteed crisis. I'm less concerned that AI will decide that humans have to go. I am more concerned that we will do it ourselves, using AI as an unwitting assistant.

  • Anony Mous

    @slimboy: we already have classifiers that are 1000x larger than what GPT can currently do.

    Nuclear war, far deadlier pandemics than Covid, environmental disaster, technological fascist dystopia, economic collapse, AI oblivion … I tend to think they are all much more likely than we give them credit for.

    Nuclear war is overblown, we barely have enough weapons to blow up the top 100 major cities, let alone destroy earth. Nuclear weapons are insanely efficient and clean. The rest is Cold War propaganda.

    COVID was one of the deadliest pandemics we know of, and it was a flash in the pan, which is why China developed it. With any disease you develop, you have to pick two of: deadly, fast, efficient.

    Predictions of environmental disaster have time and again proven false; nature adapts rather quickly. We once thought plastics were going to be the bane of our existence, then we found bacteria, algae and worms that eat the stuff, and now 1/3 of the ocean plastic is ‘missing’.

    Technological fascism or communism has failed. You get stuck on a technology and stop developing, whether that was farming for Mao and Pol Pot or steel and concrete for the Soviets. Fascism kills itself against capitalism because under big government there is no benefit to further invention, so you end up with North Korea or Cuba at best, where time has basically stood still since they became fascist dictatorships.

    Economic collapse is the same problem, communism and attempts at it fail, then capitalism naturally steps in and things flourish.

    Everything will be fine, you may not live to see it, but humanity will survive.

  • Simon

    Sam Harris has demonstrated his ability to be a complete moron on subjects outside his expertise and training.

    His views on Trump and Covid show he's not the smart, reflective person his image projects.

    He's "clever stupid", which is what I call people who are very clever in one area but those smarts don't seem to translate to others. He has knowledge, but not reasoning ability.

    Maybe AI will help him ...

  • Simon
    Quoting slimboyfat: “The problem we have is illustrated by the recent story of an AI that was set a task that involved accessing a website with a captcha. At that point the AI didn’t have “sight” because it didn’t have the ability to visually scan the page (now it does) so it asked online for helpers to answer the captcha claiming to be a “blind” person. People were sympathetic to the situation and the AI got the help it asked for and completed the task. If that’s the sort of thing AI can already do then it boggles the mind what it will be doing when it has 100 or 1000 times the capability it currently has.”

    This sounds impressive but it's really only half the story. They specifically gave it a budget and access to a site where it could hire human workers, and the goal of bypassing the captcha.

    A human asked if it was a robot / why it needed the help, and the excuse it repeated was that it had a vision impairment.

    Did it really come up with that as a convincing reason ... or did it just notice, through pattern matching, that the excuse would work because it had seen it work before?

    See, if you dig down, it's suddenly much less smart and clever and more guided along a path by humans.

    It's not like it decided to go out into the world of its own accord and start accessing websites, figuring out sneaky ways to manipulate people to get past captcha ... it was told what to do, and just copied how other people had done it.

    There is only pattern matching, there is no AI. The only people who want to convince you that it exists are the people with AI to sell or AI careers to boost.
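    A toy model makes the “pattern matching” point concrete. This is a deliberately minimal sketch (a bigram model over a made-up corpus), nothing like a real language model, but it shows the basic mechanic of emitting whatever most often followed before:

```python
# Toy "pattern matching" text generation: a bigram model that only ever
# emits the word it most often saw following the current one.
# The corpus is made up for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the most frequent successor seen in training, or None."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" - the most common continuation of "the"
```

    There is no understanding anywhere in this loop, only counts of what came next before; scaled up enormously, that is the mechanic being debated.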

  • slimboyfat

    Yes it’s very stupid, and that’s not the point.

    Chess computers are stupid too. They don’t realise they’re playing chess. In fact they don’t realise anything; they are just machines. All they are doing is using massive computing power to predict the best next move. In a sense it’s not really “playing chess” at all, it’s just mimicking somebody playing chess.

    But that is the point. Even though it doesn’t know what it’s doing, just by calculating the next move, and replicating what a chess player would do, it can now win at chess against any human, every time.
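    The “calculating the best next move” mechanic can be sketched in a few lines. This is a toy negamax search over a trivial made-up game (one-pile Nim: take 1 to 3 stones, taking the last stone wins), not real chess, but the principle is the same: brute-force lookahead with no understanding of the game at all:

```python
# Toy negamax search: pick the move whose lookahead value is best,
# assuming the opponent does the same. The game here is one-pile Nim,
# a stand-in for chess chosen purely for brevity.
def negamax(state, depth, moves, apply_move, score):
    """Return (best_value, best_move) from the current player's view."""
    options = moves(state)
    if depth == 0 or not options:
        return score(state), None
    best_val, best_move = float("-inf"), None
    for m in options:
        val, _ = negamax(apply_move(state, m), depth - 1,
                         moves, apply_move, score)
        val = -val  # the opponent's gain is our loss
        if val > best_val:
            best_val, best_move = val, m
    return best_val, best_move

# One-pile Nim: take 1-3 stones; whoever takes the last stone wins.
moves = lambda n: [m for m in (1, 2, 3) if m <= n]
apply_move = lambda n, m: n - m
score = lambda n: -1 if n == 0 else 0  # no stones left: player to move lost

val, move = negamax(5, 10, moves, apply_move, score)
print(val, move)  # 1 1 - from 5 stones, taking 1 is a forced win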

    The same goes for the AI that pretended to be blind to get the information to enter the captcha. It’s fair to point out that it didn’t really know what it was doing, it just mimicked what it calculated to be a successful move in that situation to reach its goal. But that really is a red herring.

    In terms of outcome, it doesn’t matter if the AI knows what it’s doing, or not. Whether the computer is playing chess, or mimicking someone playing chess, is immaterial in terms of the outcome of the game. The same with the AI that ‘pretended to be blind’. Whether it had any ‘intention’, or not (I don’t think AI has intentions, or thoughts, of any kind, or ever will) isn’t the point. The point is that it found a way to reach its goal through calculating the next move.

    When AI becomes better at calculating the next move than us in any given situation is when it becomes dangerous, because it will be able to pursue a goal whether we want it to or not. It will ‘know’ (in the sense that it will be able to calculate) the best next move to achieve its goal whether humans are in the way or not. If we do get in the way of it achieving its goal we are in danger, not because it has any malice for us, or any feelings at all for that matter, but just because it is better at manipulating the world to achieve its goal and we are in danger of being crushed by it.

    Eliezer Yudkowsky explains the consequences and likelihood of this scenario, much better than I can, in this interview.

    https://youtu.be/fZlZQCTqIEo

  • MeanMrMustard

    Go into ChatGPT and ask it something that requires it to calculate to the end before it knows whether the answer is correct - like a comprehensive list of all five-letter English words that start with "T", end with "E", and contain "I". It lists a bunch, but here are the first 10:

    1. Tawie
    2. Teind
    3. Tempi
    4. Tenia
    5. Terai
    6. Thine
    7. Tilde
    8. Tiled
    9. Timer
    10. Times
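    The constraint check itself is trivial to do deterministically. A quick sketch in Python, running the model’s own ten answers through a one-line filter, shows how few actually satisfy all three conditions:

```python
# Check the model's ten answers against the stated constraints:
# five letters, starts with "T", ends with "E", contains "I".
answers = ["tawie", "teind", "tempi", "tenia", "terai",
           "thine", "tilde", "tiled", "timer", "times"]

valid = [w for w in answers
         if len(w) == 5
         and w.startswith("t")
         and w.endswith("e")
         and "i" in w]

print(valid)  # ['tawie', 'thine', 'tilde'] - only 3 of the 10 pass
```

    Seven of the ten listed words fail a constraint the model was explicitly given, which is the gap between generating plausible-looking text and verifying it.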
