Top AI inventor Geoffrey Hinton reluctantly concluded that AI will probably humanity fairly soon

by slimboyfat 78 Replies

  • slimboyfat

    Geoffrey Hinton, major inventor of artificial intelligence:

    “If you take the existential risk seriously, as I now do—I used to think it was way off, but now I think it’s serious, and fairly close—it might be quite sensible to just stop developing these things any further, but I think it’s completely naïve to think that would happen. There’s no way to make that happen. If the US stops then the Chinese won’t.”

    To be absolutely clear, by existential risk he means the death of all humans in the near future, probably years rather than decades, and he says he cannot think of anything we can do about it. I cannot recall a statement like that from anyone who was not a “crank” of some description. He only came to this conclusion himself recently, and reluctantly. He is not a crank, and nobody in a position to know what he’s talking about seems to be calling him one. There are relevant experts who argue that better outcomes are possible, some even say they’re probable, but vanishingly few seem to be ruling out existential risk.

    I wonder, how should we respond to this? Should we take it seriously? What would “taking it seriously” even mean? Should we try to do something about it? Or accept we can’t do anything about it and make the best of the time left? Or just try to forget about it altogether? Does religion have anything relevant to say about this situation?

    https://youtu.be/sitHS6UDMJc

  • cofty

    Don't make me watch 40 minutes of some smart ass telling me 'the end is nigh'. I've had a lifetime of that shit. How will AI kill us all exactly?

  • slimboyfat

    He’s very down to earth and doesn’t seem at all happy about the conclusions he has come to. There’s a lot of humour in the discussion, though it feels like gallows humour. He has a very different vibe from an end-times prophet. He’s an expert who didn’t believe AI posed an existential risk until recent developments convinced him otherwise.

    I can’t explain the reason why AI poses an existential risk in anything like the detail required. From what I have gathered, the argument is that AI is approaching “artificial general intelligence”, which means outperforming humans in all domains of competency, rather than just a few such as chess or reading scan results. The capability of AI is improving rapidly, and when it exceeds human capabilities we will no longer be able to control it. AI will be able to outsmart humans because, in particular, it knows how to be more persuasive (as already seen on social media, where algorithms manipulate humans into behaving in certain ways), and it has vastly greater knowledge to draw upon instantaneously. If we attempt to shut down AI after that point then we are likely to fail, because it can anticipate our actions and outsmart us. When humans are no longer in control, the danger is that AI will pursue its objectives without any regard for human values or welfare.

    It is a complex argument, but experts who understand it seem to be taking it very seriously now, even those who argue there are ways for humans to survive. Few seem to be ruling out existential risk anymore. If they’re right about that risk, then we’ll probably hear a lot more about it in the coming months, and we will all become familiar with the reasons why it is a threat, unless it all happens so fast we don’t have time to reflect on it.

  • Anony Mous

    Plenty of experts on climate, oil and other things have been spelling doom for literally centuries now. I work adjacent to the AI field myself and current models are nowhere close. We’ve basically got a better predictive-text algorithm - interesting, yes, but it’s nowhere near thinking for itself.

    As with everything else, Bitcoin, AI etc will have a measured impact, like the Internet did in its day, but we’re still decades away. Right now everyone is on pre-dial up with AI, where you have to dial yourself and at best you get a 14k connection, and you pay by the minute. We still have to get to the era of 56k, AOL, DSL and Cable before we can even think of it as moderately useful commercially for a day to day function.

    Once I can ask it a question and get an answer that neither I nor anyone else has thought of, then we should start worrying. I have AI enabled in my code generation; it is useful for very small things where you basically wished you had an intern, and like the intern, more often than not it doesn’t do what you want.

    Now, 20, 30, 50 years from now, we may get to a form of AGI in a functional body. A semi-useful multi-function robot, if you’re rich enough to buy one. But for that to happen, we first must find out what it is that makes something intelligent or sentient. And I am very suspicious of anyone who declares they know the answer to that question with enough certainty to make a statement like that.

  • nicolaou
    cofty: How will AI kill us all exactly?

    I don't think the fear is that AI will reach sentience and seek to annihilate us in some Schwarzeneggeresque apocalypse. As I understand it the worry is over the potential for misinformation. Imagine a future where anyone with a bad motive can easily utilise AI to produce the narrative and results they want.

    You think we've got it bad now with conspiracy nuts and anti-science ideologies? Wait until these guys have the tools to prove they are right.

  • slimboyfat

    Anony Mous, but that’s why it sounded so different from the climate change threat - because those warnings are often couched in terms of temperature rises and sea level rises by the end of the century. Even the worst predictions for the climate tend to envisage that some humans will survive. Plus they say there are things we can do about it - mainly stop burning fossil fuels.

    The threat Geoffrey Hinton is talking about is much worse, and much nearer than that. He’s talking about catastrophe within years, not decades, and the possibility of every single human being killed.

    I’ve not heard any warning like that before from anyone who is not considered a crank. Others who are in a position to understand what he is talking about apparently share his concerns. Even those who don’t think it will happen tend to agree it’s possible he’s right about AI killing us all.

    nicolaou, no. Geoffrey Hinton says in the interview that the problem of misinformation is a separate, and much smaller, AI risk than the one he is talking about. He is literally saying AI could kill all humans if it pursues goals that are not aligned with our own. At the moment nobody knows how to ensure AI goals don’t diverge from human goals, and if they do diverge, and AI is smarter than us, then it’s very difficult to see what could possibly stop it.

    It’s been likened to Pandora’s box from Greek mythology: once AI is out of the box there is no shutting it down and there is no controlling it.

    I’m not saying this will definitely happen. But this is a serious guy who knows what he’s talking about, and many of his colleagues agree with him. That makes what he’s got to say worth listening to, at least.

  • SydBarrett

    "As I understand it the worry is over the potential for misinformation. Imagine a future where anyone with a bad motive can easily utilise AI to produce the narrative and results they want."


    The part that worries me is that a lot of the population feeds, clothes and houses themselves working what are basically bullshit jobs. Busy work. Low-level office management, tech support, customer service, etc. If AI has the near-term potential that some describe, I can see vast numbers of these jobs eliminated. And in the U.S. at least, we don't have (and many don't believe in providing) much of a social safety net. It would not be the first time technology has eliminated jobs, but I'm worried it might be so many all at once that it could be pretty nasty in the short to medium term.



  • LV101

    I'll view link/thx for share.

    Re/depopulation agenda: RFK, Jr. has warned that the WEF and Bill Gates are using "Climate Change" to depopulate the planet under the guise of "saving the planet."

    Per Dr. Michael Yeadon (former VP, Pfizer - forwarded from Stew Peters), he has evidence re/the global elites' plan to kill billions via global migration (check out twitter @Michael_Yon; LTG (R) Mike Flynn, twitter.com). "We better get ready for a summer of disease here in the United States... it's coming." Referencing Panama, which is becoming destabilized - Chinese (and others) flowing in on lined-up buses on Hwy 1, with drug-resistant Tuberculosis and Malaria. The CCP has weaponized Dengue - increasingly serious in the Darién Province - along with drones to dispense mosquitos.

    Per Dr. Yeadon, the plan is via global migration, the smashing of food/fertilizer production and manufacturing/supply chains, and disrupted shipping (all deliberate). No ability to keep 7.8 billion alive, resulting in mass starvation, war/global migration, and economic destruction.

    Did not check link (sorry) but did link from text messages received. Can't be difficult to locate twitter info.

  • Disillusioned JW

    LV101, go ahead and fantasize all you want about the conspiracy claims you mentioned. But as for me, I don't believe those particular conspiracy theories at all.

  • Disillusioned JW

    slimboyfat your title for this topic seems to be missing a word in between the words "probably" and "humanity". Did you intend your title to say "... probably kill humanity ..." or ".... probably destroy humanity ..."? If not, what did you intend to say in between the words "probably" and "humanity"?

    It is possible that AI could eventually kill many humans within 10 years. It is also possible that some human (or group of humans) might specifically program AI to do so (such as for militaristic or terrorism purposes). The human (or humans) might do so on a targeted basis, and later lose control of the AI. That would especially be the case if the AI has a computer virus component enabling it to rapidly replicate, mutate, and spread by evolution (though a non-biological evolution) under the mechanism of natural selection.
