Top AI inventor Geoffrey Hinton reluctantly concluded that AI will probably kill humanity fairly soon

by slimboyfat 78 Replies latest social current

  • SydBarrett

    "Nuclear war is overblown, we barely have enough weapons to blow up the top 100 major cities"

    How do you figure that? No, a nuclear war won't "destroy the earth" as in sterilize it, but a few billion dead followed by mass starvation among those left would be pretty apocalyptic, not "overblown".

    Number of warheads by country:

    USA: 3,750

    Russia: 5,977

    China: 350

    UK: 260

    France: 290

    India: 160

    Pakistan: 165

    Israel: not known but estimated at around 90

    I agree that it's difficult to imagine a scenario where all the nuclear powers are launching everything they have at each other all at once, but if it did happen, it would mess up a lot more than 100 cities.

  • Simon
The same goes for the AI that pretended to be blind to get a human to solve the captcha for it. It’s fair enough to point out that it didn’t really know what it was doing; it just mimicked what it calculated to be a successful move in that situation to reach its goal. But that really is a red herring.

    I get your point about Chess computers, but that IS the point though - it's not really AI. There is no intelligence, no reasoning - it can't put 2 + 2 together and get 4 (for real, you can convince it that 5 is the right answer). It can't deduce or combine information unless someone else has already done that.

    It's a great tool, it is a step up from spell checkers and grammar checkers when writing articles, but it can't write anything truly original.

    Plus, it can be hugely wrong but supremely confident. That is where it's dangerous - when people use it for something they don't have expertise in, and trust what it tells them.

  • slimboyfat
    "It can't deduce or combine information unless someone else has already done that."

    Then what is it doing when it beats humans at chess if not discovering new ways of doing things that humans have not thought of?

    Or what is it doing when it solves problems humans have been working on for years?

    When it discovers a new way to kill all humans too, I guess we can say it isn’t anything “new” that it’s discovered, just a recombination of previously known data. It won’t be much comfort when we’re dead.

  • TonusOH

    That's a good question. I think that chess computers win by 'studying' thousands, possibly millions, of matches, including many classic moves and opening strategies. The one thing a computer does better than a human brain is calculate large and complex probabilities in nanoseconds. They are hyper-fast pattern-recognition engines.
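That "pattern recognition from studied matches" idea can be sketched in a few lines of Python: pick whichever reply was played most often from a given position in past games. Everything here is a toy illustration (the function name, the `(position, reply)` pair format, and the sample data are all made up), but it shows frequency-counting with no reasoning at all:

```python
from collections import Counter

def best_reply(history, position):
    """Return the reply most often played from `position` in past games.

    `history` is a hypothetical database of (position, reply) pairs
    mined from recorded matches; pure pattern matching, no reasoning.
    """
    counts = Counter(reply for pos, reply in history if pos == position)
    return counts.most_common(1)[0][0] if counts else None

# Tiny made-up "database" of opening moves and replies.
games = [("e4", "e5"), ("e4", "c5"), ("e4", "e5"), ("d4", "d5")]
print(best_reply(games, "e4"))  # the most frequently seen reply
```

A strategy the database has never seen, like the one in the Go story below, simply returns nothing useful, which is exactly the failure mode being described.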

    I mentioned elsewhere an article about a Go master who beat a champion-level computer in 14 of 15 matches by using a strategy that was so simple and basic that no player worth his salt would ever use it. To a human, the strategy was both obvious and blatant, and very easy to counter once you recognized it. But the computer had no matches to study, and struggled to find a way to counter it.

    Now, I think that this is partly lazy programming. And experiences like that can teach AI programmers how to better prepare an AI bot to learn and improve. And that might be the real random factor here: we don't know how well any AI has been programmed until we have enough experience working with it to see what it does. Which is probably a scary prospect once we start giving these AI bots important jobs to do.

  • Anony Mous

    @SydBarrett, first of all, what is the yield of those weapons? Almost all of them sit on top of a rocket, and those carry on average 1 KT, barely enough, according to one government document:

    This magnitude of detonation is not large enough to destroy a city, but large enough to destroy a large building and much of a city block.

    And currently the US and Russia each keep about 1,000 of these smaller warheads ready for deployment; the rest is stockpiled and could not be used on short notice. Some are even referred to as nuclear artillery, with even smaller yields.

    Strategic nukes, the largest ones commonly in use, destroy about 4-10 city blocks, although the largest ones would require special bombers, such as the ones that destroyed parts of Nagasaki and Hiroshima being strapped UNDER a B52. Even during WW2, the US never thought their missions against Japan would succeed; the bombers were so exposed and had to fly so low due to the weight that they thought for sure they would be shot down.

    Here are the effect distances of the largest atomic bomb ever built (Tsar Bomba). Note that most strategic nukes are not even 10% of this, and tactical nukes not even 1%; for the most common nukes available today, we're measuring damage in feet, not miles.

    Effect distances for a 100 megaton surface burst:

    Radiation radius: 4.34 mi
    500 rem ionizing radiation dose; likely fatal, in about 1 month; 15% of survivors will eventually die of cancer as a result of exposure.
    Fireball radius: 4.92 mi
    Maximum size of the nuclear fireball; relevance to damage on the ground depends on the height of detonation. If it touches the ground, the amount of radioactive fallout is significantly increased. Anything inside the fireball is effectively vaporized.
    Heavy blast damage radius (20 psi): 6.28 mi
    At 20 psi overpressure, heavily built concrete buildings are severely damaged or demolished; fatalities approach 100%. Often used as a benchmark for heavy damage in cities.
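One caveat on scaling those numbers down: blast radii follow the standard cube-root rule, so a weapon with 1% of the yield still has about 22% of the blast radius, not 1%. A minimal Python sketch, using the 6.28 mi / 100 Mt (100,000 kt) heavy-blast figure from the list above as the baseline (the function name and the 330 kt comparison yield are just illustrative choices):

```python
def scaled_radius(base_radius_mi, base_yield_kt, new_yield_kt):
    """Blast radius scales roughly with the cube root of yield."""
    return base_radius_mi * (new_yield_kt / base_yield_kt) ** (1 / 3)

# Baseline: 20 psi heavy-blast radius of 6.28 mi for a 100,000 kt burst.
r_330kt = scaled_radius(6.28, 100_000, 330)  # a typical strategic warhead
r_1kt = scaled_radius(6.28, 100_000, 1)      # a small tactical yield
print(f"330 kt: {r_330kt:.2f} mi, 1 kt: {r_1kt:.2f} mi")
```

Under this rule a 1 kt burst scales the 6.28 mi radius down to roughly 0.14 mi, a few hundred feet, while a 330 kt strategic warhead still reaches close to a mile of heavy blast damage.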
  • slimboyfat

    Either Geoffrey Hinton or Eliezer Yudkowsky explains how the chess computer works in one of the videos, I can’t remember which, but it was interesting because it wasn’t what I had thought. Apparently there are three separate processes involved, roughly something like: 1) work out the most likely next move, 2) work out all the possible moves, 3) work out all the responses to all the possible moves. Then the actual move chosen is some complex combination of those different processes. Or something like that, again they explain it better.
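For what it's worth, that three-part description loosely matches how engines in the AlphaZero family are usually described: a policy network proposes likely moves, a tree search expands moves and counter-moves, and a value estimate scores the resulting positions. The search part can be sketched with plain minimax over a toy game (the function signature and the tiny game tree are illustrative, not any real engine's internals):

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Exhaustively explore moves and counter-moves to a fixed depth.

    `moves`, `apply_move`, and `evaluate` are hypothetical callbacks
    supplied by whatever game is being searched.
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate)
              for m in legal]
    return max(scores) if maximizing else min(scores)

# Toy two-ply game: a made-up tree of positions with leaf scores.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaves = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
best = minimax("root", 2, True,
               moves=lambda s: tree.get(s, []),
               apply_move=lambda s, m: m,
               evaluate=lambda s: leaves.get(s, 0))
print(best)  # best score the first player can guarantee
```

Real engines replace the exhaustive loop with guided sampling and the leaf scores with a learned evaluation, but the "moves, responses, combine" shape is the same.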

    It’s completely correct to say that all these AI machines are really doing is processing huge amounts of data, and using that as a predictive tool to simulate outcomes. In that sense it is “stupid”. It doesn’t even know what it’s doing.

    But what people don’t seem to appreciate is 1) how powerful that ‘dumb’ process becomes at scale, 2) the fact that everything, including human actions, can ultimately be reduced to predictable probabilities, and 3) that the emergent effects of the process can appear to simulate intelligence.

    The fact that it isn’t technically an intelligent thinking entity is neither here nor there, it’s the effect that counts.

    Today the big news is that AI can now predict human emotions and responses better than humans can. The implications of that are simply mind-boggling. Here we have been struggling to understand, predict and react to one another for millennia, with variable results, and here comes along a machine that can do it better than any of us. No doubt marketers and advertisers will be racing to use the new tool first. But in the long term I don’t see any other outcome than the demise of humanity altogether.

    That’s not even mentioning the claim I read this morning that AI can ‘read thoughts’ from brain scans. That’s just insane, if true. The world is mutating faster than we can understand what is even happening.

    Mr Mustard is completely right about AI not currently being able to anticipate the end of its response from the beginning. We can chalk that down on the ‘AI is stupid’ tally, no doubt about it. It doesn’t ‘think’ the way we do, despite its superhuman capabilities in other areas. That’s why it’s surprising to us what elementary thought processes currently seem to be beyond its ability. Maybe it will never master knowing the end of a story from the beginning, or perhaps it will figure it out by the end of this week. I don’t know. I don’t see that it really matters because the things that it can do are astonishing either way.

  • resolute Bandicoot

    @ Anony mous - The heavy bombers used over Japan were B17's and B29's, B52's came along much later

  • SydBarrett

    "@ Anony mous - The heavy bombers used over Japan were B17's and B29's, B52's came along much later"

    There's so much wrong with his post... where to begin. The Polaris and Minuteman III each contain 3-5 W78 MIRV warheads, each with a yield of 335-350 kilotons. Each city hit gets a shotgun effect of 3-5 strikes spaced out around the city.

    I don't know where he got the idea that most only contain a 1-kiloton warhead. The only weapon I'm aware of that was so small was the 'Davy Crockett'. It was basically a tactical artillery piece, presumably to be used against advancing Soviet soldiers in the Cold War. It was taken out of service in 1971. I'm not claiming they no longer exist just because they were removed from active service.

    Yes, the bombs used on Hiroshima and Nagasaki were big and heavy and required bombers. But they got much better at building the things, and by the '70s, 3-5 could be carried atop a Minuteman missile, each with 20-25 times the yield of Hiroshima. Here's a pic for scale:

    And here is a map showing the damage of a 330 kiloton airburst over midtown Manhattan. He may be correct that the area completely vaporized would be a few city blocks, but the gray circle shows most residential buildings collapsing from Harlem all the way to lower Manhattan. The final orange circle shows 3rd degree burns for anyone caught in the open, extending into Brooklyn and New Jersey. Perhaps he has a much more pedantic version of what 'destroying a city' means... but anyway, this has gone way off subject from the original post.

  • slimboyfat

    Well I’ve not found much in the way of disproving the threat of AI so far. It’s not cranks or doom-mongers who are raising the alarm, but the experts who work on it and understand AI best. Geoffrey Hinton says he disbelieved AI disaster scenarios during nearly 50 years of research, and only changed his mind within the past year when he realised the new models were advancing far quicker than he anticipated. Steven Pinker presented a shallow and ill-informed overview of the AI topic in his book Rationality.

  • resolute Bandicoot

    And what is wrong with my post Syd?

