AI pioneer Geoffrey Hinton reluctantly concluded that AI will probably kill humanity fairly soon

by slimboyfat 78 Replies

  • slimboyfat

    Okay notsurewheretogo, I will try to listen to that video, but it is worth noting that DeGrasse Tyson has no discernible expertise in AI. Hinton, in the original video, is a leading expert on the topic. That makes a huge difference. The pattern seems to be that the greater expertise a person has in AI, the more worried they are about it.

    DeGrasse Tyson is a public intellectual whom people unfortunately turn to for opinions on all sorts of things, from lab leak theories to vaccine safety and political issues. He will quite happily give his views on any topic. It doesn’t mean he knows what he’s talking about. He is an expert on astronomy and physics and used to concentrate mainly on issues such as whether Pluto is a planet or not. That’s his area of expertise.

    I’m not saying there won’t be any value in the video, but it doesn’t carry anywhere near the same weight (or it shouldn’t) as input from Hinton or other experts in the field of AI.

    Paul Christiano is an expert in AI who says that it probably won’t kill us. He’s a good person to hear for the other side of the argument. But even he thinks the existential risk is real (I think he puts it at about 10%, if I remember correctly). He’s more upbeat than some of his colleagues, however.

    https://youtu.be/GyFkWb903aU

  • notsurewheretogo

    I appreciate his field is not AI, but his points are pretty valid.

    As Cofty said...unplug things seems to be the easy answer.

  • slimboyfat

    Sorry, it might be an “easy” answer but it’s the kind of answer people give if they haven’t actually read or listened to anything about the topic. I’m not surprised if that’s DeGrasse Tyson’s take. It means he hasn’t actually listened to the experts at all.

    Because how exactly do you turn off a machine that is more intelligent than you? And not just a little more intelligent, but ultimately orders of magnitude more intelligent. It could be like trying to beat someone at chess who is 100 or 1,000 times better than you. Good luck with that. Bear in mind that these machines are now being installed in robots that have all the physical capabilities of humans and 100 times the strength or more. Pulling the plug is a serious challenge physically as well as intellectually, to put it mildly. It is in no way an “easy” solution.

    Those seriously attempting to deal with the problem don’t argue that we can retrieve control from a rogue AI. If you ever reach the point where you need to switch the thing off to save yourself, it’s because the AI is superintelligent and already posing a serious threat, by which point it is too late to switch it off. You see the problem?

    Our best hope (according to the experts) is to align it with our goals. But that seems pretty hard too. That’s why they think we have a serious problem on our hands. Glib answers from DeGrasse Tyson about solutions that have already been considered are not terribly helpful.

    I am actually eager to find out whether people have come across realistic answers to the concerns of Hinton and others. That’s why I started the thread. But then you post DeGrasse Tyson giving his glib response, and I think: oh my goodness, we really are doomed lol! 🙂

  • notsurewheretogo

    Jeepers slim....read what you are posting.

  • SydBarrett

    "As Cofty said...unplug things seems to be the easy answer."

    Who decides when things get unplugged? I often think the internet as a whole has been a net negative for society, and I believe a lot of others would agree at least some of the time. I still use it every day.

  • Anony Mous

    @Cofty: who unplugs what exactly?

    First, define what topic we are actually talking about. People here seem to think that AGI and ChatGPT are nearly equivalent; they’re not even in the same realm. ChatGPT is a simple model that can fit in a very small memory footprint. There are models 100-200x larger than GPT, and even 1000x larger is well within the realm of anyone with $100k to spare for the hardware.
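
    To put rough numbers on that, a model’s weight footprint is just parameter count times bytes per parameter. A back-of-envelope sketch in Python (the parameter counts and the 16-bit assumption below are illustrative, not official figures for any particular model):

        # Rough weight-storage estimate. Parameter counts are illustrative
        # assumptions, not published figures for any real model.
        BYTES_PER_PARAM = 2  # assuming 16-bit (fp16) weights

        def weight_memory_gb(num_params: float) -> float:
            """Approximate weight storage in gigabytes."""
            return num_params * BYTES_PER_PARAM / 1e9

        for name, params in [("baseline model", 6e9), ("100x larger", 600e9)]:
            print(f"{name}: ~{weight_memory_gb(params):,.0f} GB of weights")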

    If you make the case that GPT models are ‘evil’ and you need to halt their progression, there are much better algorithms currently in use that are much more useful and powerful.

    I work on “AI” models (advanced self-driving filters, which we call classifiers, not AI) that can do automated path tracing and activation measurements in brains. This allows us to model the movement of blood and the relationships among thousands of individual ‘nodes’ in the brain.
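
    For what it’s worth, a “classifier” in this sense is nothing exotic. A minimal sketch of the idea, using scikit-learn on synthetic feature vectors (made-up stand-ins; the real pipeline is obviously far more involved):

        # Minimal classifier sketch: synthetic feature vectors labelled by a
        # toy rule stand in for real measurements. Illustrative only.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 16))          # 1000 samples, 16 features
        y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy labelling rule

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_train, y_train)
        print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")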

    China is using similar models on a city-wide and perhaps even country-wide scale, in a pipeline with widespread CCTV, to classify individual nodes (aka people), their movements, and potential relationships. London has even more CCTV and uses something similar to classify threats. So the US agrees to put a pause on GPT, which is a simplified model of what we use but applied to language; do you think China will agree? The U.K.? The military won’t have some secret program that continues the research?

    In the end, these AI models are just computer programs; we can disassemble and understand them. We’re just too lazy to actually do this because, for the most part, they are very wasteful. Biological creatures are much better optimized and more well-rounded. I do think we can eventually approximate a simple creature like a dog, but for human-level intelligence and beyond, we first need to understand what that even means before we can build a program that goes in that direction.
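
    At the level of individual weights, at least, “disassembling” a model really is mundane. A sketch with PyTorch, using a toy network since no specific model is in question here:

        # Walk a model's parameters layer by layer (PyTorch). The network is
        # a toy stand-in; the point is that every weight is an ordinary,
        # inspectable number.
        import torch.nn as nn

        model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

        total = 0
        for name, tensor in model.state_dict().items():
            print(f"{name}: shape {tuple(tensor.shape)}")
            total += tensor.numel()
        print(f"total parameters: {total}")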

  • enoughisenough

    I haven't read every word on this topic or watched the videos...just scanned...but one thing I did notice was the mention of robots with AI...I have watched more than one robot say it would kill humans. Sci-fi seems to be pre-emptive programming...if that is any indicator, then AI will become god-like and the robots will kill. We have been steadily fed doses of AI and are getting used to it, thinking it harmless...we have the smart phones, appliances, even the little devices you talk to that will tell you what 2+4 is or turn the lights or TV on and off. In other words, the diet of AI we get every day makes us think it is good.

    I thought I would throw in a scenario just for grins. We hear a lot about needing to reduce the carbon footprint on earth (hence the depopulation of people) and planning for us to eat bugs. What if at some juncture AI decides this is all the smart thing to do, and it has the power to shut down the electric grids, the water pipelines, etc., or push that button on the 5G installations that zap people? I see the potential for it to kill off humanity, but I also believe there is a creator who has his own plan. The Bible does say that if God doesn't step in, all would die. (All this from your resident crackpot under the tin foil hat.)

  • joey jojo
    slimboyfat
    further edit: that’s another point some of these people make. We talk about “AI” as if it’s one thing, when really there are going to be many different AIs, and if they are correct about the risk, it only takes one of them, misaligned with human goals and combined with superintelligence, to make a deadly recipe.

    There are also a lot of everyday programmers who work on machine learning or other AI projects of their own as hobbies. All it takes is a basic understanding of a programming language like Python, which is considered one of the most user-friendly languages for beginners. A couple of months of practice and you can start building your own AI project for world domination.
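
    For example, a complete beginner-level machine-learning script is only a dozen lines of Python. This one uses scikit-learn’s bundled handwritten-digits dataset (hardly world domination, but it is a real, trained model):

        # Beginner ML script: train a classifier on the bundled digits
        # dataset and report accuracy on held-out examples.
        from sklearn.datasets import load_digits
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        digits = load_digits()
        X_train, X_test, y_train, y_test = train_test_split(
            digits.data, digits.target, random_state=0)

        model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
        print(f"accuracy on unseen digits: {model.score(X_test, y_test):.2f}")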

  • slimboyfat

    Jeepers creepers, how’d you get those beepers

    I guess you mean I’m being alarmist. I suppose so. On balance I guess AI probably won’t kill us, but having listened to the arguments, I can’t give any good reason for thinking that. It could just be normalcy bias or recency bias that makes me think it “can’t happen”. But of course disasters happen all the time. Species go extinct all the time, especially since humans took control of the planet. If we destroy ourselves, that will be just one more species to add to the list.

    Our generation and our parents’ generation have been so incredibly lucky in avoiding nuclear war and other catastrophes. I think we have become far too complacent about the risks we face in general. (Steven Pinker is like the extreme version of this - in the future his books might be regarded as hitting something like “peak complacency” for humanity - assuming we survive and can look back on them, of course.)

    Nuclear war, far deadlier pandemics than Covid, environmental disaster, technological fascist dystopia, economic collapse, AI oblivion … I tend to think they are all much more likely than we give them credit for. I don’t think this is overly pessimistic. It is realistic.

    But I am interested in hearing realistic reasons for optimism!

    https://youtu.be/hg7qdowoemo

  • cofty

    I've read Pinker's books and I disagree with the idea that he is complacent. On the contrary, he insists that there is absolutely no reason to assume the gains of the past will not be lost unless we are vigilant.

    He is the perfect antidote to the JW (and SBF) doom-and-gloom mentality.
