Is AI going to change the world?

by Reasonfirst · 54 Replies

  • slimboyfat

    Geoffrey Hinton on how AI differs from human intelligence and why it is a concern.

    https://youtu.be/-9cW4Gcn5WY

  • slimboyfat

    My own experience of Midjourney is that it has improved over the past few months. At first it made a terrible mess of human hands, but the latest images I have asked it to produce include pretty good drawings of hands. The fuzzy details are beginning to get clearer, and the images are more and more accurate and usable. As it is, the images that Midjourney and other programmes produce are usable and are already replacing the labour of artists.

    Take this image from a few weeks ago. I asked for a depiction of Joseph teaching Jesus how to be a carpenter. This was one of the results. The hands look pretty good.


  • TD

    I do understand what you are saying, Slim.

    The manner in which a CPU actually performs mathematical functions is not even remotely similar to how a human would do them, nor within the realm of what would occur to the average person. (Everything is derived from simple binary addition and logical operators.)
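    To illustrate that parenthetical point, here is a toy sketch (mine, not from the thread, and not how any real ALU is wired) of multiplication reduced to shifts, logical operators, and addition, with addition itself built from XOR and AND:

```python
def add(x: int, y: int) -> int:
    """Add two non-negative integers using only XOR and AND (ripple carry)."""
    while y:
        carry = (x & y) << 1  # bit positions that generate a carry
        x = x ^ y             # sum of the bits, ignoring carries
        y = carry             # feed the carries back in
    return x

def multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers by shift-and-add."""
    result = 0
    while b:
        if b & 1:                    # is the lowest bit of b set?
            result = add(result, a)  # accumulate this partial product
        a <<= 1                      # shift left: next partial product
        b >>= 1                      # shift right: next bit of b
    return result

print(multiply(13, 11))  # 143
```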

    But if the computational ability of a computer completely eclipses that of a human being (And there's absolutely no question on that point) then it really doesn't matter what's going on inside a CPU as long as it is giving correct answers.

    As you said, "..it really does not matter that AI is stupid in how it goes about producing its outcome from our perspective. What matters are the results it produces."

    Or as Jeffrey Hawkins, himself a neuroscientist and AI researcher, observed:

    "...most people aren’t trying to replicate the brain. It’s just whatever works, works. And today’s neural networks are working well enough."

    But as Hawkins goes on to point out, that approach has made AI utterly dependent upon human knowledge (And by extension human error) on a wide variety of topics.

    Excerpts from a session with ChatGPT:

    [ChatGPT screenshots omitted; the captions summarise each exchange:]

    - Incorrect.
    - Incorrect: a song with the same title by a different artist.
    - Not surprising, I guess.
    - Swing and a miss. (A common engineering/mathematical term.)
    - Still no answer, other than 'splaining to me in my own field.

    The AI is simply regurgitating the tribal wisdom of the internet.

  • slimboyfat

    I got a lot of wrong answers from ChatGPT too.

    https://www.jehovahs-witness.com/topic/6003458855927808/frustrating-discussion-chargpt

    1. Do you think AI gave better or worse answers to your questions than the average human? The percentage of humanity who could give the correct answer to the first question for example must be exceedingly tiny. The fact that we are testing it out on obscure pieces of knowledge already is testament to how far it has come. AI is already better than the average human in general, and better than specialists in some particular areas. That already makes it useful before the weak points are improved.

    2. Have you tried the same questions on GPT-4? Do you think you would get better answers? Because the ongoing improvements are substantial. The “hands” problem in early AI image generation has already largely been fixed, for example. (See above.) GPT-2 apparently spat out nonsense most of the time. If the trajectory of improvement continues, then most of the issues you highlight may be resolved relatively quickly. People have also reported substantial improvements in output from adjusting the prompt slightly. For example, you could try prefacing your maths question with the statement: “answer the following question as if you are a mathematics professor”. It sounds stupid, but it makes a difference. (A sketch of how such an instruction can be passed to the model follows this list.)

    3. I think it’s fundamentally a mistake to assume that an AI that makes errors is neither useful nor a threat. AI could still be issuing “wrong” answers to basic questions right up to the day it devises a new way of killing all humans on the planet in short order. It is apparently already very good at suggesting new pathogens, toxins, other weapons, and methods of delivery. Even if it’s singing the wrong lyrics to “Eve of Destruction” while doing it, we could all still be dead.
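    On the prompting point in item 2, a minimal sketch of passing such an instruction as a system message via the openai Python package (the model name and question are illustrative, and an API key is assumed to be configured):

```python
# A minimal sketch of role prompting with the openai package (v1+).
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "Answer the following question as if you are a mathematics professor."},
        {"role": "user",
         "content": "Can you explain how bulge is calculated in mathematics?"},
    ],
)
print(response.choices[0].message.content)
```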

  • smiddy3

    I caught a glimpse of something on the TV news this morning about research being done on merging human brain cells with AI technology.

    If that's not scary, I don't know what is. Should humans be worried?

    I think so, though it will be after I have left the planet.

  • Reasonfirst

    I'm not deeply interested in AI. (I'm much more interested in watching the rise of Asia, and, apropos, the fascinating rise of what may be the 9th world power; I wonder what spin our former loving bros. will put on that.) But a post on ScienceDaily suggests that AI, just like me, can forget what it previously knew.

    https://www.sciencedaily.com/releases/2023/07/230720124956.htm

  • slimboyfat

    TD I put your question “Can you explain how bulge is calculated in mathematics?” to the new AI called Aomni Agent. First it breaks down the question into different components, then it lists relevant sources, then it browses the sources, then it gives its report in the form of “key takeaways” followed by a more detailed explanation. You can read the result here:

    https://aomni.com/research/d1e68f94-05f0-4ee0-b094-154714687e50

    Is this better than ChatGPT?

  • TD

    Slim,

    Much better.

    The key takeaways are not wrong, but they have the "ring" of someone who quickly scoured the internet on a subject they're unfamiliar with and now want to speak authoritatively. Oddly enough, that makes it sound more and not less "human." (That's not something we see on the internet at all - LOL)

    (Bulge is the ratio of the sagitta to half the length of the chord. As a ratio, like Pi for example, it is not dependent upon the size of the object, which makes it useful for curves that must be scaled.)
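    A minimal sketch of that ratio (the function and example values are mine, for illustration):

```python
# Bulge of a circular arc: sagitta divided by half the chord length.
# Being a ratio, it is scale-invariant, which is what makes it useful
# for curves that must be scaled.

def bulge(sagitta: float, chord: float) -> float:
    return sagitta / (chord / 2.0)

# A semicircle of radius r has sagitta r and chord 2r,
# so its bulge is 1 no matter the size:
print(bulge(1.0, 2.0))   # 1.0
print(bulge(5.0, 10.0))  # 1.0
```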

  • slimboyfat

    Thanks TD. It certainly looked like an improvement, but not knowing anything about “bulge” I couldn’t judge. These models are only going to get better, and there also seems to be a knack to using them. For example, quirky as it may seem, many report that asking the AI to identify mistakes in its own answers will significantly improve the resulting output.

    Thinking further about your example of driverless cars being able to handle the road but being flummoxed by the hand gestures of other drivers, a few things occur to me. First of all, if this is presented as an example of a domain that is ultimately indecipherable to AI, then I think that is very unlikely. A few years ago there was real debate about whether AI would ever be able to distinguish a cat from a dog. Some said the task was simply too complicated and AI would not manage it for many decades to come, if ever. Yet that barrier was overcome very swiftly, and now AI can not only distinguish cats from dogs, it can identify many thousands of species that no single human can identify. It can distinguish human individuals from their faces or parts of their faces, and much, much more, far beyond the capability of humans. In a few short years we’ve gone from AI that can’t tell a cat from a dog to AI that is much better at identifying and categorising objects in general from visual input than any human.
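    As an illustration of how routine that kind of visual categorisation has become, here is a minimal sketch using a pretrained torchvision model (the image path is hypothetical):

```python
# Off-the-shelf image classification with torchvision (>= 0.13).
# "pet.jpg" is a hypothetical input image.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()  # pretrained on ImageNet
preprocess = weights.transforms()         # the matching input transforms

img = Image.open("pet.jpg")
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))
print(weights.meta["categories"][logits.argmax().item()])  # e.g. "tabby"
```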

    Therefore, the fact that driverless cars can’t seem to understand human hand gestures probably just indicates that the models have not yet been pointed in that direction. If AI is set the task of analysing a large dataset of hand gestures, then it will very rapidly become better than humans at identifying and responding effectively to hand gestures, along with the other physical and visual cues that humans are aware of, such as a car slowing down, speeding up, peeking out, hesitating, and everything else, plus sounds and smells and so on. None of these inputs is inherently beyond the capability of AI to categorise and respond to effectively. It’s just a matter of turning the attention of AI in that direction in the first place.

    Aha, you might say, there you have it! AI doesn’t even know what to look for unless a human first tells it what to do! That’s true to some extent, but the range of abilities and level of abstraction becomes ever wider, so that humans can give more and more general instructions, and the details can be worked out by AI.

    In other words, each time a task is narrowly defined and effectively completed, humans can step outside the defined area of analysis and ask the AI to move up to the next level of abstraction. You begin by instructing AI to deal with specific situations, but the better it becomes, the more general the instructions can be: starting from telling the AI in a driverless car not to collide with other objects, then to use the rules of the road, then to take into consideration the actions and visual cues of other drivers, all the way up to the point where the AI can be asked to identify all the relevant factors impacting safety in moving vehicles and to optimise for all the factors it identifies.

    It can then be asked to analyse its own results recursively in order to identify any safety factors that have been neglected, and then instructed to address the factors it identifies, in a loop, until what you arrive at is a system that copes with all relevant aspects of safety and handles them better than human drivers (a toy sketch of that loop follows). This process will probably be very quick, because gains in knowledge by AI systems can be shared across the network and, once learned, are never forgotten.
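    A toy sketch of that recursive refinement loop (every name here is a hypothetical stand-in, not any real system):

```python
# Hypothetical sketch: repeatedly ask a model to critique itself,
# then fold each neglected factor back in, until no known gaps remain.

class StubModel:
    def __init__(self):
        self.known_gaps = ["hand gestures", "horn sounds", "eye contact"]
        self.handled = []

    def identify_neglected_factors(self):
        # In the real scenario, this would be the AI analysing its own results.
        return list(self.known_gaps)

    def address(self, factor):
        self.known_gaps.remove(factor)
        self.handled.append(factor)

def refine(model, max_rounds=10):
    for _ in range(max_rounds):
        gaps = model.identify_neglected_factors()
        if not gaps:
            break  # nothing left that the model can see to fix
        for factor in gaps:
            model.address(factor)
    return model

m = refine(StubModel())
print(m.handled)  # ['hand gestures', 'horn sounds', 'eye contact']
```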
