Is AI going to change the world?

by Reasonfirst 54 Replies latest jw friends

  • TD
    TD

    AI is still stumped by even the most basic human interactions.

    Even the simple act of a driver, who legally has the right of way, yielding it to you via a hand gesture (Say for example, traffic has backed up to the point where you are unable to exit a parking lot) completely stumps the most advanced, AI powered driverless vehicle.

    Human writers, both male and female, struggle to create believable characters of the opposite gender. There are entire books on the subject, because the average person is simply not aware of the nuances. Sorry to be a pessimist, but I don't see AI reaching this level of sophistication anytime soon.

  • Anony Mous
    Anony Mous

    Current LLMs can approximate a very low level of human interaction for a very brief period of time. The problem is that they have nothing (e.g. morality or experience, which are effectively feedback loops for what is acceptable) to anchor themselves on; they literally just throw a ton of shit at the wall to see what sticks, and unless properly guided, they will just spew out more and more random stuff.

    Until we can define and encode what it is that makes up intelligence and morality, we have nothing to fear from a self-aware AI. People will use AI to improve their effectiveness in many avenues, including killing other people, but that is just people doing what people do. As long as your gun isn’t controlled by someone else (e.g. smart triggers), you will be able to defend yourself.

  • slimboyfat
    slimboyfat

    Saying that AI will not learn how to cope with the social aspects of driving on the basis of current driverless cars is a bit like saying planes will never cross the Atlantic on the basis of the early flights by Orville and Wilbur Wright.

    We are at early days and the progress is phenomenally fast. Experts such as Geoffrey Hinton are saying the latest AI models are doing things they didn’t expect to see for decades. My own experience of ChatGPT is that it is both astonishingly brilliant and fantastically stupid at the same time. It’s been likened to the best-read 10 year old there has ever been, who still doesn’t understand basic aspects of the real world. But for how much longer? Nobody is saying ChatGPT will change the world, or endanger it; it’s the trajectory of the progress which is the amazing and/or alarming thing. And GPT-4 is already here.

    AI is now better at diagnosing patients than human doctors. And a recent study shows AI interactions are perceived as more empathetic than human doctors too.

    https://www.forbes.com/sites/paulhsieh/2023/06/27/when-the-ai-is-more-compassionate-than-the-doctor/?sh=7239d8ea102a

  • FFGhost
    FFGhost
    Human writers, both male and female struggle to create believable characters of the opposite gender.

    I thought it was pretty easy:

    https://www.youtube.com/watch?v=pBz0BTb83H8

  • a watcher
    a watcher

    Yes, and not in a good way.

  • TD
    TD

    I think there's an important difference between the infancy of flight and the infancy of what we're calling AI.

    Unlike many of their predecessors, the Wright brothers were not simply mimicking the actions of birds with little to no understanding of power-to-weight ratios. They were truly, actually flying -- not exactly like birds do, but via a mechanical application of the same principles.

    AI, as we use the term today, is a mimicry of the human mind via clever algorithms and vast repositories of facts, but it is not truly intelligent in the sense that humans are. Unless and until we actually understand how the human mind works, we are not likely to be able to build a machine that works along similar principles.

    It's not that I'm not impressed with what's been accomplished so far. I was enthusiastic enough to participate in a test program for what is still the most advanced driverless system around. My firsthand observation is that it's not enough to be able to interpret the intentions of another driver. In the example I gave, the AI must possess the judgment to know when a traffic law can be broken, and why.

  • Simon
    Simon

    Tricky, because there isn't any real AI yet. It's clever pattern matching, and it can create the illusion that it's doing something clever (it is) but it isn't intelligence.

    As an example: Generative AI art is improving all the time. Models are bigger, training is more involved, and PCs get more powerful. But it still doesn't really know what it's drawing. It can create amazing images, but it doesn't know how a hand holds an umbrella, which is why that same hand may sometimes be holding something behind it.

    It doesn't really have any knowledge or intelligence of how the world is made up, and that the image is a visualization of that world. It just knows that certain patterns match certain words and go together a certain way. Well "knows" is too strong ... it has weights for them.
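    That "weights, not knowledge" point can be sketched in a few lines. This is a toy illustration, not how any real image model is built: the word list, pattern names, and every number here are invented for the example. The point is only that "umbrella" and "hand" are nothing but scores attached to visual patterns, with no model of how a hand physically grips an umbrella.

    ```python
    # Toy sketch: a model's "knowledge" of words reduced to learned weights.
    # All words, patterns, and numbers are made up for illustration.
    import math

    # Hypothetical weights: how strongly each prompt word activates each pattern.
    weights = {
        "umbrella": {"curved_handle": 2.0, "canopy": 3.0, "paw": -1.0},
        "hand":     {"curved_handle": 1.5, "fingers": 2.5, "canopy": -0.5},
    }

    def pattern_scores(prompt_words):
        """Sum the weight each prompt word contributes to each pattern."""
        scores = {}
        for word in prompt_words:
            for pattern, w in weights.get(word, {}).items():
                scores[pattern] = scores.get(pattern, 0.0) + w
        return scores

    def softmax(scores):
        """Turn raw scores into a probability-like distribution."""
        exps = {k: math.exp(v) for k, v in scores.items()}
        total = sum(exps.values())
        return {k: v / total for k, v in exps.items()}

    probs = softmax(pattern_scores(["umbrella", "hand"]))
    # The "drawing" is just whichever patterns score highest -- nothing here
    # understands that a handle must sit inside the fingers.
    best = max(probs, key=probs.get)
    ```

    The hand-holding-something-behind-it failure falls straight out of this: the weights say "hand" and "holding" go together, but nothing enforces where.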

    So let me know when real AI happens. Currently AI is in the "blockchain" phase - if you want to get funding, you say you're doing things with "AI".

    None of this means jobs won't be impacted: the price of art has effectively just fallen to $0. Not only will this impact artists (digital artists more, the price for genuine real-life authentic art will probably go up), it will also impact art licensing companies too who live off them. Photographers and models ... why go to all that expense when AI can create the most beautiful woman imaginable and you can create versions for different markets with different ethnicities?

    So there will inevitably be losers. But then there will be winners - companies will be able to produce products based on art without having to pay reproduction fees, so in theory that should be deflationary for prices (or else those companies will become more profitable).

    It will of course also impact normal people's work, but mostly as a productivity boost. Just as spell checkers and grammar checkers allowed people to get more done faster, and to focus on higher level things, AI will be another tool. Of course when people are more productive, you tend not to need as many of them ...

  • TonusOH
    TonusOH

    This could work out for certain creative people, in that there are plenty of options for building a support structure that is uniquely yours. If my art or animations or writing generate interest and I can build a following, that can provide an artist/writer with income even when their work appears to be drowned out by AI-generated spam. Even if your work gets incorporated into the algorithms somehow, you still have a way to monetize your work and grow your following.

    Meanwhile, the people and companies generating massive amounts of poor-quality content that all begins to look the same are only able to generate very small amounts of income, since the money is split so many ways and begins to dry up as potential buyers decide that it's not worth it to pay repeatedly for the same content. So perhaps this is a problem that will, for a short time anyway, fix itself.

  • TonusOH
    TonusOH

    TD: Human writers, both male and female struggle to create believable characters of the opposite gender.

    Oh, I don't expect AI to completely replace human writers/actors for a while (or, possibly, ever). But there is a lot of derivative and awful work being produced by people today (sometimes at considerable expense). AI could be used to produce "filler" work- derivative crap designed mostly to earn a bit of cash with a relatively tiny expenditure. And all the while, it can be refined and upgraded and improved.

    I'm not sure that AI can ever replace human writers and actors from a standpoint of creative uniqueness, so to speak (see my comment previous to this one). But it can produce waves of canned content that might generate enough profit to keep production companies solvent, which will allow them to churn out even more content. I'd despair for our future, but a perusal of YouTube reminds me that unoriginal crap is not in short supply even now.

  • slimboyfat
    slimboyfat

    TD

    thanks for the interesting response. But I think you have made a few mistakes.

    Unlike many of their predecessors, the Wright brothers were not simply mimicking the actions of birds with little to no understanding of power to weight ratios. They were truly, actually flying. --Not exactly like birds do, but via a mechanical application of the same principles.
    AI, as we use the term today is a mimicry of the human mind via clever algorithms and vast repositories of facts, but it is not truly intelligent in the sense that humans are. Unless and until we actually understand how the human mind works, we are not likely to be able to build a machine that works along similar principles.

    You have hit on exactly the correct idea when you talk about mimicry.

    Geoffrey Hinton explained that until a few years ago he thought the best way to achieve artificial intelligence was to copy the way the brain works. What he discovered, to his complete surprise, was that there is a quicker route to producing intelligent outcomes than copying the brain. This is indeed analogous to how the Wright brothers discovered there was a better route to flight than attempting to copy birds flapping their wings. This means that current AI does not function the way the brain does, because it uses large-scale data to predict what an intelligent response would look like, rather than the human approach of using reasoning to work out an actually intelligent response.

    Aha, you may say, there you go! It isn’t really intelligent at all, it’s just mimicking what intelligence looks like! Yes, and no. This is the tricky part. Yes in the sense that the AI has no inner life, it doesn’t “work toward a solution” as such, as humans do. It just uses numbers to predict the best next word/move/image/sound. There is nothing “thoughtful” about it. So in this sense AI is stupid and tends to make ridiculous mistakes from our perspective.

    But what you’ve got to appreciate is that, in terms of all the things that are really important to us - its usefulness, replacing jobs, existential threat - it really does not matter that AI is stupid in how it goes about producing its outcome from our perspective. What matters are the results it produces.

    If a computer beats a human at chess it doesn’t matter if it does it by being clever or by some stupid unthinking process - the outcome is still the same.

    If a computer creates images that are as good as artists’ then it doesn’t matter whether it does it by being creative, or by crunching numbers - the outcome is still the same - major job losses.

    If a computer can diagnose patients better than a doctor then it doesn’t matter if it does it by being clever or by a stupid process of data crunching and prediction - the outcome is the same - better diagnosis and less need for doctors and their training.

    The same all the way up to existential threat. It doesn’t matter if AI kills all humans because it wants to (it doesn’t really “want” anything, it’s just a machine) or just as a byproduct of an unthinking mechanical process - the outcome is still the same - all humans dead.

    There seems to be incredulity that AI can produce outcomes that compete with or exceed human capabilities unless it copies the way humans do it. This is an understandable mistake (Geoffrey Hinton admits he made the same mistake himself in the past) but it is a mistake nonetheless. It is also contradicted each day as AI produces better outcomes than humans at playing chess, diagnosing from scans, summarising, building proteins, discovering new antibiotics, and on and on.

    In the past people like Noam Chomsky and Douglas Hofstadter have argued that the “brute force” approach of large language models will never reach human intelligence. In one sense they were correct, because language models have no inner life and no reasoning ability as we understand the concept. Where they were mistaken was in thinking that “brute force” could not produce the same end results as human-style reasoning. What Hofstadter, Hinton and others have realised, only in the last few years, is that this alternative route to intelligence can produce results reaching and exceeding human-level outputs.

    https://youtu.be/Ac-b6dRMSwY
