What do you think is the source of your consciousness? Can it be copied or transferred?

by EndofMysteries · 34 Replies

  • ttdtt

    You will like this excerpt.
    “Equally favorite: Philosopher John Searle’s proof that no digital computer can have mental states (a mental state is, for example, your state of mind when I say, “Picture a red rose” and you do)—that minds can’t be built out of software. A digital computer can do only trivial arithmetic and logical instructions. You can do them, too; you can execute any instruction that a computer can execute. You can also imagine yourself executing lots and lots of trivial instructions. Then ask yourself, “Can I picture a new mind emerging on the basis of my doing lots and lots and lots of trivial instructions?” No. Or imagine yourself sorting a deck of cards—sorting is the kind of thing digital computers do. Now imagine sorting a bigger and bigger and bigger deck. Can you see consciousness emerging at some point, when you sort a large enough batch? Nope.


    “And the inevitable answer to the inevitable first objection: But neurons only do simple signal transmission—can you imagine consciousness emerging out of that? This is an irrelevant question. The fact that lots of neurons make a mind has no bearing on the question of whether lots of anything else make a mind. I can’t imagine being a neuron, but I can imagine executing machine instructions. No mind emerges, no matter how many of those instructions I carry out.”

    Excerpt From: John Brockman. “This Explains Everything.” iBooks.

  • EndofMysteries

    ttdtt - A consciousness in a computer would never randomly happen. But a computer could be programmed to take in and process information the same way as a human brain. With the correct programming foundation, it could then learn, become self-aware, etc.

    That is also why I don't accept that we are here by chance. I have no idea of the truth of the matter, but whether by an all-encompassing universal supreme being or by another biological life form, I think we were created: biological robots with advanced programming and the ability to be self-aware and to learn.

  • Fisherman

    intelligence is eternal. It is one of two elements that exist in the Cosmos. The other is matter. One acts and the other is acted upon. Intelligence comes in many grades and is co-eternal with God. We all existed as intelligences before we were spirits, and we all were spirits before we were flesh - Coldsteel

    Mormon cult gibberish. "Since brevity is the soul of wit, I shall be brief." -Polonius

    Cofty is right about your post; even if it wasn't Mormon gibberish, it would still be just the same: gibberish, or rubbish. Anyway, that is how I see it.

  • Cold Steel
    EndOfMysteries: Are our actions really random? Our moods are based upon a release of chemicals in our brain, dopamine, etc. Our actions and reactions are based upon past experience, memories, stimulus of the senses, etc. A robot could be programmed to learn like a human, and if a reward system was programmed to match a human, then I think it's very possible for it to become self aware and learn as a person, but even exceed our intelligence.
    No, our actions are not random at all. They may be affected by various brain chemicals and systems, but I don't believe those chemicals cause intelligence so much as simply affect it. In recent studies, molecular imaging in conjunction with positron emission tomography is enabling researchers to track levels of serotonin and endogenous opioids (naturally occurring painkillers), determining how the brain functions (and malfunctions) when responding to environmental disruptions of a person's work, social, and family life. It also allows researchers to gauge the effects of various treatments for abnormal brain functions.

    Just today there's news that man's brain is far more comprehensive in its abilities than previously thought.


    In an attempt to understand and measure the brain’s synapses, whose shape and size have remained mysterious to scientists, researchers at the University of Texas, Austin and the Salk Institute worked together to determine that the brain’s memory capacity is much larger than previously understood. The results, published in the journal eLife, estimate that an individual human brain may store as much as a petabyte of information—perhaps 10 times more than previously estimated, and about the equivalent of the World Wide Web.
    All that horsepower and nowhere to crank it up!

    Not only is the diversity of synapses they observed in such a small brain region surprising, the storage capacity of each is “markedly higher than previous suggestions,” write the authors. Prior to this, researchers believed an individual synapse was only capable of storing 1 to 2 bits of information. This suggests we may have underestimated the memory capacity of the brain, which has trillions of synapses, "by an order of magnitude."

    According to lead author Terry Sejnowski, in whose lab the study was conducted, "Our new measurements of the brain’s memory capacity increase conservative estimates by a factor of 10 to at least a petabyte, in the same ballpark as the World Wide Web."
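    As a back-of-envelope check on those numbers (a sketch only; the ~4.7 bits per synapse follows from the study's 26 distinguishable synapse-size categories, but the whole-brain synapse counts below are outside estimates, not from the study):

    ```python
    import math

    # The eLife study distinguished ~26 categories of synapse size,
    # i.e. log2(26) ~ 4.7 bits of storable information per synapse.
    bits_per_synapse = math.log2(26)

    # Whole-brain synapse counts are estimates; 10**14 to 2*10**15 spans
    # figures commonly quoted elsewhere (assumed here for illustration).
    for synapse_count in (1e14, 1e15, 2e15):
        petabytes = bits_per_synapse * synapse_count / 8 / 1e15
        print(f"{synapse_count:.0e} synapses -> {petabytes:.2f} PB")
    ```

    Only the high end of that range reaches a full petabyte, which suggests the "at least a petabyte" figure is a ballpark under generous synapse counts rather than a precise measurement.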

    This raises an interesting field of study for evolutionists who must now consider just why that horsepower exists and how it developed. I'm just fascinated that it's there.

  • Cold Steel
    Fisherman: Cofty is right about your post; even if it wasn't Mormon gibberish, it would still be just the same: gibberish, or rubbish. Anyway, that is how I see it.

    That's what I find so remarkable about you guys. I can accept your opinions as your views. I may think your views are garbage and ill-conceived, but I don't leave posts with name calling just because I don't like them. I assure you that I do not have a high regard for Cofty or his views; I find them just as nonsensical as he finds mine.

    But that's where intelligence comes in. Code is free of opinions (save those of the one who writes it). Code cannot come to a determination of truth; it can only say, "If A is true, then jump to 1F." Code cannot believe in God or anything else. In the early '70s, my mother and I saw two UFOs over the night skies of Washington, D.C. Intelligence doesn't always come up with the correct conclusions. Some might conclude (as we did) that we simply didn't have enough information to reach an explanation. Others may conclude they were extraterrestrial or interdimensional craft. Others might immediately conclude they were secret government aircraft, or that they were simply hallucinations. But code could only conclude they were simply unidentified, without the least curiosity afterwards.
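    Cold Steel's "simply unidentified" endpoint can be sketched in a few lines (all names here are hypothetical): the program exhausts its rules and stops, with nothing corresponding to belief or curiosity left over.

    ```python
    # Hypothetical catalog of craft the program has rules for.
    KNOWN_CRAFT = {"airplane", "helicopter", "weather balloon"}

    def classify(observation: str) -> str:
        """Return a label for a sighting: no judgment, just rule exhaustion."""
        if observation in KNOWN_CRAFT:
            return observation
        # No rule matched: the program's final word, and its last "thought".
        return "unidentified"

    print(classify("airplane"))      # airplane
    print(classify("glowing disc"))  # unidentified
    ```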

    In short, intelligence is often accompanied by what others see as irrational conclusions. There's a point-of-view aspect to intelligence that is wholly lacking in code.

  • cofty
    I assure you that I find them just as nonsensical as he finds mine - Coldsteel the Mormon Apologist

    Why?

    You spout dogma that is totally unsupported by evidence - I describe evidence that should inform our beliefs.

    Your cultish beliefs have no more credibility than Scientology. This is not a place where you can assert risible things and expect a round of applause.


  • EndofMysteries
    Some might conclude (as we did) that we simply didn't have enough information to reach an explanation. Others may conclude they were extraterrestrial or interdimensional craft. Others might immediately conclude they were secret government aircraft, or that they were simply hallucinations. But code could only conclude they were simply unidentified, without the least curiosity afterwards.

    You completely described a coded thought process that could easily be implemented in AI. You have information inputs in your brain about aircraft, secret government projects, and aliens. When you saw what was flying over the skies, your brain accessed all the information it knew about flying machines. It pulled out airplanes, government projects, or aliens. Then, using that information, based on all your life experiences and other factors and beliefs, it came to its conclusion.

    Were you a robot with AI, the input would be the UFOs you saw (a UFO being simply an unidentified flying object). Having taken in information as an AI life form, you would have known about airplanes, secret government projects, and people's beliefs in aliens. Your code would have analyzed what you saw; the only 100% certain answer would be, as you said, simply "unidentified," but to give a more specific answer it would use all available information to form its most likely conclusion, or 'belief'.
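    That "most likely conclusion" step could be sketched as a weighted scoring over hypotheses (all names, priors, and weights below are hypothetical, for illustration only):

    ```python
    # Hypothetical priors from the agent's accumulated "life experience".
    priors = {"airplane": 0.70, "secret project": 0.20, "aliens": 0.10}

    # Hypothetical likelihoods: how well each hypothesis fits the observed
    # features (e.g. silent, fast, strange lights).
    likelihoods = {"airplane": 0.05, "secret project": 0.40, "aliens": 0.30}

    # Unnormalized posterior scores. The only certainty is "unidentified",
    # but the best score becomes the agent's working "belief".
    scores = {h: priors[h] * likelihoods[h] for h in priors}
    belief = max(scores, key=scores.get)
    print(f"{belief}: {scores[belief]:.3f}")  # secret project: 0.080
    ```

    The point is that the output is just the arithmetic maximum over stored numbers; whether that constitutes a "belief" in any meaningful sense is exactly what the thread is arguing about.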

  • Fisherman

    I don't leave posts with name calling because I don't like them. -cs

    "your lack of substance is more than made up for by your childish oblique." -cs

    Sure you do; read the above. And if you mean to say that name calling refers to insulting the person, then cofty's post is not an offense against your person, only a remark about what you posted. I agreed with his view of your post.
  • alecholmesthedetective
    Hey guys, I can fly!

    Now come on, who's gonna call that 'rubbish'?
  • Cold Steel
    You completely described a coded thought process that could easily be implemented into AI.

    Agreed...AI could do that. But would it? AI can give a wonderful emulation of intelligence. You can program it to act frightened in the face of danger or grieved in the loss of a person or animal. But it can't learn to actually feel danger or grief.

    There would have to be some curiosity on its part; otherwise, why would our AI bother raising an alarm over detecting a flying object it couldn't identify? Even if it's programmed, the UFO would simply register as UNKNOWN. The robot might catalog it, noting the location, estimated speed, direction, and time, but it can't feel anything. The UFO could land on the lawn of the White House and a Bigfoot could come out of it, and our AI robot would feel no more than a toaster! Sure, it could be designed and programmed to act surprised, but code can't feel. The experience can't elicit an emotional response.

    To understand intelligence, we must first define it. Going back to Flynn's book, he notes that's one of the primary problems. He recounts some questions and answers with Soviet peasants back in the 70s and recognizes that despite cognitive differences which make the answers hilarious to us, they made perfect sense to them. Even so, they knew the difference between analytic and synthetic propositions, but did not employ the same understanding of logic. "Pure logic cannot tell us anything about facts; only experience can," Flynn writes.

    Q: What do a chicken and a dog have in common?
    A: They are not alike. A chicken has two legs, a dog has four. A chicken has wings but a dog doesn't. A dog has big ears and a chicken's are small.
    Q: Is there one word you could use for them both?
    A: No, of course not.
    Q: Would the word "animal" fit?
    A: Yes.

    It's a great book. Hadn't read it for a while; I'm glad the topic came up.
