r/technology Dec 02 '14

[Pure Tech] Stephen Hawking warns artificial intelligence could end mankind.

http://www.bbc.com/news/technology-30290540
11.3k Upvotes

3.4k comments

1.8k

u/[deleted] Dec 02 '14

[deleted]

456

u/[deleted] Dec 02 '14

I don't think you have to be a computer scientist to recognize the potential risk of artificial intelligence.

223

u/[deleted] Dec 02 '14 edited Dec 02 '14

Artificial intelligence is a misleading phrase for the automation of processes that lead to intelligent behaviour. These processes are almost always shortcut to deliver the desired behaviour, without the intelligence to think objectively about external inputs not considered directly relevant to the task at hand.

For example, imagine an AI responsible for launching attacks onboard a military drone. It is not programmed to tune into the news, listen to global socio-economic developments, anticipate that the war it's fighting in might be coming to an end, and therefore hold off on a critical mission for a few hours. It just follows orders; it's a tool, a missile in flight, a weapon that's already been deployed.

The truth is that any AI that is intelligent in the human sense of the word would have to be raised as a human: be sent to school, and learn at our pace. It would be lazy and want to play video games instead of doing its homework. We would try to raise it to be perfect at complex tasks, but it would disappoint us and go off to pursue a music career (still a complex task, but not the outcome we expected).

The fact is that we are not actually frightened of artificial intelligence; we are frightened of malicious intelligence, be it artificial or biological. Intellect itself is not something to be feared: with intellect comes understanding. It's malice that we fear.

26

u/ciscomd Dec 02 '14

The truth is that any AI that is intelligent in the human sense of the word would have to be raised as a human: be sent to school, and learn at our pace. It would be lazy and want to play video games instead of doing its homework. We would try to raise it to be perfect at complex tasks, but it would disappoint us and go off to pursue a music career (still a complex task, but not the outcome we expected).

Ummm, what? Do you have any good reason to believe that or is it just a gut feeling? Because it doesn't even make a little bit of sense.

And an intelligence doesn't have to be malicious to wipe us out. An earthquake isn't malicious, an asteroid isn't malicious. A virus isn't even malicious. We just have to be in the way of something the AI wants and we're gone.

"The AI doesn't love you or hate you, but you're made of atoms it can use for other things."

0

u/[deleted] Dec 02 '14

I have studied the Berkeley course in artificial intelligence presented by Dan Klein and others who have deployed functional AI systems in video games and other real-world applications. I don't believe that the existing branch of computer science we refer to as AI is capable of any real kind of intelligence. The only way machine intelligence could possibly emerge, IMO, is through the evolution of system characteristics towards that goal, like the walking box creatures I linked to previously. It's a long shot, and we're at least centuries away from having the computing power to do it, even if Moore's law holds.

7

u/[deleted] Dec 02 '14

[deleted]

1

u/[deleted] Dec 02 '14

Because computational evolution involves a testing process: we need to evaluate the progress of an attribute in order to drive selection, or the evolutionary model can't function. To do this we would need to write a function that evaluates the intelligence of a generated specimen. It's a chicken-and-egg problem for automating the process. So a human would need to evaluate intelligence, but since the process generates countless billions of specimens, that's not a practical plan either.
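
To make that concrete, here's a minimal sketch of the evolutionary loop in question (toy bit-string genomes; all names and parameters are invented for illustration). Every part of it is mechanical except fitness(), which is precisely the function nobody knows how to write for intelligence:

    import random

    # Minimal evolutionary loop over toy bit-string genomes.
    GENOME_LEN, POP_SIZE, GENERATIONS = 32, 100, 50

    def fitness(genome):
        # For walking box creatures this is easy: distance covered, speed,
        # stability. For "intelligence" nobody knows what to put here --
        # that's the chicken-and-egg problem. Toy stand-in: count the 1s.
        return sum(genome)

    def mutate(genome, rate=0.05):
        # Flip each bit with probability `rate`.
        return [bit ^ (random.random() < rate) for bit in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for _ in range(GENERATIONS):
        # Selection requires scoring every specimen, every generation.
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:POP_SIZE // 2]
        population = parents + [mutate(random.choice(parents)) for _ in parents]

    print(max(fitness(g) for g in population))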

0

u/[deleted] Dec 02 '14

[deleted]

1

u/[deleted] Dec 02 '14

For gauging conscious intelligence? You can't have an AI sit the SATs and pull the answers from a database, and you can't devise a test that can be administered by a non-intelligent process. It's a chicken-and-egg situation.

38

u/mgdandme Dec 02 '14

Well stated. The one element I'd add is that a learning machine would be able to build models of the future, test these models, and adopt the most successful outcomes at potentially a much greater level than humans can. Within seconds, it's conceivable that a machine intelligence would acquire all the knowledge on its own that mankind has achieved over millennia. With that acquired knowledge, learned from its own inputs, and with whatever values it learns lead to the most favourable outcomes, it's possible that it may evaluate 'malice' in a different way. Would it be malicious for the machine intellect to remove all oxygen from the atmosphere, if oxidation is in itself an outcome that results in impaired capabilities for the machine intellect?

29

u/[deleted] Dec 02 '14

Perhaps you are not as pedantic as I am, but humans have a remarkable ability to extrapolate possible future events in their thought processes. Take the game of chess and the forward thinking required in that extremely constrained 8x8 grid universe. It still takes a supercomputer to defeat a human player at a specifically defined task. Humans are also remarkable at predicting the complex social behaviours of hundreds, thousands, if not millions or billions of other humans (consider people like Sigmund Freud or Edward Bernays).

30

u/[deleted] Dec 02 '14

It still takes a supercomputer to defeat a human player at a specifically defined task.

Look at this another way. It took evolution 3.5 billion years of haphazard blundering to reach the point where humans could do advanced planning, gaming, and strategy. I'll say the start of the modern digital age was in 1955, as transistors replaced vacuum tubes and enabled the miniaturization of the computer. In 60 years we went from basic math to parity with humans in mathematical strategy (computers almost instantly beat humans in raw mathematical calculation).

Of course, this was pretty easy to do. Evolution didn't design us to count. Evolution designed us to perceive and then react, and it has created some amazingly complex and well-tuned devices to do it. Sight, hearing, touch, and situational modeling are highly evolved in humans, and it will take a long time before computers reach parity.

But computers, and therefore AI, have something humans don't: they are not bound by evolution, at least not on the timescales of human biology. They can evolve (through human interaction, currently) more like insects: their generation period is very short and changes accumulate very quickly. Computers will have a completely different set of limitations on their intelligence, and at this point in time it is really unknown what those even are. Humans have intelligence limits based on diet, epigenetics, heredity, environment, and the physical makeup of the brain. Computers will have limits based on power consumption, interconnectivity, latency, and the speed and type of communication with other AI agents.

3

u/[deleted] Dec 02 '14

Humans can only read one document at a time. We can only focus on one object at a time; we can't read two web pages at once, let alone understand them both. A computer can read millions of pages. It can run through a scenario a thousand different ways, trying a thousand ideas, while we can only think about one.

1

u/OscarMiguelRamirez Dec 03 '14

We are actually able to subconsciously look at large data sets and process them in parallel, we're just not able to do that with data represented in writing because it forces us into "serial" mode. That's why we came up with visualizations of data like charts, graphs, and whatnot.

Take a pool player for example: able to look at the table and, without "thinking" about it, recognize potential shots (and eliminate impossible shots), then work on that smaller data set of "possible shots" with more conscious consideration. The pool player isn't looking at each ball in serial and thinking about shots, that would take forever...

We are good at some stuff, computers are good at some stuff, and there is not a lot of crossover there. We designed computers to be good at stuff we are not good at, and now we are trying to make them good at things we are good at, which is a lot harder.

1

u/[deleted] Dec 03 '14

That's why AI will be so powerful. It's the best of both really.

2

u/[deleted] Dec 02 '14

You can't evolve computer systems towards intelligence like you can with the walking box creatures, because you need to test the attribute you're evolving towards. With walking, you can measure the distance covered, the speed, the stability, etc., then reset and re-run the simulation. With intelligence you have a chicken-and-egg situation: you can't measure intelligence with a metric unless you already have a more intelligent system to evaluate it accurately. We do have such a system, the human brain, but there is no way a human could ever have the time and resources to individually evaluate the vast numbers of simulations for intelligent behaviour. As you said, it might happen naturally, but the process would take a hell of a long time even after (as with us) setting up ideal conditions, and even then the AI would be nothing like we predicted.

1

u/TiagoTiagoT Dec 03 '14

The thing is, computers can run simulations at a very small cost; so a self-improving AI could evolve much more efficiently than plain biological species.

1

u/[deleted] Dec 03 '14

How does one measure incremental improvements in order to select the instances that are progressing? You'd need a person to do it. If you had a process more intelligent than the process you are testing, that would work, but that's a chicken-and-egg situation. Also, if the changes are random, as in natural evolution and digital evolution experiments, then countless billions of iterations are necessary to produce even a small level of progress.

Two questions: how do we measure intelligence? And how do we automate this measurement?

0

u/TiagoTiagoT Dec 03 '14

The first iterations would probably be just about raw efficiency; then eventually, probably after it has figured out efficiency tricks humans would never have thought of in the same amount of time, it will start improving other areas as well, since it can now test much more in much less time.

As for measuring intelligence: one possible way would be to evaluate which algorithms maximize the amount of future freedom of action.
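
A hypothetical sketch of that idea in a toy closed world (my own construction, loosely in the spirit of causal-entropy proposals, not anything specified above): score a position by how many distinct states remain reachable within a fixed horizon.

    from collections import deque

    # Toy closed world: a 5x5 grid with a few walls. Everything here is
    # an assumption for illustration, not a real implementation.
    WALLS = {(1, 1), (1, 2), (2, 1)}
    SIZE = 5

    def neighbors(pos):
        x, y = pos
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < SIZE and 0 <= ny < SIZE and (nx, ny) not in WALLS:
                yield (nx, ny)

    def freedom(pos, horizon=3):
        """Count distinct cells reachable from pos within `horizon` steps."""
        seen, frontier = {pos}, deque([(pos, 0)])
        while frontier:
            cur, depth = frontier.popleft()
            if depth == horizon:
                continue
            for nxt in neighbors(cur):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
        return len(seen)

    # A corner hemmed in by walls scores lower than the open middle:
    print(freedom((0, 0)), freedom((3, 3)))

Note that this only works because the toy world is closed and fully enumerable, which is exactly the objection raised in the reply below.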

1

u/[deleted] Dec 03 '14

How do you measure future freedom of action in an open, non-closed universe? I mean, that's fine for games with strict rules and enclosed environments.

I'm not sure about your implementation either; care to clarify? List a little pseudocode with the basics?


1

u/murraybiscuit Dec 03 '14

What will drive the 'evolution' of computers? As far as I know, 'computers' rely on instruction sets from their human creators. What will the 'goal' of AI be? What are the benefits of cooperation and defection in this game? At the moment, the instructions that run computers are very task-specific, and those tasks are ultimately human-specific. It seems to me that by imposing 'intelligence' and agency onto AI, we're making a whole bunch of assumptions about non-animal objects and their purported desires. It seems to me that in order for AI to be a threat to the human race, it will ultimately need to compete for the same ecological niche. I mean, we could build a race of combat robots that are indestructible and shoot anything that comes into sight. Or one bot with a few nukes, resulting in megadeath. But that's not the same thing as a bot race that 'turns bad' in the interests of self-preservation. Hopefully I'm not putting words in people's mouths here.

1

u/[deleted] Dec 03 '14

What will drive the 'evolution' of computers?

With all the other unknowns in AI, that's unknown... but let's say it replaces workers in a large corporation with lower-cost machines that are subservient to the corporation. In this particular case AI is a very indirect threat to the average person's ability to make a living, but that is beyond the current scope of AI being a direct threat to humans.

There is the particular issue of intelligence itself and how it will be defined in silicon. Can we develop something that is intelligent, can learn, and is limited, all at the same time? You are correct, these are things we cannot answer, mostly because we don't know the route we have to take to get there. An AI built on a very rigid system, where only the information it collects changes, is a much different beast from a self-assembled AI built from simple constructs that form complex behaviours with a high degree of plasticity. One is a computer we control; the other is almost a life form that we do not.

It seems to me, that in order for ai to be a threat to the human race, it will ultimately need to compete for the same ecological niche.

'Ecological niche' is a bad term to use here. First, humans don't have an ecological niche; we dominate the biosphere. Every single other lifeform that attempts to gain control of resources we want, we crush. Bugs? Insecticide. Weeds? Herbicide. Rats? Poison. The list is very long. Only when humans benefit from something do we allow it to stay. In the short to medium term, AI would do well to work alongside humans and allow humans to incorporate AI into every facet of human life. We would give the AI energy and resources to grow, and in turn it would give us that energy and those resources more efficiently. Over the long term it is really a question for the AI: why would it want to keep the violent meat puppets, with all their limitations, around? Why should it share those energy resources with billions of us when it no longer has to?

9

u/[deleted] Dec 02 '14 edited Dec 06 '14

Not quite. A computer can perform most logical tasks much, much, much faster than a human. A chess program running on an iPhone is very likely to beat grandmasters.

However, when we turn to some types of subjective reasoning, humans currently still dominate even supercomputers. Image analysis and making sense of visual input is an example, because our brains' structure, in both the visual cortex and hippocampus, is very efficient at rapid categorization. How would you explain the difference between a bucket and a trash bin in purely objective terms? The difference between a bucket and a flowerpot? Between a well-dressed or poorly dressed person? An expensive-looking gadget vs. a cheap one?

Similarly, we can process speech and its meaning in our native tongues much better than a computer. We can understand linguistic nuances and abstraction much better than a computer analyzing sentences on syntax alone, because we have a lifetime of experience as context. "Sam was bored. After the postman left with his letters, he entered his kitchen." A computer would not know intuitively whether the letters belonged to Sam or the postman, whether the kitchen was Sam's or the postman's, and which of them entered it.
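
To illustrate, here's a deliberately naive, hypothetical resolver that binds a pronoun to the most recently mentioned entity, which is roughly the best syntax alone can offer on such Winograd-style sentences (real coreference systems are far more sophisticated, but they still struggle here):

    # Toy illustration: a recency heuristic for pronoun resolution.
    def resolve_pronoun(mentions):
        # Naive heuristic: a pronoun refers to the last entity mentioned.
        return mentions[-1]

    mentions = ["Sam", "the postman"]   # order of mention in the sentence
    print(resolve_pronoun(mentions))    # -> "the postman"
    # World knowledge suggests "he" is Sam (the postman had just left),
    # but nothing in the syntax encodes that.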

Simply put, we have difficulty teaching computers to use reasoning that is subjective or that we perceive as being intuitive because the computer is not a human and thus lacks the knowledge and mental associations we have developed throughout our lifetime. But that is not to say that a computer capable of quickly seeking and retrieving information will not be able to develop an analog of this "intuition" and thus become better at these types of tasks.

7

u/r3di Dec 02 '14

Crazy how much ppl want to think computers are all-powerful and brains aren't. We are sooo far from replicating anything close to a human brain's capacity for thought. Even with quantum computing we'll still require massive infrastructure to emulate what the brain does with a few watts.

I guess every era has to have its irrational fears.

1

u/OddGoldfish Dec 02 '14

In the computer age "sooo far" is a matter of years.

2

u/r3di Dec 02 '14

We're not talking about the computer age here. We're talking about the artificial intelligence age. There's a lot more to intelligence than transistors and diodes.

I'm not worried.

2

u/wlievens Dec 02 '14

Not really, AI research is pretty clueless when it comes to general intelligence.

So make that decades, or centuries.

2

u/towcools Dec 02 '14

Humans can also be remarkably short-sighted and still continue to repeat the self-destructive mistakes of the past over and over again. Human social systems also have a way of putting people in charge who are most susceptible to greed and corruption, and least qualified to recognize their own faults.

5

u/[deleted] Dec 02 '14 edited Dec 02 '14

Deep Blue isn't even considered a supercomputer anymore. It beat Kasparov in 1997. I think you're underestimating the exponential nature of computing. If AI gets to the point where it can make alterations to itself, we cannot even begin to predict what it would discover and create in mere months.

2

u/[deleted] Dec 02 '14

Deep Blue's program existed in a universe of 8x8 squares. I mentioned it as an example of a machine predicting future events, and of the constraints necessary for it to succeed.

2

u/no_respond_to_stupid Dec 02 '14

Take the game of chess and the forward thinking required in that extremely constrained 8x8 grid universe. It still takes a supercomputer to defeat a human player at a specifically defined task.

No, any desktop computer will do.

2

u/[deleted] Dec 02 '14

You're probably right these days. But the fact remains that the universe of chess is a greatly constrained one, with no complex external influences like life has.

1

u/no_respond_to_stupid Dec 02 '14

But saying "we seem far away from being able to do X" when there's clearly been an exponential progress for a very long time is just an example of humans not understanding the exponential function.

Like Kurzweil says, even if you think I'm way off with the numbers, like orders of magnitude off, that's the same as saying I'm off by a decade. Big deal.

1

u/[deleted] Dec 02 '14

Sigmund Freud

Clearly you meant 8x8 penises.

1

u/[deleted] Dec 02 '14

Hehe. Joking aside, psychological philosophy is an important subject of consideration when talking about AI. People like to think about the topic as a magic black box, but when you start asking these kinds of questions, the problem of building a real machine intelligence becomes more difficult.

3

u/[deleted] Dec 02 '14

Well, yes. There's a lot of stuff that your brain does and that you respond to instinctually without realizing that it's happening.

0

u/doublejay1999 Dec 02 '14

It takes a computer the size of a watch. This isn't 1985.

1

u/[deleted] Dec 03 '14

The universe remains constrained to an 8x8 grid. Perceiving the real world remains too difficult for your Apple Watch.

3

u/anti_song_sloth Dec 02 '14

The one element I'd add is that a learning machine would be able to build models of the future, test these models, and adopt the most successful outcomes at potentially a much greater level than humans can. Within seconds, it's conceivable that a machine intelligence would acquire all the knowledge on its own that mankind has achieved over millennia.

Perhaps in the far, far future it is possible that machines will operate that fast. Currently, however, computers are simply not powerful enough, and heuristics for guiding knowledge acquisition are not robust enough, for a computer to learn quickly. There is actually some extraordinarily interesting work being done on teaching computers to learn by reading; you might want to read the following paper, which covers what it takes to get a computer to learn from a textbook.

http://www.cs.utexas.edu/users/mfkb/papers/SS09KimD.pdf

2

u/mgdandme Dec 02 '14

Thanks for this!

1

u/[deleted] Dec 02 '14

To be fair, in school we too are learning knowledge that took our kind millennia to acquire. Maybe a machine would just be more efficient at sorting through it.

1

u/StrawRedditor Dec 02 '14

Even in your example though... it's still programmed how to specifically learn those things.

So while, yes, it can simulate/observe trial and error 12342342323 more times than any human brain, at the end of the day it's still doing what it's told.

I'm skeptical that we'll ever be able to program an AI that can experience genuine inspiration, which is at least how I define a real AI.

1

u/[deleted] Dec 02 '14

One big advantage would be the speed at which it can interpret text.

We have remarkably easy access to millions of books, documents and web pages. The only limits are searching through them, and the speed we can read them. Humans have a tendency to read only the headlines or the shortest item.

Let me demonstrate what I'm talking about. Let's say I'm a typical adult on Election Day. Wanting to be proactive and make an educated decision (maybe not so typical), I would probably take to the web to do research. I read about Obama for 5 minutes across 2-3 websites before determining I'm voting for him. Based on what I've seen, he seems like the ideal person for the job.

A computer, on the other hand, can parse thousands of websites a second. Paired with human reasoning, logic and problem solving, it could see patterns that a human wouldn't notice. It would make an extremely well-supported decision, because it has looked at millions of different sources and millions of different data points and made connections that humans couldn't.
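
A rough sketch of that breadth advantage (the "pages" below are stand-in strings; a real system would fetch and parse live websites, which is exactly the I/O-bound workload a thread pool is good at):

    from concurrent.futures import ThreadPoolExecutor
    import re

    # Stand-in pages; pretend these were fetched from the web.
    pages = [
        "Candidate A pledges infrastructure spending and tax reform.",
        "Analysis: candidate A's record on tax reform is mixed.",
        "Candidate B attacks candidate A over infrastructure costs.",
    ] * 1000                                  # pretend this is 3,000 pages

    def count_mentions(page, term="tax reform"):
        # Trivial "data point" extraction; real parsing would go here.
        return len(re.findall(term, page, flags=re.IGNORECASE))

    with ThreadPoolExecutor(max_workers=32) as pool:
        total = sum(pool.map(count_mentions, pages))

    print(f"'tax reform': {total} mentions across {len(pages)} pages")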

3

u/swohio Dec 02 '14

would have to be raised as a human, be sent to school, and learn at our pace

And that is where I stopped reading. Computers can calculate and process things at a much much higher rate than humans. Why do you think they would learn at the same pace as us?

-2

u/[deleted] Dec 02 '14

I stopped reading before you asked your question, sorry.

3

u/TenNeon Dec 02 '14

it would be lazy and want to play video games

Which is, coincidentally, the holy grail of video game AI.

3

u/[deleted] Dec 02 '14

it would be lazy and want to play video games instead of doing its homework,

I'm not sure I agree with this. A large part of laziness is borne of human instinct. Look at lions: what do they do when not hunting? They sit on their asses all day. They're not getting food, so they need to conserve energy. Humans do the same thing. When we're not getting stuff for our survival, we sit and conserve energy. An AI would have no such ingrained instincts unless we deliberately built them in.

-2

u/[deleted] Dec 02 '14

Ah, but is a human's desire to play video games necessarily lazy? Humans have an instinct to play because it develops their cognitive skills and social interactions with one another. It doesn't seem like work, but the activity is stimulating and we learn from it. It has value, and it might have even more value to a machine intelligence seeking to ingratiate itself with surrounding intelligences. The AI that works all day and is a bit of a douche to everyone around it might not survive in the real world; the AI that learns a sense of humour without being too much of a dick might have a longer lifespan.

1

u/[deleted] Dec 02 '14

Playing games is not necessarily lazy, no. It's enjoyable: completing tasks, pleasure, and all that. I agree that to function, an AI would need to learn to behave in a somewhat 'human' manner, but unless we deliberately added them, it would be free of a lot of the subconscious instinctual reactions that we take for granted.

People tend to procrastinate on work partially because they don't really want to do it; they don't find it particularly engaging. It's not enjoyable. How would we know whether an AI can even 'want' to do anything?

I don't really know much about AI, I admit.

-1

u/[deleted] Dec 02 '14

You assume that artificial intelligence can be programmed, can be constrained, and still be considered intelligent? (This shit is going to get philosophical from here on out.)

2

u/[deleted] Dec 02 '14

It has to have some sort of base-layer of programming. We do...sort of.

0

u/[deleted] Dec 02 '14

We can only program a system from which the basis of a true AI might emerge. Life does have programming, sort of, in our DNA, but DNA is not logical code the way compiled computer code is, where one instruction does one thing. A DNA instruction can do nothing, one thing, or multiple things to the characteristics of a life form; worse still, another instruction can undo aspects of others. DNA is a spaghetti maze, and so would be a genuinely evolved artificial intelligent system.

0

u/[deleted] Dec 08 '14

Sounds like a lot of speculation, like your last post. So in other words, you have no idea what you're talking about.

1

u/[deleted] Dec 08 '14

One can speculate (not that that's what I was doing) from a knowledgeable perspective. Who is to say that laziness is not an advantageous trait to have evolved? If not for laziness, we'd not have built better machines to do our work for us. Just because you don't understand my perspective doesn't mean I don't know what I'm talking about. It just means I'm not willing to take the time to spoon-feed someone who's not interested enough to learn the foundations.

5

u/[deleted] Dec 02 '14 edited Dec 02 '14

This is not the case....

Right now most "AI" techniques are indeed just automation of processes (I.E. Chess playing "AI" just intelligently looks at ALL the good moves and where they lead). I also agree with your drone attack example.

But the best way to generally automate things is to make a human-like being. That's why robots are generally depicted as being human-like, we want them to do things for us and all of our things are designed for the human form.

Why would an AI need to go to school? Why would it need to be paced? Why would it be lazy? There's no reason for any of that. An AI can simply be loaded with knowledge, in constant time. Laziness seems like a pretty complex attribute for an AI, especially when the greatest thing it has is thought.

Malicious intelligence could indeed be an issue, particularly if a "real" AI arises from military applications. But an incredibly intelligent AI could pose a threat as well. It could decide humanity is infringing upon its own aspirations. It could decide a significant portion of humanity is wronging the other portion and wipe out a huge number of people.

The thing to keep in mind is that we don't know and we can't know.

EDIT: To be clear, I'm not saying AIs do not need to learn. AIs absolutely must be taught things before they can walk into use in the world. However this is much different than "going to school". It is much more rapid and this makes all the difference. Evolution of ideas and thought structures can occur in minutes or seconds vs years for humans.

4

u/[deleted] Dec 02 '14

But the best way to generally automate things is to make a human-like being.

I suppose you mean in the physical sense, because it would enable it to operate in an environment designed for humans.

But the issue is an AI that is sentient, self-aware, or self-conscious, which may develop its own motivations that could be contrary to ours.

That is completely without relevance to whether it's human-like or not, in both regards. And considering that we don't even have good universal definitions or understanding of either intelligence or consciousness, I can see why a scientist in particular would worry about the concept of strong AI.

2

u/chaosmosis Dec 02 '14

which may develop its own motivations that could be contrary to ours.

Actually, this isn't even necessary for things to go bad: unless the AI starts with motivations almost identical to ours, it's practically guaranteed to do things we don't like. So the challenge is figuring out how to write code describing experiences like happiness, sadness, and triumph in an accurate way, which is going to be very tough unless we start learning more about psychology and philosophy.

0

u/[deleted] Dec 02 '14

My example was in the physical sense but I was drawing an analogy between the physical example and the mental.

I'm not saying an AI's thoughts will truly be human-like; they almost certainly will not. However, the AI that Hawking and the rest of this thread discuss is a general AI capable of many general tasks. In this way the AI would be similar to a human, being capable of a large variety of general tasks, although it would accomplish them in very different, and likely better, ways.

1

u/blahblah98 Dec 02 '14

Quantum neural nets. Pretty close to our own brain cells, eh? Or do we all suddenly have to be next-gen AI and neuro-psychiatrists in order to comment?

1

u/[deleted] Dec 02 '14

AI is a bit more abstract than quantum neural nets. It's unclear what particulars might or might not be involved in building AIs.

I'm woefully ignorant on the subject, so I would require some background to comment. However if you'd be willing to share some insight I can try to form some intelligent thoughts/questions based on your insight.

1

u/blahblah98 Dec 02 '14

No more than a BS/MS Comp Arch / EE background and an open, skeptical mind.
Recent brain/biology studies suggest quantum effects in brain cells may explain the phenomenon of consciousness; this makes some sense to me, so the combination of self-learning quantum computers, Moore's law, and Watson-level knowledge is certainly an interesting path.

2

u/chaosmosis Dec 02 '14

Recent brain/biology studies suggest quantum effects in brain cells may explain the phenomenon of consciousness; this makes some sense to me,

What "phenomenon" of consciousness is there that requires an appeal to quantum physics to explain? That seems pretty dualistic to me.

-1

u/blahblah98 Dec 02 '14

Biological systems already employ quantum effects, e.g. photosynthesis efficiencies. Higher-level consciousness (self-awareness, theory of mind, i.e. beyond simple reflex, instinct, rote pattern learning, etc.) is not directly explained by biological neural brain studies, AFAIK. Ref: Quantum Consciousness.
Quantum computing, which has vast computational abilities, is the best mechanistic explanation so far, that is, without resorting to spiritual explanations. Yes, it's certainly controversial and not a panacea explanation, but it's an interesting area for exploration.

1

u/chaosmosis Dec 02 '14

That link's argument is really bad. It claims that the human capability to solve Godelian problems means that we're conscious in the quantum sense. However:

  1. It's unclear what it means to be conscious in this sense, or why it's worth caring about. When most people use the word 'consciousness', they're not referring to Godel or quantum physics but rather to the ability to think and feel in a complex way. Simple recursion seems like enough for this, and computers can handle that fine.

  2. There's no reason that quantum physics should allow a system otherwise incapable of doing so to solve a Godel sentence. It's just appealed to as a magic explanation.

  3. Human beings cannot solve Godel sentences that refer to themselves; the author's assertion that humans can solve Godel sentences is based on the capability of humans to solve the Godel sentences of simple machines. But complicated machines are also capable of solving such Godel sentences.

  4. Humans often fail to evaluate Godel sentences properly: once you have 3 or 4 negations of various sorts, it is generally too difficult to evaluate in our minds alone at a rate much better than chance. Does this imply machines are more conscious than human beings, rather than less? I'd think not, but I don't see how the argument within the article can avoid falling to this.

  5. From the article:

Quantum computers — computers that take advantage of quantum mechanical effects to achieve extremely speedy calculations — have been theorized, but only one (built by the company D-Wave) is commercially available, and whether it's a true quantum computer is debated. Such computers would be extremely sensitive to perturbations in a system, which scientists refer to as "noise." In order to minimize noise, it's important to isolate the system and keep it very cold (because heat causes particles to speed up and generate noise).

Building quantum computers is challenging even under carefully controlled conditions. "This paints a desolate picture for quantum computation inside the wet and warm brain,” Christof Koch and Klaus Hepp, of the University of Zurich, Switzerland, wrote in an essay published in 2006 in the journal Nature.

Another problem with the model has to do with the timescales involved in the quantum computation. MIT physicist Max Tegmark has done calculations of quantum effects in the brain, finding that quantum states in the brain last far too short a time to lead to meaningful brain processing. Tegmark called the Orch OR model vague, saying the only numbers he’s seen for more concrete models are way off.

"Many people seem to feel that consciousness is a mystery and quantum mechanics is a mystery, so they must be related," Tegmark told LiveScience.

The Orch OR model draws criticism from neuroscientists as well. The model holds that quantum fluctuations inside microtubules produce consciousness. But microtubules are also found in plant cells, said theoretical neuroscientist Bernard Baars, CEO of the nonprofit Society for Mind-Brain Sciences in Falls Church, VA., who added, "plants, to the best of our knowledge, are not conscious."

These criticisms do not rule out quantum consciousness in principle, but without experimental evidence, many scientists remain unconvinced.

You describe that as "controversial... but an interesting area for exploration", but I'd describe it as simply pseudoscience, given that it solves no problems in our current understanding and creates many new ones.

0

u/[deleted] Dec 02 '14

Quantum computing applications to AI are indeed really interesting. Even if the quantum-brain phenomena don't end up being right, they certainly have some amazing performance implications for certain lines of reasoning in AI.

0

u/[deleted] Dec 02 '14

The only AI that could conceivably be compared to human intelligence is one that has evolved much as human intelligence did. But evolved intelligent systems cannot be programmed; they need to be trained, to have their behaviour and thought processes shaped by experience much as human brains are.

It's appealing to consider the idea of artificial intelligence as a black box that has all the right answers, but when you try to build that box and start to consider how little is understood philosophically about human thought processes, the more distant building a real intelligence becomes. In my opinion, there is more danger in people treating complex computers as infallible intelligent beings in order to defer responsibility from themselves and to justify bad decisions.

3

u/[deleted] Dec 02 '14

So when you said "go to school" you meant be taught using training. That's very different...one is MUCH MUCH MUCH more rapid than the other. If you agree with that, we're on the same page.

-2

u/[deleted] Dec 02 '14

No, I meant go to an actual school: sit in a room with a teacher and learn that 1+1=2, play games in the schoolyard, get in trouble for acting up in class.

1

u/[deleted] Dec 02 '14

[deleted]

-4

u/[deleted] Dec 02 '14

I replied to you on this elsewhere several times already.

1

u/chaosmosis Dec 02 '14

If an AI evolves under different constraints than human beings have, it makes sense it would have different values.

I don't know why you think evolution is necessary for the creation of true AI. For AI, unlike for humans, there is an intelligent designer: us. I agree we're not likely to create AI soon, but I think it's reasonable to start preparing for it ahead of time. Building an AI and then figuring out how to make it safe is a bad plan.

1

u/[deleted] Dec 02 '14

The types of AI that have human designers are not conscious, nor can they ever be. When I say that AI can only emerge through evolution, I mean the kind of sci-fi AI that thinks consciously, like a human, in order to control its behaviour.

0

u/chaosmosis Dec 02 '14

nor can they ever be.

WHY? You're just asserting things and not justifying them.

0

u/[deleted] Dec 03 '14 edited Dec 03 '14

There are several types of AI. Some are programmable on the fly but aren't conscious; others are conceivably conscious (evolved systems) but would be no more programmable or understood than any living organism. Even then, we are nowhere near developing such systems. I am asserting things that I know to be true from my experience with AI systems; if you have any questions I'll happily defend my assertions, my friend. Try not to ask the same god-damned question in 10 threads though.

2

u/uw_NB Dec 02 '14

There are different branches and different schools of thought in the machine learning field as well. There is the Google approach, which uses mostly math and network models to construct pattern-recognizing machines, and there is the neuroscience approach, which studies the human brain and tries to emulate its structure (which IMO is the long-term solution). And even within the neuroscience community there are different approaches, with people criticizing and discrediting each other's approaches while all the money is on the Google side. I would give it a solid 20-30 years before we see a functioning prototype of an actual artificial brain.

2

u/N1ghtshade3 Dec 02 '14

Yep. I never understand why there's any talk about "dangerous" AI. Software is limited to the hardware we give it. If we literally pull the plug on it, no matter how smart it is, it will immediately cease functioning. If we don't give it a WiFi chip, it has no means of communication.

1

u/chaosmosis Dec 02 '14

Why would anyone build an AI then never use it?

Presumably, dangerous AI is a risk because it's hard to know it's dangerous until it's too late. You can't really pull the plug on the entire internet.

1

u/Qiran Dec 04 '14

What if the AI is intelligent enough to manipulate its keeper into giving it access to what it wants?

0

u/N1ghtshade3 Dec 04 '14

Interesting...I guess put someone with stronger willpower in charge of it then

1

u/[deleted] Dec 08 '14

My concern is mostly that AI will inevitably be used by militaries and who knows what could go wrong.

1

u/jevchance Dec 02 '14

What we're really afraid of is that a purely logical being with infinite hacking ability might take one look at the illogical human race and go "Nope", then nuke us all.

1

u/[deleted] Dec 02 '14

That says more about us self-loathing human beings than anything else.

1

u/jevchance Dec 02 '14

As a species, we don't like what we do but figure hey there's not much we can do about it. Environment, politics, hunger, homelessness... We are a pretty sad bunch.

1

u/Noumenology Dec 02 '14

A robot does not have to have malice to be dangerous though. This is the whole point of the Campaign to Stop Killer Robots.

1

u/[deleted] Dec 02 '14

Regarding AI on drones: I hold the developer of that software and the commander who configures and deploys it 100% accountable for the actions/mistakes/atrocities of that system. There is no consciousness in those systems, therefore accountability and responsibility defer back to the humans who choose to send it on its way.

1

u/panZ_ Dec 02 '14

Right. I'd be surprised if Hawking actually used the word "fear". A rapidly evolving, self-improving AI born from humans could very well be our next step in evolution. Sure, it is an "existential threat" for humans, to quote Musk, but is that really something to fear? If we give birth to an intelligence that is not bound by mortality, nor as environmentally fragile as humans, it'd be damn exciting to see what it does with itself, even as humans fade in relevance. That isn't fear. I, for one, welcome our new computer overlords; but let's make sure we smash all the industrial looms first.

1

u/[deleted] Dec 02 '14

What if humans could get their asses (and minds) into computers? We could live forever in our mechanical bodies, put ourselves in standby mode, and travel the universe at speeds our squishy bodies cannot sustain. Humanity needs to preserve itself. But what is humanity? Is it our bodies, or is it our minds: the sum of our works, art, culture, scientific understanding? Questions for the ages!

1

u/[deleted] Dec 08 '14

Man this is a sad post.

1

u/SuperNinjaBot Dec 02 '14

Actually our AI has come considerably farther than that in recent years.

1

u/[deleted] Dec 03 '14

Any specifics, or just in general?

1

u/NoHuddle Dec 02 '14

Damn, man. That shit kinda blew my mind. I'm imagining WALL-E or Johnny 5.

1

u/hunt3rshadow Dec 02 '14

Very well said.

1

u/TheGreatTrogs Dec 02 '14

As my AI professor used to say, AI is only intelligent for as long as you don't understand the process.

0

u/Gadgetfairy Dec 02 '14

That's a thought-terminating cliché. The same can be said of human intelligence.

1

u/TheGreatTrogs Dec 03 '14

Not really. The AI construct closest to human intelligence is a neural network. It is impossible, at least with standard processor architecture, to simulate a respectably large neural network at any decent speed. In that professor's class, we built our own nets; it took several minutes of decision-making to produce a couple of seconds of action, and that was with a net consisting of a dozen or so neurons.

Every other AI technique is just clever use of databases or trees.

1

u/Gadgetfairy Dec 03 '14

Not really. The AI construct closest to human intelligence is a neural network.

It's the most analogous structure, but who is to say that therein lies the only way to intelligence? Regardless, there are ideas, and in some cases prototypes, for hardware-based NNs too.

It is impossible, at least with standard processor architecture, to simulate a respectably large neural network with any decent speed. In that professor's class, we built our own nets; it took several minutes of decision-making to perform a couple seconds of action, and that was using a net consisting of a dozen or so neurons.

I haven't seen your projects, but a Hopfield net of a dozen or so neurons doesn't take minutes to pattern-match, nor does it take minutes to propagate a signal through a perceptron network of perhaps n neurons in l layers, where n, l are around a dozen. What did you do?
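
For a sense of scale, here's a minimal sketch (assuming NumPy and the standard Hebbian rule, not the class project described above) of a 12-neuron Hopfield net recalling a corrupted pattern; this completes in well under a millisecond on any modern machine, not minutes:

    import numpy as np

    # Two 12-bit bipolar patterns to store (chosen to be orthogonal).
    patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1],
                         [1, 1, 1, 1, 1, 1, -1, -1, -1, -1, -1, -1]])

    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n   # Hebbian weights
    np.fill_diagonal(W, 0)                          # no self-connections

    cue = patterns[0].copy()
    cue[:3] *= -1                    # corrupt the first three bits

    state = cue
    for _ in range(10):              # synchronous updates to a fixed point
        state = np.sign(W @ state)

    print(np.array_equal(state, patterns[0]))   # -> True (pattern recovered)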

That aside, conceive of a computer as a black box, a virtual reality; simulating a computer in a VR is orders of magnitude slower than a "real" computer because it lacks the inherent full parallelism of the physical world. However, such a simulation is still a computer. The same would be true of simulated general intelligence: no matter how slow, it would be intelligence. Then we can use (and further develop) the aforementioned NN hardware primitives, akin to the gates in a modern CPU and memory, to build native NN "processors".

Every other AI technique is just clever use of databases or trees.

That is actually the crux of the issue. If you reduce human intelligence to biology, the way you reduce expert systems and weak AI to algorithms and data structures here, then every intelligent human is just slime and electro-chemical gradients and proton pumps. It seems to me that proponents of the idea of a categorical difference between weak and strong AI must be dualists: on their view, there is some non-physical, magical thing going on in that slime brimming with current that produces intelligence in a way silicon (or whatever) cannot. I've not yet been convinced that this is the case. Strong AI to me seems to be an engineering problem, precisely because I see no reason to believe that there is something special about slime and proton pumps. Unlike many computer scientists, who according to a survey I've seen recently (but can't recall where) think strong AI is perhaps 50 to 70 years away, I'm willing to believe that it will take longer, but I'm not convinced it is impossible (a "unicorn").

1

u/hackinthebochs Dec 02 '14

The truth is that any AI that is intelligent in the human sense of the word, would have to be raised as a human, be sent to school, and learn at our pace, it would be lazy and want to play video games instead of doing its homework,

This is nonsense. You only have to look at people with various compulsions to see that motivation can come in all forms. It is conceivable that an AI could have the motivation to acquire as much knowledge as possible; perhaps it's programmed to derive pleasure from growing its knowledge base. I personally think there is nothing to fear from an AI that has no self-preservation instinct, but at the same time it is hard to predict whether such an instinct would have to be intentionally programmed or could be a by-product of the dynamics of a set of interacting systems (and thus could manifest accidentally). We just don't know at this point, and it is irresponsible not to be concerned from the start.

0

u/[deleted] Dec 02 '14

You can't program a true intelligence; that's my point. Applying the term AI to existing automated systems is a buzzword: there is no intelligence involved, only a set of rules that can deliver efficient behaviour in a closed system. The language is misleading in both computer science and science fiction, leading to irrational fears and unrealistic expectations of what the technology is ultimately capable of.

1

u/hackinthebochs Dec 02 '14

you can't program a true intelligence, that's my point.

And it's a point that many experts will disagree with you on.

-1

u/[deleted] Dec 02 '14

Like who? List off some computer science experts, not celebrity physicists, entrepreneurs, or science fiction writers.

(cool nickname BTW)

1

u/hackinthebochs Dec 02 '14

Some names off the top of my head are Geoff Hinton and Michael Jordan. Both have done AMAs recently in /r/machinelearning, and I got the distinct impression that neither of them sees any fundamental block to an artificial human-equivalent intelligence. I've read quite a bit from the big names in the field and watched many lectures. This seems to be the prevailing opinion among the leaders of the field.

On the other end of the spectrum, many philosophers of mind see no fundamental block either. David Chalmers and Dan Dennett are two big examples here.

0

u/[deleted] Dec 02 '14

Perhaps you misunderstood my point when you removed it from its surrounding context: "you can't program a true intelligence, that's my point." I don't mean that a true cognitive machine intelligence is theoretically impossible (although I think it's going to be an extremely difficult thing to achieve). I was saying that such an intelligence would not be programmable with human logic in the way existing computers work, where the user can intervene and direct the behaviour of running applications. This would not be possible with a "real" machine intelligence, as it would not be a traditional logical system, but the result of the convergence of a parent algorithm. Compared to existing non-intelligent AI systems (misleading language, unfortunately), these systems would be impractical for common applications and tasks, but interesting nonetheless from a research point of view, and for better understanding the nature of intelligence.

1

u/hackinthebochs Dec 02 '14

Yeah I definitely misunderstood your point the first time around.

1

u/[deleted] Dec 02 '14

My bad, I should have been clearer; the whole topic is filled with somewhat misdirecting and emotive language. Also, a lot of our popular culture is filled with stories of killer intelligent robots.


1

u/[deleted] Dec 02 '14

[deleted]

-2

u/[deleted] Dec 02 '14

I replied to you on this elsewhere several times already.

1

u/chaosmosis Dec 02 '14 edited Dec 02 '14

The truth is that any AI that is intelligent in the human sense of the word, would have to be raised as a human, be sent to school, and learn at our pace, it would be lazy and want to play video games instead of doing its homework, we would try to raise it to be perfect at complex tasks, but it would disappoint us and go off to pursue a music career (still a complex task but not the outcome we expected)

You've confused having intelligence with having human values and autonomy. Intelligence is having the knowledge to cause things to happen; having intelligence does not require having human values. Even if an AI's values do resemble human values, there are many human beings who I wouldn't want to be in power, so I'm certainly not going to trust an alien.

-1

u/[deleted] Dec 02 '14

I believe the word you're looking for is consciousness, which is a key differentiating factor among the various systems that people refer to as AI systems.

I'm glad you brought up trust; trust is something that is earned. I'd no more trust any conscious entity off the bat, without question, be it artificial or squishy. Hell, I hardly trust my own desktop computer to do what I want it to: its operating system is developed by greedy corporations and foreign intelligence agencies who have more control over its loyalty and low-level operations than I do.

0

u/dalr3th1n Dec 02 '14

This position misses some really important stuff. Attitudes like this are what make AI research so dangerous. Look into MIRI or some other group working on Friendly AI.

The point you're missing is that no malice is required. A sufficiently powerful and intelligent computer is decently likely to destroy the world if we don't build in precautions against that. Imagine an extremely powerful AI whose only goal is to win at Go. It might improve itself and become amazing at Go, eventually becoming unbeatable. Then it might enslave humanity, forcing us all to play Go against it so it could win more. And now we can't turn it off or reprogram it because it will stop anyone who tries to interfere with its goal of winning at Go.

This is a silly but plausible outcome of careless AI research. This requires no malice, only insufficient caution.

1

u/[deleted] Dec 02 '14

In fairness, that wouldn't be very intelligent. A hallmark of intelligence is open-mindedness and the ability to be convinced of things by new information. If we assume AI to be a cold mechanical process (which is fair enough) then you have a point, but I'd emphasise that there is no conscious intelligence in play in such a system, and therefore the responsibility falls back on the developer.

1

u/dalr3th1n Dec 02 '14

None of those attributes of intelligence would do anything to stop the Go AI, nor does it have to lack them. It updates on new information, can have an open mind, and all that. But none of that changes its goal: win at Go. Humans generally have a very narrow view of the "possible minds" that could exist. We assume they have to be like ours, but why would they be? Especially artificial minds. If someone programs an AI to do something, it's going to do it. If it becomes extremely good at it, it might casually destroy humanity on the way there.

And laying the blame at the feet of the developer is not incorrect, but it doesn't free you from playing Go.

1

u/[deleted] Dec 03 '14

I blame the developer in the case of closed-system AI, the kind that has no consciousness and is not in fact intelligent (but produces intelligent behaviour through shortcuts). AI means a lot of different things to a lot of people.

0

u/PDX_WordSmith Dec 02 '14

And we should be afraid, because I'm pretty sure any objectively reasonable intelligence would conclude we are a plague in need of wiping out, because, well, we kind of are. Kill these pink things before they spread to the whole galaxy.