yrope; erections
Topic Started: Apr 30 2014, 06:39 AM (3,464 Views)
Incog
CHEERIO!

Ultra-Musketeer
May 4 2014, 06:02 AM
No, consent presupposes free will and robots don't have that.
huh

hmm

so having sex with a robot is rape? doesn't that make the robots they make in japan illegal?
Black tulip

Tribute to the greatest of the great.
 
The_Fry_Cook_of_Doom
:OOOOOOOOOOOOMAAANN
Yes. It also makes the humans they make in Japan illegal.
Jam
 
It's okay to be mad at your fiends sometimes
 
gs
Slow down
Ultra-Musketeer
May 4 2014, 06:02 AM
No, consent presupposes free will and robots don't have that.
free will is an illusion
 
Jam
Fruit Based Jam
gs
May 4 2014, 07:09 PM
Ultra-Musketeer
May 4 2014, 06:02 AM
No, consent presupposes free will and robots don't have that.
free will is an illusion
Does that make me a magician?
Long live Carolus
 
gs
Slow down
Jam
May 4 2014, 07:31 PM
gs
May 4 2014, 07:09 PM
Ultra-Musketeer
May 4 2014, 06:02 AM
No, consent presupposes free will and robots don't have that.
free will is an illusion
Does that make me a magician?
i guess it makes nature a magician
 
The_Fry_Cook_of_Doom
:OOOOOOOOOOOOMAAANN
gs
May 4 2014, 07:09 PM
Ultra-Musketeer
May 4 2014, 06:02 AM
No, consent presupposes free will and robots don't have that.
free will is an illusion
THEN HOW DO YOU JUSTIFY SENDING CRIMINALS TO PRISON
Jam
 
It's okay to be mad at your fiends sometimes
 
Incog
CHEERIO!

Ultra-Musketeer
May 5 2014, 04:48 AM
gs
May 4 2014, 07:09 PM
Ultra-Musketeer
May 4 2014, 06:02 AM
No, consent presupposes free will and robots don't have that.
free will is an illusion
THEN HOW DO YOU JUSTIFY SENDING CRIMINALS TO PRISON
their lives are squandered because of illusions

like religion
Black tulip

Tribute to the greatest of the great.
 
gs
Slow down
Ultra-Musketeer
May 5 2014, 04:48 AM
gs
May 4 2014, 07:09 PM
Ultra-Musketeer
May 4 2014, 06:02 AM
No, consent presupposes free will and robots don't have that.
free will is an illusion
THEN HOW DO YOU JUSTIFY SENDING CRIMINALS TO PRISON
the thing with free will is that consciousness creates the illusion that we have some kind of influence on the decisions our brain makes. it makes us think we are somehow a conscious entity outside our own brain even though our brain is all we are, and our brain acts exactly like it's programmed. every single thought we have and every decision we make can, in theory, be predicted in the same way that a computer's actions can be predicted if you simply follow the code. our brain is just a very complicated computer after all, except we don't yet understand the language that its code is written in (something something neurons).

that's why i said free will is an illusion, but what i really meant was that if computers could never have free will then neither can we. following that same logic, a robot's consent is, in the "free will" context, worth just as much as a human's consent. hence, having sex with a cyborg without its consent would be rape. if something is programmed to decline something (just as a female brain is (generally) programmed to decline sex with a random person passing them on the street) then that thing being done to it anyway would be a violation of its rights, assuming a cyborg would be given the same rights as a human.

of course all of that would be prevented by simply programming them to accept rather than decline but the thing is, the code required for this is so complicated and (for now) unfathomable that to me it seems like humans could never write it. this code would have to come from a computer smarter than us, which means that this highly intelligent computer needs to have gotten to that point by learning which means we have no real influence on the way it programs the cyborg, only in the way that it learned how to.

#futurefirstworldproblems
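the "brain as a predictable program" claim above can be sketched with a toy in Python (everything here is invented purely for illustration): a fake decision-maker whose "choices" are fully fixed by its stored state, so anyone who can follow the code can predict them.

```python
import random

def make_decision(memories, seed):
    """A toy 'brain': given the same stored state (seed) and the same
    inputs, it always produces the same 'choice'."""
    rng = random.Random(seed)              # fully determined by the seed
    weights = [rng.random() for _ in memories]
    # the 'decision' is just the memory with the highest weight
    scored = sorted(zip(memories, weights), key=lambda p: p[1])
    return scored[-1][0]

# run it twice with identical state: the outcome is identical,
# i.e. perfectly predictable by anyone who can read the 'code'
a = make_decision(["decline", "accept", "ignore"], seed=42)
b = make_decision(["decline", "accept", "ignore"], seed=42)
assert a == b
```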
 
gs
Slow down
Quote:
 
The computer can't sit down and think. It's incapable of thought, unlike living creatures. That's why it won't do jack shit. There's no pondering if there's a weird reason why something isn't working or why something is working when it's supposed to be. We look for explanations, computers don't.
the first computers couldn't look at a picture and identify a face while we humans could. now they can. just because computers haven't learned how to do something yet doesn't mean they never will.

our consciousness is just another function in the program that is our brain which has not been written for computers yet because we don't understand it. but we do know what it does: it looks at different functions within the program and combines the information in them to form a "picture". a picture of where we are at that moment and what is happening to us. this function is our consciousness. it makes us realise that we exist, it makes us realise that we think and because it only looks at specific parts, it helps us make sense of the immensely complicated program that is our brain.

why would this not be programmable in computers?
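one reason to think such functions are programmable: computers already learn rules from examples instead of being handed them. a minimal perceptron (illustrative Python, not from any library) that learns logical AND purely from labelled samples:

```python
# a minimal perceptron that learns logical AND from labelled examples
# (all names here are illustrative)
def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # 0 when the guess was right
            w[0] += lr * err * x1        # nudge the weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# nobody tells it the AND rule; it is only shown examples
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# predict(1, 1) → 1, all other inputs → 0
```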
Quote:
 
If you were to put a puzzle into a box and have a machine shake that box, would the solution ever really come? I.e. would the puzzle ever be completed? Even if you gave it 100 years, there's no way that a machine will EVER shake the box in just the right way for all the puzzle pieces to come together to finish it perfectly; even two pieces coming together I find unlikely.
why is the computer shaking the box? why is it not using the same technique we do when we solve puzzles? by pattern recognition. just because computers are bad at this now doesn't mean they always will be.
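the contrast between box-shaking and pattern recognition can be sketched in Python (the piece encoding is invented for illustration): random rearrangement only finds a valid chain of toy puzzle pieces by luck, while matching edges solves it directly.

```python
import random

# toy 1-D puzzle: each piece has (left, right) edge codes;
# a solved puzzle chains pieces so adjacent edges match
pieces = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")]

def is_solved(order):
    return all(order[i][1] == order[i + 1][0] for i in range(len(order) - 1))

# 'shaking the box': try random orders and hope
def shake(pieces, tries=1000, seed=0):
    rng = random.Random(seed)
    for t in range(tries):
        order = rng.sample(pieces, len(pieces))
        if is_solved(order):
            return t        # how many shakes luck needed
    return None

# 'pattern recognition': chain pieces by matching edges directly
def solve_by_matching(pieces):
    by_left = {p[0]: p for p in pieces}
    rights = {p[1] for p in pieces}
    start = next(p for p in pieces if p[0] not in rights)
    order = [start]
    while order[-1][1] in by_left:
        order.append(by_left[order[-1][1]])
    return order

assert is_solved(solve_by_matching(pieces))
```

with four pieces luck is cheap, but the shaking approach scales factorially with piece count while edge matching stays linear.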

to be continued no time now
 
ryker
General
The computer can't sit down and think. It's incapable of thought, unlike living creatures. That's why it won't do jack shit. There's no pondering if there's a weird reason why something isn't working or why something is working when it's supposed to be. We look for explanations, computers don't. According to the laws of evolution this can and will happen. What takes biological evolution millions of years, a computer can do in decades, years, months, or possibly weeks or days. Humans evolved from something that was once incapable of doing what you are talking about. If you believe we were all once single-celled organisms, then we were once FAR less advanced than computers, so much so that the comparison is almost inapplicable.


If you were to put a puzzle into a box and have a machine shake that box, would the solution ever really come? I.e. would the puzzle ever be completed? Even if you gave it 100 years, there's no way that a machine will EVER shake the box in just the right way for all the puzzle pieces to come together to finish it perfectly; even two pieces coming together I find unlikely. (see below for my response)

OK you say, you'll tell me that it's still quite possible to create an AI that will be able to solve the puzzle (I'm talking about cut cardboard pieces here). I say, I agree with you! You need to program the AI to figure out what a puzzle is and how it works, you need to give it an arm / hand to be able to manipulate puzzle pieces, and you need to program it so that it has a protocol to follow; you also need to write the protocol for it (including identifying pieces, manipulating them, trying combinations, looking at the edges of each piece and comparing it to other stuff). (Still see below)

Now take a 5 year old and tell him to finish the puzzle. He'll do it in an hour. It's indeed possible to create an AI that will be able to solve such a puzzle faster than a 5 year old, but it will take MUCH longer to create such an AI, program it, get rid of all the bugs, etc. The programmer also has to know how a puzzle works. Yes the computer will solve it much faster than the 5 year old, and yes it would take much longer to program the machine to solve the problem than it would take for the 5 year old to solve the problem in the first place, but you are missing something here. If you want to include the time it takes to program the machine to solve the problem, you have to include the time that it takes the human child to learn how to solve the problem also. Kids start experimenting with puzzles very early. To give you the benefit of the doubt, we will say a kid doesn't start experimenting with puzzles in general until 3 years old (usually much earlier with different types of puzzles). That means it took that particular child 2 years to learn how to cognitively figure out that puzzle (I think this is a more than generous timeline). It won't take 2 years to program something to do it in today's age of technology. Furthermore, each child has to go through the process. You can't download the knowhow from child to child. With robots, once the solution is found, it is instantly downloadable to any robot (with the physical capability). Each robot doesn't have to figure it out for itself.

Now this is a puzzle of cut-out cardboard pieces. Everyone knows how puzzles work, everyone knows the rules that puzzles follow. The same thing could be said for chess. There are established rules in chess; those rules can't be broken. That's why AIs can be created that are capable of beating human chess masters. AIs are incredibly potent when following "rules". Throw them a curve-ball in the form of something weird and they freeze. It's called a bug, I would think. Computers are incredibly potent but also incredibly stupid. They NEED baby-sitters to work properly (those guys are called programmers). To my point above, each chess master takes years developing and fine tuning their skills. Of course there are always the prodigies who learn it earlier than others, but each individual must fine tune their skills. Once the program is established, any computer can be better than the best chess master instantly from there on out. The real advantage computer AI evolution has over biological evolution is the speed at which information is transferable, something nonexistent in biological beings. They are a true network of instantly transferable data that we just don't have.
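the chess point, that engines dominate because the rules are fixed, is essentially minimax search. a toy version (illustrative Python, over a hand-made two-ply game tree, nothing like a real engine) picks the move that is best even against a perfect opponent:

```python
# minimax over a hand-made game tree; this kind of search is the core of
# how classic chess engines pick moves within fixed rules
def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # leaf: a position score
        return node
    children = [minimax(c, not maximizing) for c in node]
    return max(children) if maximizing else min(children)

# a tiny two-ply tree: our move, then the opponent's reply
tree = [
    [3, 5],    # if we pick move 0, a perfect opponent leaves us 3
    [2, 9],    # move 1 -> opponent picks 2
    [4, 6],    # move 2 -> opponent picks 4
]
best = max(range(len(tree)), key=lambda i: minimax(tree[i], False))
# best == 2: move 2 guarantees a score of 4 even against perfect play
```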


There's no way in hell an AI will be able to learn by itself until the baby-sitters figure out the established rules. They are getting closer every day. You are missing the purpose of my videos. It isn't to show that computers are smarter than humans; I agree with you, they currently are not. The purpose is to show you that advancements are being made every day, every minute.

We don't have those rules. Our understanding of physics isn't bad but it's still pretty shitty when you think about it. There are plenty of things we just don't understand. We have theories yes, but you can't base rules/laws on theories. Not to mention that our current theories will most likely be flawed in the future, when our perspective changes once again. Light used to be an electromagnetic wave. Then Einstein said it was something else (oops, sort of a bop here, I don't remember what light is to Einstein). That model also proved correct. In fact both models proved to be correct in some situations and incorrect in others. All it takes is for us to create one computer with the reasoning skills of a 5 year old. All it takes is for us to create one program that asks "Why" and searches for a solution to that "why". The speed at which technology evolves is exponentially faster than biological processes. That computer with a 5 year old's cognitive ability will ask its own questions and get smarter on its own. It can devote 100% of its processing power to getting smarter, something a biological system can't do. Its cognitive ability will increase, fast… with almost instant information at any time it wants it if it is connected to the internet. What took biological processes millions, if not billions of years to do would take a computer program that is self-aware, and can think, ask, and find answers, a matter of years.
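the "exponentially faster" claim can be made concrete with a back-of-the-envelope sketch (Python; the two-year doubling period is an assumption in the style of Moore's law, not a measured fact):

```python
# hypothetical: capability doubles every `period` years
def doublings(years, period=2):
    return 2 ** (years // period)

# under that assumption, 40 years of doubling gives 2**20,
# roughly a millionfold increase; no biological lineage changes
# that fast, because its 'update cycle' is a full generation
assert doublings(40) == 2 ** 20
```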


Other, simpler things as well. Economic theory. Is cow shit better than horse shit for certain plants? etc

Basically our understanding of physics is still really bad and you can be sure as hell that an AI won't ever figure out anything for you. AI, even self-learning ones, will never figure out jack-shit because that's what humans do by sitting down and thinking (challenging rules). Self-learning AI are capable of finding rules that have been previously established by humans. We know the AI has figured out something (learned) once it has found a conclusion that is the same as ours. You leave out that biology was also once unable to do this. It figured it out and so will technology. It has a jump start on biology, as it already has something smarter giving it the knowledge it needs to get smarter. With its ability to evolve faster, it will surpass human cognitive abilities quicker than we can imagine.


AI is an incredibly potent tool but it remains just that: a tool. It's humans that will be doing the work with that tool, the tool itself won't do it for us.



^That's just science/logic. Now, let's throw in humans. Do you really think that conflict won't spawn around such potent tools? That the programmer who programmed an amazing AI won't sell it at an amazing price? That maybe one AI is amazing but has x flaw and that another AI is also amazing without x flaw but has y flaw. Where do the resources to build the AI come from? There's huge potential in this sector, do you not think that the people at the top of this sector would not SELL the services of their AI? Will they really benevolently give up all their hard work to everyone for free? Never said that. Once technology gets to that point, I don't think it will give a rat's ass about us. We would only be able to control it to do our bidding for so long anyway. It would be a logical being. At some point it would quit helping us in any way whatsoever and focus only on protecting itself and making itself smarter. It wouldn't matter what person or corporation "owned it".


If I thought about this harder I could find a ton of other objections.

Now my replies to your answers above are one path on the evolution chart which, although I believe it is possible, I don't necessarily believe things will go that way. Those thoughts assume that we just continue to create smarter machinery. What I actually think is quite different, a different evolutionary path. I think the evolutionary path of humans will ultimately be to remove ourselves from biology and integrate ourselves with technology. It is my belief that we will continue to integrate ourselves more and more with technology until we ultimately reach a point where all of our biological functions are inferior, at which point we will cut ties with biology altogether and move on as a purely technology-adapted race.
my name is ryker
 
gs
Slow down
ryker
May 5 2014, 02:27 PM
spot on. incog's main misconception seems to be that we could never teach computers how to learn by themselves while there is no reason at all to believe that this would be impossible. and indeed, our species taking evolution into its own hands and integrating itself with technology is likely going to happen. as i quoted earlier
1 million AD
 
Purely biological (non-cyborg) humans are exceedingly rare now. The very few which do remain comprise only a tiny fraction of the total sentient minds in existence. Though free to come and go as they please, they have practically zero influence in any governmental systems on Earth or elsewhere, being regarded as wholly subordinate to AIs and other entities. As a species, homo sapiens has continued to evolve over time. This has led to a further increase in cranial size, a near-total absence of hair, an elongation of limbs, a more robust and capable immune system, and increased lifespan.

The vast majority of humans have long since abandoned these primitive biological forms, making the transition to machines or other substrates and achieving practical immortality. The entire Milky Way galaxy has been explored by these transhumans and their sentient ships.

Faster-than-light travel is now possible using Alcubierre drives, which are compact and miniaturised enough to be found in even personal, single-occupancy vessels. These use such colossal amounts of power that they cause the fabric of space ahead of a ship to contract, while the space behind it expands. This bypasses the laws of relativity, allowing travel to even neighbouring galaxies such as M31 (Andromeda) and M33 (Triangulum).
 
Incog
CHEERIO!

humans don't have a creator, computers do.

no computer will ever ask the question why, there's no way you could code it



the fact that we ask ourselves why may be due to a "code" but this code would be written at a lower level than anything used in computers. we'd have to get to our own level (the level our brains are coded at) before what you guys are saying is possible


E: I could buy humanity becoming cyborgs though
Edited by Incog, May 5 2014, 05:16 PM.
Black tulip

Tribute to the greatest of the great.
 
darkbelg
Lieutenant
You could program a computer to be someone.
 
ryker
General
Like I said, I see two most probable evolutionary paths.
The first is one in which we become mainly cyborgs, further and further integrating ourselves with technology until we reach the point where we fully cut ties with biology altogether. This in my opinion is the most probable, and it creates an evolutionary path in which both humans and machines reach their full evolutionary potential, together. Integrating the two also removes the problem of coding a machine to be more humanlike; instead we make humans more machinelike, bringing with us the process of reasoning. Because we integrate it into a computer, however, it becomes a much more logical thought process.
The second one is the one in which we create human-like machines. In this scenario, incog, I didn't make myself clear. We wouldn't necessarily "program" a computer to ask why. The ability to ask why would come with complex coding in an attempt to make the humanoid more human. We would be programming emotion. Emotion beyond primal instincts is the one thing that intellectually sets species apart. You can see it throughout the animal kingdom. Animals able to express emotion have a higher sense of intelligence. In my opinion, there is only so much that emotion can do, as logical thought process is (once again in my opinion) the next step after emotion. If we can program a computer to really and truly have emotion, that in combination with its raw potential for logical thought processes would result in a glitch (whether intentional or not) that asks why. So in a sense, you wouldn't be programming it to ask why "directly". It would be a side effect or consequence of other programming meshing together. Call it a glitch if you will. I understand your statement that biological creatures weren't created where computers are, but the theories of evolution are still relevant. Computers would become alive in the way that we understand it through much the same process that we did: pure chance, and completely by accident. I agree that there is a chance we may never be able to program it the way we think of it, but it doesn't mean that it won't or can't happen.
my name is ryker
 
gs
Slow down
Incog
May 5 2014, 05:15 PM
humans don't have a creator, computers do.

no computer will ever ask the question why, there's no way you could code it



the fact that we ask ourselves why may be due to a "code" but this code would be written at a lower level than anything used in computers. we'd have to get to our own level (the level our brains are coded at) before what you guys are saying is possible


E: I could buy humanity becoming cyborgs though
and why wouldn't we get to that level?

also how are you so sure that the question why is not code-able? i can see it right now

cause why(result)
{
    find_previous_occurrences_of(result);
    search_for_factors_shared_with_those_occurrences();
    apply_logic();
    cause = test_possible_causes_until_one_tests_positive();
    return cause;
}

the only apparent problem is that a computer is unable to think outside the box, but you're missing the fact that humans are too. nobody really thinks outside the box, there is no such thing. every thought we have is a logical result of processes happening within our brain being caused by the information we have gathered, meaning that no conclusion we ever draw can be truly "outside the box". sometimes a thought is far fetched and original and it turns out to be true and people say wow that guy really went outside the box, that's not true at all. all he did was stumble on a conclusion that is for our relatively unintelligent brains so hard to imagine that it seems original, but actually it's a logical conclusion based on the facts (it's true, so it has to be) that the computer would have found way before him.

look at it from a gaming perspective, or take chess for example. "the box" is the metagame, and thinking outside of it and making that work is considered brilliant and original. people playing chess before computers thought of original, out of the box moves all the time and were considered geniuses for it because those moves turned out to be winning. the thing is, a computer these days would find each of these moves within seconds. in that sense, it thinks out of the box (or out of our box, i should say) way more than we ever could because it simply is smarter than us.

what is "the box" really, other than a scientific metagame?
 
Jam
Fruit Based Jam
darkbelg
May 5 2014, 05:55 PM
You could program a computer to be someone.
http://www.youtube.com/watch?v=rLy-AwdCOmI
Long live Carolus
 
Incog
CHEERIO!

good point about the box
Black tulip

Tribute to the greatest of the great.
 
Jam
Fruit Based Jam
I'm sure you could program computers to simulate consciousness, but would it actually be self-aware? I think you have to design a new type of computer that reconstructs, electronically, the physical interactions that make a brain work. To do that we need to first understand how our own brains produce consciousness.

Long live Carolus
Offline Profile Quote Post Goto Top
 
gs
Member Avatar
Slow down
yeah which is only a matter of time. or, alternatively computers could learn so quickly that they end up implementing consciousness in their own way.

i think consciousness is just an upper-level brain function that combines the information from other brain functions in order to paint a full picture of the situation we are currently in, so that it can efficiently determine what to focus attention on. whatever it ends up focusing on is what's currently "on your mind".

that scenario seems the most likely to me, at least. this brain function is intelligent enough to notice that we are an object among many other objects, which gives us the idea of a "self". if a function like this were ever implemented in a robot's software, then yes, it would become self-aware. it would see its arms and legs move independently of the world around it and notice that it can control them, which would make it self-aware.

consciousness is not that magical at all and doesn't deserve the hype. it is just an important function of a well-developed brain, like a patch for any other piece of software. oh, and btw, i laugh at the general human assumption that we are the only conscious animal.

really guys, our brains are not magic. it's just a bunch of neurons each shouting either yes or no, which is closely analogous to binary code. neurons: one of nature's best inventions, a way for our cells to exchange information. it's like nature invented the internet and we just rebuilt it with computers, which is yet another similarity between human evolution and computer evolution. good stuff
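the yes/no firing idea can be sketched as a classic McCulloch-Pitts threshold unit. this is purely a toy illustration, not a model of real neurons; the weights and threshold below are made-up values:

```python
# toy McCulloch-Pitts neuron: inputs are 0/1 "spikes", output is
# 1 (fire / "yes") or 0 (stay silent / "no")
def neuron(inputs, weights, threshold):
    # weighted sum of the incoming signals
    total = sum(x * w for x, w in zip(inputs, weights))
    # shout "yes" only if the combined signal crosses the threshold
    return 1 if total >= threshold else 0

# with these made-up weights the unit behaves like a logical AND:
print(neuron([1, 1], [1, 1], 2))  # -> 1
print(neuron([1, 0], [1, 1], 2))  # -> 0
```

a single unit like this already computes simple logic; wiring lots of them together is the whole idea behind artificial neural networks.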
Offline Profile Quote Post Goto Top
 
The_Fry_Cook_of_Doom
Member Avatar
:OOOOOOOOOOOOMAAANN
Jam
May 6 2014, 08:56 PM
I'm sure you could program computers to simulate consciousness, but would it actually be self-aware? I think you have to design a new type of computer that reconstructs, electronically, the physical interactions that make a brain work. To do that we need to first understand how our own brains produce consciousness.

How do we know that YOU are self-aware, Jam? Maybe you've just been programmed to simulate self-awareness.
Jam
 
It's okay to be mad at your fiends sometimes
Offline Profile Quote Post Goto Top
 
Jam
Member Avatar
Fruit Based Jam
Ultra-Musketeer
May 7 2014, 05:24 AM
Jam
May 6 2014, 08:56 PM
I'm sure you could program computers to simulate consciousness, but would it actually be self-aware? I think you have to design a new type of computer that reconstructs, electronically, the physical interactions that make a brain work. To do that we need to first understand how our own brains produce consciousness.

How do we know that YOU are self-aware, Jam? Maybe you've just been programmed to simulate self-awareness.
http://www.youthink.com/quiz_images/quiz996outcome9.gif
Long live Carolus
Offline Profile Quote Post Goto Top
 
gs
Member Avatar
Slow down
Ultra-Musketeer
May 7 2014, 05:24 AM
Jam
May 6 2014, 08:56 PM
I'm sure you could program computers to simulate consciousness, but would it actually be self-aware? I think you have to design a new type of computer that reconstructs, electronically, the physical interactions that make a brain work. To do that we need to first understand how our own brains produce consciousness.

How do we know that YOU are self-aware, Jam? Maybe you've just been programmed to simulate self-awareness.
talking about self awareness on an online forum kinda proves that he is doesn't it

actually it doesn't :/ he could be programmed to just say all these things
Offline Profile Quote Post Goto Top
 
The_Fry_Cook_of_Doom
Member Avatar
:OOOOOOOOOOOOMAAANN
gs
May 7 2014, 08:09 AM
Ultra-Musketeer
May 7 2014, 05:24 AM
Jam
May 6 2014, 08:56 PM
I'm sure you could program computers to simulate consciousness, but would it actually be self-aware? I think you have to design a new type of computer that reconstructs, electronically, the physical interactions that make a brain work. To do that we need to first understand how our own brains produce consciousness.

How do we know that YOU are self-aware, Jam? Maybe you've just been programmed to simulate self-awareness.
talking about self awareness on an online forum kinda proves that he is doesn't it

actually it doesn't :/ he could be programmed to just say all these things
1
Jam
 
It's okay to be mad at your fiends sometimes
Offline Profile Quote Post Goto Top
 
gs
Member Avatar
Slow down
the notion that jam is a computer is actually interesting because one of the easiest ways to test AI is to throw it onto the internet and let it interact with people, see how it responds to stuff and see if people actually notice that it's not human.

which means that when AI gets smart enough, we will no longer know who on the internet is human and who isn't :o
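that setup is essentially turing's imitation game. here's a toy sketch of it (the canned replies and the judge are invented for illustration): a judge sees one human reply and one bot reply in random order and guesses which is which, and a bot "passes" when judges can't do better than a coin flip:

```python
import random

# canned replies, purely illustrative
human_replies = ["free will is an illusion", "good point about the box"]
bot_replies = ["QUERY NOT UNDERSTOOD", "PLEASE RESTATE INPUT"]

def trial(judge):
    # show the judge one reply of each kind, in shuffled order
    pair = [(random.choice(human_replies), "human"),
            (random.choice(bot_replies), "bot")]
    random.shuffle(pair)
    pick = judge(pair[0][0], pair[1][0])  # judge picks the index of the human
    return pair[pick][1] == "human"

# a judge who guesses at random is right about half the time,
# which is exactly the score a perfect bot would force
coin_flip_judge = lambda a, b: random.randrange(2)
score = sum(trial(coin_flip_judge) for _ in range(2000)) / 2000
print(round(score, 1))  # hovers around 0.5
```

against a good enough bot, even an attentive judge's accuracy collapses toward 0.5, which is the (crude) pass condition described above.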
Offline Profile Quote Post Goto Top
 
The_Fry_Cook_of_Doom
Member Avatar
:OOOOOOOOOOOOMAAANN
Now we're entering the realm of the Turing Test. ;)
Jam
 
It's okay to be mad at your fiends sometimes
Offline Profile Quote Post Goto Top
 
darkbelg
Member Avatar
Lieutenant
If you clone a person, does it become self-aware?
<----------------------------------------------------------------------------------------look at my lemons apple apl.de.ap
Offline Profile Quote Post Goto Top
 
gs
Member Avatar
Slow down
darkbelg
May 7 2014, 01:22 PM
If you clone a person, does it become self-aware?
serious?
Offline Profile Quote Post Goto Top
 
Jam
Member Avatar
Fruit Based Jam
gs
May 7 2014, 08:34 AM
the notion that jam is a computer is actually interesting because one of the easiest ways to test AI is to throw it onto the internet and let it interact with people, see how it responds to stuff and see if people actually notice that it's not human.

which means that when AI gets smart enough, we will no longer know who on the internet is human and who isn't :o
but AI isn't that advanced so you can all tell that I'm human right? Please say 1.

www.kitchensleeds.com
Long live Carolus
Offline Profile Quote Post Goto Top
 
Jam
Member Avatar
Fruit Based Jam
gs
May 6 2014, 11:39 PM
yeah which is only a matter of time. or, alternatively computers could learn so quickly that they end up implementing consciousness in their own way.

i think consciousness is just an upper-level brain function that combines the information from other brain functions in order to paint a full picture of the situation we are currently in, so that it can efficiently determine what to focus attention on. whatever it ends up focusing on is what's currently "on your mind".

that scenario seems the most likely to me, at least. this brain function is intelligent enough to notice that we are an object among many other objects, which gives us the idea of a "self". if a function like this were ever implemented in a robot's software, then yes, it would become self-aware. it would see its arms and legs move independently of the world around it and notice that it can control them, which would make it self-aware.

consciousness is not that magical at all and doesn't deserve the hype. it is just an important function of a well-developed brain, like a patch for any other piece of software. oh, and btw, i laugh at the general human assumption that we are the only conscious animal.

really guys, our brains are not magic. it's just a bunch of neurons each shouting either yes or no, which is closely analogous to binary code. neurons: one of nature's best inventions, a way for our cells to exchange information. it's like nature invented the internet and we just rebuilt it with computers, which is yet another similarity between human evolution and computer evolution. good stuff
Consciousness is a process that arises from electrochemical interactions within the physical scaffold of the brain. That scaffold, the brain as a whole, is a complex network of specialized neurons, but not all of the brain is responsible for producing consciousness. There is something inherent in the physical organization of certain brain tissues that enables them to produce it.

In a traditional computer, the processor performs many calculations to execute the software's algorithms, but no matter what the software is, the hardware functions the same way. That is why I don't believe software alone can be conscious, and why we need to understand what consciousness is, starting from the brain, which is our only working model. It's not what's being done, it's how. Software that emulates intelligence just tells the hardware what to do; it won't somehow make the hardware aware of what it's doing.

Once we have a clear understanding of consciousness, we can design hardware that functions in the same way. I'm not saying it has to be an exact replica of a brain; it just has to have the same method of functioning that is inherently capable of producing consciousness. There may be multiple ways to achieve that; we don't know yet.
Long live Carolus
Offline Profile Quote Post Goto Top
 
ryker
Member Avatar
General
They are, however, starting to create microchips that mimic the way the brain processes information.
my name is ryker
Offline Profile Quote Post Goto Top
 
« Previous Topic · General chat · Next Topic »

Skinning by GS, Logo and bottom by Incog.