Talk:Artificial consciousness


"A new class of visuomotor neurons has been recently discovered in the monkey's premotor cortex: mirror neurons. These neurons respond both when a particular action is performed by the recorded monkey and when the same action, performed by another individual, is observed." (Rizolatti et al) Maybe this also suggests that awareness is an awareness of the processes, not static objects or states. Or maybe this discovery is important for other reasons. Just put it here because there was recently a lot of discussion in the Internet about mirror neurons. [[User:Tkorrovi|Tkorrovi]] 22:24, 14 Apr 2004 (UTC)
"A new class of visuomotor neurons has been recently discovered in the monkey's premotor cortex: mirror neurons. These neurons respond both when a particular action is performed by the recorded monkey and when the same action, performed by another individual, is observed." (Rizolatti et al) Maybe this also suggests that awareness is an awareness of the processes, not static objects or states. Or maybe this discovery is important for other reasons. Just put it here because there was recently a lot of discussion in the Internet about mirror neurons. [[User:Tkorrovi|Tkorrovi]] 22:24, 14 Apr 2004 (UTC)

== Awareness of processes ==

I dug into it a bit more and found that some experiments indeed show that a process, not an object, activates the neurons. From a New Scientist article http://www-inst.eecs.berkeley.edu/~cs182/readings/ns/article.html "So they presented monkeys with things like raisins, slices of apple, paper clips, cubes and spheres. It wasn't long before they noticed something odd. As the monkey watched the experimenter's hand pick up the object and bring it close, a group of the F5 neurons leaped into action. But when the monkey looked at the same object lying on the tray, nothing happened. When it picked up the object, the same neurons fired again. Clearly their job wasn't just to recognise a particular object."
For such a reaction to occur, a model of the process must be created. Unless we suggest that humans have models of all possible processes from birth, awareness of processes likely includes creating a model of a process without any prior knowledge of it, based only on the information received through the senses (i.e. only on the pulses coming from the receptors). The ability to create a model in this way distinguishes consciousness from other systems, and would be the biggest challenge for artificial consciousness. No conventional software can do that, as it requires too much flexibility. This is also why proposed mechanisms such as absolutely dynamic systems might be necessary.
[[User:Tkorrovi|Tkorrovi]] 13:42, 17 Apr 2004 (UTC)
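
A minimal Python sketch of this idea (all names invented, purely illustrative, not a proposed AC mechanism): a model of the process is built only from the incoming stream of receptor pulses, with no prior knowledge. The "model" here is just a table of observed transitions, updated online, from which the most likely next pulse can be predicted.

 from collections import defaultdict

 class ProcessModel:
     def __init__(self):
         # transitions[a][b] = how often pulse b has followed pulse a
         self.transitions = defaultdict(lambda: defaultdict(int))
         self.last = None

     def observe(self, pulse):
         # update the model from one new sensory pulse
         if self.last is not None:
             self.transitions[self.last][pulse] += 1
         self.last = pulse

     def predict(self):
         # the most frequently observed successor of the last pulse, if any
         successors = self.transitions.get(self.last)
         return max(successors, key=successors.get) if successors else None

 model = ProcessModel()
 for pulse in ["A", "B", "A", "B", "A"]:   # a toy stream of receptor pulses
     model.observe(pulse)
 print(model.predict())   # "B" -- learned from the stream, not pre-programmed
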


== Archived Discussion ==

== Prediction ==

I am puzzled about the idea of consciousness being associated with prediction. I thought that perhaps it meant anticipation in the short term, i.e. immediate cogent reaction to imagined possible events (including internal events such as might emanate from thought processes). Can anyone explain, in relation to consciousness, what is being predicted and by whom, and why this is thought to be an essential component of consciousness? Matt Stan 08:49, 25 Mar 2004 (UTC)

In accordance with my Concise Oxford Dictionary, "anticipate" in the wider sense means "foresee", "regard as probable", etc., so it means the same as "predict" ("foretell"). (Except that there is no such thing as a synonym in English. There are far more words in English than in some other languages. Every word is coloured with a different set of meanings from every other word, which means that as one refines meaning on some topic it can become apparent that there is a better word to use, or at least one that helps to distinguish between matters under discussion. Anticipate does not mean the same as predict. To anticipate is to behave in a manner borne of interacting with the environment so as to take account of a variety of outcomes in any given situation. To predict is to make a statement about the future which might be true or false at the time that the prediction is made. Matt Stan 11:38, 10 Apr 2004 (UTC)) The difference with the word "anticipate" is that it also has the narrower meaning "deal with before the proper time". If you talk about immediate reaction to imagined events, then you most likely have that meaning in mind. No, "predict" is not used in that sense in AC. In the paper I added to the NPOV version, Igor Aleksander talks about the "Ability to predict changes that result from action depictively". It is also said in a paper by Rod Goodman and Owen Holland www.rodgoodman.ws/pdf/DARPA.2.pdf that "Good control requires the ability both to predict events, and to exploit those predictions". The reason we need to predict the changes that result from action is that we can then compare them with the events that really happened, which enables us to control the environment and ourselves (i.e. act so that we can predict the results of our actions). This is also important for training AC -- the system tries to predict an outside event, and if this event indeed happens, then that gives it a positive signal. What is necessary for that is imagination, i.e. generating all relevant possibilities for a certain case, for which the system must be very unrestricted. Also necessary is some sort of "natural selection", so that only those models (processes) survive that fit their environment. So the events are imagined not in order to react to them immediately; rather they are stored to be exploited later, at the time when the predicted outside event should occur. Tkorrovi 18:50, 25 Mar 2004 (UTC)
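
A toy Python sketch of this training idea (my own construction, not a mechanism from Aleksander's or Goodman and Holland's papers): several candidate models each predict the next outside event; a model receives a positive signal when its predicted event indeed happens, and persistently wrong models are discarded, so only those that fit their environment survive.

 class CandidateModel:
     def __init__(self, rule):
         self.rule = rule    # a guess at how the environment behaves
         self.score = 0

 def environment(event):     # hypothetical environment: next event is last + 1
     return event + 1

 models = [CandidateModel(lambda e: e + 1),   # fits the environment
           CandidateModel(lambda e: e - 1),   # does not
           CandidateModel(lambda e: e * 2)]   # does not

 event = 0
 for _ in range(20):
     actual = environment(event)
     for m in models:
         # positive signal when the predicted outside event indeed happens
         m.score += 1 if m.rule(event) == actual else -1
     models = [m for m in models if m.score > -5]   # "natural selection"
     event = actual

 print(len(models))   # 1 -- only the model that fits its environment survives
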

I think that "Ability to predict changes that result from action depictively" is a bit too abstract for an encyclopedia, but your subsequent explanation gets the point across. I'm not sure, however, that the heuristic process that you describe above regarding learning is necessary for consciousness per se, although it is important for artificial intelligence, and I'm concerned that we shouldn't confuse the two - though they must indeed go together to some extent in any implementation. I think your point is perhaps important for a machine that has to learn, and therefore might be important in order to attain consciousness, i.e. part of the engineering/programming process aimed at achieving consciousness. However, once consciousness has been achieved, the ability to go on learning is not essential in order for consciousness to continue. I can remain conscious in a totally chaotic environment in which it is not possible to predict anything accurately. You seem to be saying that it is my constant and continuous attempts to predict what is going to happen next, moment by moment, every moment, that are a defining part of my consciousness. Whilst I am writing this, you might say that in the process of making my utterances I am having to predict what I am going to write next. But I would put that differently. I don't predict what I am going to do - I just do it, in what is called a stream of consciousness. Therefore I still question whether predictive capability has to be accepted into the definition of what constitutes AC. Matt Stan 19:39, 25 Mar 2004 (UTC)
Almost all "potential" AC systems (neural networks, genetic algorithms etc) are trainable, ie the abilities are not necessarily attained during engineering/programming process, but during training, so necessary condition to have certain ability is to be capable to achieve that ability. It's the same with humans, the only possibility to achieve certain abilities is to learn from childhood, so abilities what enable us to learn are important part of our consciousness, especially thinking. You said "I can remain conscious in a totally chaotic environment in which it is not possible to predict anything accurately" -- are you sure, many people loose their mind during chaotic periods like wars. And you must think what you write before you write, maybe this is part of stream of consciousness, but then it necessarily don't exclude prediction. That predictive capability is necessary condition for AC is a view of several scientists, therefore it must be included in the article. And other views, what you support, must be included too. Tkorrovi 20:13, 25 Mar 2004 (UTC)
It's OK to cite authorities to back up one's own understanding. But it's no good citing authorities to back up one's lack of understanding, or lack of capability to put across one's understanding. I keep trying to work back to a definition of consciousness which excludes the things one associates with consciousness but which are not the same as consciousness. I disagree with Paul about inanimate passive objects possessing consciousness, e.g. a thermostat. I have to work from a human model, as this is the only real thing one can relate consciousness to. A newborn baby is either conscious - while it is awake - or unconscious, while it is asleep. Is that not a good starting point? Can anyone disagree with that, and if so on what grounds? Now a newborn baby, with its mind half formed, has an enormous capacity for learning, but it doesn't become more conscious as it grows up; it stays conscious while it is awake and unconscious while it is asleep, right throughout its whole life. It learns to think, it learns to anticipate, but what it starts off with, which it has right from the start and which never leaves it throughout its life, is the ability to be attentive: first to its parent who feeds it, then to sounds, to colours, to movement. It does none of these things while it is asleep. Surely this is the basis upon which we can start to define simple consciousness. Never mind what abstract scientists may have said, which none of us can understand. We can't write about things we don't understand - only things that we do understand. Matt Stan 01:16, 26 Mar 2004 (UTC)
"A new born baby is either conscious - while it is awake, or unconscious, while it is asleep. Is that not a good starting point?" Not at all. If we want to model mental abilities of the functioning human being, then we must define consciousness in the widest possible sense, from consciousness to be a totality of all mental abilities, otherwise the model will not be very good. This is why in many cases at least thinking is considered to be an aspect of consciousness. This is the view I support, but this doesn't mean that I consider the views of others a lack of understanding. There are different and often opposite views, and even in two opposite views, something may be right in both Tkorrovi 12:21, 26 Mar 2004 (UTC)

This hinges on the word "necessary". Anticipation is a very useful, desirable attribute for a conscious being to have. But that does not mean it is a necessary attribute of consciousness. I agree with tkorrovi about all the advantages of anticipation, just not that it is necessary. That it is necessary has not been shown. Desirable, yes. Useful, yes. Necessary, no. Therefore something that is only supposedly necessary does not merit a prominent, headline, first-definition position in the article. Paul Beardsell 09:13, 26 Mar 2004 (UTC)

== Passive Inanimate Objects and Consciousness ==

If a human is passive then it can be conscious. So, "passive animate objects" can be conscious. If no "inanimate object" can be conscious, then the big question is answered and, in my view, we can go home. So passiveness disqualifies an inanimate object from consciousness but not an animate one. Which is just too blatant an adoption of a privileged position to be allowed.

But only one of many. Luckily, for my argument, the thermostat is not passive.

Paul Beardsell 09:06, 26 Mar 2004 (UTC)

I didn't intend passive inanimate object to be contrasted with passive animate object, but rather with active inanimate object. We perhaps need to define inanimate in this context. Are you suggesting that the thermostat is an animate object? I'm not sure what you mean by a human is passive. If someone is acting in any capacity then surely that person is being active. One can obviously act passively in relation to a particular circumstance, but the generic active surely applies to someone whilst they are alive - whilst their blood is pumping they have some level of activity. You could even suggest that they actively decompose after they are dead, I suppose. And of course you could say that any object above absolute zero comprises active atoms. I had merely intended to use active in the sense that electronic engineers do when distinguishing between active components (such as semiconductors) and passive components (such as resistors and capacitors) - the distinguishing factor in this case being that active components gain their salient properties by being driven in a specific manner (gain, for example, in the case of a transistor), whereas passive components have no such requirement to be driven in a particular manner (e.g. resistance in the case of a resistor).

Merging "digital sentience", "artificial consciousness" and "artificial consciousness NPOV"

I merged digital sentience into artificial consciousness NPOV (first attempt). It was easy to do word by word because of the structure of the artificial consciousness NPOV version, where different views are clearly separated. artificial consciousness NPOV was made based on the artificial consciousness article, with everything essential (except two or three of the most recent additions) incorporated. My proposal is to base the merged article on the artificial consciousness NPOV version, as it would result in a clearer article with better structure. My suggestion, if supported, is that you please merge whatever you find not included from "artificial consciousness" into "artificial consciousness NPOV"; the new version would then be copied and pasted into the main article. Tkorrovi 24 Mar 2004

I merged everything that was missing from the main article into artificial consciousness NPOV. Please add anything I missed; artificial consciousness NPOV would then be merged into artificial consciousness and edited later (I think it should be shorter). Tkorrovi 24 Mar 2004

The merger is done. I added everything from the previous version and from digital sentience. I also copied the previous version to [1] to make it easier to look at and to add text, should something indeed have been omitted from the merged version. Tkorrovi 19:13, 25 Mar 2004 (UTC)

Suggest another merge might be in order because discussion has been continuing in the original location. Please don't leave behind any duplication; after your previous attempt, I didn't realise what was going on and carried on editing at the place my watchlist told me had been edited last. Matt Stan 11:21, 27 Mar 2004 (UTC)
Usually, when a merger is done, the talk page is also copied. I wrote a message saying that it was done. But it's OK to discuss on any talk page; if you would like the discussion to be here too, you must either copy the new messages to this talk page or just put a link to that discussion here. I shall not make any further duplicates. Tkorrovi 13:34, 27 Mar 2004 (UTC)

Don't revert my edits. Read the manual of style. The NPOV guideline is unnecessary. You are not supposed to use bullets in articles. Finally, your "NPOV" edits make the article more POV. ugen64 21:04, Apr 3, 2004 (UTC)

I, and perhaps some others, don't agree with you; your edit rather made the article more POV. I consider your opinion, but I don't necessarily take it as advice. BTW, I read this in the manual of style: "in these circumstances where there is not enough text to justify a sub-heading, it may be preferable to use bolded text or bullet points within a section instead of using sub-headings". Tkorrovi 23:43, 3 Apr 2004 (UTC)

There is no instance in the article in which there was little enough text to justify using bullets. Also, stop linking to useless articles such as ability and predict. You are supposed to link to articles that would clarify or be useful to readers. I think everyone who can understand the concept of artificial consciousness knows the meanings of ability and predict. ugen64 20:51, Apr 4, 2004 (UTC)
Make only careful use of generic attributions ("Critics say ..."). Some Wikipedians describe these as weasel words, because they can make claims look less obscure or less controversial than they are. In general, when something needs attributing, be specific. (Wikipedia:NPOV tutorial). You should review all of the statements that start with "Some say", "Some assert", etc.: that's indefinite citation.
Also, none of my edits affect NPOV in any way. They are all GRAMMAR or WORD CHOICE edits. I was fixing your grammar, not making substantial edits to the actual content of the article. ugen64 21:02, Apr 4, 2004 (UTC)
For example: "Test may fail just because the system is not developed to the necessary level or don't have enough resources such as computer memory." In what way is that a sentence? Test should be preceded with either the or a. "system is not developed"... "don't have enough" - subject-verb agreement is absent. System is singular; you can't say "system don't have enough". etc. etc. The only actual edits I made to the article were where you said, "AC system must theoretically be capable of passing all known tests" etc. etc. That's not true; what are "all known tests"? Does that mean it has to pass the SAT II? The test for getting into the military? What if I made up a test that was, "this machine must grow into a monster and eat the Moon"? It would be known, it would be relevant, but obviously an AC wouldn't have to pass it in order to be considered an AC. Changing the wording into "some", "should", etc. isn't POV; it's a fact. ugen64 21:05, Apr 4, 2004 (UTC)
The article predict is not useless; it is a very good article.
"Some say" is not a weasel phrase; it says clearly that this is not a common opinion. "Critics say" is different, because it may imply that something is commonly accepted. BTW, none of the sentences where "some say" occurs were written by me. I proposed the NPOV structure, where we don't have to write "some say" but would just list the different views that there are. (I don't agree with a lot of it, including the whole "Strong AC", but I didn't change it, leaving the reader to decide what he prefers.)
Concerning your edits, please look at the changes; they are not only grammar edits.
But I'm not against grammar edits. The system "doesn't" have enough, sorry; I too sometimes make small mistakes, like everybody, which is why copyediting is necessary.
"AC system must theoretically be capable of passing all known tests" -- that was never written there; you criticize something that nobody ever said.
Possibly it was never written in exactly those words but it was written by Tkorrovi more than once. Later he said what follows. Paul Beardsell 03:51, 6 Apr 2004 (UTC)
What was said was that "AC must be theoretically capable of achieving all known objectively observable abilities of consciousness of capable human", as a view that differs from "Weak AC" in that weak AC must achieve only one or more such abilities. This doesn't say, and doesn't mean, that AC must pass all tests. Such a system itself may not even be very complicated; all it means is that it must theoretically be built on the right concepts, so that it can be trained to achieve any of the known and objectively observable abilities. It is not necessarily only by tests that we may establish that a system is AC, but by the theoretical concepts it is based on, plus some tests that confirm what these concepts predict. Nobody ever said that it must pass all possible tests. BTW, this description was not written only by me but also by other people (I explained it earlier on this talk page), and the use of "all" there was insisted upon by other people, because only such a requirement can make it "Strong AI"; this is the meaning of that approach, and this is what differentiates it from "Weak AC", which is equivalent to "Weak AI". Tkorrovi 22:49, 4 Apr 2004 (UTC)
BTW, one question. Did you ever pass all the tests that a capable human should be able to pass? No, I think not. But does that prove that you are not a capable human? No. You pass only the tests you need to, and you know that you are able to learn to pass other tests. A particular AC system would likewise not be used (and therefore trained) to do everything; most likely it would be used for some purpose, and it must pass the tests that prove it fits that purpose. But theoretically it, like a human, can learn to do many other things. That "all" determines the range, so that it is not only two or three abilities. AC may have much less capability than a human, because it has far fewer resources, and because it is not required to simulate abilities other than those that are known and objective. And we don't know *very* much. In fact I think that the AC systems that would really be used would not be very complicated at all, maybe some special regulators etc. So the difference would only be in the theoretical concepts by which they are built. Such a regulator would be able to regulate some process as well as a human can, something that almost no existing regulator can do (neural networks have even been used in regulators, but even those cannot model the processes). Tkorrovi 23:22, 4 Apr 2004 (UTC)

== Blasphemy ==

People don't have to take the test. We know we are conscious, and don't have to prove it to anyone. The test for whether I am conscious, if any were required, is that I ask you 'Am I conscious?' and you say 'Yes'. If you say 'No' then I know you are lying. You might say 'I'm not sure', meaning that you are not sure whether I exist, i.e. that perhaps I am only a machine. People take it for granted, and have to take it for granted, that what they know as their own self-awareness is experienced similarly by other people as their self-awareness. When we talk of tests, we are only talking of how artificial consciousness is to gain a semblance of reality, and of course, self-awareness as you or I know it cannot be tested, which is why it cannot be one of the pre-requisites for artificial consciousness. Tkorrovi, what is your first language? Matt Stan 23:44, 4 Apr 2004 (UTC)
We know what a capable human is able to learn. For example, we know very exactly what children of different ages are able to learn, not only whether they "are conscious" or "are not conscious". A man has much to prove to be considered a capable human. Concerning my first language, I first want to know why that question was asked; we are talking about artificial consciousness here. Tkorrovi 00:03, 5 Apr 2004 (UTC)
My first language is English. I don't think yours is. We are conversing here in English. Others have remarked on your syntax. I am interested in your different perception of certain matters to do with consciousness. If consciousness is itself partly an illusion, as I have suggested, then culture might make us susceptible to different illusions, and our expression of them through different modes of language. Any knowledge I had of your first language, or could acquire if I knew what it was, might help in my interpretation of your idiosyncratic prose. Matt Stan 19:53, 5 Apr 2004 (UTC)
Some quotations written by an unnamed user on the Ugen64 talk page:
"...tkorrovi is either a troll or very misguided. From my experiences on UseNet, I would say his behavior is very trollish and would most likely be dismissed as such if the argument were happening on UseNet." "Claims to be acting reasonably, but shamelessly reverts other people's edits (pehaps because he suspects new editors to be aliases of the person who originally pissed him off, but still). Also a common troll tactic; try to claim you are being reasonable despite all appearances to keep people on your side as long as possible." "Another difficulty with his posts is that his English is below the general standard of Wikipedia, yet he often insists on having his wordings preserved exactly, regardless of whether the edit is a semantic difference or merely correction of grammar, spelling, style, etc.."
Is it all self-evident when my nationality happens not to be English? Tkorrovi 22:44, 5 Apr 2004 (UTC)
I do not think these characteristics are typical of any nationality. Paul Beardsell 04:24, 6 Apr 2004 (UTC)
Yes, but when the nationality happens to be Estonian? Tkorrovi 10:08, 6 Apr 2004 (UTC)
From my little knowledge of Estonian people - I once had an Estonian landlady for several years - I'd say that an Estonian troll is a highly unusual occurrence. Therefore Tkorrovi cannot be Estonian :-) Matt Stan 11:23, 10 Apr 2004 (UTC)
Please stop the blasphemy! Show me one example that causes you to think that I am a troll. I have not the slightest idea of the cause of all this blasphemy. One reason I can possibly conceive is my nationality. I may be a person you or somebody else doesn't like. Not everyone likes everyone, but I don't write blasphemy in every possible place about people I don't like. Please say what you want from me; I cannot figure out what would please you. Sorry if there was anything that I did wrong. I have the courage to say sorry. A simple statement from you that you will stop calling me a troll would end the argument. What would it cost you to say that? Please understand that we cannot seriously discuss anything if we become personal. Then it would not be a discussion about the topic, but rather a discussion about our personal qualities. This is pretty much the reason why it was impossible to discuss things on the talk page, and what caused an edit war. Please think about it: what should we discuss now, the question of whether I am a troll or not? Tkorrovi 13:36, 10 Apr 2004 (UTC)

I've corrected the English in the paragraph above. There is nothing wrong with Tkorrovi. There is no such thing as an Estonian troll. The Estonian people are very proud of their unique language and the contribution that their nation has made to world culture. I do not decry your nationality - only your use of English. You seem to use the dictionary as the sole source of reference. It is not. There is grammar and correct usage that may not be found in a dictionary alone (at least not the Concise Oxford - go for the Shorter Oxford at least). That is why we make the distinction between artificial and simulated, real and genuine, predict and anticipate, and so on. I am also interested that someone from a different culture may indeed have a valid and different definition of consciousness because of his different cultural identity. I am interested for instance in the fact that Estonian has no future tense. Does this affect one's appreciation of the meaning of the English word 'prediction'? Does having no future tense imply a different consciousness about what the word future actually means? Unless we can be clear on our definitions of simple words and overcome whatever language barriers there are, and be aware that they exist, I fear that the assertion that Tkorrovi is a troll might persist. I aim not to use ad hominem arguments, so there is no question of blasphemy. Who is accused of blaspheming against what, anyway? Matt Stan 16:29, 10 Apr 2004 (UTC)
So you continue to talk on the topic of whether I am a troll or not: "Unless we can be clear on our definitions of simple words and overcome whatever language barriers there are, and be aware that they exist, I fear that the assertion that Tkorrovi is a troll might persist." -- *stop* it. Estonian has no future tense, but this only means that what is said as "I shall do it sometime later" is said as "I do it sometime later", and it means exactly the same -- something shall be done in the future; it is only a peculiarity of the grammar and no difference in thinking. Estonian is similar to Finnish (I can speak Finnish also), and there is no future tense there either; nobody has yet said that this somehow influences the thinking of the Finns. The Concise Oxford Dictionary has often been chosen as the basic dictionary for international agreements, i.e. only words from that dictionary may appear in these agreements. This is why that dictionary is a kind of standard, preferred over the others. English being your first language is your advantage, but many Wikipedia articles in English are written by people whose first language is not English, including some featured articles, so don't overestimate that advantage either. I'm not proud to be Estonian, but this has nothing to do with the discussion here whatsoever; if your only intention is to disregard me, then please leave me and my nationality alone. Tkorrovi 17:36, 10 Apr 2004 (UTC)
What we need around here is an anthropologist. Anybody? Paul Beardsell 18:12, 10 Apr 2004 (UTC)
What we need here is a human rights lawyer, if even that helps to keep order here. Tkorrovi 18:37, 10 Apr 2004 (UTC)
I am happy to confine vocabulary to keywords in the Concise Oxford Dictionary in order to avoid obscure words, but when it comes to definition and coloration of words, then one needs to cast a wider net. The advantage of having English as a first language, if such is an advantage, is merely to know what is correct usage, whereas others are not always so confident. This doesn't give me a problem in the Wikipedia context, because one can always go in and correct someone else's grammar/syntax. So it's no big deal. Difficulties can arise, though, when the intended meaning of a non-native speaker is unclear. I think that has happened here. (I'm much clearer now on prediction having read Tkorrovi's explanation about modelling within a closed loop process control system.) Matt Stan 09:28, 14 Apr 2004 (UTC)
I rather think that the topic is complicated and difficulties may arise from that, especially when the discussion is often much too fast, with no time to think one's response through. It is of course true that a native speaker can express himself faster; in a very fast discussion the non-native speaker is certainly at a disadvantage. Otherwise, nobody has yet said that my English is very bad. For example, I translated this http://www.highpark.pwp.blueyonder.co.uk/dallas/pressxpress1998.htm for High Park Records, a record company in Canada (rubbish as it is, but I mean the language). Tkorrovi 16:00, 14 Apr 2004 (UTC)
I am hesitant to criticise at all in a public arena such as this, as it is not likely to be fruitful. (Adage: Praise in public; criticise in private.) Another criticised Tkorrovi's English, and I followed that up by making some suggestions. I am also bashful about criticising the English of someone who has evidently spent considerable time learning a language that is not their own - so who am I to criticise? However, when it comes to plain understanding, I think we are making progress, albeit rather verbosely, and I wouldn't make any further criticisms anyway. As for speed, well, we can take our time in Wikipedia - no one forces us to respond immediately. Matt Stan 20:32, 14 Apr 2004 (UTC)

It is an established maxim of communication theory that if we do not meet face to face, but through some other medium, such as electrically, then there is no way that I can communicate with you unambiguously to distinguish left from right. A possibility always exists that you could be living in a mirror world where everything is back-to-front, but I am unaware of it and you are unable to discern that I am unaware of it. Matt Stan 11:23, 10 Apr 2004 (UTC)

== Back to Prediction ==

"'One aspect is the ability to predict the external events in every possible environment when it is possible to predict for capable human. Ability to predict has been considered necessary for AC by several scientist, including Igor Aleksander.'" These sentences make no sense, why are you reverting to them? What does "every possible environment" mean? Will it have to predict events on Antartica, 15 feet inside an obscure cave in the Arctic Ocean, or on Pluto? You are using the term "all" and "every" too liberally. Also, "AC must be theoretically capable of achieving all known objectively observable abilities of consciousness of capable human" doesn't make sense either. What's the point of saying "theoretically capable... all known"? What does "objectively observable" mean? ugen64 23:58, Apr 4, 2004 (UTC)
First, the environment must be possible.
> Will it have to predict events on Antartica, 15 feet inside an obscure cave in the Arctic Ocean, or on Pluto?
Yes, because every capable human can predict, or can learn to predict, at least some events even in these conditions.
The difficulty here, I think, is with Tkorrovi's usage of predict. The questioner is asking whether the requirement implies that the AC implementation has omniscience, since Tkorrovi says, though I don't think he means, that the AC implementation must be able accurately to predict future events and have knowledge about matters that are effectively unknowable. What is meant by predict in this context is, I think, just the ability to formulate an abstract model (or extrapolation) from sensory and other inputs (e.g. recollection: matching currently observed patterns with previously observed patterns) which will enable the AC machine to operate coherently with respect to passing events. The coherence of this operation will be one of the criteria that the tester of AC will be expected to adjudge against their own model of consciousness. I still think predict is the wrong word to use here, because of the confusion with omniscience, or at least it should be qualified to indicate that we are talking about a predictive model whose purpose is to arrive at conclusions about the AC machine's internal and external environments that will affect its behaviour. Matt Stan 09:28, 14 Apr 2004 (UTC)
Incidentally, if I misunderstand you, does that mean I am any less conscious? If not, then one could have an AC implementation that constantly misunderstood its environment, i.e. its predictive model was completely wrong, but this would not affect its credentials in terms of its claim to be artificially conscious. So, whilst a predictive model is probably a prerequisite for AC, I still question the level of sophistication that it would require in order to qualify. I suppose I'm trying to extend the boundary and ask whether sanity is a requirement for AC, i.e. whether an incoherent predictive model would qualify, so the tester will say "It's completely mad, but it's definitely conscious!" Matt Stan 09:28, 14 Apr 2004 (UTC)
The rest is copied from my usenet post:

1. "artificial system" – "artificial" here means "formed in imitation of something natural" (Concise Oxford Dictionary). "Artefact" means "product of human art and workmanship" (Concise Oxford Dictionary), which is otherwise correct, except for the possibility, discussed before, that an artificial consciousness may be created by another such system.

2. "theoretically" – concerning the time it takes for the system to develop, "theoretically" would not be necessary, as "capable of achieving" says by itself that the time it takes to achieve certain abilities is not relevant to whether the system is artificial consciousness. But the other possibility is that the system is potentially capable of achieving these abilities but lacks resources, such as enough computer memory, certain sensors, etc. Then "theoretically" says that the system is a kind of system that is capable of achieving certain abilities, but not necessarily capable of achieving them in a case where there are not enough resources.

3. "capable of achieving" – means that the system shall achieve a certain ability necessary for it after being in a particular environment for enough time, where it does not matter how long that time is.

4. "all" – the most necessary condition for it to be artificial consciousness. Even with the word "all" included, artificial consciousness is only a subset of consciousness; how big this subset is depends on how much we objectively know about consciousness. In this way artificial consciousness comes closer to consciousness as we objectively know more about it. If we exclude the word "all", then for it to be artificial consciousness "all" must still be assumed, but it may be misinterpreted as demanding only the achievement of one ability of consciousness, in which case any simple ability of thinking, such as Boolean logic, would be enough for artificial consciousness, making it even less than weak AI. In comments on the article, omitting "all" was justified by the argument that if some system is never capable of achieving a certain ability of consciousness, it may still be conscious, which is very doubtful if we consider consciousness to be the totality of a person's thoughts and feelings (Concise Oxford Dictionary). If we consider the average person here, then by that standard he cannot be considered to have full consciousness if he can never achieve a certain ability that the average person usually can.

5. "known and objectively observable" – this was thoroughly discussed in ai-forum and included only after it was agreed with the people there (Ribald et al). This makes artificial consciousness objective, which consciousness is widely known not to be. "Verifiable" is not as comprehensive, as it may demand verifying only one aspect of some ability, maybe even only determining that it is a certain ability, which in some cases may be done even if the ability is otherwise subjective, like feelings.

6. "abilities of consciousness" – here "ability" is somewhat more determinate as one phenomenon than "aspect". "Ability" means capacity or (mental) power (Concise Oxford Dictionary). Maybe "aspect" is not wrong, but to me it seems somewhat more natural to objectively observe abilities, not aspects; "aspect" is also more likely to refer to something static, not to a process. Tkorrovi 00:18, 5 Apr 2004 (UTC)

Sorry for the bad format; it was a usenet post copied from a web page, and this is how it appears. I will try to improve it later. Tkorrovi 00:51, 5 Apr 2004 (UTC)
  • This is completely wrong. If you say AC is "not real", you effectively say that a computer model is not real. But a computer model is a program, and it is real. It would be the same as saying that the Linux operating system is not "real".
  • I did not use "weak" to refer to capability, but to how close it is to real consciousness/intelligence. Obviously a calculator is very far from human intelligence.
  • AC cannot be real or not real; if it is really implemented, then it is always real. And it is a model if it is implemented on a computer, even if it is equivalent to real consciousness (which I think will never happen).
  • Quantum computing was not invented by Penrose, and I also think that the Church-Turing thesis applies to quantum computing.
  • It is just your way of putting things, which you insist on and which I, and maybe others, don't agree with.

Tkorrovi 15:13, 5 Apr 2004 (UTC)

== Strong vs Weak ==

Please let's not confuse "strong" and "weak" with "good" and "bad" or "more capable" and "less capable". Something can be really conscious - say a cat, for example. Yet a cat cannot do (much) mathematics. A computer may be not really conscious yet prove a theorem. Please let's find some mutually acceptable form of words to distinguish the reality of consciousness from its capability. Paul Beardsell 02:33, 5 Apr 2004 (UTC)

I acknowledge that some here believe that AC will not be real. Some think that this is highly unlikely (tkorrovi?) or impossible (Matt Stan?). As I understand it (forgive me for paraphrasing your arguments) consciousness is such an unknown quality that each of you thinks that we must model AC on the abilities of humans - because they (we!) are conscious and nothing else is known to be conscious. I respect that view. But I think that if you hold that view you will not want me to distinguish real from capable. For you the term "capable" means a little closer to human. That is why you do not like me using the term "weak" to describe your view. Paul Beardsell 02:50, 5 Apr 2004 (UTC)

First, concerning capacity, there is indeed a problem in that an AC system may have some additional capacity that doesn't bring it closer to consciousness, so strong and weak don't exactly indicate capacity. Second, I don't think that AC is impossible; I just don't agree with the concept of "Strong AC". Third, capable doesn't necessarily mean closer to real consciousness, but capable of achieving the abilities of consciousness does. Tkorrovi 03:03, 5 Apr 2004 (UTC)
Re your third point: I have now fixed a mistake - I did not mean to say "real" but "human". Paul Beardsell 07:32, 5 Apr 2004 (UTC)

I think you are redefining terms. What I would like to do so that each of us has a chance of understanding the other is to try and use the dictionary meaning of terms if possible. We have run into a difficulty here because those who discuss artificial intelligence use the terms "Strong" and "Weak" not in relation to capability. They chose bad terms: They did not use the dictionary! When they say "Strong" they mean REALLY INTELLIGENT, maybe even more intelligent than humans but not necessarily: What they mean is capable of REAL thought - the thought could be STUPID yet, if it is REAL, the believers in Strong AI will consider the issue proven. When they say "Weak" they mean not REAL: They are not saying that the AI device can not be impressive: They are just saying the "thought" is not REAL.

When I started using Strong and Weak in the AC article I used capitals. Perhaps we should return to that, or use "quotation marks" to show that the words "strong" and "weak" do not have their usual meanings and connotations.

I think I am right in characterising your belief as being the "Weak" one. You also think that AC might be highly capable. This is an entirely consistent POV.

Paul Beardsell 03:17, 5 Apr 2004 (UTC)

I agree that the distinction between Strong and Weak AC, analogous to the distinction made regarding AI, is a useful starting point. I think the difficulties about terminology arise because artificial consciousness is not elsewhere defined, so we have no authority. We can look up artificial and consciousness, but put them together and we have ourselves to define the synergy that the use of these words together implies. I think an oxymoron arises in relation to Strong AC, because we are saying that Strong AC is both real, and yet artificial, which is a linguistic contradiction. That difficulty does not arise with Weak AC. But Weak AC is actually more difficult to define than Strong - we know what the latter is because we experience it ourselves and don't really need to be told about it. Weak AC, which I have also dubbed simulated consciousness, needs to be defined to reflect that its possessor will not have self-awareness or thought, because it won't have a self to be aware of, nor is there a computer model for thinking that it can emulate. Therefore we are stuck with those aspects of consciousness that can be simulated, and these are only its external manifestations. I am not so pessimistic as to think that AC is impossible; rather, if we can define the goals that define AC per se (leaving out AI because that is something different) then I'd be keen to contribute to a project to actually build a candidate artifact. I think that if we can devise the necessary bootstraps, and in particular take note of what cognitive psychologists understand as personality, as well as providing suitable heuristic routines, then the Weak AC machine, in its later incarnations, might find that it needed to think in order to meet its design goals. That's the serendipity point at which the implementation could cross over from being Weak to Strong; that's the spooky point at which Pinocchio comes to life; that's the point where AC implementations communing with each other risk incurring the suspicions of humans that they are about to take over the world. That's when legislators start to get involved and try to ban artificial stem cell research! Matt Stan 08:16, 5 Apr 2004 (UTC)

What I'd like is a flat screen and small camera mounted on a dynamic angle-poise contraption, with muscles to allow the screen to orient itself so that its viewing area was always directly facing me. If I move to the left, it will turn slightly; if I stand up it will raise itself. In a limited way it will be conscious of me, dedicating itself to serving my need to be able to see my screen regardless of whereabouts in the room I am. Would that qualify as an AC implementation? Matt Stan 08:16, 5 Apr 2004 (UTC)

It would satisfy your criterion of attentiveness but I think it would much more likely be conscious if it looked at itself in a mirror all the time. Self-awareness is consciousness. Not awareness or attentiveness. Paul Beardsell 08:22, 5 Apr 2004 (UTC)
As I am sure you know, such systems exist. They "follow" cars to focus on the license plates for the London congestion charge. They track individuals in sophisticated burglar alarm systems. If only we can get some self-reference in, so as to give the impression of some self-awareness. If the AC thermostat had to wander across the room to find the heater switch. Or the robot which finds an AC (alternating current!) socket and plugs itself in when its batteries get low. On slashdot there was a story recently about an optional extra on BMW cars in Germany: They will supply additional small steering inputs to keep the car between the dotted lines. Apparently the car feels as though it is being driven in a concave lane: It takes a slightly more assertive driver to change lanes! Paul Beardsell 16:28, 5 Apr 2004 (UTC)
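
A toy Python sketch of the self-orienting screen described above (the controller and its gain are invented, purely illustrative): a proportional controller turns the screen toward wherever the camera last detected the viewer. A gain below 1 makes the screen home in smoothly rather than snapping to each detection.

 def track(screen_angle, viewer_angle, gain=0.5):
     # proportional control: turn part of the way toward the viewer each step
     error = viewer_angle - screen_angle
     return screen_angle + gain * error

 angle = 0.0
 for viewer in [10.0, 10.0, 30.0, 30.0, 30.0]:   # the viewer moves about
     angle = track(angle, viewer)
     print(round(angle, 1))   # 5.0, 7.5, 18.8, 24.4, 27.2 -- homing in
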
My problem with self-awareness in respect to AC is that it implies that the implementation necessarily must have a self to be aware of. Now I am claiming that self is itself a psychological construct, and even psychologists claim that self is an illusion, i.e. it doesn't really exist. Therefore I do not understand the requirement that an AC implementation must have self-awareness, and find it difficult to determine how one could know whether an AC implementation had self-awareness or not. Now it would be different if we stipulated that the AC implementation needs a component that manifests itself as an ego, i.e. it had the capability of using the first person (I) when expressing itself, but I'm not sure we've got as far as defining clearly how the AC implementation needs to manifest itself to observers. Matt Stan 09:55, 14 Apr 2004 (UTC)
"Self", the experience of the "self", is subjective experience, the one Thomas Nagel talks about in his "what is it like to be a bat". This is why consciousness is considered to be subjective. It may simply mean that different people experience the same thing differently. Not necessarily a "magic spark" or violation of Church-Turing thesis, just the problem that we cannot completely model such experience. We cannot objectively observe such human experience, as one person cannot confirm evidence given by another person. (If we don't talk about copying the whole brain, what is likely unfeasible, and also we cannot then do that separately from its environment, what even may lead us to impossibility that the universe cannot contain a copy of itself.) This is why the Genuine AC is questionnable, there are aspects of human consciousness, what are subjective and therefore we cannot objectively model them by any processes, no matter how clever we are or how much we know. Tkorrovi 12:25, 14 Apr 2004 (UTC)

"Real" and "artificial" are not contradictions. I am renting a Ford Falcon which is both. "Real" and "simulated" are contradictory. You wrongly have implied more than once that "simulated" and "artificial" are synonymous. Paul Beardsell 08:20, 5 Apr 2004 (UTC)

I am claiming no "synergy" in placing the words "artificial" and "consciousness" together. I want to use the term artificial consciousness in the same way I might one day have to use natural consciousness to distinguish it from the artificial variety and as a separate subset of consciousness. You must not be allowed to impose some other meaning on the term than what it literally does now mean. I want a term which concisely describes a man-made (or at least not naturally occurring) object which has self-awareness. "Artificial" "consciousness" or "artificial consciousness". What other term will you allow me if you hijack that one. Words have meanings so use them: Come up with your own term. "Simulated consciousness" is what you need. Not personally. Paul Beardsell 08:29, 5 Apr 2004 (UTC)

I could (attempt to) build a virtual AC entity which will be real consciousness. You can build a real object which will simulate consciousness! Paul Beardsell 08:45, 5 Apr 2004 (UTC)

Joking aside, the question of real vs not real I deeply believe will eventually be seen to be a red herring. That I am a cyborg will not be discovered in your lifetime. (Now there is a statement that is undeniable!) Paul Beardsell 08:45, 5 Apr 2004 (UTC)

Unless you are rumbled early. Matt Stan 20:12, 5 Apr 2004 (UTC)
I'm programmed to think I will be the last to be judged obsolete. Paul Beardsell 04:16, 6 Apr 2004 (UTC)

I don't see a problem here. Strong AC means AC that is equivalent to real consciousness, though I am not the only one who considers this impossible. Weak AC means as far from real consciousness as AC can be defined (requiring only one or a few abilities, whatever these abilities are), though by loosening the requirements so much it is pretty much equivalent to Weak AI and is not even related enough to consciousness to be called AC. And Objective Stronger AC is closer to consciousness than Weak AC in that it requires the ability to achieve all certain objective abilities (this is why it differs from Weak AC), but is not as close to consciousness as Strong AC. So it lies between "Weak AC" and "Strong AC" in how close to real consciousness AC should be. "Simulated" may mean "artificial"; one definition of "simulate" is even "produce a computer model of a process" (Concise Oxford Dictionary), which is pretty much what AC is all about - a computer model of human consciousness. "You must not be allowed to impose some other meaning on the term than what it literally does now mean" -- what??? Do you alone decide in what meaning a term must be used? Concerning the example of Matt Stan, I think it may only be considered AC when it has awareness of the processes; for example, it learns how you normally move and may turn in the right direction even before you actually move, so as to do it quickly enough (say, when it sees that you are frustrated, it would suppose that you would go to the refrigerator and take something to drink). Tkorrovi 11:04, 5 Apr 2004 (UTC)

I thought you didn't see a problem?

  • Weak AC does not mean "as far away from real consciousness as possible" or that it has restricted capabilities. It just means "not real". Which is why your "Objective Stronger AC" is a Weak AC and why, in my opinion, it need not be distinguished from Weak AC. Weak AC can be highly capable. Strong AC can be pathetically incapable.
  • "Objective Stronger AC" is a term which I think you have coined - I have not seen it elsewhere.
  • The meaning that "artificial consciousness" already has is the one I am using and the one that corresponds with the dictionary definitions.
  • AC might be just a computer model of consciousness when it is Weak but not when it is Strong. Even then, should AC be biologically based (when we develop the techniques to do this) then it would not be a computer model.
  • I can be conscious in an environment where anticipation is impossible so presumably another machine, an artificial one, could be too. I know you insist on the prediction thing but I just don't see it: We'll have to agree to differ.
  • I've been to the fridge but it was empty. I had not anticipated that.

Paul Beardsell 11:20, 5 Apr 2004 (UTC)

Weak AC also means no Strong AI, while Objective Stronger AC is intended to be Strong AI. BTW, the terms "Strong AC" and "Weak AC" were also most likely coined by you; I don't know of anybody else who uses them. "Objective Stronger AC" is also a preliminary term I used after you coined those terms; it is AC between "Weak" and "Strong". AC, when implemented on a computer, is a computer model even if it is "Strong". "Anticipation" can be used in the full sense of "prediction", but the problem is that it may be understood in the narrower sense "deal with before the proper time", yet not every prediction is followed by action. Also, scientists in AC have started to use the term "predict"; I don't know of "anticipate" being used in any paper. Tkorrovi 11:56, 5 Apr 2004 (UTC)
  • We have "Objective Stronger AC" defined by you as not being real consciousness. And yet you now assert it is "Strong". I think you mean "capable" not "real". Please clarify. If we take all occurrences of "Strong" and replace with "real" and replace "Weak" with "not real" (or possibly "simulated") then your contradiction is obvious.
  • The terms "Strong" and "Weak" are used widely in discussions of AI to have precisely the "real" and "can never be real" meanings I use here. I am not sure if I first used those terms in respect of AC or not but I have read a lot on the subject so I might have picked them up from a book. The usage is at least consistent with AI - a field which many consider related to AC, you included. I do not mind what terms we use as long as we do not invent new terminology for the sake of it or usurp the perfectly good meanings of other words mangling them in the process. I have already invited the use of other terms but you have not suggested any yet.
  • It became obvious early on that there is one obvious question on which those interested in AC are divided: Can AC ever be real consciousness? By using the "strong" and "Weak" terms to judge capability rather than realness we risk losing sight of this, the foremost question. I suggest capability and realness are not necessarily linked and gave the mathematical cat vs theorem proving computer as an example to back up this view, that capability and realness are not necessarily linked.

Paul Beardsell 13:12, 5 Apr 2004 (UTC)

Searching for '"artificial consciousness" anticipate' at Google:

Just three examples on the first returned page. Paul Beardsell 13:21, 5 Apr 2004 (UTC)

  • I asserted it is intended to be "Strong AI", not "Strong AC". What distinguishes it, is that it has some complete criteria, to be capable to achieve all certain abilities, so "not just anything" contrary to Weak AI, what may as well be a calculator. Dividing AC only to be "real" or "not real" is not appropriate because there are different varieties of AC what differ in how close they are to real consciousness. Then we should also coin a new name to what was "Weak AC", like "Minimal AC". There is exactly no contradiction, it all just depends on how we classify the different forms of AC. I see one essential difference between AC and AI, and this is that it is questioned by many of whether we can never fully understand such subjective term like consciousness in its widest meaning (totality of all mental abilities of capable human), and therefore can ever build any truly "Strong" AC, while intelligence is only a subset of consciousness, usually considered objective, and so some (also not many) think that Strong AC might be possible. Honestly, I don't know anybody other than you who thinks that "Strong AC", how you define it, is possible. So "Strong" means somewhat different thing in AI than in AC, in AI it is objective in spite that it is a very complicated problem. So it's by far not obvious that we can transfer the terms "Weak" and "Strong" from AI to AC, they may not remain consistent with the terms used in AI.
  • The question may also be asked as "how close to real consciousness AC may become", what is a question as obvious as the question of whether it would ever become a real consciousness or not.
  • I agree that capability and realness are not necessarily linked, the question was not in that. And AC by itself cannot be "real" or "not real", the term AC is not equivalent to consiousness, if AC is really built an can be considered AC, then it is real. The question may only be of whether AC is equivalent to human consciousness, or how close it is or can be to human consciousness.
  • Also one thought about whether AC is a computer model. We used to think about computer as digital serial computer, but there were, are and would be other kind of computers. One example is an obsolete analog computer, what has a difference though that it is simultaneous, not serial. And most recently there is quantum computer, what is claimed of being theoretically able to model any physical process (asserted by Deutch in 1985 based on the work of Faynman). It's also not excluded that some future computers would be built as brain. So even if it is "Strong", if it is ever possible, it may be computer model nevertheless.
  • The word "anticipate" was used once in the first and third papers, but not in connection with AC in the second paper. I saw "predict" much more often. Of course "anticipate" can be used in AC, but in its proper meaning; where it has the same meaning as "predict", "predict" would be better. As "predict" is the word used in the papers, it should remain. "Predict" is also the less ambiguous term, because "take action before the proper time" has a very different meaning from, for example, "foresee", yet both are meanings of "anticipate".

Tkorrovi 14:11, 5 Apr 2004 (UTC)

In what sense is the word "predict" being used? One can make a prediction that may be right or wrong. It is still a prediction. Does the AC machine have to make correct predictions, or will incorrect predictions do equally well? Matt Stan 16:37, 10 Apr 2004 (UTC)
It just means that the system must be able to model external processes based on the information it gets from its "senses", not on pre-programmed models, and to "run" a model of an external process to see how it develops. For example, it has no idea who you are, but after it gets to know you it builds a model of you, knowing your habits, desires etc. This helps it to predict your behaviour in certain cases with a certain accuracy. So prediction is part of being aware of the processes in the environment, which means modelling them and "running" these models. This is different from the static awareness (perception) that Chalmers talks about, which enables one to suggest that a thermostat is conscious. In fact a thermostat is only the most primitive regulator; in more complicated regulators it is exactly the prediction that is the biggest obstacle. For example, a regulator was built for an oil refinery which used a neural network to predict, ie to recognise the pattern of how a parameter changes and then to take action or not based on how the parameter was expected to change. So we may consider it to be some awareness of the process, but it is still static, because the neural network cannot model the process; it can only recognise a static instance of the diagram. Tkorrovi 18:04, 10 Apr 2004 (UTC)
Maybe I should explain more precisely why prediction is important for a regulator. Every object has a certain inertia; this is not always the inertia of a mass, but simply, for example, that it takes a certain time to increase or decrease a room's temperature, to change the course of a surface-to-air missile, etc. So if we want to regulate more precisely, we must be able to know beforehand, based on the circumstances in the environment, what the result of a certain change might be, and then take only such actions as bring us closer to the desired result faster and more precisely (in order not to overregulate), simply to be able to do it fast enough. Tkorrovi 18:30, 10 Apr 2004 (UTC)
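To make the overregulation point concrete, here is a minimal toy sketch in Python. It is purely illustrative: the process model, the constants and the bang-bang control scheme are invented for this example, not taken from the papers or the refinery regulator discussed above. A process with inertia feels a heating decision only DELAY steps later; a regulator that reacts to the current reading overshoots the target, while one that predicts by running a model of the process forward stops heating in time.

from collections import deque

TARGET, GAIN, DELAY, HEAT = 21.0, 0.2, 5, 40.0  # hypothetical constants

def simulate(predictive):
    temp = 15.0
    pending = deque([0.0] * DELAY)  # heat decided but not yet felt: the "inertia"
    peak = temp
    for _ in range(100):
        if predictive:
            # Run a model of the process forward over the heat in flight
            # and act on the predicted temperature, not the current one.
            t = temp
            for h in pending:
                t += GAIN * (h - t)
            heating = HEAT if t < TARGET else 0.0
        else:
            # React only to the current reading.
            heating = HEAT if temp < TARGET else 0.0
        pending.append(heating)
        temp += GAIN * (pending.popleft() - temp)  # effect arrives DELAY steps late
        peak = max(peak, temp)
    return peak

print("reactive regulator, highest temperature:  ", round(simulate(False), 1))
print("predictive regulator, highest temperature:", round(simulate(True), 1))

Run as-is, the reactive regulator climbs well past the target before the heat already in flight stops arriving, while the predictive one levels off close to it - the overregulation described above.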

Well:

  • Tkorrovi: I don't know "anticipate" used in any paper. Any paper.
  • I'll do an edit changing "strong" to "real" and "weak" to "not real". "Stronger" would become "realer", which is not English, so "Objective Stronger AC" will be transformed into "Objective More Real AC". This is a little awkward, so you may wish to suggest a different phrase.
  • Also, your continued use of "weak" to mean not capable (as in the calculator) will be more obvious. I repeat: believers in weak/not-real AC will not believe that an artificial entity which has "all the capabilities of the average human" is truly/REALLY conscious. It will be a simulation: you have said so yourself (and it is a position which I respect but which I believe is wrong). That is how I proposed the term "Weak" should be used, and its replacement by "not-real" will make that obvious.
  • I agree that a computer model is the most likely way to Weak / Not-real AC, but if the AC is Real then it cannot be said to be a "model". Model means simulation; model means not real. Strong/Real AC is also most likely to be realised by a digital computer, but the point is that it would not be a model.
  • Quantum computing is the way Penrose tries to escape the Church-Turing thesis, but unfortunately it is not obvious that the thesis does not apply to quantum computing.
  • I agree the real / not-real question is not the only question. But it is the issue taking up the most space here.

Paul Beardsell 14:54, 5 Apr 2004 (UTC)

  • This is completely wrong. If you say AC is "not real", you effectively say that a computer model is not real. But a computer model is a program, and it is real. It would be the same as saying that the Linux operating system is not "real".
This is comprehensively refuted below. Paul Beardsell 15:39, 5 Apr 2004 (UTC)
  • I did not use "weak" to refer to capability, but to how close it is to real consciousness/intelligence. Obviously a calculator is very far from human intelligence.
I think you did. Paul Beardsell 15:39, 5 Apr 2004 (UTC)
Tkorrovi: Weak AC means as far from real consciousness as AC can be defined (requiring only one or few abilities, whatever these abilities are)
  • AC cannot be real or not real; if it is really implemented, then it is always real. And it is a model if it is implemented on a computer, even if it is equivalent to real consciousness (which I think will never happen).
Is your implementation of AC real consciousness? I ask you the very first question I asked you several weeks ago: can AC be real consciousness? Yes or no.
  • Quantum computing was not invented by Penrose, and I also think that the Church-Turing thesis applies to quantum computing.
I never said he did. Paul Beardsell 15:39, 5 Apr 2004 (UTC)
  • It is just your way of putting things, which you insist on and which I, and maybe others, don't agree with.
You are not happy with Strong and Weak. We all agree that by these terms we meant Real and Not-real. You have just said so again in this posting! Paul Beardsell 15:39, 5 Apr 2004 (UTC)

Tkorrovi 15:14, 5 Apr 2004 (UTC)


Example 1:

  • A book containing a story is REAL.
  • The story can be a REAL story.
  • The story does not necessarily describe real events.
  • But the story is not necessarily REAL.

Example 2:

  • A computer program is a REAL program.
  • The computer game Sim World is a REAL program.
  • But the world of Sim World is a NOT-REAL world.

Example 3:

  • Tkorrovi's AC program is a REAL program.
  • Tkorrovi himself says that his program is a simulation of consciousness. He says it is NOT-REAL consciousness.

Paul Beardsell 15:39, 5 Apr 2004 (UTC)


Two examples:

  • Tkorrovi: Then we should also coin a new name to what was "Weak AC", like "Minimal AC".
  • Tkorrovi: Weak AC means as far from real consciousness as AC can be defined (requiring only one or few abilities, whatever these abilities are)

These are examples of using "Weak AC" in such a way that "weak" means "lacking capability". "Weak" does not mean "minimal" in this context. The term "Weak AC", as was explained when I introduced it and as is defined in the article itself, is the term for the school of thought, or the belief, or the argument that AC cannot be real consciousness. It can be highly capable, but never real.

The continued use of the term otherwise than as defined is the reason I have changed it to one whose dictionary definition is close to the intended meaning.

Paul Beardsell 16:08, 5 Apr 2004 (UTC)

I meant "what was formerly called Weak AC". I disagree with naming AC "not real" or "more real", and I explained why above. I think this is not a different view, but nonsense. Tkorrovi 16:36, 5 Apr 2004 (UTC)

OK, but you are not succeeding in writing what you mean, and I am finding it difficult too - it is difficult because we have to keep reminding ourselves what we mean by these terms. I think this is because the words we were using for AC which is real C ("strong") and AC which is not real C ("weak") have other dictionary meanings. Now there can be no mistake, but you are not happy. What can we do to make you happy? Paul Beardsell 17:21, 5 Apr 2004 (UTC)

The recent terminological change (viz. "Strong" -> "Real" and "Weak" -> "Not-real") certainly does demonstrate that some of the things suggested and discussed here are indeed nonsense. There we agree. What we disagree about is which things are the nonsense! It is precisely the question of whether AC can be real consciousness that is at issue here. Some say yes, others say no. Those who say yes believe in "Real AC" and those who say no believe in "Not-real AC". Of course you are welcome to suggest alternative terms. I have asked you for your suggestion, but none is forthcoming. Paul Beardsell 17:03, 5 Apr 2004 (UTC)

I have thought of a good comparison. One of the ways that books are divided into two groups is into Fiction and Non-fiction. There are Fiction Books and Non-fiction Books. There is Real AC and Not-real AC. Tkorrovi does not like the term "Not-real AC" because, if I understand correctly, he thinks that it calls the existence of AC into question. No. There are Fiction Books. These are books which are not true stories, yet they are real books. There is Not-real AC. This is AC which is not true Consciousness. Hopefully this helps. Paul Beardsell 17:12, 5 Apr 2004 (UTC)

You are not supposed to call other views nonsense. I, and maybe some others, don't agree with your view, which you called "Strong AC" and have now changed to "Real AC", and which in essence argues that AC must be equivalent to real consciousness. This is doubtful for many; they consider consciousness so subjective that it cannot even be defined. But I never called it nonsense, just said that I don't agree with it. So should it be understood that you have now said that you deliberately made nonsense edits? Tkorrovi 17:17, 5 Apr 2004 (UTC)


You have just said "But I never called it nonsense, just said that I don't agree with it." But just a few minutes ago:

I meant "what was formerly called Weak AC". I disagree with naming AC "not real" or "more real", and I explained why above. I think this is not a different view, but nonsense. Tkorrovi 16:36, 5 Apr 2004 (UTC)

Paul Beardsell 17:26, 5 Apr 2004 (UTC)

Don't you understand that I said that about names, not about your "Strong AC" view? Tkorrovi 17:31, 5 Apr 2004 (UTC)

And it is my view about names that you have called "nonsense". Not that I mind, but you are taking offense at me for subsequently calling some of the views expressed here nonsense. I did not even identify which ones, so you took offense prematurely. But the point is that here you are telling me off for something you had just done yourself. I should no longer be surprised. Paul Beardsell 17:46, 5 Apr 2004 (UTC)

We were talking about calling views on AC nonsense, which I never did, but you did. The only thing I called nonsense was the way you name these views. And that way of naming views I don't consider a view. Tkorrovi 18:04, 5 Apr 2004 (UTC)
I think I would rather have a view which exists than one that doesn't. Paul Beardsell 18:23, 5 Apr 2004 (UTC)

Real vs Genuine

This is where we are:

  • "Real AC" says that some AC can be real C.
  • "Not-real AC" says that AC can never be real C.

Some hold the first view, others the second. What view do you hold? Paul Beardsell 17:40, 5 Apr 2004 (UTC)

I don't agree with your classification. "Not real" also means "not existing", so by your classification there is nothing to choose from but your view. Tkorrovi 17:49, 5 Apr 2004 (UTC)

I think you are failing to see the distinction between "to be (something)" and "to exist". Some languages do not have separate verbs for this; Spanish does it even better than English. I hesitate to ask Matthew's question again! "Not to be something" does not call one's existence into question. Interestingly, and perhaps this is apt, I remember one of Plato's problems being explained away by my Philosophy I lecturer as being precisely this. The lecturer said that if only Plato (or Socrates, or whoever) had spoken English and not Greek, he would not have had all that angst about existence! Paul Beardsell 18:10, 5 Apr 2004 (UTC)

I wish you would look at the examples above. The point is that "Not-real" does not refer to the AC but to the C that the AC is simulating. Paul Beardsell 18:18, 5 Apr 2004 (UTC)

It was your idea to introduce all these terms, but then you saw that they were not good enough and started to change them. I didn't want to name them at all, just to list the different views there are. But if you insist that they must be named, then we are supposed to change the names again. My proposal is to use the word "genuine" instead of "real". "Genuine" means "really coming from its stated source" and "not sham", where "sham" also means "simulated" (Concise Oxford Dictionary). It is not exactly the most desirable term, but it is better than "real", because "not genuine" does not mean "not existing"; it can be interpreted as "simulated" or "not equivalent to its source". So we may even call the views "not genuine AC", "objectively less genuine AC" and "genuine AC". Tkorrovi 18:28, 5 Apr 2004 (UTC)

The whole Wikipedia idea is continual improvement. I am glad you are coming around to that view. Paul Beardsell 04:06, 6 Apr 2004 (UTC)

I like that. I'll swap it all to "genuine" and "not genuine". Later today. Paul Beardsell 03:39, 6 Apr 2004 (UTC)

Real and genuine are distinct. (says Matt Stan)

Yes, but someone has to keep the Estonians happy. Paul Beardsell 15:01, 10 Apr 2004 (UTC)

So you keep the Estonians happy? That is hard work; you are probably exhausted. But this is only the beginning: then we must also keep the supporters of Dennett happy, and then the supporters of Penrose too ;-) Tkorrovi 15:23, 10 Apr 2004 (UTC)

"A new class of visuomotor neurons has been recently discovered in the monkey's premotor cortex: mirror neurons. These neurons respond both when a particular action is performed by the recorded monkey and when the same action, performed by another individual, is observed." (Rizolatti et al) Maybe this also suggests that awareness is an awareness of the processes, not static objects or states. Or maybe this discovery is important for other reasons. Just put it here because there was recently a lot of discussion in the Internet about mirror neurons. Tkorrovi 22:24, 14 Apr 2004 (UTC)

Awareness of processes

I dug into it a bit more and found that some experiments indeed show that it is a process, not an object, that activates the neurons. From a New Scientist article http://www-inst.eecs.berkeley.edu/~cs182/readings/ns/article.html : "So they presented monkeys with things like raisins, slices of apple, paper clips, cubes and spheres. It wasn't long before they noticed something odd. As the monkey watched the experimenter's hand pick up the object and bring it close, a group of the F5 neurons leaped into action. But when the monkey looked at the same object lying on the tray, nothing happened. When it picked up the object, the same neurons fired again. Clearly their job wasn't just to recognise a particular object." For such a reaction, a model of the process must be created. Unless we suggest that a human has models of all possible processes from birth, awareness of processes likely includes creating a model of a process without any prior knowledge of it, based only on the information received through the senses (ie only on the pulses coming from the receptors). The ability to create a model in this way distinguishes consciousness from other systems, and would be the biggest challenge for artificial consciousness. No conventional software can do that, as it needs too much flexibility. This is also why proposed mechanisms such as absolutely dynamic systems might be necessary. Tkorrovi 13:42, 17 Apr 2004 (UTC)
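As a very rough illustration of the predict-compare-update loop that such model-building would at minimum involve, here is a small Python sketch. It is my own toy example using a plain delta-rule learner; the hidden process and all constants are invented, and it has nowhere near the flexibility demanded above, since the form of the model is fixed in advance. The learner receives nothing but the stream of samples, the "pulses from the receptors", and improves its one-step model of the process from its own prediction errors.

import math, random

def sense(t):
    # The hidden external process; the learner sees only these samples.
    return math.sin(0.3 * t) + random.gauss(0.0, 0.05)

a, b = 0.0, 0.0   # learned one-step model: next sample ~ a * current + b
rate = 0.01       # learning rate

prev = sense(0)
total_error = 0.0
for t in range(1, 5001):
    predicted = a * prev + b    # predict the next sample before seeing it
    actual = sense(t)
    error = actual - predicted  # the prediction error is the only feedback
    a += rate * error * prev    # delta-rule update of the model
    b += rate * error
    total_error += abs(error)
    prev = actual
    if t % 1000 == 0:
        print(f"after {t} samples, mean absolute error: {total_error / 1000:.3f}")
        total_error = 0.0

The printed error falls as the model improves. A mechanism of the kind proposed above would also have to discover the form of the model itself, not merely tune its parameters, and that is the hard part.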