January 13, 2010


In the late 1960s and 1970s the computer model of mind set in, and functionalism became the dominant model of mind. On this model, mind is not what the brain consists in (electrochemical transactions in neurons in vast complexes). Instead, mind is what brains do: their function of mediating between information coming into the organism and behaviour proceeding from the organism. Thus, a mental state is a functional state of the brain or of the human or animal organism. More specifically, on a favourite variation of functionalism, the mind is a computing system: mind is to brain as software is to hardware; thoughts are just programs running on the brain's "wetware." Since the 1970s the cognitive sciences - from experimental studies of cognition to neuroscience - have tended toward a mix of materialism and functionalism. Gradually, however, philosophers found that phenomenological aspects of the mind pose problems for the functionalist paradigm too.


In the early 1970s Thomas Nagel argued in "What Is It Like to Be a Bat?" (1974) that consciousness itself - especially the subjective character of what it is like to have a certain type of experience - escapes physical theory. Many philosophers pressed the case that sensory qualia - what it is like to feel pain, to see red, etc. - are not addressed or explained by a physical account of either brain structure or brain function. Consciousness has properties of its own. And yet, we know, it is closely tied to the brain. And, at some level of description, neural activities implement computation.

In the 1980s John Searle argued in Intentionality (1983) (and further in The Rediscovery of the Mind (1991)) that intentionality and consciousness are essential properties of mental states. For Searle, our brains produce mental states with properties of consciousness and intentionality, and this is all part of our biology, yet consciousness and intentionality require a "first-person" ontology. Searle also argued that computers simulate but do not have mental states characterized by intentionality. As Searle argued, a computer system has a syntax (processing symbols of certain shapes) but has no semantics (the symbols lack meaning: we interpret the symbols). In this way Searle rejected both materialism and functionalism, while insisting that mind is a biological property of organisms like us: Our brains "secrete" consciousness.

The analysis of consciousness and intentionality is central to phenomenology as appraised above, and Searle's theory of intentionality reads like a modernized version of Husserl's. (Contemporary logical theory takes the form of stating truth conditions for propositions, and Searle characterizes a mental state's intentionality by specifying its "satisfaction conditions"). However, there is an important difference in background theory. For Searle explicitly assumes the basic worldview of natural science, holding that consciousness is part of nature. But Husserl explicitly brackets that assumption, and later phenomenologists - including Heidegger, Sartre, Merleau-Ponty - seem to seek a certain sanctuary for phenomenology beyond the natural sciences. And yet phenomenology itself should be largely neutral about further theories of how experience arises, notably from brain activity.

The philosophy or theory of mind overall may be factored into the following disciplines or ranges of theory relevant to mind: Phenomenology studies conscious experience as experienced, analysing the structure - the types, intentional forms and meanings, dynamics, and (certain) enabling conditions - of perception, thought, imagination, emotion, and volition and action.

Neuroscience studies the neural activities that serve as biological substrate to the various types of mental activity, including conscious experience. Neuroscience will be framed by evolutionary biology (explaining how neural phenomena evolved) and ultimately by basic physics (explaining how biological phenomena are grounded in physical phenomena). Here lie the intricacies of the natural sciences. Part of what the sciences are accountable for is the structure of experience, analysed by phenomenology.

Cultural analysis studies the social practices that help to shape or serve as cultural substrate of the various types of mental activity, including conscious experience. Here we study the import of language and other social practices. Ontology of mind studies the ontological type of mental activity in general, ranging from perception (which involves causal input from environment to experience) to volitional action (which involves causal output from volition to bodily movement).

This division of labour in the theory of mind can be seen as an extension of Brentano's original distinction between descriptive and genetic psychology. Phenomenology offers descriptive analyses of mental phenomena, while neuroscience (and wider biology and ultimately physics) offers models of explanation of what causes or gives rise to mental phenomena. Cultural theory offers analyses of social activities and their impact on experience, including ways language shapes our thought, emotion, and motivation. And ontology frames all these results within a basic scheme of the structure of the world, including our own minds.

Meanwhile, from an epistemological standpoint, all these ranges of theory about mind begin with how we observe and reason about and seek to explain phenomena we encounter in the world. And that is where phenomenology begins. Moreover, how we understand each piece of theory, including theory about mind, is central to the theory of intentionality, as it were, the semantics of thought and experience in general. And that is the heart of phenomenology.

There is potentially a rich and productive interface between neuroscience/cognitive science and psychoanalysis/psychotherapy. The two traditions, however, have evolved largely independently, based on differing sets of observations and objectives, and tend to use different conceptual frameworks and vocabularies. Finding a useful common reference point would facilitate further exploration of the relations between neuroscience/cognitive science and psychoanalysis/psychotherapy.

Here, the historical gap between neuroscience/cognitive science and psychotherapy is being productively closed by, among other things, the suggestion that recent understandings of the nervous system as a modeler and predictor bear a close and useful similarity to the concepts of projection and transference. The gap could perhaps be valuably narrowed still further by a comparison in the two traditions of the concepts of the "unconscious" and the "conscious" and the relations between the two. It is suggested that these be understood as two independent "story generators," each with a different style of function and both operating optimally as reciprocal contributors to the other's ongoing story evolution. A parallel and comparably optimal relation might be imagined for neuroscience/cognitive science and psychotherapy.

For the sake of argument, imagine that human behaviour and all that it entails (including the experience of being a human and interacting with a world that includes other humans) is a function of the nervous system. If this were so, then there would be lots of different people who are making observations of (perhaps different) aspects of the same thing, and telling (perhaps different) stories to make sense of their observations. The list would include neuroscientists and cognitive scientists and psychologists. It would include as well psychoanalysts, psychotherapists, psychiatrists, and social workers. If we were not too fussy about credentials, it should probably include as well educators, and parents and . . . babies? Arguably, all humans, from the time they are born, spend significant measures of their time making observations of how people (others and themselves) behave and why, and telling stories to make sense of those observations.

The stories, of course, all differ from one another to greater or lesser degrees. In fact, the notion that "human behaviour and all that it entails . . . is a function of the nervous system" is itself a story used to make sense of observations by some people and not by others. It is not my intent here to try to defend this particular story, or any other story for that matter. Very much to the contrary, my intent is to explore the implications and significance of the fact that there ARE different stories and that they might be about the same (some)thing.

In so doing, I want to try to create a new story that helps to facilitate an enhanced dialogue between neuroscience/cognitive science, on the one hand, and psychotherapy, on the other. That new story is itself a story of conflicting stories within . . . what is called the "nervous system" but others are free to call the "self," "mind," "soul," or whatever best fits their own stories. What is important is the idea that multiple things, evident by their conflicts, may not in fact be disconnected and adversarial entities but could rather be fundamentally, understandably, and valuably interconnected parts of the same thing.

Many practising psychoanalysts (and psychotherapists too, I suspect) feel that the observations/stories of neuroscience/cognitive science are, at best, irrelevant to their own activities and, at worst, destructive, and the same probably holds for many neuroscientists/cognitive scientists. Pally clearly feels otherwise, and it is worth exploring a bit why this is so in her case. A general key, I think, is in her line "In current paradigms, the brain has intrinsic activity, is highly integrated, is interactive with the environment, and is goal-oriented, with predictions operating at every level, from lower systems to . . . the highest functions of abstract thought." Contemporary neuroscience/cognitive science has indeed uncovered an enormous complexity and richness in the nervous system, "making it not so different from how psychoanalysts (or most other people) would characterize the self, at least not in terms of complexity, potential, and vagary." Given this complexity and richness, there is substantially less reason than there once was to believe psychotherapists and neuroscientists/cognitive scientists are dealing with two fundamentally different things. Pally, I suspect, is more aware of this than many psychotherapists because she has been working closely with contemporary neuroscientists who are excited about the complexity to be found in the nervous system. That has an important lesson, but there is an additional one at least as important in the immediate context. In 1950, two neuroscientists wrote: "The sooner we recognize the fact that the complex and higher functional Gestalts that leave the reflex physiologist dumbfounded in fact send roots down to the simplest basal functions of the CNS, the sooner we will see that the previously terminologically insurmountable barrier between the lower levels of neurophysiology and higher behavioural theory simply dissolves away."

And in 1951 another wrote: "I am coming more and more to the conviction that the rudiments of every behavioural mechanism will be found far down in the evolutionary scale and represented in primitive activities of the nervous system."

Neuroscience (and what came to be cognitive science) was engaged from very early on in an enterprise committed to the same kind of understanding sought by psychotherapists, but passed through a phase (roughly from the 1950s to the 1980s) when its own observations and stories were less rich in those terms. It was a period that gave rise to the notion that the nervous system was "simple" and "mechanistic," which in turn made neuroscience/cognitive science seem less relevant to those with broader concerns, perhaps even threatening and apparently adversarial if one equated the nervous system with "mind," or "self," or "soul," since mechanics seemed degrading to those ideas. Arguably, though, the period was an essential part of the evolution of the contemporary neuroscience/cognitive science story, one that laid needed groundwork for rediscovery and productive exploration of the richness of the nervous system. Psychoanalysis/psychotherapy of course went through its own story evolution over this time. That the two stories seemed remote from one another during this period was never adequate evidence that they were not about the same thing but only an expression of their needed independent evolutions.

An additional reason that Pally is comfortable with the likelihood that psychotherapists and neuroscientists/cognitive scientists are talking about the same thing is her recognition of isomorphisms (or congruities, Pulver 2003) between the two sets of stories, places where different vocabularies in fact seem to be representing the same (or quite similar) things. I am not sure I am comfortable calling these "shared assumptions" (as Pally does), since they are actually more interesting and probably more significant if they are instead instances of coming to the same ideas from different directions (as I think they are). In this case, the isomorphisms tend to imply, rephrasing Gertrude Stein, that there is indeed a "there" there. Regardless, Pally has entirely appropriately and, I think, usefully called attention to an important similarity between the psychotherapeutic concept of "transference" and an emerging recognition within neuroscience/cognitive science that the nervous system does not so much collect information about the world as generate a model of it, act in relation to that model, and then check incoming information against the predictions of that model. Pally's suggestion that this model reflects in part early interpersonal experiences, can be largely "unconscious," and so may cause inappropriate and troubling behaviour in current time seems entirely reasonable. So too does her thought that interaction with the analyst can be of help by bringing the model to "consciousness" through the intermediary of recognizing the transference onto the analyst.

The increasing recognition of substantial complexity in the nervous system together with the presence of identifiable isomorphisms provides a solid foundation for suspecting that psychotherapists and neuroscientists/cognitive scientists are indeed talking about the same thing. But the significance of different stories for better understanding a single thing lies as much in the differences between the stories as it does in their similarities/isomorphisms, in the potential for differing and not obviously isomorphic stories productively to modify each other, yielding a new story in the process. With this thought in mind, I want to call attention to some places where the psychotherapeutic and the neuroscientific/cognitive scientific stories have edges that rub against one another rather than smoothly fitting together, and perhaps to ways each could be usefully further evolved in response to those non-isomorphisms.

Unconscious stories and "reality." Though her primary concern is with interpersonal relations, Pally clearly recognizes that transference and related psychotherapeutic phenomena are one (actually relatively small) facet of a much more general phenomenon: the creation, largely unconsciously, of stories that are not necessarily reflective of the "real world." Ambiguous figures illustrate the same general phenomenon in a much simpler case, that of visual perception. Such figures may be seen in either of two ways; they represent two "stories," with the choice between them being, at any given time, largely unconscious. More generally, a serious consideration of a wide array of neurobiological/cognitive phenomena clearly implies, as Pally says, that we never see "reality"; we only have stories to describe it, stories that result from processes of which we are not consciously aware.

All of this raises some quite serious philosophical questions about the meaning and usefulness of the concept of "reality." In the present context, what is important is that it is a set of questions that sometimes seems to provide an insurmountable barrier between the stories of neuroscientists/cognitive scientists, who by and large think they are dealing with reality, and psychotherapists, who feel more comfortable in more idiosyncratic and fluid spaces. In fact, neuroscience and cognitive science can proceed perfectly well in the absence of a well-defined concept of "reality" and, without being fully conscious of it, do in fact do so. And psychotherapists actually make more use of the idea of "reality" than is entirely appropriate. There is, for example, a tendency within the psychotherapeutic community to presume that unconscious stories reflect "traumas" and other historically verifiable events, while the neurobiological/cognitive science story says quite clearly that they may equally reflect predispositions whose origins lie in genetic information and hence bear little or no relation to "reality" in the sense usually meant. They may, in addition, reflect random "play" (Grobstein, 1994), putting them even further out of reach of easy historical interpretation. In short, with regard to the relation between "story" and "reality," each set of stories could usefully be modified by greater attention to the other. Differing concepts of "reality" (perhaps the very concept itself) get in the way of usefully sharing stories. The neurobiologists'/cognitive scientists' preoccupation with "reality" as an essential touchstone could valuably be lessened, and the therapist's sense of the validation of story in terms of personal and historical idiosyncrasies could be helpfully adjusted to include a sense of actual material underpinnings.

The Unconscious and the Conscious. Pally appropriately makes a distinction between the unconscious and the conscious, one that has always been fundamental to psychotherapy. Neuroscience/cognitive science has been slower to make a comparable distinction but is now rapidly beginning to catch up. Clearly some neural processes generate behaviour in the absence of awareness and intent, and others yield awareness and intent with or without accompanying behaviour. An interesting question, however, raised at a recent open discussion of the relations between neuroscience and psychoanalysis, is whether the "neurobiological unconscious" is the same thing as the "psychotherapeutic unconscious," and whether the perceived relations between the "unconscious" and the "conscious" are the same in the two sets of stories. Is this a case of an isomorphism or, perhaps more usefully, a masked difference?

An oddity of Pally's article is that she herself acknowledges that the unconscious has mechanisms for monitoring prediction errors and yet implies, both in the title of the paper and in much of its argument, that there is something special or distinctive about consciousness (or conscious processing) in its ability to correct prediction errors. And here, I think, there is evidence of a potentially useful "rubbing of edges" between the neuroscientific/cognitive scientific tradition and the psychotherapeutic one. The issue is whether one regards consciousness (or conscious processing) as somehow "superior" to the unconscious (or unconscious processing). There is a sense in Pally of an old psychotherapeutic perspective of the conscious as a mechanism for overcoming the deficiencies of the unconscious, of the conscious as the wise father/mother and the unconscious as the willful child. Actually, Pally does not quite go this far, but there is enough of a trend to illustrate the point and, without more elaboration, I do not think many neuroscientists/cognitive scientists will catch Pally's more insightful lesson. I think Pally is almost certainly correct that the interplay of the conscious and the unconscious can achieve results unachievable by the unconscious alone, but think also that neither psychotherapy nor neuroscience/cognitive science is yet in a position to say exactly why this is so. So let me take a crack here at a new story that could help with that common problem and perhaps help both traditions as well.

A major and surprising lesson of comparative neuroscience, supported more recently by neuropsychology (Weiskrantz, 1986) and, more recently still, by artificial intelligence, is that an extraordinarily rich repertoire of adaptive behaviour can occur unconsciously, in the absence of awareness or intent (i.e., be supported by unconscious neural processes). It is not only modelling of the world and prediction and error correction that can occur this way but virtually (and perhaps literally) the entire spectrum of externally observed behaviour, including fleeing from threat, approaching good things, generating novel outputs, learning from doing so, and so on.

This extraordinary terrain, discovered by neuroanatomists, electrophysiologists, neurologists, and behavioural biologists, and recently extended by others using more modern techniques, is the unconscious of which the neuroscientist/cognitive scientist speaks. It is a terrain so surprisingly rich that it creates, for some people, puzzlement about whether there is anything else at all. Moreover, it seems, at first glance, to be a totally different terrain from that of the psychotherapist, whose clinical experience reveals a territory occupied by drives, unfulfilled needs, and the detritus with which the conscious would prefer not to deal.

As indicated earlier, it is one of the great strengths of Pally's article to suggest that the two terrains may in fact turn out to be the same in many ways. But if they are the same, the question then becomes: in what ways are the "unconscious" and the "conscious" different? Where now are the "two stories"? Pally touches briefly on this point, suggesting that the two systems differ not so much (or at all?) in what they do, but rather in how they do it. This notion of two systems with different styles seems to me worth emphasizing and expanding. Unconscious processing is faster and handles many more variables simultaneously. Conscious processing is slower and handles only a few variables at one time. There are likely a host of other differences in style as well, in the handling of number, for example, and of time.

In the present context, however, perhaps the most important difference in style is one that Lacan called attention to from a clinical/philosophical perspective: the conscious (conscious processing) has as an objective "coherence"; that is, it attempts to create a story that makes sense simultaneously of all its parts. The unconscious, on the other hand, is much more comfortable with bits and pieces lying around with no global order. To a neurobiologist/cognitive scientist, this makes perfectly good sense. The circuitry underlying the unconscious (sub-cortical circuitry?) is an assembly of different parts organized for a large number of different specific purposes, and only secondarily linked together to try to assure some coordination. The circuitry underlying conscious processing (neo-cortical circuitry?), on the other hand, seems both to be more uniform and integrated and to have an objective for which coherence is central.

That central coherence is well illustrated by the phenomena of "positive illusions," exemplified by patients who receive a hypnotic suggestion that there is an object in a room and subsequently walk in ways that avoid the object while providing a variety of unrelated explanations for their behaviour. Similar "rationalization" is, of course, seen in schizophrenic patients and in a variety of less dramatic forms in psychotherapeutic settings. The "coherent" objective is to make a globally organized story out of the disorganized jumble, a story of (and constituting) the "self."

What this suggests is that the mind/brain is actually organized to be constantly generating at least two different stories in two different styles. One, written by conscious processes in simpler terms, is a story of/about the "self" and experienced as such (how such a story can be constructed by neural circuitry remains to be understood). The other is an unconscious "story" about interactions with the world, perhaps better thought of as a series of different "models" of how various actions relate to various consequences. In many ways, the latter is the grist for the former.

In this sense, we are safely back to the two-story idea that has been central to psychotherapy, but perhaps with some added sophistication deriving from neuroscience/cognitive science. In particular, there is no reason to believe that one story is "better" than the other in any definitive sense. They are different stories based on different styles of story telling, with one having advantages in certain sorts of situations (quick responses, large numbers of variables, more direct relation to immediate experiences of pain and pleasure) and the other in other sorts of situations (time for more deliberate responses, challenges amenable to handling using smaller numbers of variables, more coherence, more ability to defer immediate gratification/judgment).

In the clinical/psychotherapeutic context, an important implication of the more neutral view of two story-tellers outlined above is that one ought not to over-value the conscious, nor to expect miracles of the process of making conscious what is unconscious. In the immediate context, the issue is: if the unconscious is capable of "correcting prediction errors," then why appeal to the conscious to achieve this function? More generally, what is the function of that persistent aspect of psychotherapy that aspires to make the unconscious conscious? And why is it therapeutically effective when it is? Here, it is worth calling special attention to an aspect of Pally's argument that might otherwise get a bit lost in the details of her article: ". . . the therapist encourages the wife to stop consciously and consider her assumption that her husband does not properly care about her, and to effortfully consider an alternative view and inhibit her impulse to reject him back. This, in turn, creates a new type of experience, one in which he is indeed more loving, such that she can develop new predictions."

It is not, as Pally describes it, the simple act of making something conscious that is therapeutically effective. What is necessary is to consciously recompose the story (something that is made possible by its being a story with a small number of variables) and, even more important, to see if the story generates a new "type of experience" that in turn causes the development of "new predictions." The latter, I suggest, is an effect of the conscious on the unconscious, an alteration of the unconscious brought about by hearing, entertaining, and hence acting on a new story developed by the conscious. It is not "making things conscious" that is therapeutically effective; it is the exchange of stories that encourages the creation of a new story in the unconscious.

For quite different reasons, Gray (1995) earlier made a suggestion not dissimilar to Pally's, proposing that consciousness was activated when an internal model detected a prediction failure, but acknowledged he could see no reason "why the brain should generate conscious experience of any kind at all." It seems to me that, despite her title, it is not the detection of prediction errors that is important in Pally's story. Instead, it is the detection of mismatches between two stories, one unconscious and the other conscious, and the resulting opportunity for both to shape a less trouble-making new story. That, in brief, is why the brain "should generate conscious experience": to reap the benefits of having a second story teller with a different style. Paraphrasing Descartes, one might say, "I am, and I can think, therefore I can change who I am." It is not only the neurobiological "conscious" that can undergo change; it is the neurobiological "unconscious" as well.

More generally, the most effective psychotherapy requires the recognition, one rapidly emerging from neuroscience/cognitive science as well, that the brain/mind has evolved with two (or more) independent story tellers and has done so precisely because there are advantages to having independent story tellers that generate and exchange different stories. The advantage is that each can learn from the other, and the mechanisms to convey the stories back and forth and for each story teller to learn from the stories of the other are a part of our evolutionary endowment as well. The problems that bring patients into a therapist's office are problems in the breakdown of story exchange, for any of a variety of reasons, and the challenge for the therapist is to reinstate the confidence of each story teller in the value of the stories created by the other. Neither the conscious nor the unconscious is primary; they function best as an interdependent loop, with each developing its own story facilitated by the semi-independent story of the other. In such an organization, there is not only no "real" and no primacy for consciousness; there is only the ongoing development and, ideally, effective sharing of different stories.

There are, in the story I am outlining, implications for neuroscience/cognitive science as well. The obvious key questions are what one means (in terms of neurons and neuronal assemblies) by "stories," and in what ways their construction and representation differ in unconscious and conscious neural processing. But even more important, if the story I have outlined makes sense, what are the neural mechanisms by which unconscious and conscious stories are exchanged and by which each kind of story impacts the other? And why (again in neural terms) does the exchange sometimes break down and fail in a way that requires a psychotherapist - an additional story teller - for repair?

Just as the unconscious and the conscious are engaged in a process of evolving stories for separate reasons and using separate styles, so too have been and will continue to be neuroscience/cognitive science and psychotherapy. And it is valuable that both communities continue to do so. But there is every reason to believe that the different stories are indeed about the same thing, not only because of isomorphisms between the differing stories but equally because the stories of each can, if listened to, be demonstrably of value to the stories of the other. When breakdowns in story sharing occur, they require people in each community who are daring enough to listen and be affected by the stories of the other community. Pally has done us all a service as such a person. I hope my reactions to her article will help further to construct the bridge she has helped to lay, and that others will feel inclined to join in an act of collective story telling that has enormous intellectual potential and relates as well very directly to a serious social need in the mental health arena. Indeed, there are reasons to believe that an enhanced skill at hearing, respecting, and learning from differing stories about similar things would be useful in a wide array of contexts.

There is now a more satisfactory range of ideas available [in the field of consciousness studies] . . . They involve mostly quantum objects called Bose-Einstein condensates that may be capable of forming ephemeral but extended structures in the brain (Pessa). Marshall's original idea (based on the work of Fröhlich) was that the condensates that comprise the physical basis of mind form from the activity of vibrating molecules (dipoles) in nerve cell membranes. One of us (Clarke) has found theoretical evidence that the distribution of energy levels for such arrays of molecules prevents this happening in the way that Marshall first thought. However, the occurrence of similar condensates centring around the microtubules that are an important part of the structure of every cell, including nerve cells, remains a theoretical possibility (del Giudice et al.). Hameroff has pointed out that single-cell organisms such as 'paramecium' can perform quite complicated actions normally thought to need a brain. He suggests that their 'brain' is in their microtubules. Shape changes in the constituent proteins (tubulin) could subserve computational functions and would involve quantum phenomena of the sort envisaged by del Giudice. This raises the intriguing possibility that the most basic cognitive unit is provided, not by the nerve cell synapse as is usually supposed, but by the microtubular structure within cells. The underlying intuition is that the structures formed by Bose-Einstein condensates are the building blocks of mental life; in relation to perception they are models of the world, transforming a pleasant view, say, into a mental structure that represents some of the inherent qualities of that view.

We thought that, if there is anything to ideas of this sort, the quantum nature of awareness should be detectable experimentally. Holism and non-locality are features of the quantum world with no precise classical equivalents. The former presupposes that the interacting systems have to be considered as wholes - you cannot deal with one part in isolation from the rest. Non-locality means, among other things, that spatial separation between its parts does not alter the requirement to deal with an interacting system holistically. If we could detect these in relation to awareness, we would show that consciousness cannot be understood solely in terms of classical concepts.

We began our study with an attempt to discover the relation between thought and speech at the earliest stages of phylogenetic and ontogenetic development. We found no specific interdependence between the genetic roots of thought and of word. It became plain that the inner relationship we were looking for was not a prerequisite for, but rather a product of, the historical development of human consciousness.

In animals, even in anthropoids whose speech is phonetically like human speech and whose intellect is akin to man’s, speech and thinking are not interrelated. A prelinguistic period in thought and a preintellectual period in speech undoubtedly exist also in the development of the child. Thought and word are not connected by a primary bond. A connection originates, changes, and grows in the course of the evolution of thinking and speech.

It would be wrong, however, to regard thought and speech as two unrelated processes either parallel or crossing at certain points and mechanically influencing each other. The absence of a primary bond does not mean that a connection between them can be formed only in a mechanical way. The futility of most of the earlier investigations was largely due to the assumption that thought and word were isolated, independent elements, and verbal thought the fruit of their external union.

The method of analysis based on this conception was bound to fail. It sought to explain the properties of verbal thought by breaking it up into its component elements, thought and word, neither of which, taken separately, possessed the properties of the whole. This method is not true analysis helpful in solving concrete problems. It leads, rather, to generalisation. We compared it with the analysis of water into hydrogen and oxygen - which can result only in findings applicable to all water existing in nature, from the Pacific Ocean to a raindrop. Similarly, the statement that verbal thought is composed of intellectual processes and speech functions proper applies to all verbal thought and all its manifestations and explains none of the specific problems facing the student of verbal thought.

We tried a new approach to the subject and replaced analysis into elements by analysis into units, each of which retains in simple form all the properties of the whole. We found this unit of verbal thought in word meaning.

The meaning of a word represents such a close amalgam of thought and language that it is hard to tell whether it is a phenomenon of speech or a phenomenon of thought. A word without meaning is an empty sound; meaning, therefore, is a criterion of “word,” its indispensable component. It would seem, then, that it may be regarded as a phenomenon of speech. But from the point of view of psychology, the meaning of every word is a generalisation or a concept. And since generalisations and concepts are undeniably acts of thought, we may regard meaning as a phenomenon of thinking. It does not follow, however, that meaning formally belongs to two different spheres of psychic life. Word meaning is a phenomenon of thought only insofar as thought is embodied in speech, and of speech only insofar as speech is connected with thought and illumined by it. It is a phenomenon of verbal thought, or meaningful speech - a union of word and thought.

Our experimental investigations fully confirm this basic thesis. They not only proved that concrete study of the development of verbal thought is made possible by the use of word meaning as the analytical unit, but they also led to a further thesis, which we consider the major result of our study and which issues directly from the former: the thesis that word meanings develop. This insight must replace the postulate of the immutability of word meanings.

From the point of view of the old schools of psychology, the bond between word and meaning is an associative bond, established through the repeated simultaneous perception of a certain sound and a certain object. A word calls to mind its content as the overcoat of a friend reminds us of that friend, or a house of its inhabitants. The association between word and meaning may grow stronger or weaker, be enriched by linkage with other objects of a similar kind, spread over a wider field, or become more limited, i.e., it may undergo quantitative and external changes, but it cannot change its psychological nature. To do that, it would have to cease being an association. From that point of view, any development in word meanings is inexplicable and impossible - an implication that impeded linguistics as well as psychology. Once having committed itself to the association theory, semantics persisted in treating word meaning as an association between a word’s sound and its content. All words, from the most concrete to the most abstract, appeared to be formed in the same manner in regard to meaning, and to contain nothing peculiar to speech as such; a word made us think of its meaning just as any object might remind us of another. It is hardly surprising that semantics did not even pose the larger question of the development of word meanings. Development was reduced to changes in the associative connections between single words and single objects: A word might denote at first one object and then become associated with another, just as an overcoat, having changed owners, might remind us first of one person and later of another. Linguistics did not realize that in the historical evolution of language the very structure of meaning and its psychological nature also change. From primitive generalisations, verbal thought rises to the most abstract concepts. It is not merely the content of a word that changes, but the way in which reality is generalised and reflected in a word.

Equally inadequate is the association theory in explaining the development of word meanings in childhood. Here, too, it can account only for the purely external, quantitative changes in the bonds uniting word and meaning, for their enrichment and strengthening, but not for the fundamental structural and psychological changes that can and do occur in the development of language in children.

Oddly enough, the fact that associationism in general had been abandoned for some time did not seem to affect the interpretation of word and meaning. The Wuerzburg school, whose main object was to prove the impossibility of reducing thinking to a mere play of associations and to demonstrate the existence of specific laws governing the flow of thought, did not revise the association theory of word and meaning, or even recognise the need for such a revision. It freed thought from the fetters of sensation and imagery and from the laws of association, and turned it into a purely spiritual act. By so doing, it went back to the prescientific concepts of St. Augustine and Descartes and finally reached extreme subjective idealism. The psychology of thought was moving toward the ideas of Plato. Speech, at the same time, was left at the mercy of association. Even after the work of the Wuerzburg school, the connection between a word and its meaning was still considered a simple associative bond. The word was seen as the external concomitant of thought, its attire only, having no influence on its inner life. Thought and speech had never been as widely separated as during the Wuerzburg period. The overthrow of the association theory in the field of thought actually increased its sway in the field of speech.

The work of other psychologists further reinforced this trend. Selz continued to investigate thought without considering its relation to speech and came to the conclusion that man’s productive thinking and the mental operations of chimpanzees were identical in nature – so completely did he ignore the influence of words on thought.

Even Ach, who made a special study of word meaning and who tried to overcome associationism in his theory of concepts, did not go beyond assuming the presence of “determining tendencies” operative, along with associations, in the process of concept formation. Hence, the conclusions he reached did not change the old understanding of word meaning. By identifying concept with meaning, he did not allow for development and changes in concepts. Once established, the meaning of a word was set forever; its development was completed. The same principles were held by the very psychologists Ach attacked. To both sides, the starting point was also the end of the development of a concept; the disagreement concerned only the way in which the formation of word meanings began.

In Gestalt psychology, the situation was not very different. This school was more consistent than others in trying to surmount the general principle of associationism. Not satisfied with a partial solution of the problem, it tried to liberate thinking and speech from the rule of association and to put both under the laws of structure formation. Surprisingly, even this most progressive of modern psychological schools made no progress in the theory of thought and speech.

For one thing, it retained the complete separation of these two functions. In the light of Gestalt psychology, the relationship between thought and word appears as a simple analogy, a reduction of both to a common structural denominator. The formation of the first meaningful words of a child is seen as similar to the intellectual operations of chimpanzees in Koehler’s experiments. The word enters into the structure of things and acquires a certain functional meaning, in much the same way as the stick, to the chimpanzee, becomes part of the structure of obtaining the fruit and acquires the functional meaning of tool. The connection between word and meaning is no longer regarded as a matter of simple association but as a matter of structure. That seems like a step forward. But if we look more closely at the new approach, it is easy to see that the step forward is an illusion and that we are still standing in the same place. The principle of structure is applied to all relations between things in the same sweeping, undifferentiated way as the principle of association was before it. It remains impossible to deal with the specific relations between word and meaning.

They are from the outset accepted as identical in principle with any and all other relations between things. All cats are as grey in the dusk of Gestalt psychology as in the earlier fog of universal associationism.

While Ach sought to overcome the associationism with “determining tendencies,” Gestalt psychology combatted it with the principle of structure - retaining, however, the two fundamental errors of the older theory: the assumption of the identical nature of all connections and the assumption that word meanings do not change. The old and the new psychology both assume that the development of a word’s meaning is finished as soon as it emerges. The new trends in psychology brought progress in all branches except in the study of thought and speech. Here the new principles resemble the old ones like twins.

If Gestalt psychology is at a standstill in the field of speech, it has made a big step backward in the field of thought. The Wuerzburg school at least recognised that thought had laws of its own. Gestalt psychology denies their existence. By reducing to a common structural denominator the perceptions of domestic fowl, the mental operations of chimpanzees, the first meaningful words of the child, and the conceptual thinking of the adult, it obliterates every distinction between the most elementary perception and the highest forms of thought.

This may be summed up as follows: All the psychological schools and trends overlook the cardinal point that every thought is a generalisation. They all study word and meaning without any reference to development. As long as these two conditions persist in the successive trends, there cannot be much difference in the treatment of the problem.

The discovery that word meanings evolve leads the study of thought and speech out of a blind alley. Word meanings are dynamic rather than static formations. They change as the child develops; they change also with the various ways in which thought functions.

If word meanings change in their inner nature, then the relation of thought to word also changes. To understand the dynamics of that relationship, we must supplement the genetic approach of our main study by functional analysis and examine the role of word meaning in the process of thought.

Let us consider the process of verbal thinking from the first dim stirring of a thought to its formulation. What we want to show now is not how meanings develop over long periods of time but the way they function in the live process of verbal thought. On the basis of such a functional analysis, we will be able to show also that each stage in the development of word meaning has its own particular relationship between thought and speech. Since functional problems are most readily solved by examining the highest form of a given activity, we will, for a while, put aside the problem of development and consider the relations between thought and word in the mature mind.

The leading idea in the following discussion can be reduced to this formula: The relation of thought to word is not a thing but a process, a continual movement back and forth from thought to word and from word to thought. In that process the relation of thought to word undergoes changes that may themselves be regarded as development in the functional sense. Thought is not merely expressed in words; it comes into existence through them. Every thought tends to connect something with something else, to establish a relationship between things. Every thought moves, grows and develops, fulfils a function, solves a problem. This flow of thought occurs as an inner movement through a series of planes. An analysis of the interaction of thought and word must begin with an investigation of the different phases and planes a thought traverses before it is embodied in words.

The first thing such a study reveals is the need to distinguish between two planes of speech. Both the inner, meaningful, semantic aspect of speech and the external, phonetic aspect, though forming a true unity, have their own laws of movement. The unity of speech is a complex, not a homogeneous, unity. A number of facts in the linguistic development of the child indicate independent movement in the phonetic and the semantic spheres. We will point out two of the most important of these facts.

In mastering external speech, the child starts from one word, then connects two or three words; a little later, he advances from simple sentences to more complicated ones, and finally to coherent speech made up of series of such sentences; in other words, he proceeds from a part to the whole. In regard to meaning, on the other hand, the first word of the child is a whole sentence. Semantically, the child starts from the whole, from a meaningful complex, and only later begins to master the separate semantic units, the meanings of words, and to divide his formerly undifferentiated thought into those units. The external and the semantic aspects of speech develop in opposite directions - one from the particular to the whole, from word to sentence, and the other from the whole to the particular, from sentence to word.

This in itself suffices to show how important it is to distinguish between the vocal and the semantic aspects of speech. Since they move in reverse directions, their development does not coincide, but that does not mean that they are independent of each other. On the contrary, their difference is the first stage of a close union. In fact, our example reveals their inner relatedness as clearly as it does their distinction. A child’s thought, precisely because it is born as a dim, amorphous whole, must find expression in a single word. As his thought becomes more differentiated, the child is less apt to express it in single words but constructs a composite whole. Conversely, progress in speech to the differentiated whole of a sentence helps the child’s thoughts to progress from a homogeneous whole to well-defined parts. Thought and word are not cut from one pattern. In a sense, there are more differences than likenesses between them. The structure of speech does not simply mirror the structure of thought; that is why words cannot be put on by thought like a ready-made garment. Thought undergoes many changes as it turns into speech. It does not merely find expression in speech; it finds its reality and form. The semantic and the phonetic developmental processes are essentially one, precisely because of their reverse directions.

The second, equally important fact emerges at a later period of development. Piaget demonstrated that the child uses subordinate clauses with because, although, etc., long before he grasps the structures of meaning corresponding to these syntactic forms. Grammar precedes logic. Here, too, as in our previous example, the discrepancy does not exclude union but is, in fact, necessary for union.

In adults the divergence between the semantic and the phonetic aspects of speech is even more striking. Modern, psychologically oriented linguistics is familiar with this phenomenon, especially in regard to grammatical and psychological subject and predicate. For example, in the sentence “The clock fell,” emphasis and meaning may change in different situations. Suppose I notice that the clock has stopped and ask how this happened. The answer is, “The clock fell.” Grammatical and psychological subject coincide: “The clock” is the first idea in my consciousness; “fell” is what is said about the clock. But if I hear a crash in the next room and inquire what happened, and get the same answer, subject and predicate are psychologically reversed. I know that something has fallen - that is what we are talking about. “The clock” completes the idea. The sentence could be changed to: “What has fallen is the clock”; then the grammatical and the psychological subject would coincide. In the prologue to his play Duke Ernst von Schwaben, Uhland says: “Grim scenes will pass before you.” Psychologically, “will pass” is the subject. The spectator knows he will see events unfold; the additional idea, the predicate, lies in “grim scenes.” Uhland meant, “What will pass before your eyes is a tragedy.” Any part of a sentence may become the psychological predicate, the carrier of topical emphasis; on the other hand, entirely different meanings may lie hidden behind one grammatical structure. Accord between syntactical and psychological organisation is not as prevalent as we tend to assume - rather, it is a requirement that is seldom met. Not only subject and predicate, but grammatical gender, number, case, tense, degree, etc., have their psychological doubles. A spontaneous utterance, wrong from the point of view of grammar, may have charm and aesthetic value. Absolute correctness is achieved only beyond natural language, in mathematics. 
Our daily speech continually fluctuates between the ideals of mathematical and of imaginative harmony.

We will illustrate the interdependence of the semantic and the grammatical aspects of language by citing two examples that show that changes in formal structure can entail far-reaching changes in meaning.

In translating the fable “La Cigale et la Fourmi,” Krylov substituted a dragonfly for La Fontaine’s grasshopper. In French “grasshopper” is feminine and therefore well suited to symbolise a light-hearted, carefree attitude. The nuance would be lost in a literal translation, since in Russian “grasshopper” is masculine. When he settled for “dragonfly,” which is feminine in Russian, Krylov disregarded the literal meaning in favour of the grammatical form required to render La Fontaine’s thought.

Tjutchev did the same in his translation of Heine’s poem about a fir and a palm. In German fir is masculine and palm feminine, and the poem suggests the love of a man for a woman. In Russian, both trees are feminine. To retain the implication, Tjutchev replaced the fir by a masculine cedar. Lermontov, in his more literal translation of the same poem, deprived it of these poetic overtones and gave it an essentially different meaning, more abstract and generalised. One grammatical detail may, on occasion, change the whole purport of what is said.

Behind words, there is the independent grammar of thought, the syntax of word meanings. The simplest utterance, far from reflecting a constant, rigid correspondence between sound and meaning, is really a process. Verbal expressions cannot emerge fully formed but must develop gradually. This complex process of transition from meaning to sound must itself be developed and perfected. The child must learn to distinguish between semantics and phonetics and understand the nature of the difference. At first he uses verbal forms and meanings without being conscious of them as separate. The word, to the child, is an integral part of the object it denotes. Such a conception seems to be characteristic of primitive linguistic consciousness. We all know the old story about the rustic who said he wasn’t surprised that savants with all their instruments could figure out the size of stars and their course - what baffled him was how they found out their names. Simple experiments show that preschool children “explain” the names of objects by their attributes. According to them, an animal is called “cow” because it has horns, “calf” because its horns are still small, “dog” because it is small and has no horns; an object is called “car” because it is not an animal. When asked whether one could interchange the names of objects, for instance call a cow “ink,” and ink “cow,” children will answer no, “because ink is used for writing, and the cow gives milk.” An exchange of names would mean an exchange of characteristic features, so inseparable is the connection between them in the child’s mind. In one experiment, the children were told that in a game a dog would be called “cow.” Here is a typical sample of questions and answers: “Does a cow have horns?” “Yes.” “But do you not remember that the cow is really a dog? Come now, does a dog have horns?” “Sure, if it is a cow, if it is called cow, it has horns. That kind of dog has to have little horns.”

We can see how difficult it is for children to separate the name of an object from its attributes, which cling to the name when it is transferred like possessions following their owner.

The fusion of the two planes of speech, semantic and vocal, begins to break down as the child grows older, and the distance between them gradually increases. Each stage in the development of word meanings has its own specific interrelation of the two planes. A child’s ability to communicate through language is directly related to the differentiation of word meanings in his speech and consciousness.

To understand this, we must remember a basic characteristic of the structure of word meanings. In the semantic structure of a word, we distinguish between referent and meaning; correspondingly, we distinguish a word’s nominative from its significative function. When we compare these structural and functional relations at the earliest, middle, and advanced stages of development, we find the following genetic regularity: In the beginning, only the nominative function exists, and semantically, only the objective reference; signification independent of naming, and meaning independent of reference, appear later and develop along the paths we have attempted to trace and describe.

Only when this development is completed does the child become fully able to formulate his own thought and to understand the speech of others. Until then, his usage of words coincides with that of adults in its objective reference but not in its meaning.

We must probe still deeper and explore the plane of inner speech lying beyond the semantic plane. We will discuss here some of the data of the special investigation we have made of it. The relationship of thought and word cannot be understood in all its complexity without a clear understanding of the psychological nature of inner speech. Yet, of all the problems connected with thought and language, this is perhaps the most complicated, beset as it is with terminological and other misunderstandings.

The term inner speech, or endophasy, has been applied to various phenomena, and authors argue about different things that they call by the same name. Originally, inner speech seems to have been understood as verbal memory. An example would be the silent recital of a poem known by heart. In that case, inner speech differs from vocal speech only as the idea or image of an object differs from the real object. It was in this sense that inner speech was understood by the French authors who tried to find out how words were reproduced in memory – whether as auditory, visual, motor, or synthetic images. We will see that word memory is indeed one of the constituent elements of inner speech but not all of it.

In a second interpretation, inner speech is seen as truncated external speech - as “speech minus sound” (Mueller) or “sub-vocal speech” (Watson). Bekhterev defined it as a speech reflex inhibited in its motor part. Such an explanation is by no means sufficient. Silent “pronouncing” of words is not equivalent to the total process of inner speech.

The third definition is, on the contrary, too broad. To Goldstein, the term covers everything that precedes the motor act of speaking, including Wundt’s “motives of speech” and the indefinable, non-sensory and non-motor specific speech experience, i.e., the whole interior aspect of any speech activity. It is hard to accept the equation of inner speech with an inarticulate inner experience in which the separate identifiable structural planes are dissolved without trace. This central experience is common to all linguistic activity, and for this reason alone Goldstein’s interpretation does not fit that specific, unique function that alone deserves the name of inner speech. Logically developed, Goldstein’s view must lead to the thesis that inner speech is not speech at all but rather an intellectual and affective-volitional activity, since it includes the motives of speech and the thought that is expressed in words.

To get a true picture of inner speech, one must start from the assumption that it is a specific formation, with its own laws and complex relations to the other forms of speech activity. Before we can study its relation to thought, on the one hand, and to speech, on the other, we must determine its special characteristics and function.

Inner speech is speech for oneself; external speech is speech for others. It would be surprising if such a basic difference in function did not affect the structure of the two kinds of speech. Absence of vocalisation per se is only a consequence of the specific nature of inner speech, which is neither an antecedent of external speech nor its reproduction in memory but is, in a sense, the opposite of external speech. The latter is the turning of thought into words, its materialisation and objectification. With inner speech, the process is reversed: Speech turns into inward thought. Consequently, their structures must differ.

The area of inner speech is one of the most difficult to investigate. It remained almost inaccessible to experiments until ways were found to apply the genetic method of experimentation. Piaget was the first to pay attention to the child’s egocentric speech and to see its theoretical significance, but he remained blind to the most important trait of egocentric speech - its genetic connection with inner speech - and this warped his interpretation of its function and structure. We made that relationship the central problem of our study and thus were able to investigate the nature of inner speech with unusual completeness. A number of considerations and observations led us to conclude that egocentric speech is a stage of development preceding inner speech: Both fulfil intellectual functions; their structures are similar; egocentric speech disappears at school age, when inner speech begins to develop. From all this we infer that one changes into the other.

If this transformation does take place, then egocentric speech provides the key to the study of inner speech. One advantage of approaching inner speech through egocentric speech is its accessibility to experimentation and observation. It is still vocalised, audible speech, i.e., external in its mode of expression, but at the same time inner speech in function and structure. To study an internal process, it is necessary to externalise it experimentally, by connecting it with some outer activity; only then is objective functional analysis possible. Egocentric speech is, in fact, a natural experiment of this type.

This method has another great advantage: Since egocentric speech can be studied at the time when some of its characteristics are waning and new ones forming, we are able to judge which traits are essential to inner speech and which are only temporary, and thus to determine the goal of this movement from egocentric to inner speech, i.e., the nature of inner speech.

Before we go on to the results obtained by this method, we will briefly discuss the nature of egocentric speech, stressing the differences between our theory and Piaget’s. Piaget contends that the child’s egocentric speech is a direct expression of the egocentrism of his thought, which in turn is a compromise between the primary autism of his thinking and its gradual socialisation. As the child grows older, autism recedes and socialisation progresses, leading to the waning of egocentrism in his thinking and speech.

In Piaget’s conception, the child in his egocentric speech does not adapt himself to the thinking of adults. His thought remains entirely egocentric; this makes his talk incomprehensible to others. Egocentric speech has no function in the child’s realistic thinking or activity; it merely accompanies them. And since it is an expression of egocentric thought, it disappears together with the child’s egocentrism. From its climax at the beginning of the child’s development, egocentric speech drops to zero on the threshold of school age. Its history is one of involution rather than evolution. It has no future.

In our conception, egocentric speech is a phenomenon of the transition from interpsychic to intrapsychic functioning, i.e., from the social, collective activity of the child to his more individualised activity - a pattern of development common to all the higher psychological functions. Speech for oneself originates through differentiation from speech for others. Since the main course of the child’s development is one of gradual individualisation, this tendency is reflected in the function and structure of his speech.

The function of egocentric speech is similar to that of inner speech: It does not merely accompany the child’s activity; it serves mental orientation, conscious understanding; it helps in overcoming difficulties; it is speech for oneself, intimately and usefully connected with the child’s thinking. Its fate is very different from that described by Piaget. Egocentric speech develops along a rising, not a declining, curve; it goes through an evolution, not an involution. In the end, it becomes inner speech.

Our hypothesis has several advantages over Piaget’s: It explains the function and development of egocentric speech and, in particular, its sudden increase when the child faces difficulties that demand consciousness and reflection – a fact uncovered by our experiments and one that Piaget’s theory cannot explain. But the greatest advantage of our theory is that it supplies a satisfying answer to a paradoxical situation described by Piaget himself. To Piaget, the quantitative drop in egocentric speech as the child grows older means the withering of that form of speech. If that were so, its structural peculiarities might also be expected to decline; it is hard to believe that the process would affect only its quantity, and not its inner structure. The child’s thought becomes infinitely less egocentric between the ages of three and seven. If the characteristics of egocentric speech that make it incomprehensible to others are indeed rooted in egocentrism, they should become less apparent as that form of speech becomes less frequent; Egocentric speech should approach social speech and become ever more intelligible. Yet what are the facts? Is the talk of a three-year-old harder to follow than that of a seven-year-old? Our investigation established that the traits of egocentric speech that make for inscrutability are at their lowest point at three and at their peak at seven. They develop in a reverse direction to the frequency of egocentric speech. While the latter keeps declining and reaches the point of zero at school age, the structural characteristics become more pronounced.

This throws a new light on the quantitative decrease in egocentric speech, which is the cornerstone of Piaget’s thesis.

What does this decrease mean? The structural peculiarities of speech for oneself and its differentiation from external speech increase with age. What is it that diminishes? Only one of its aspects: vocalisation. Does this mean that egocentric speech as a whole is dying out? We believe that it does not, for how then could we explain the growth of the functional and structural traits of egocentric speech? On the other hand, their growth is perfectly compatible with the decrease of vocalisation - indeed, it clarifies its meaning. The rapid dwindling of vocalisation and the equally rapid growth of the other characteristics are contradictory in appearance only.

To explain this, let us start from an undeniable, experimentally established fact. The structural and functional qualities of egocentric speech become more marked as the child develops. At three, the difference between egocentric and social speech equals zero; At seven, we have speech that in structure and function is totally unlike social speech. A differentiation of the two speech functions has taken place. This is a fact - and facts are notoriously hard to refute.

Once we accept this, everything else falls into place. If the developing structural and functional peculiarities of egocentric speech progressively isolate it from external speech, then its vocal aspect must fade away. This is exactly what happens between three and seven years. With the progressive isolation of speech for oneself, its vocalisation becomes unnecessary and meaningless and, because of its growing structural peculiarities, also impossible. Speech for oneself cannot find expression in external speech. The more independent and autonomous egocentric speech becomes, the poorer it grows in its external manifestations. In the end it separates itself entirely from speech for others, ceases to be vocalised, and thus appears to die out.

But this is only an illusion. To interpret the sinking coefficient of egocentric speech as a sign that this kind of speech is dying out is like saying that the child stops counting when he ceases to use his fingers and starts adding in his head. In reality, behind the symptoms of dissolution lies a progressive development, the birth of a new speech form.

The decreasing vocalisation of egocentric speech denotes a developing abstraction from sound, the child’s new faculty to “think words” instead of pronouncing them. This is the positive meaning of the sinking coefficient of egocentric speech. The downward curve indicates development toward inner speech.

We can see that all the known facts about the functional, structural, and genetic characteristics of egocentric speech point to one thing: It develops in the direction of inner speech. Its developmental history can be understood only as a gradual unfolding of the traits of inner speech.

We believe that this corroborates our hypothesis about the origin and nature of egocentric speech. To turn our hypothesis into a certainty, we must devise an experiment capable of showing which of the two interpretations is correct. What are the data for this critical experiment?

Let us restate the two theories between which we must decide. Piaget believes that egocentric speech stems from the insufficient socialisation of primarily individual speech and that its only development is decrease and eventual death. Its culmination lies in the past. Inner speech is something new brought in from the outside along with socialisation. We believe that egocentric speech stems from the insufficient individualisation of primary social speech. Its culmination lies in the future. It develops into inner speech.

To obtain evidence for one or the other view, we must place the child alternately in experimental situations encouraging social speech and in situations discouraging it, and see how these changes affect egocentric speech. We consider this an experimentum crucis for the following reasons.

If the child’s egocentric talk results from the egocentrism of his thinking and its insufficient socialisation, then any weakening of the social elements in the experimental setup, any factor contributing to the child’s isolation from the group, must lead to a sudden increase in egocentric speech. But if the latter results from an insufficient differentiation of speech for oneself from speech for others, then the same changes must cause it to decrease.

We took as the starting point of our experiment three of Piaget’s own observations: (1) Egocentric speech occurs only in the presence of other children engaged in the same activity, and not when the child is alone; i.e., it is a collective monologue. (2) The child is under the illusion that his egocentric talk, directed to nobody, is understood by those who surround him. (3) Egocentric speech has the character of external speech: It is audible or whispered. These are certainly not chance peculiarities. From the child’s own point of view, egocentric speech is not yet separated from social speech. It occurs under the subjective and objective conditions of social speech and may be considered a correlate of the insufficient isolation of the child’s individual consciousness from the social whole.

In our first series of experiments, we tried to destroy the illusion of being understood. After measuring the child’s coefficient of egocentric speech in a situation similar to that of Piaget’s experiments, we put him into a new situation: Either with deaf-mute children or with children speaking a foreign language. In all other respects the setup remained the same. The coefficient of egocentric speech dropped to zero in the majority of cases, and in the rest to one-eighth of the previous figure, on the average. This proves that the illusion of being understood is not a mere epiphenomenon of egocentric speech but is functionally connected with it. Our results must seem paradoxical from the point of view of Piaget’s theory: The weaker the child’s contact with the group – the less the social situation forces him to adjust his thoughts to others and to use social speech – the more freely should the egocentrism of his thinking and speech manifest itself. But from the point of view of our hypothesis, the meaning of these findings is clear: Egocentric speech, springing from the lack of differentiation of speech for oneself from speech for others, disappears when the feeling of being understood, essential for social speech, is absent.

In the second series of experiments, the variable factor was the possibility of collective monologue. Having measured the child’s coefficient of egocentric speech in a situation permitting collective monologue, we put him into a situation excluding it - in a group of children who were strangers to him, or by himself at a separate table in a corner of the room; or he worked entirely alone, with even the experimenter leaving the room. The results of this series agreed with the first results. The exclusion of the group monologue caused a drop in the coefficient of egocentric speech, though not such a striking one as in the first case - seldom to zero and, on the average, to one-sixth of the original figure. The different methods of precluding collective monologue were not equally effective in reducing the coefficient of egocentric speech. The trend, however, was obvious in all the variations of the experiment. The exclusion of the collective factor, instead of giving full freedom to egocentric speech, depressed it. Our hypothesis was once more confirmed.

In the third series of experiments, the variable factor was the vocal quality of egocentric speech. Just outside the laboratory where the experiment was in progress, an orchestra played so loudly, or so much noise was made, that it drowned out not only the voices of others but the child’s own; in a variant of the experiment, the child was expressly forbidden to talk loudly and allowed to talk only in whispers. Once again the coefficient of egocentric speech went down; the different methods were not equally effective, but the basic trend was invariably present.

The purpose of all three series of experiments was to eliminate those characteristics of egocentric speech that bring it close to social speech. We found that this always led to the dwindling of egocentric speech. It is logical, then, to assume that egocentric speech is a form developing out of social speech and not yet separated from it in its manifestation, though already distinct in function and structure.

The disagreement between us and Piaget on this point will be made quite clear by the following example: I am sitting at my desk talking to a person who is behind me and whom I cannot see; he leaves the room without my noticing it, and I continue to talk, under the illusion that he listens and understands. Outwardly, I am talking with myself and for myself, but psychologically my speech is social. From the point of view of Piaget’s theory, the opposite happens in the case of the child: His egocentric talk is for and with himself; it only has the appearance of social speech, just as my speech gave the false impression of being egocentric. From our point of view, the whole situation is much more complicated than that: Subjectively, the child’s egocentric speech already has its own peculiar function - to that extent, it is independent from social speech; Yet its independence is not complete because it is not felt as inner speech and is not distinguished by the child from speech for others. Objectively, also, it is different from social speech but again not entirely, because it functions only within social situations. Both subjectively and objectively, egocentric speech represents a transition from speech for others to speech for oneself. It already has the function of inner speech but remains similar to social speech in its expression.

The investigation of egocentric speech has paved the way to the understanding of inner speech, while our experiments convinced us that inner speech must be regarded, not as speech minus sound, but as an entirely separate speech function. Its main distinguishing trait is its peculiar syntax. Compared with external speech, inner speech appears disconnected and incomplete.

This is not a new observation. All the students of inner speech, even those who approached it from the behaviouristic standpoint, noted this trait. The method of genetic analysis permits us to go beyond a mere description of it. We applied this method and found that as egocentric speech develops, it shows a tendency toward an altogether specific form of abbreviation: Namely, omitting the subject of a sentence and all words connected with it, while preserving the predicate. This tendency toward predication appears in all our experiments with such regularity that we must assume it to be the basic syntactic form of inner speech.

It may help us to understand this tendency if we recall certain situations in which external speech shows a similar structure. Pure predication occurs in external speech in two cases: Either as an answer or when the subject of the sentence is known beforehand to all concerned. The answer to “Would you like a cup of tea?” is never “No, I do not want a cup of tea” but a simple “No.” Obviously, such a sentence is possible only because its subject is tacitly understood by both parties. To “Has your brother read this book?” no one ever replies, “Yes, my brother has read this book.” The answer is a short “Yes,” or “Yes, he has.” Now let us imagine that several people are waiting for a bus. No one will say, on seeing the bus approach, “The bus for which we are waiting is coming.” The sentence is likely to be an abbreviated “Coming,” or some such expression, because the subject is plain from the situation. Occasionally, however, a shortened sentence causes confusion: The listener may relate the sentence to a subject foremost in his own mind, not the one meant by the speaker. If the thoughts of two people coincide, perfect understanding can be achieved through the use of mere predicates, but if they are thinking about different things they are bound to misunderstand each other.

Having examined abbreviation in external speech, we can now return enriched to the same phenomenon in inner speech, where it is not an exception but the rule. It will be instructive to compare abbreviation in oral, inner, and written speech. Communication in writing relies on the formal meanings of words and requires a much greater number of words than oral speech to convey the same idea. It is addressed to an absent person who rarely has in mind the same subject as the writer. Therefore, it must be fully deployed; Syntactic differentiation is at a maximum, and expressions are used that would seem unnatural in conversation. Griboedov’s “He talks like writing” refers to the droll effect of elaborate constructions in daily speech.

The multifunctional nature of language, which has recently attracted the close attention of linguists, had already been pointed out by Humboldt in relation to poetry and prose – two forms very different in function and in the means they use. Poetry, according to Humboldt, is inseparable from music, while prose depends entirely on language and is dominated by thought. Consequently, each has its own diction, grammar, and syntax. This is a conception of primary importance, although neither Humboldt nor those who further developed his thought fully realised its implications. They distinguished only between poetry and prose, and within the latter between the exchange of ideas and ordinary conversation, i.e., the mere exchange of news or conventional chatter. There are other important functional distinctions in speech. One of them is the distinction between dialogue and monologue: Written and inner speech represent the monologue; oral speech, in most cases, the dialogue.

Dialogue always presupposes in the partners sufficient knowledge of the subject to permit abbreviated speech and, under certain conditions, purely predicative sentences. It also presupposes that each person can see his partners, their facial expressions and gestures, and hear the tone of their voices. We have already discussed abbreviation and will consider here only its auditory aspect, using a classical example from Dostoevski’s The Diary of a Writer to show how much intonation helps the subtly differentiated understanding of a word’s meaning.

Dostoevski relates a conversation of drunks that entirely consisted of one unprintable word: “One Sunday night I happened to walk for some fifteen paces next to a group of six drunken young labourers, and I suddenly realised that all thoughts, feelings and even a whole chain of reasoning could be expressed by that one noun, which is moreover extremely short. One young fellow said it harshly and forcefully, to express his utter contempt for whatever it was they had all been talking about. Another answered with the same noun but in a quite different tone and sense - doubting that the negative attitude of the first one was warranted. A third suddenly became incensed against the first and roughly intruded on the conversation, excitedly shouting the same noun, this time as a curse and obscenity. Here the second fellow interfered again, angry with the third, the aggressor, and restraining him, in the sense of “Now why do you have to butt in? We were discussing things quietly, and here you come and start swearing.” And he told this whole thought in one word, the same venerable word, except that he also raised his hand and put it on the third fellow’s shoulder. All at once a fourth, the youngest of the group, who had kept silent till then, probably having suddenly found a solution to the original difficulty that had started the argument, raised his hand in a transport of joy and shouted . . . Eureka, do you think? I have it? No, not eureka and not I have it; he repeated the unprintable noun, one word, merely one word, but with ecstasy, in a shriek of delight - which was apparently too strong, because the sixth and the oldest, a glum-looking fellow, did not like it and cut the infantile joy of the other one short, addressing him in a sullen, exhortative bass and repeating . . . yes, still the same noun, forbidden in the presence of ladies but which this time clearly meant “What are you yelling yourself hoarse for?”
So, without uttering a single other word, they repeated that one beloved word six times in a row, one after another, and understood one another completely.” [The Diary of a Writer]

Inflection reveals the psychological context within which a word is to be understood. In Dostoevski’s story, it was contemptuous negation in one case, doubt in another, anger in the third. When the context is as clear as in this example, it really becomes possible to convey all thoughts, feelings, and even a whole chain of reasoning by one word.

In written speech, as tone of voice and knowledge of subject are excluded, we are obliged to use many more words, and to use them more exactly. Written speech is the most elaborate form of speech.

Some linguists consider dialogue the natural form of oral speech, the one in which language fully reveals its nature, and monologue to a great extent an artificial form. Psychological investigation leaves no doubt that monologue is indeed the higher, more complicated form, and of later historical development. At present, however, we are interested in comparing them only with regard to the tendency toward abbreviation.

The speed of oral speech is unfavourable to a complicated process of formulation - it does not leave time for deliberation and choice. Dialogue implies immediate unpremeditated utterance. It consists of replies, repartee; it is a chain of reactions. Monologue, by comparison, is a complex formation; the linguistic elaboration can be attended to leisurely and consciously.

In written speech, lacking situational and expressive supports, communication must be achieved only through words and their combinations; this requires the speech activity to take complicated forms - hence the use of first drafts. The evolution from the draft to the final copy reflects our mental process. Planning has an important part in written speech, even when we do not actually write out a draft. Usually we say to ourselves what we are going to write; This is also a draft, though in thought only. As we tried to show in the preceding chapter, this mental draft is inner speech. Since inner speech functions as a draft not only in written but also in oral speech, we will now compare both these forms with inner speech in respect to the tendency toward abbreviation and predication.

This tendency, never found in written speech and only sometimes in oral speech, arises in inner speech always. Predication is the natural form of inner speech; psychologically, it consists of predicates only. It is as much a law of inner speech to omit subjects as it is a law of written speech to contain both subjects and predicates.

The key to this experimentally established fact is the invariable, inevitable presence in inner speech of the factors that facilitate pure predication: We know what we are thinking about - i.e., we always know the subject and the situation. Psychological contact between partners in a conversation may establish a mutual perception leading to the understanding of abbreviated speech. In inner speech, the “mutual” perception is always there, in absolute form; Therefore, a practically wordless “communication” of even the most complicated thoughts is the rule. The predominance of predication is a product of development. In the beginning, egocentric speech is identical in structure with social speech, but in the process of its transformation into inner speech it gradually becomes less complete and coherent as it becomes governed by an almost entirely predicative syntax. Experiments show clearly how and why the new syntax takes hold. The child talks about the things he sees or hears or does at a given moment. As a result, he tends to leave out the subject and all words connected with it, condensing his speech more and more until only predicates are left. The more differentiated the specific function of egocentric speech becomes, the more pronounced are its syntactic peculiarities - simplification and predication. Hand in hand with this change goes decreasing vocalisation. When we converse with ourselves, we need even fewer words than Kitty and Levin did. Inner speech is speech almost without words.

With syntax and sound reduced to a minimum, meaning is more than ever in the forefront. Inner speech works with semantics, not phonetics. The specific semantic structure of inner speech also contributes to abbreviation. The syntax of meanings in inner speech is no less original than its grammatical syntax. Our investigation established three main semantic peculiarities of inner speech.

The first and basic one is the preponderance of the sense of a word over its meaning, a distinction we owe to Paulhan. The sense of a word, according to him, is the sum of all the psychological events aroused in our consciousness by the word. It is a dynamic, fluid, complex whole, which has several zones of unequal stability. Meaning is only one of the zones of sense, the most stable and precise zone. A word acquires its sense from the context in which it appears; in different contexts, it changes its sense. Meaning remains stable throughout the changes of sense. The dictionary meaning of a word is no more than a stone in the edifice of sense, no more than a potentiality that finds diversified realisation in speech.

The last words of the previously mentioned fable by Krylov, “The Dragonfly and the Ant,” are a good illustration of the difference between sense and meaning. The words “Go and dance” have a definite and constant meaning, but in the context of the fable they acquire a much broader intellectual and affective sense. They mean both “Enjoy yourself” and “Perish.” This enrichment of words by the sense they gain from the context is the fundamental law of the dynamics of word meanings. A word in a context means both more and less than the same word in isolation: More, because it acquires new content; less, because its meaning is limited and narrowed by the context. The sense of a word, says Paulhan, is a complex, mobile, protean phenomenon; it changes in different minds and situations and is almost unlimited. A word derives its sense from the sentence, which in turn gets its sense from the paragraph, the paragraph from the book, the book from all the works of the author.

Paulhan rendered a further service to psychology by analysing the relation between word and sense and showing that they are much more independent of each other than word and meaning. It has long been known that words can change their sense. Recently it was pointed out that sense can change words or, better, that ideas often change their names. Just as the sense of a word is connected with the whole word, and not with its single sounds, the sense of a sentence is connected with the whole sentence, and not with its individual words. Therefore, a word may sometimes be replaced by another without any change in sense. Words and sense are relatively independent of each other.

In inner speech, the predominance of sense over meaning, of sentence over word, and of context over sentence is the rule.

This leads us to the other semantic peculiarities of inner speech. Both concern word combination. One of them is rather like agglutination, a way of combining words fairly frequent in some languages and comparatively rare in others. German often forms one noun out of several words or phrases. In some primitive languages, such adhesion of words is a general rule. When several words are merged into one word, the new word not only expresses a rather complex idea but designates all the separate elements contained in that idea. Because the stress is always on the main root or idea, such languages are easy to understand. The egocentric speech of the child displays some analogous phenomena. As egocentric speech approaches inner speech, the child uses agglutination frequently as a way of forming compound words to express complex ideas.

The third basic semantic peculiarity of inner speech is the way in which senses of words combine and unite - a process governed by different laws from those governing combinations of meanings. When we observed this singular way of uniting words in egocentric speech, we called it “influx of sense.” The senses of different words flow into one another - literally “influence” one another - so that the earlier ones are contained in, and modify, the later ones. Thus, a word that keeps recurring in a book or a poem sometimes absorbs all the variety of sense contained in it and becomes, in a way, equivalent to the work itself. The title of a literary work expresses its content and completes its sense to a much greater degree than does the name of a painting or of a piece of music. Titles like Don Quixote, Hamlet, and Anna Karenina illustrate this very clearly - the whole sense of a work is contained in one name. Another excellent example is Gogol’s Dead Souls. Originally, the title referred to dead serfs whose names had not yet been removed from the official lists and who could still be bought and sold as if they were alive. It is in this sense that the words are used throughout the book, which is built up around this traffic in the dead. But through their intimate relationship with the work as a whole, these two words acquire a new significance, an infinitely broader sense. When we reach the end of the book, “Dead Souls” means to us not so much the defunct serfs as all the characters in the story, who are alive physically but dead spiritually.

In inner speech, the phenomenon reaches its peak. A single word is so saturated with sense that many words would be required to explain it in external speech. No wonder that egocentric speech is incomprehensible to others. Watson says that inner speech would be incomprehensible even if it could be recorded. Its opaqueness is further increased by a related phenomenon that, incidentally, Tolstoy noted in external speech: In Childhood, Adolescence, and Youth, he describes how between people in close psychological contact words acquire special meanings understood only by the initiated. In inner speech, the same kind of idiom develops – the kind that is difficult to translate into the language of external speech.

With this we will conclude our survey of the peculiarities of inner speech, which we first observed in our investigation of egocentric speech. In looking for comparisons in external speech, we found that the latter already contains, potentially at least, the traits typical of inner speech: Predication, decreased vocalisation, preponderance of sense over meaning, agglutination, etc., appear under certain conditions also in external speech. This, we believe, is the best confirmation of our hypothesis that inner speech originates through the differentiation of egocentric speech from the child’s primary social speech.

All our observations indicate that inner speech is an autonomous speech function. We can confidently regard it as a distinct plane of verbal thought. It is evident that the transition from inner to external speech is not a simple translation from one language into another. It cannot be achieved by merely vocalising silent speech. It is a complex, dynamic process involving the transformation of the predicative, idiomatic structure of inner speech into syntactically articulated speech intelligible to others.

We can now return to the definition of inner speech that we proposed before presenting our analysis. Inner speech is not the interior aspect of external speech - it is a function in itself. It remains speech, i.e., thought connected with words. But while in external speech thought is embodied in words, in inner speech words die as they bring forth thought. Inner speech is to a large extent thinking in pure meanings. It is a dynamic, shifting, unstable thing, fluttering between word and thought, the two more or less stable, more or less firmly delineated components of verbal thought. Its true nature and place can be understood only after examining the next plane of verbal thought, the one still more inward than inner speech.

That plane is thought itself. As we have said, every thought creates a connection, fulfils a function, solves a problem. The flow of thought is not accompanied by a simultaneous unfolding of speech. The two processes are not identical, and there is no rigid correspondence between the units of thought and speech. This is especially obvious when a thought process miscarries - when, as Dostoevski put it, a thought “will not enter words.” Thought has its own structure, and the transition from it to speech is no easy matter. The theatre faced the problem of the thought behind the words before psychology did. In teaching his system of acting, Stanislavsky required the actors to uncover the “subtext” of their lines in a play. In Griboedov’s comedy Woe from Wit, the hero, Chatsky, says to the heroine, who maintains that she has never stopped thinking of him, “Thrice blessed who believes. Believing warms the heart.” Stanislavsky interpreted this as “Let us stop this talk”; but it could just as well be interpreted as “I do not believe you. You say it to comfort me,” or as “Don’t you see how you torment me? I wish I could believe you. That would be bliss.” Every sentence that we say in real life has some kind of subtext, a thought hidden behind it. In the examples we gave earlier of the lack of coincidence between grammatical and psychological subject and predicate, we did not pursue our analysis to the end. Just as one sentence may express different thoughts, one thought may be expressed in different sentences. For instance, “The clock fell,” in answer to the question “Why did the clock stop?” could mean “It is not my fault that the clock is out of order; it fell.” The same thought, self-justification, could take the form of “It is not my habit to touch other people’s things. I was just dusting here,” or a number of others.

Thought, unlike speech, does not consist of separate units. When I wish to communicate the thought that today I saw a barefoot boy in a blue shirt running down the street, I do not see every item separately: the boy, the shirt, its blue colour, his running, the absence of shoes. I conceive of all this in one thought, but I put it into separate words. A speaker often takes several minutes to disclose one thought. In his mind the whole thought is present at once, but in speech it has to be developed successively. A thought may be compared with a cloud shedding a shower of words. Precisely because thought does not have its automatic counterpart in words, the transition from thought to word leads through meaning. In our speech, there is always the hidden thought, the subtext. Because a direct transition from thought to word is impossible, there have always been laments about the inexpressibility of thought: "How shall the heart express itself? How shall another understand?"

Direct communication between minds is impossible, not only physically but psychologically. Communication can be achieved only in a roundabout way. Thought must pass first through meanings and then through words.

We come now to the last step in our analysis of verbal thought. Thought itself is engendered by motivation, i.e., by our desires and needs, our interests and emotions. Behind every thought there is an affective-volitional tendency, which holds the answer to the last "why" in the analysis of thinking. A true and full understanding of another's thought is possible only when we understand its affective-volitional basis. We will illustrate this by an example already used: the interpretation of parts in a play. Stanislavsky, in his instructions to actors, listed the motives behind the words of their parts.

To understand another's speech, it is not sufficient to understand his words - we must understand his thought. But even that is not enough - we must also know its motivation. No psychological analysis of an utterance is complete until that plane is reached.

In the end, verbal thought appeared as a complex, dynamic entity, and the relation of thought and word within it as a movement through a series of planes. Our analysis followed the process from the outermost to the innermost plane. In reality, the development of verbal thought takes the opposite course: from the motive that engenders a thought to the shaping of the thought, first in inner speech, then in meanings of words, and finally in words. It would be a mistake, however, to imagine that this is the only road from thought to word. The development may stop at any point in its complicated course; an infinite variety of movements back and forth, of ways still unknown to us, is possible. A study of these manifold variations lies beyond the scope of our present task.

Here we have wished to study the inner workings of thought and speech, hidden from direct observation. Meaning and the whole inward aspect of language, the side turned toward the person, not toward the outer world, have been so far an almost unknown territory. No matter how they were interpreted, the relations between thought and word were always considered constant, established forever. Our investigation has shown that they are, on the contrary, delicate, changeable relations between processes, which arise during the development of verbal thought. We did not intend to, and could not, exhaust the subject of verbal thought. We tried only to give a general conception of the infinite complexity of this dynamic structure - a conception starting from experimentally documented facts.

To association psychology, thought and word were united by external bonds, similar to the bonds between two nonsense syllables. Gestalt psychology introduced the concept of structural bonds but, like the older theory, did not account for the specific relations between thought and word. All the other theories grouped themselves around two poles - either the behaviourist concept of thought as speech minus sound or the idealistic view, held by the Wuerzburg school and Bergson, that thought could be "pure," unrelated to language, and that it was distorted by words. Tjutchev's "A thought once uttered is a lie" could well serve as an epigraph for the latter group. Whether inclining toward pure naturalism or extreme idealism, all these theories have one trait in common - their antihistorical bias. They study thought and speech without any reference to their developmental history.

Only a historical theory of inner speech can deal with this immense and complex problem. The relation between thought and word is a living process; thought is born through words. A word devoid of thought is a dead thing, and a thought unembodied in words remains a shadow. The connection between them, however, is not a preformed and constant one. It emerges in the course of development, and it evolves. To the Biblical "In the beginning was the Word," Goethe makes Faust reply, "In the beginning was the deed." The intent here is to detract from the value of the word, but we can accept this version if we emphasise it differently: In the beginning was the deed. The word was not the beginning - action was there first; the word is the end of development, crowning the deed.

We cannot close our study without mentioning the perspectives that our investigation opens. We studied the inward aspects of speech, which were as unknown to science as the other side of the moon. We showed that a generalised reflection of reality is the basic characteristic of words. This aspect of the word brings us to the threshold of a wider and deeper subject - the general problem of consciousness. Thought and language, which reflect reality in a way different from that of perception, are the key to the nature of human consciousness. Words play a central part not only in the development of thought but in the historical growth of consciousness as a whole. A word is a microcosm of human consciousness.

The hermetic tradition has long been concerned with the relationship between the inner world of our consciousness and the outer world of nature, between the microcosm and the macrocosm, the below and the above, the material and the spiritual, the centric and the peripheral. The hermetic world view held by such figures as Robert Fludd conceived of a great chain of being linking our inner spark of consciousness with all the facets of the Great World. They were granted a view of the Platonic metaphysical clockwork, as it were, through which our inner world was linked by means of a hierarchy of beings and planes to the highest unity of the Divine.

This view, though comforting, is philosophically unsound, and the developments in thought since the early 17th century have made such a hermetic world view seem untenable and philosophically naive. It is impossible to argue the case for such a hermetic metaphysic with anyone who has had philosophical training, for they will quickly and mercilessly reveal deep philosophical contradictions in this world view.

So do we now have to abandon such a beautiful and spiritual world view and adopt the prevailing reductionist materialist conception of the world that has become accepted in the intellectual tradition of the West?

I am not so sure. There still remains the problem of our consciousness and its relationship to our material form - the Mind / Brain problem. Behavioural psychologists such as Skinner tried to reduce this to one level - the material brain - by viewing mental or consciousness events from the outside as merely stimulus-response loops. This simplistic view works well for basic reflex actions - "I itch therefore I scratch" - but dissolves into absurdity when applied to any real act of the creative intellect or artistic imagination. Skinner's determinism collapses when confronted with trying to explain the creative source of our consciousness revealing itself in an artist at work or a mathematician discovering through his thinking a new property of an abstract mathematical system. The psychologists' attempts to reduce the mind/brain problem to a merely material one of neurophysiology obviously failed. The idea that consciousness is merely a secretion or manifestation of a complex net of electrical impulses working within the mass of cells in our brain is now discredited. The advocates of this view are strongly motivated by a desire to reduce the world to one level, to get rid of the necessity for "consciousness," "mind" or "spirit" as a real facet of the world.

This materialistic determinism in which everything in the world (including the phenomenon of consciousness) can be reduced to simple interactions on a physical/chemical level, belongs really to the nineteenth century scientific landscape. Nineteenth century science was founded upon a "Newtonian Absolute Physics" which provided a description of the world as an interplay of forces obeying immutable laws and following a predetermined pattern. This is the "billiard ball" view of the world - one in which, provided we are given the initial state of the system (the layout of the balls on the table, and the exact trajectory, momentum and other parameters of the cue ball, etc.) then theoretically the exact layout after each interaction can be precisely calculated to absolute precision. All could be reduced to the determinate interplay of matter obeying the immutable laws of physics. The concept of the "spiritual" was unnecessary, even "mind" was dispensable, and "God" of course had no place in this scheme of things.

This comfortably solid "Newtonian" world view of the materialists has however been entirely undermined by the new physics of the twentieth century, and in particular by Quantum Theory. Physicists investigating the properties of sub-atomic matter found that the deterministic Newtonian absolutism broke down at the foundation level of matter. An element of probability had to be introduced into the physicists' calculations, and each sub-atomic event was itself inherently unpredictable - one could only ascribe a probability to the outcome. The simple billiard ball model collapsed at the sub-atomic level. For if the billiard table was intended as a picture of a small region of space on the atomic scale and each ball was to be a particle (an electron, proton, or neutron, etc.), then physicists came to realise that this model could not represent reality on that level. For in Quantum theory one cannot define the position and momentum of a particle at the same moment. As soon as we establish the parameters of motion of a body, its position becomes uncertain and can only be described mathematically as a wave of probability. Our billiard table dissolves into a fluid, ever-moving, undulating surface, with each ball at one moment focussed to a point, then at another dissolving and spreading itself out over an area of the space of the table. Trying to play billiards at this sub-atomic level would be rather difficult.

In the Quantum picture of the world, each individual event cannot be determined exactly, but has to be described by a wave of probability. There is a kind of polarity between the position and momentum of any particle, in which they cannot be simultaneously determined. This was not a failing of experimental method but a property of the kinds of mathematical structures that physicists have to use to describe this realm of the world. The famous relation of Quantum theory embodying Heisenberg's Uncertainty Principle is: (uncertainty in position) x (uncertainty in momentum) ≥ Planck's constant / 4π, with an analogous relation linking the uncertainty in energy and the uncertainty in time.

Thus if we try to fix the position of the particle (i.e., reduce the uncertainty in its position to a small factor) then as a consequence of this relation the uncertainty in the momentum must increase to balance it, and therefore we cannot find a value for the momentum of the particle simultaneously with fixing its position. Planck's constant being very small means that these effects only become dominant on the extremely small scale, within the realm of the atom.
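As a rough numerical illustration of these scales (my sketch, not part of the original argument), the energy-time form of the relation, ΔE x Δt ≈ ħ/2, shows how much energy uncertainty accompanies a very short time interval:

```python
# Order-of-magnitude sketch using the energy-time uncertainty relation,
# dE * dt ~ hbar / 2 (an illustration; the constants are CODATA values).
HBAR = 1.054571817e-34     # reduced Planck constant, J*s
EV = 1.602176634e-19       # one electron-volt in joules

def borrowable_energy(dt: float) -> float:
    """Rough energy scale (joules) associated with a time interval dt (s)."""
    return HBAR / (2.0 * dt)

# For the ~1e-16 s electronic time scale discussed later in the essay:
dE = borrowable_energy(1e-16)   # joules
dE_in_ev = dE / EV              # a few electron-volts
```

On this crude estimate an interval of 10 to the power -16 of a second corresponds to a few electron-volts, roughly the scale of electronic transitions in molecules; shorter intervals allow proportionally larger energies.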

So we see that the Quantum picture of reality has at its foundation a non-deterministic view of the fundamental building blocks of matter. Of course, when dealing with large masses of particles these quantum indeterminacies effectively cancel each other out, and physicists can determine and predict the state of large systems. Obviously planets, suns and galaxies, being composed of vast numbers of particles, do not exhibit any uncertainty in their positions and energies: when we look at such a large aggregate as a totality, the total quantum uncertainty is effectively reduced to zero, and in respect of their large-scale properties they can be treated as deterministic systems.

Thus on the large scale we can effectively apply a deterministic physics, but when we wish to look in detail at the properties of the sub-atomic realm, lying at the root and foundation of our world, we must enter a domain of quantum uncertainties and find the neat ordered picture dissolving into a sea of ever flowing forces that we cannot tie down or set into fixed patterns.

Some people when faced with this picture of reality find comfort in dismissing the quantum world as having little to do with the "real world" of appearances. We do not live within the sub-atomic level after all. However, it does spill out into our outer world. Most of the various electronic devices of the past decades rely on the quantum tunnelling effect in transistors and silicon chips. The revolution in quantum physics has begun to influence the life sciences, and biologists and botanists are beginning to come up against quantum events as the basis of living systems, in the structure of complex molecules in the living tissues and membranes of cells for example. When we look at the blue of the sky, we are looking at a phenomenon only recently understood through quantum theory.

Although the Quantum picture of reality might seem strange indeed, I believe the picture it presents of the foundations of the material world - the ever flowing sea of forces metamorphosing and interacting through the medium of "virtual" or quantum messenger particles - has certain parallels with the nature of our consciousness.

I believe that if we try to examine the nature of our consciousness we will find at its basis it exhibits "quantum" like qualities. Seen from a distant, large scale and external perspective, we seem to be able to structure our consciousness in an exact and precise way, articulating thoughts and linking them together into long chains of arguments and intricate structures. Our consciousness can build complex images through its activity and seems to have all the qualities of predictability and solidity. The consciousness of a talented architect is capable of designing and holding within itself an image of large solid structures such as great cathedrals or public buildings. A mathematician is capable of inwardly picturing an abstract mathematical system, deriving its properties from a set of axioms.

In this sense our consciousness might appear as an ordered and deterministic structure, capable of behaving like and being explicable in the same terms as other large scale structures in the world. However, this is not so. For if we through introspection try to examine the way in which we are conscious, in a sense to look at the atoms of our consciousness, this regular structure disappears. Our consciousness does not actually work in such an ordered way. We only nurture an illusion if we try to hold to the view that our consciousness is fixed by an ordered deterministic structure. True, we can create the large scale designs of the architect, the abstract mathematical systems, a cello concerto, but anyone who has built such structures within their consciousness knows that this is not achieved by a linear deterministic route.

Our consciousness is at its root a maverick, ever moving from one perception, feeling or thought to another. We can never hold it still or focus it at a point for long. Like the quantum nature of matter, the more we try to hold our consciousness to a fixed point, the greater the uncertainty in its energy will become. So when we focus and narrow our consciousness to a fixed centre, it is all the more likely to jump suddenly, with a great rush of energy, to some seemingly unrelated aspect of our inner life. We all have such experiences each moment of the day. In our daily work we try to focus our mind upon some problem, only to experience a sudden shift to another domain in ourselves; another image or emotional current intrudes, then vanishes again, like an ephemeral virtual particle in quantum theory.

Those who begin to work upon their consciousness through some kinds of meditative exercises will experience these quantum uncertainties in the field of consciousness in a strong way.

In treating our consciousness as if it were a digital computer or deterministic machine after the model of 19th century science, I believe we foster a limited and false view of our inner world. We must now take the step toward a quantum view of consciousness, recognising that at its base and root our consciousness behaves like the ever flowing sea of the sub-atomic world. The ancient hermeticists saw consciousness as the "Inner Mercury." Those who have experienced the paradoxical way in which the metal Mercury is both dense and metallic and yet so elusive, flowing and breaking up into small globules, and just as easily coming together again, will see how perceptive the alchemists were of the inner nature of consciousness in choosing this analogy. Educators who treat the consciousness of children as if it were a filing cabinet to be filled with ordered arrays of knowledge are hopelessly wrong.

Let us now consider how this view of consciousness bears on the mind/brain problem. The great difficulty in developing a theory of the way in which consciousness/mind is embodied in the activity of the brain has, I believe, arisen out of the erroneous attempt to press a deterministic view onto our brain activity. Skinner and the behaviourist psychologists attempted to picture the activity of the brain as a computer in which each cell behaved as an input/output device or a complex flip-flop. They saw nerve cells with their axons (output fibres) and dendrites (input fibres) being linked together into complex networks. An electrical impulse travelling onto a dendrite made a cell 'fire' and sent an impulse out along its axon, so setting another nerve cell into action. The resulting patterns of nerve impulses constituted a reflex action, an impulse to move a muscle, a thought, a feeling, an intuitive experience. All could be reduced to the behaviour of this web of axons and dendrites of the nerve cells.

This simplistic picture, of course, was insufficient to explain even the behaviour of creatures like worms with primitive nervous systems, and in recent years this approach has largely been abandoned as it is becoming recognised that these events on the membranes of nerve cells are often triggered by shifts in the energy levels of sub-atomic particles such as electrons. In fact, at the root of such interactions lie quantum events, and the activity of the brain must now be seen as reflecting these quantum events.

The brain can no longer be seen as a vast piece of organic clockwork, but as a subtle device amplifying quantum events. If we trace a nerve impulse down to its root, there lies a quantum uncertainty, a sea of probability. So just how is it that this sea of probability can cast up such ordered structures and systems as the conception of a cello concerto or abstract mathematical entities? Perhaps here we may glimpse a way in which "spirit" can return into our physics.

The inner sea of quantum effects in our brain is in some way coupled to our ever flowing consciousness. When our consciousness focusses to a point, and we concentrate on some abstract problem or outer phenomenon, the physical events in our brain, the pattern of impulses, shifts in some ordered way. In a sense, the probability waves of a number of quantum systems in different parts of the brain, are brought into resonance, and our consciousness is able momentarily to create an ordered pattern that manifests physically through the brain. The thought, feeling, perception is momentarily earthed in physical reality, brought from the realm of the spiritual potential into outer actuality. This focussed ordering of the probability waves of many quantum systems requires an enormous amount of energy, but this can be borrowed in the quantum sense for a short instant of time. Thus we have through this quantum borrowing a virtual quantum state that is the physical embodiment of a thought, feeling, etc. However, as this can only be held for a short time, the quantum debt must be paid and the point of our consciousness is forced to jump to another quantum state, perhaps in another region of the brain. Thus our thoughts are jumbled up with emotions, perceptions, fantasy images.

The central point within our consciousness, our "spirit" in the hermetic sense, can now be seen as an entity that can work to control quantum probabilities. To our "spirit" the brain is a quantum sea providing a rich realm in which it can incarnate and manifest patterns down into the electrical/chemical impulses of the nervous system. (It has been claimed that the number of possible interconnections in our brains far exceeds the number of atoms in the whole universe - so in this sense the microcosm truly mirrors the macrocosm!) Our "spirit" borrows quantum energy momentarily to press a certain order into this sea, an order that manifests as a thought, emotion, etc. Such an ordered state can only exist momentarily, before our spirit or point of consciousness is forced to jump and move to other regions of the brain, where at that moment the pattern of probability waves for the particles in those nerve cells can reflect the form with which our spirit is trying to work.

This quantum borrowing to create regular patterns of probability waves is bought at a high price, in that a degree of disorder must inevitably arise whenever the spirit tries to focus and reflect a linked sequential chain of patterns into the brain (such as we would experience as a logical chain of thought or an inward picture of some elaborate structure). Thus it is not surprising that our consciousness sometimes comes adrift and jumps about in a seemingly chaotic way. The quantum borrowing might also be behind our need for sleep and dream, allowing the physical brain to rid itself of the shadowy echoes of the patterns pressed into it during waking consciousness. Dreaming may be that point in a cycle where consciousness and its vehicle interpenetrate and flow together, allowing the patterns and waves of probability to appear without any attempt to focus them to a point. In dream and sleep we experience our point of consciousness dissolving, decoupling and defocussing.

The central point of our consciousness, when actively thinking or feeling, must jump around the sea of patterns in our brain. (It is well known from neurophysiology that function cannot be located at a certain point in the brain, but that different areas and groups of nerve cells can take on a variety of different functions.) We all experience this when in meditation we merely let our consciousness move as it will. Then we come to sense the elusive, mercurial, eternal movement of the point of our consciousness within our inner space. You will find it a powerful and convincing experience if you try in meditation to follow the point of your consciousness moving within the space of your skull. Many religious traditions teach methods for experiencing this inner point of spirit.

I believe the movement of this point of consciousness, which appears as a pattern of probability waves in the quantum sea, must occur in extremely short segments of time, of necessity shorter than the time an electron takes to move from one state to another within the molecular structure of the nerve cell membranes. We are thus dealing in time scales significantly less than 10 to the power -16 of a second, and possibly down to 10 to the power -43 of a second. During such short periods of time, the Heisenberg Uncertainty Principle that lies at the basis of quantum theory means that this central spark of consciousness can borrow a large amount of energy, which explains how it can bring a large degree of ordering into a pattern. Although our point of consciousness lives at this enormously fast speed, our brain, which transforms this into a pattern of electro/chemical activity, runs at a much slower rate. Between creating each pattern our spark of consciousness must wait almost an eternity for this to be manifested on the physical level. Perhaps this may account for the sense we all have sometimes of taking an enormous leap in consciousness, of travelling through vast realms of ideas, or flashes of images, in what is only a fleeting moment.

At around 10 to the power -43 of a second, time itself becomes quantized; that is, it appears as discontinuous particles of time, for there is no way in which time can manifest in quantities less than 10 to the power -43 of a second (the so-called Planck time). For here the borrowed quantum energies distort the fabric of space, turning it back upon itself. Time itself must have a stop. At such short intervals the available energies are enormous enough to create virtual black holes and wormholes in space-time, and at this level we have only a sea of quantum probabilities - the so-called Quantum Foam. Contemporary physics suggests that through these virtual wormholes in space-time there are links with all time past and future, and through the virtual black holes even with parallel universes.
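The figure of 10 to the power -43 of a second can be checked by computing the Planck time from the fundamental constants, t_P = sqrt(ħG/c^5). A quick sketch of that arithmetic (my verification, using standard CODATA values):

```python
import math

# Compute the Planck time, t_P = sqrt(hbar * G / c**5), from CODATA values.
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # Newtonian gravitational constant, m^3 kg^-1 s^-2
C = 299792458.0          # speed of light in vacuum, m/s

planck_time = math.sqrt(HBAR * G / C**5)   # about 5.4e-44 s
```

The result, about 5.4 x 10^-44 s, is the scale the essay rounds to 10 to the power -43 of a second.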

It must be somewhat above this level that our consciousness works, weaving probability waves into patterns and incarnating them in the receptive structure of our brains. Our being or spirit lives in this Quantum Foam, which is thus the Eternal Now, infinite in extent and a plenum of all possibilities. The patterns of everything that has been, that is now, and that will come to be exist latently in this quantum foam. Perhaps this is the realm through which the mystics stepped into timelessness, the eternal present, and sensed the omnipotence and omniscience of the spirit.

I believe that these exciting discoveries of modern physics could be the basis for a new view of consciousness and the way it is coupled to our physical nature in the brain. (Indeed, one of the fascinating aspects of Quantum theory which puzzles and mystifies contemporary physicists is the way in which their quantum description of matter requires that they recognise the consciousness of the observer as a factor in certain experiments. This enigma has caused not a few physicists to take an interest in spirituality, especially inclining them to eastern traditions like Taoism or Buddhism, and in time I hope that perhaps even the hermetic traditions might prove worthy of their interest.)

An important experiment carried out as recently as the summer of 1982 by the French physicist Alain Aspect demonstrated the violation of Bell's inequalities, showing that physicists cannot get round the Uncertainty Principle by appealing to local hidden variables, and that the act of observation cannot be divorced from the events observed. This experiment (in disproving the separability of quantum measurements) has confirmed what Einstein, Bohr and Heisenberg were only able to debate philosophically - that with quantum theory we have to leave behind our naive picture of reality as some unvarying clockwork structure. We are challenged by quantum theory to build new ways in which to picture reality, a physics, moreover, in which consciousness plays a central role, in which the observer is inextricably interwoven in the fabric of reality.

In a sense it may now be possible to build a new model of quantum consciousness, compatible with contemporary physics and which allows a space for the inclusion of the hermetic idea of the spirit. It may be that science has taken a long roundabout route through the reductionist determinism of the 19th century and returned to a more hermetic conception of our inner world.

In this short essay, incompletely argued though it may be, I hope I have at least presented some of the challenging ideas that lie behind the seeming negativity of our present age. For behind the hopelessness and despair of our times we stand on the brink of a great breakthrough to a new recognition of the vast spiritual depths that live within us all as human beings.

The idea that people may create devices that are conscious is known as artificial consciousness (AC). This is an ancient idea, perhaps dating back to the ancient Greek Promethean myth in which conscious people were supposedly manufactured from clay, pottery being an advanced technology in those days. In modern science fiction artificial people or conscious beings are described as being manufactured from electronic components. The idea of artificial consciousness (also known as machine consciousness (MC) or synthetic consciousness) is an interesting philosophical problem in the twenty-first century because, with increased understanding of genetics, neuroscience and information processing, it may soon be possible to create an entity that is conscious. It might be possible biologically to create a being by manufacturing a genome that had the genes necessary for a human brain, and to inject this into a suitable host germ cell. Such a creature, when implanted and born from a suitable womb, would very possibly be conscious and artificial. But what properties of this organism would be responsible for its consciousness? Could such a being be made from non-biological components? Could the techniques used in the design of computers be adapted to create a conscious entity? Would it ever be ethical to do such a thing? Neuroscience hypothesizes that consciousness is the synergy generated by the inter-operation of various parts of our brain, what have come to be called the neuronal correlates of consciousness, or NCC. The brain seems to do this while avoiding the problem described in the Homunculus fallacy and overcoming the problems described below in the section on the nature of consciousness. A quest for proponents of artificial consciousness is therefore to manufacture a machine to emulate this inter-operation, which no one yet claims fully to understand.

Consciousness is described at length in the consciousness article in Wikipedia. Informally, according to naive realism and direct realism, we perceive things in the world directly and our brains merely perform processing. On the other hand, according to indirect realism and dualism, our brains contain data about the world obtained by processing, but what we perceive is some sort of mental model or state that appears to overlay physical things as a result of projective geometry (such as the point of observation in Rene Descartes' dualism). Which of these general approaches to consciousness is correct has not been resolved and is the subject of fierce debate. The theory of direct perception is problematical because it would seem to require some new physical theory that allows conscious experience to supervene directly on the world outside the brain. On the other hand, if we perceive things indirectly, via a model of the world in our brains, then some new physical phenomenon, other than the endless further flow of data, would be needed to explain how the model becomes experience. If we perceive things directly, self-awareness is difficult to explain, because one of the principal reasons for proposing direct perception is to avoid Ryle's regress, where internal processing becomes an infinite loop or recursion. The belief in direct perception also demands that we cannot 'really' be aware of dreams, imagination, mental images or any inner life, because these would involve recursion. Self-awareness is less problematic for entities that perceive indirectly because, by definition, they are perceiving their own state. However, as mentioned above, proponents of indirect perception must suggest some phenomenon, physical or otherwise, that prevents Ryle's regress. If we perceive things indirectly, then self-awareness might result from the extension of experience in time described by Immanuel Kant, William James and Descartes.
Unfortunately this extension in time may not be consistent with our current understanding of physics.

Information processing consists of encoding a state, such as the geometry of an image, on a carrier such as a stream of electrons, and then submitting this encoded state to a series of transformations specified by a set of instructions called a program. In principle the carrier could be anything, even steel balls or onions, and the machine that implements the instructions need not be electronic: it could be mechanical or fluidic. Digital computers implement information processing. From the earliest days of digital computers people have suggested that these devices may one day be conscious; one of the earliest workers to consider this idea seriously was Alan Turing. The Wikipedia article on Artificial Intelligence (AI) considers this problem in depth. If technologists were limited to the principles of digital computing when creating a conscious entity, they would face the problems associated with the philosophy of strong AI. The most serious problem is John Searle's Chinese room argument, which purports to demonstrate that the contents of an information processor have no intrinsic meaning - at any moment they are just a set of electrons or steel balls etc. Searle's objection does not convince those who believe in direct perception, because they maintain that 'meaning' is only to be found in the objects of perception, which they believe is the world itself. The objection is also countered by the concept of emergence, in which it is proposed that some unspecified new physical phenomenon arises in very complex processors as a result of their complexity. It is interesting that the misnomer digital sentience is sometimes used in the context of artificial intelligence research. Sentience means the ability to feel or perceive in the absence of thoughts, especially inner speech; it draws attention to the way that conscious experience is a state rather than a process that might occur in processors.
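The carrier-independence described above can be sketched in a few lines of code. This is an illustrative sketch of my own (the function names and the toy "image" are invented for the example, not taken from any source): a state is encoded on a carrier - here a Python list, though it could just as well be steel balls - and then submitted to a series of transformations specified by a program.

```python
# Illustrative sketch: information processing as carrier-independent
# transformation of an encoded state. The "carrier" is a Python list,
# but the same program could in principle run on any physical medium.

def encode(image_rows):
    """Encode a 2-D black/white image as a flat list of 0s and 1s."""
    return [pixel for row in image_rows for pixel in row]

def invert(state):
    """One instruction: flip every bit of the encoded state."""
    return [1 - pixel for pixel in state]

def run_program(state, instructions):
    """Apply a series of transformations, as a stored program would."""
    for instruction in instructions:
        state = instruction(state)
    return state

image = [[0, 1], [1, 0]]
state = encode(image)                 # the encoded state: [0, 1, 1, 0]
result = run_program(state, [invert])
print(result)                         # [1, 0, 0, 1]
```

Nothing in the transformations refers to what the bits mean - which is precisely the point of Searle's objection that the contents of such a processor have no intrinsic meaning.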

The debate about whether a machine could be conscious under any circumstances is usually described as the conflict between physicalism and dualism. Dualists believe that there is something nonphysical about consciousness, while physicalists hold that all things are physical. Those who believe that consciousness is physical are not limited to those who hold that consciousness is a property of encoded information on carrier signals. Several indirect realist philosophers and scientists have proposed that, although information processing might deliver the content of consciousness, the state that is consciousness may be due to some other physical phenomenon. The eminent neurologist Wilder Penfield was of this opinion, and scientists such as Arthur Stanley Eddington, Roger Penrose, Hermann Weyl, Karl Pribram and Henry Stapp, among many others, have also proposed that consciousness involves physical phenomena more subtle than simple information processing. Even some of the most ardent supporters of consciousness in information processors, such as Daniel Dennett, suggest that some new, emergent, scientific theory may be required to account for consciousness. As was mentioned above, neither the ideas that involve direct perception nor those that involve models of the world in the brain seem to be compatible with current physical theory. It seems that new physical theory may be required, and the possibility of dualism is not, as yet, ruled out.

Some technologists working in the field of artificial consciousness (AC) are trying to create devices that appear conscious. These devices might simulate consciousness or actually be conscious, but provided they appear conscious, the desired result has been achieved. In computer science, the term digital sentience is used to describe the concept that digital computers could someday be capable of independent thought. Digital sentience, if it ever comes to exist, is likely to be a form of artificial intelligence. A generally accepted criterion for sentience is self-awareness, and this is also one of the definitions of consciousness. To support the concept of self-awareness, a dictionary definition of conscious can be cited: "having an awareness of one's environment and one's own existence, sensations, and thoughts." In more general terms, an AC system should be theoretically capable of achieving various - or, on a stricter view, all - verifiable, known, objective, and observable aspects of consciousness, so that the device appears conscious. Another, less widely agreed, definition of "conscious" can also be inferred: possessing knowledge of something, whether through internal means or through externally observable properties, such that an entity is labelled conscious by virtue of the properties it displays and the conscious experience attributed to it.

There are various aspects and/or abilities that are generally considered necessary for an AC system, or that an AC system should be able to learn; these are very useful as criteria to determine whether a certain machine is artificially conscious. They are only the most cited; there are many others that are not covered. The ability to predict (or anticipate) foreseeable events is considered a highly desirable attribute of AC by Igor Aleksander: he writes in Artificial Neuroconsciousness: An Update, "Prediction is one of the key functions of consciousness. An organism that cannot predict would have a seriously hampered consciousness." The multiple drafts principle proposed by Daniel Dennett in Consciousness Explained may be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment. Consciousness is sometimes defined as self-awareness. While self-awareness is very important, it may be subjective and is generally difficult to test. Another test of AC, in the opinion of some, should include a demonstration that a machine can learn the ability to filter out certain stimuli in its environment, to focus on certain stimuli, and to show attention toward its environment in general. The mechanisms that govern how human attention is driven are not yet fully understood by scientists. This absence of knowledge could be exploited by engineers of AC: since we do not understand attentiveness in humans, we have no specific and known criteria to measure it in machines. Since unconsciousness in humans equates to total inattentiveness, an AC system should have outputs that indicate where its attention is focused at any one time, at least during the aforementioned test. According to Antonio Chella of the University of Palermo, "The mapping between the conceptual and the linguistic areas gives the interpretation of linguistic symbols in terms of conceptual structures. 
It is achieved by means of suitable recurrent neural networks with internal states. A sequential attentive mechanism is hypothesized that suitably scans the conceptual representation and, according to the hypotheses generated on the basis of previous knowledge, predicts and detects the interesting events occurring in the scene. Hence, starting from the incoming information, such a mechanism generates expectations and builds contexts in which hypotheses may be verified and, if necessary, adjusted." Awareness could be another required aspect. However, again, there are some problems with the exact definition of awareness. To illustrate this point, the philosopher David Chalmers (1996) controversially puts forward the panpsychist argument that a thermostat could be considered conscious: it has states corresponding to too hot, too cold, and the correct temperature. The results of neuro-scanning experiments on monkeys suggest that a process, not a state or an object, activates neurons. For such a reaction a model of the process must be created from the information received through the senses; creating models in this way demands a lot of flexibility, and is also useful for making predictions. Personality is another characteristic that is generally considered vital for a machine to appear conscious. In the area of behavioural psychology, there is a somewhat popular theory that personality is an illusion created by the brain in order to interact with other people: it is argued that without other people to interact with, humans (and possibly other animals) would have no need of personalities, and human personality would never have evolved. An artificially conscious machine may need to have a personality capable of expression, such that human observers can interact with it in a meaningful way. 
However, this is often questioned by computer scientists; the Turing test, which measures a machine's personality, is no longer generally considered useful. Learning is also considered necessary for AC. In Engineering Consciousness, a summary by Ron Chrisley of the University of Sussex, consciousness is said to involve self, transparency, learning (of dynamics), planning, heterophenomenology, splitting of the attentional signal, action selection, attention and timing management. Daniel Dennett said in his article "Consciousness in Human and Robotic Minds" that "It might be vastly easier to make an initially unconscious or nonconscious 'infant' robot, and let it 'grow up' into consciousness, more or less the way we all do." Dennett explained that the robot Cog "will not be an adult at first, in spite of its adult size. But it is being designed to pass through an extended period of artificial infancy, during which it will have to learn from experience, experience it will gain in the rough-and-tumble environment of the real world," adding that "nobody doubts that any agent capable of interacting intelligently with a human being on human terms must have access to literally millions if not billions of logically independent items of world knowledge. Either these must be hand-coded individually by human programmers - a tactic being pursued, notoriously, by Douglas Lenat and his CYC team in Dallas - or some way must be found for the artificial agent to learn its world knowledge from (real) interactions with the (real) world." 
An interesting article about learning is "Implicit Learning and Consciousness" by Axel Cleeremans of the University of Brussels and Luis Jiménez of the University of Santiago, where learning is defined as "a set of phylogenetically advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments." Anticipation is the final characteristic that could possibly be used to make a machine appear conscious. An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur. The implication here is that the machine needs real-time components, making it possible to demonstrate that it possesses artificial consciousness in the present and not just in the past. In order to do this, the machine being tested must operate coherently in an unpredictable environment, to simulate the real world.
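As a toy illustration of the prediction and anticipation criteria discussed above - a sketch of my own, not a system from the literature - here is a minimal agent that learns which stimulus tends to follow which, and then anticipates the most frequent successor. The class name and the example stimuli are invented for the example.

```python
# Illustrative sketch: a minimal anticipating agent. It counts which
# stimulus tends to follow which, then predicts the most frequent
# successor - a crude stand-in for the predictive capacity that
# Aleksander treats as a key function of consciousness.

from collections import Counter, defaultdict

class Anticipator:
    def __init__(self):
        self.successors = defaultdict(Counter)  # stimulus -> successor counts
        self.previous = None

    def observe(self, stimulus):
        """Learn from the stream of events, one stimulus at a time."""
        if self.previous is not None:
            self.successors[self.previous][stimulus] += 1
        self.previous = stimulus

    def predict(self, stimulus):
        """Anticipate the most likely next event after `stimulus`."""
        counts = self.successors[stimulus]
        return counts.most_common(1)[0][0] if counts else None

agent = Anticipator()
for event in ["dark", "cold", "dark", "cold", "dark", "warm", "dark", "cold"]:
    agent.observe(event)

print(agent.predict("dark"))   # "cold" - the most frequent successor
```

Such a frequency table is of course nowhere near consciousness; the point is only that "being ready to respond" to a foreseeable event is itself a mechanizable, testable behaviour.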

Newborn babies have been trying for centuries to convince us they are, like the rest of us, sensing, feeling, thinking human beings. Struggling against thousands of years of ignorant supposition that newborns are only partly human, sub-human, or not-yet human, the vast majority of babies arrive in hospitals today greeted by medical specialists who are still sceptical as to whether they can actually see, feel pain, learn, and remember what happens to them. Physicians, immersed in protocol, employ painful procedures, confident that no permanent impression, certainly no lasting damage, will result from the manner in which babies are received into this world.

The way "standard medicine" sees infants - a view by no means universally shared by women or by the midwives who used to assist them at birth - has taken on increasing importance in a country where more than 95% of babies are born in hospitals and a quarter of these are surgically delivered. While this radical change was occurring, the psychological aspects of birth were little considered. In fact, for most of the century, medical beliefs about the infant nervous system prevailed in psychology as well. However, in the last three decades, research psychology has invested heavily in infant studies and uncovered many previously hidden talents of both the fetus and the newborn baby. The findings are surprising: babies are more sensitive, more emotional, and more cognitive than we used to believe. They are not what we thought. Babies are so different that we must create new paradigms to describe accurately who they are and what they can do.

Not long ago, experts in pediatrics and psychology were teaching that babies were virtually blind, had no sense of colour, could not recognize their mothers, and heard in "echoes." They believed babies cared little about sharp changes in temperature at birth and had only a crude sense of smell and taste. Their pain was "not like our pain," their cries not meaningful, their smiles "gas," and their emotions undeveloped. Worst of all, most professionals believed babies were not equipped with enough brain matter to permit them to remember, learn, or find meaning in their experiences.

These false and unflattering views are still widespread among both professionals and the public. No wonder people find it hard to believe that a traumatic birth, whether by cesarean section or vaginal, has significant, ongoing effects.

Unfortunately, these unfounded prejudices still carry the weight of "science" behind them today, yet their harmful results for babies are hardly better than those of the rank superstitions of the past. The resistance of "experts" who continue to see infants in terms of their traditional incapacities may be the last great obstacle babies must leap over before being embraced for who they really are. Old ideas are bound to die under the sheer weight of new evidence, but not before millions of babies suffer unnecessarily because their parents and their doctors do not know they are fully human.

As the light of research reaches into the dark corners of prejudice, we may thank those in the emerging field of prenatal/perinatal psychology. Since this field is an interprofessional collaboration and does not fit conveniently into accepted academic departments, it is not yet recognized in the academic world by endowed chairs or even by formal courses; at present only a few courses exist throughout the world. Yet research teams have achieved a succession of breakthroughs that challenge standard "scientific" ideas of human development.

Scholars in this field respect the full range of evidence of infant capabilities, whether from personal reports contributed by parents, revelations arising from therapeutic work, or from formal experiments. Putting together all the bits and pieces of information gathered from around the globe yields a fundamentally different picture of a baby.

The main way information about sentient, conscious babies has reached the public, especially pregnant parents, has been via popular media: books, movies, magazine features, and television. Among the most outstanding have been The Secret Life of the Unborn Child by Canadian psychiatrist Thomas Verny (now in 25 languages), movies like Look Who's Talking, and several talk shows, including Oprah Winfrey's, where a program on therapeutic treatment of womb and birth traumas probably reached 25 million viewers in 25 countries. Two scholarly journals are devoted entirely to prenatal/perinatal psychology, one in North America that began in 1986 and one in Europe beginning in 1989. The Association for Pre- and Perinatal Psychology and Health (APPPAH) is a gathering place for people interested in this field, who keep informed through newsletters, journals, and conferences.

Evidence that babies are sensitive, cognitive, and affected by their birth experiences comes from various sources. The oldest evidence is anecdotal and intuitive. Mothers are the principal contributors to the idea of the baby as a person, one you can talk to, and one who can talk back as well. This process, potentially available to any mother, is better explained in psychic terms than in word-based language; the exchange of thoughts is probably telepathic rather than linguistic.

Mothers who communicate with their infants know that the baby is a person, mind and soul, with understanding, wisdom, and purpose. This phenomenon is cross-cultural, probably universal, although not all mothers necessarily engage in this dialogue. In an age of "science," a mother's intuitive knowledge is too often dismissed. What mothers know has not been considered valid data: what mothers say about their infants must be venal, self-serving, or imaginary, and can never be equal to what is known by "experts" or "scientists."

This prejudice extends into a second category of information about babies: the evidence derived from clinical work. Although the work of psychotherapy is usually done by formally educated, scientifically trained, licensed persons who are considered expert in their field, the information they listen to is anecdotal and their methods are a blend of science and art.

Their testimony of infant intelligence, based on the recollections of clients, is often compelling. Therapists are privy to clients' surprising revelations, many of which show a direct connection between traumas surrounding birth and later disabilities of heart and mind. Although it is possible for these connections to be purely imaginary, we know they are not when hospital records and eyewitness reports confirm the validity of the memories. Obstetrician David Cheek, using hypnosis with a series of subjects, discovered that they could accurately report the full set of left and right turns and sequences involved in their own deliveries. This is technical information that no ordinary person would have unless his memories are accurate.

As a psychologist using hypnosis, I found it necessary to test the reliability of memories people gave me about their traumas during the birth process, memories that had not previously been conscious. I hypnotized mother and child pairs who said they had never spoken in any detail about that child's birth. I received a detailed report of what happened from the now-adult child, which I compared with the mother's report, also given in hypnosis.

The reports dovetailed at many points and were clearly reports of the same birth. By comparing one story with the other, I could see when the adult child was fantasizing rather than having accurate recall, but fantasy was rare. I concluded that these birth memories were real memories and a reliable guide to what had happened.

Some of the first indications that babies are sentient came from the practice of psychoanalysis, stretching back to the beginning of the century to the pioneering work of Sigmund Freud. Although Freud himself was sceptical about the operation of the infant mind, his clients kept bringing him information that seemed to link their anxieties and fears to events surrounding their births. He theorized that birth might be the original trauma upon which later anxiety was constructed.

Otto Rank, Freud's associate, was more certain that birth traumas underlay many later neuroses, so he reorganized psychoanalysis around the assumption of birth trauma. He was rewarded by the rapid recovery of his clients who were "cured" in far less time than was required for a customary psychoanalysis. In the second half of the century, important advances have been made in resolving early trauma and memories of trauma.

Hypnotherapy, primal therapy, psychedelic therapies, various combinations of body work with breathing and sound stimulation, sand tray therapy, and art therapy have all proved useful in accessing important imprints, decisions, and memories stored by the infant mind. If there had been no working mind in infancy, of course, there would be no need to return to it to heal bad impressions, change decisions, and otherwise resolve mental and emotional problems.

A third burgeoning source of information about the conscious nature of babies comes from scientific experiments and systematic observations utilizing breakthrough technologies. In our culture, with its preference for refined measurement and strict protocols, these are the studies that get funding. And the results from this contemporary line of empirical research are surprising.

We have learned so much about babies in the last twenty years that most of what we thought we knew before is suspect, and much of it is obsolete. I will highlight the new knowledge in three sections: development of the physical senses, beginnings of self-expression, and evidence of active mental life.

First, we have a much better idea of our physical development, the process of embodiment from conception to birth. Our focus here is on the senses and when they become available during gestation. Touch is our first sense and perhaps our last. Sensitivity to touch begins in our faces at about seven weeks gestational age. Tactile sensitivity expands steadily to include most parts of the fetal body by 17 weeks. In the normal womb, touch is never rough and temperature is relatively constant. At birth, this placid environment ends with dramatic new experiences of touch that no baby can overlook.

By only 14 weeks gestational age, the taste buds are formed, and ultrasound shows both sucking and swallowing. A fetus controls the frequency of swallowing amniotic fluid, speeding up or slowing down in reaction to sweet and bitter tastes. Studies show babies have a definite preference for sweet tastes. Hearing begins earlier than anyone thought possible, at 16 weeks, although the ear is not complete until about 24 weeks - a fact revealing the complex nature of listening, which includes reception of vibrations through our skin, skeleton, and vestibular system as well as the ear. Babies in the womb are listening to maternal sounds and to the immediate environment for almost six months. By birth, their hearing is about as good as ours.

Our sense of sight also develops before birth, although our eyelids remain fused from week 10 through 26. Nevertheless, babies in the womb will react to light flashed on the mother's abdomen. By the time of birth, vision is well-advanced, though not yet perfect. Babies have no trouble focussing at the intimate 16-inch distance where the faces of mothers and fathers are usually found.

Mechanisms for pain perception, like those for touch, develop early. By about three months, if babies are accidentally struck by a needle inserted into the womb to withdraw fluid during amniocentesis, they quickly twist away and try to escape from the needle. Intrauterine surgery, a new aspect of fetal medicine made possible in part by our new ability to see inside the womb, means new opportunities for fetal pain.

Although surgeons have long denied that prenates experience pain, a recent experiment in London proved that unborn babies feel pain. Babies who were needled for intrauterine transfusions showed a 600% increase in beta-endorphins, hormones generated to deal with stress. In just ten minutes of needling, even 23-week-old fetuses were mounting a full-scale stress response. Needling at the intrahepatic vein provokes vigorous body and breathing movements.

Finally, our muscle systems develop under buoyant conditions in the fluid environment of the womb and are regularly used in navigating the area. However, after birth, in the dry world of normal gravity, our muscle systems look feeble. As everyone knows, babies cannot walk, and they struggle, usually in vain, to hold up their own heads. Because the muscles are still relatively undeveloped, babies give a misleading appearance of incompetence. In truth, babies have remarkably useful sensory equipment very much like our own.

A second category of evidence for baby consciousness comes from empirical research on bodily movement in utero. Except for the movement a mother and father could sometimes feel, we have had almost no knowledge of the extent and variety of movement inside the womb. This changed with the advent of real-time ultrasound imaging, giving us moment by moment pictures of fetal activity.

One of the surprises is that movement commences between eight and ten weeks gestational age, as determined with the aid of the latest round of ultrasound improvements. Fetal movement is voluntary, spontaneous, and graceful, not jerky and reflexive as previously reported. By ten weeks, babies move their hands to their heads, face, and mouth; they flex and extend their arms and legs; they open and close their mouths and rotate longitudinally. From 10 to 12 weeks onward, the repertoire of body language is largely complete and continues throughout gestation. Periodic exercise alternates with rest periods on a voluntary basis, reflecting individual needs and interests. Movement is self-expression, and movement expresses personality.

Twins viewed periodically via ultrasound during gestation often show highly independent motor profiles and, over time, continue to distinguish themselves through movement both inside and outside the womb. They are expressing their individuality.

Close observation has brought many unexpected behaviours to light. By 16 weeks, male babies are having their first erections. As soon as they have hands, they are busy exploring everywhere and everything, feet, toes, mouth, and the umbilical cord: these are their first toys.

By 30 weeks, babies have an intense dream life, spending more time in the dream state of sleep than they ever do after they are born. This is significant because dreaming is definitely a cognitive activity, a creative exercise of the mind, and because it is a spontaneous and personal activity.

Observations of the fetus also reveal a number of reactions to conditions in the womb. Such reaction to provocative circumstances is a further sign of selfhood. Consciousness of danger and manoeuvres of self-defence are visible in fetal reactions to amniocentesis. Even when things go normally and babies are not struck by needles, they react with wild variations of normal heart activity, alter their breathing movements, may "hide" from the needle, and often remain motionless for a time - suggesting fear and shock.

Babies react with alarm to loud noises, car accidents, earthquakes, and even to their mother's watching terrifying scenes on television. They swallow less when they do not like the taste of amniotic fluid, and they stop their usual breathing movements when their mothers drink alcohol or smoke cigarettes.

In a documented report of work via ultrasound, a baby struck accidentally by a needle not only twisted away but located the needle barrel and struck it repeatedly - surely an aggressive and angry behaviour. Similarly, ultrasound experts have reported seeing twins hitting each other, while others have seen twins playing together, gently awakening one another, resting cheek-to-cheek, and even kissing. Such scenes, some at only 20 weeks, were never anticipated in developmental psychology. No one expected sociable or emotional behaviour until months after a baby's birth.

We can see emotion expressed in crying and smiling long before 40 weeks, the usual time of birth. We see first smiles on the faces of premature infants who are dreaming. Smiles and pleasant looks, along with a variety of unhappy facial expressions, tell us dreams have pleasant or unpleasant contents to which babies are reacting. Mental activity is causing emotional activity. Audible crying has been reported by 23 weeks, in cases of abortion, revealing that babies are experiencing very appropriate emotion by that time. Close to the time of birth, medical personnel have documented crying from within the womb, in association with obstetrical procedures that have allowed air to enter the space around the fetal larynx.

Finally, a third source of evidence for infant consciousness is the research that confirms various forms of learning and memory both in the fetus and the newborn. Since infant consciousness was considered impossible until recently, experts have had to accept a growing body of experimental findings illustrating that babies learn from their experiences. In studies that began in Europe in 1925 and America in 1938, babies have demonstrated all the types of learning formally recognized in psychology at the time: classical conditioning, habituation, and reinforcement conditioning, both inside and outside the womb.

In modern times, as learning has been understood more broadly, experiments have shown a range of learning abilities. Immediately after birth, babies show recognition of musical passages, which they have heard repeatedly before birth, whether it is the bassoon passage in Peter and the Wolf, "Mary Had a Little Lamb," or the theme music of a popular soap opera.

Language acquisition begins in the womb as babies listen repeatedly to their mothers' intonations and learn their mother tongue. As early as 25 weeks, the recording of a baby's first cry contains so many rhythms, intonations, and other features common to their mother's speech that their spectrographs can be matched. In experiments shortly after birth, babies recognize their mother's voice and prefer her voice to other female voices. In the delivery room, babies recognize their father's voice and recognize specific sentences their fathers have spoken, especially if the babies have heard these sentences frequently while they were in the womb. After birth, babies show special regard for their native language, preferring it to a foreign language.

Fetal learning and memory also extend to stories read aloud to babies repeatedly before birth. At birth, babies will alter their sucking behaviour to obtain recordings of the familiar stories. In a recent experiment, a French and American team had mothers repeat a particular children's rhyme each day from week 33 to week 37. After four weeks of exposure, babies reacted to the target rhymes and not to other rhymes, proving they recognize specific language patterns while they are in the womb.

Newborn babies quickly learn to distinguish their mother's face from other female faces, their mother's breast pads from other breast pads, their mother's distinctive underarm odour, and their mother's perfume if she has worn the same perfume consistently.

Premature babies learn from their unfortunate experiences in neonatal intensive care units. One boy, who endured surgery paralyzed with curare but given no pain-killing anaesthetics, developed a pervading fear of doctors and hospitals that remained undiminished into his teens. He also learned to fear the sound and sight of adhesive bandages, in reaction to having some of his skin pulled off with adhesive tape during his stay in the premature nursery.

Confirmation that early experiences of pain have serious consequences later has come from recent studies of babies at the time of first vaccinations. Researchers who studied infants being vaccinated four to six months after birth discovered that babies who had experienced the pain of circumcision had higher pain scores and cried longer. The painful ordeal of circumcision had apparently conditioned them to pain and set their pain threshold lower. This is an example of learning from experience: Perinatal pain.

Happily, there are other things to learn besides pain and torture. The Prenatal Classroom is a popular program of prenatal stimulation for parents who want to establish strong bonds of communication with a baby in the womb. One of the many exercises is the "Kick Game," which you play by responding to your baby's kick: touch the spot just kicked and say, "Kick, baby, kick." Babies quickly learn to respond to this kind of attention: they do kick again, and they learn to kick wherever their parents touch. One father taught his baby to kick in a complete circle.

Babies also remember consciously the big event of birth itself, at least during the first years of their lives. Proof of this comes from little children just learning to talk. Usually around two or three years of age, when children are first able to speak about their experiences, some spontaneously recall what their birth was like. They tell what happened in plain language, sometimes accompanied by pantomime, pointing and sound effects. They describe water, black and red colours, the coming light, or dazzling light, and the squeezing sensations. Cesarean babies tell about a door or window suddenly opening, or a zipper that zipped open and let them out. Some babies remember fear and danger. They also remember and can reveal secrets.

One of my favourite stories of a secret birth memory came from Cathy, a midwife's assistant. With the birth completed, she found herself alone with a hungry, restless baby after the baby's mother had gone to bathe and the chief midwife was busy in another room. Instinctively, Cathy offered the baby her own breast for a short time; then she wondered if this were appropriate and stopped feeding the infant without telling anyone what had happened. Years later, when the little girl was almost four, Cathy was babysitting her. In a quiet moment, she asked the child if she remembered her birth. The child did, and volunteered various accurate details. Then, moving closer to whisper a secret, she said, "You held me and gave me titty when I cried, and Mommy wasn't there." Cathy said to herself, "Nobody can tell me babies don't remember their births."

Is a baby a conscious and real person? To me it is no longer appropriate to speculate; it is too late to speculate when so much is known. The range of evidence now available - knowledge of the fetal sensory system, observations of fetal behaviour in the womb, and experimental proof of learning and memory - amply verifies what some mothers and fathers have sensed from time immemorial: that a baby is a real person. The baby is real in having a sense of self that can be seen in creative efforts to adjust to or influence its environment. Babies show self-regulation (retreating from invasive needles and strong light) and self-assertion (fighting with a needle, or striking out at a bothersome twin).

Babies are like us in having clearly manifested feelings in their reactions to assaults, injuries, irritations, or medically inflicted pain. They smile, cry, and kick in protest, manifest fear, anger, grief, pleasure, or displeasure in ways that seem entirely appropriate in relation to their circumstances. Babies are cognitive beings, thinking their own thoughts, dreaming their own dreams, learning from their own experiences, and remembering their own experiences.

An iceberg can serve as a useful metaphor to understand the unconscious mind, its relationship to the conscious mind and how the two parts of our mind can better work together. As an iceberg floats in the water, the huge mass of it remains below the surface.

Only a small percentage of the whole iceberg is visible above the surface. In this way, the iceberg is like the mind. The conscious mind is what we notice above the surface while the unconscious mind, the largest and most powerful part, remains unseen below the surface.

In our metaphor, the small part of the iceberg visible above the surface represents the conscious mind; the huge mass below the surface, the unconscious mind. The unconscious mind holds all awareness that is not presently in the conscious mind. All memories, feelings and thoughts that are out of conscious awareness are by definition "unconscious." It is also called the subconscious and is known as the dreaming mind or deep mind.

Knowledgeable and powerful in a different way than the conscious mind, the unconscious mind handles the responsibility of keeping the body running well. It has memory of every event we've ever experienced; it is the source and storehouse of our emotions; and it is often considered our connection with Spirit and with each other.

No model of how the mind works disputes the tremendous power in constant action below the tip of the iceberg. The conscious mind is constantly supported by unconscious resources. Just think of all the things you know how to do without conscious awareness. If you drive, you use more than 30 specific skills . . . without being aware of them. These are skills, not facts; they are processes, requiring intelligence, decision-making and training.

Besides these learned resources that operate below the surface of consciousness there are important natural resources. For instance, the unconscious mind regulates all the systems of the body and keeps them in harmony with each other. It controls heart rate, blood pressure, digestion, the endocrine system and the nervous system, just to name a few of its natural, automatic duties.

The conscious mind, like the part of the iceberg above the surface, is a small portion of the whole being. The conscious mind is what we ordinarily think of when we say "my mind." It's associated with thinking, analysing and making judgments and decisions. The conscious mind is actively sorting and filtering its perceptions because only so much information can reside in consciousness at once. Everything else falls back below the water line, into unconsciousness.

Only about seven bits of information, plus or minus two, can be held consciously at one time. Everything else we are thinking, feeling or perceiving now . . . along with all our memories, remains unconscious until called into consciousness or until rising spontaneously.

The imagination is the medium of communication between the two parts of the mind. In the iceberg metaphor, the imagination is at the surface of the water. It functions as a medium through which content from the unconscious mind can come into conscious awareness.

Communication through the imagination is two-way. The conscious mind can also use the medium of the imagination to communicate with the unconscious mind. The conscious mind sends suggestions about what it wants through the imagination to the unconscious. It imagines things, and the subconscious intelligences work to make them happen.

The suggestions can be words, feelings or images. Athletes commonly use images mentally to rehearse how they want to perform by picturing themselves successfully completing their competition. A tennis player may see a tennis ball striking the racket at just the right spot, at just the perfect moment in the swing. Studies show that this form of imaging improves performance.

However, the unconscious mind uses the imagination to communicate with the conscious mind far more often than the other way around. New ideas, hunches, daydreams and intuitions come from the unconscious to the conscious mind through the medium of the imagination.

An undeniable example of the power in the lower part of the iceberg is dreaming. Dream images, visions, sounds and feelings come from the unconscious. Those who are aware of their dreams know how rich and real they can be. Even filtered, as they are when remembered later by the conscious mind, dreams can be quite powerful experiences.

Many people have received workable new ideas and insights, relaxing daydreams, accurate hunches, and unexpected intuitive understandings by replaying their dreams in a waking state. These are everyday examples of what happens when unconscious intelligences and processes communicate through the imagination with the conscious mind.

Unfortunately, the culture has discouraged us from giving this information credibility. "It's just your imagination" is a commonly heard dismissal of information coming from the deep mind. This kind of conditioning has served to keep us disconnected from the deep richness of our vast unconscious resources.

In the self-healing work we'll be using the faculty of the imagination in several ways: in regression processes, to access previously unconscious material from childhood, perinatal experiences, past lives, and even deeper realms of the "universal unconscious." Inner dialogue is another essential tool that makes use of the imagination in process work.

To carry the iceberg metaphor forward, each of us can be represented as an iceberg, with the larger part of ourselves remaining deeply submerged. And there's a place in the depths where all of our icebergs come together, a place in the unconscious where we connect with each other.

The psychologist Carl Jung named this realm the "Collective Unconscious." This is the area of mind where all humanity shares experience, and from which we draw the archetypal energies and symbols that are common to us all. "Past life" memories are drawn from this level of the unconscious.

Another, even deeper level can be termed the "Universal Unconscious" where experiences beyond just humanity's can also be accessed with regression process. It is at this level that many "core issues" begin, and where their healing needs to be accomplished.

The unconscious connection "under the iceberg" between people is often more potent than the conscious-level connection, an important consideration in doing the healing work. Relationship is an area rich with triggers to deeply buried material needing healing. And some parts of us cannot be triggered in any way other than "under the iceberg."

Although the conscious mind, steeped in cognition and thought, is able to deceive another . . . the unconscious mind, based in feeling, will often give us information from under the iceberg that contradicts what is being communicated consciously.

"Sounds right but feels wrong" is an example of information from under the iceberg surfacing in the conscious mind but conflicting with what the conscious mind was able to attain on its own. This kind of awareness is also called "intuition."

Intuitive information comes without a search of conscious memory or a formulation filled in by the imagination. When we access the intuition, we seem to arrive at an insight by a path leading from unknown sources directly to conscious awareness. Wham! Out of nowhere, in no time.

No matter what the precise neurological process, the ability to access and use information from the intuition is extremely valuable in the effective and creative use of the tools of self healing. In relating with others, it's important to realize that your intuition will bring you information about the other and your relationship from under the iceberg.

When your intuition is the source of your words and actions, they are usually much more appropriate and helpful than what thinking or other functions of the conscious mind could muster. What you do and say from the intuition in earnest communication will be meaningful to the other, even though it may not make sense to you.

The most skilful and comprehensive way to nurture and develop your intuition is to trust all of your intuitive insights. Trust encourages the intuition to be more present. Its information is then more accessible, and the conscious mind finds less reason to question, analyse or judge intuitive insights.

The primary skills needed for easy access and trust of intuitive information are: (1) The ability to get out of the way. (2) The ability to accept the information without judgment.

Two easy ways to access intuition and help the conscious mind get out of the way are: (1) Focus your attention in your abdominal area and imagine you have a "belly brain." As you feel into and sense this area, "listen" to what your belly brain has to say. This is often referred to as listening to our "gut feelings." (2) With your eyes looking down and to your left and slightly defocused, simply feel into what to say next.

Once the intuition is flowing, it will continue easily unless it is blocked. The most usual blockage occurs when the conscious mind begins to judge the intuitive information. The best way to avoid this is to get the cooperation of the conscious mind, so it will step aside and become the observer when intuition is being accessed.

Cosmic Consciousness is an ultra-high state of illumination in the human Mind that is beyond that of "self-awareness" and "ego-awareness." In the attainment of Cosmic Consciousness, the human Mind has entered a state of Knowledge instead of mere beliefs, a state of "I know" instead of "I believe." This state of Mind is beyond sense reasoning in that it has attained an awareness of the Universe and its relation to being, and a recognition of the Oneness in all things, that is not easily shared with others who have not personally experienced this state of Mind. The attainment of Cosmic illumination will cause an individual to seek solitude from the multitude, and isolation from the noisy world of mental pollution.

Carl Jung was a student and follower of Freud. He was born in a small town in Switzerland in 1875 and all his life was fascinated by folk tales, myths and religious stories. Although he had a close friendship with Freud early in their relationship, his independent and questioning mind soon caused a break.

Jung did not accept Freud’s contention that the primary motivation behind behaviour was sexual. Instead of Freud’s instinctual drives of sex and aggression, Jung believed that people are motivated by a more general psychological energy that pushes them to achieve psychological growth, self-realization, psychic wholeness and harmony. Also, unlike Freud, he believed that personality continues to develop throughout the lifespan.

It is for his ideas of the collective unconscious that students of literature and mythology are indebted to Jung. In studying different cultures, he was struck by the universality of many themes, patterns, stories and images. These same images, he found, frequently appeared in the dreams of his patients. From these observations, Jung developed his theory of the collective unconscious and the archetypes.

Like Freud, Jung posited the existence of a conscious and an unconscious mind. A model that psychologists frequently use here is an iceberg. The part of the iceberg that is above the surface of the water is seen as the conscious mind. Consciousness is the part of the mind we know directly. It is where we think, feel, sense and intuit. It is through conscious activity that the person becomes an individual. It’s the part of the mind that we “live in” most of the time, and it contains information that is in our immediate awareness.

Below the level of the conscious mind, and making up the bulk of the iceberg, is what Freud would call the unconscious and what Jung would call the “personal unconscious.” Here we find thoughts, feelings, urges and other information that is difficult to bring to consciousness. Experiences that do not reach consciousness, experiences that are not congruent with whom we think we are, and things that have become “repressed” make up the material at this level. The contents of the personal unconscious are available through hypnosis, guided imagery, and especially dreams.

Although not directly accessible, material in the personal unconscious got there sometime during our lifetime. For example, the reason you are going to school now, why you picked a particular shirt to wear, or your choice of a career may be a choice you reached consciously. But it is also possible that education, career, or clothing style has been influenced by a great deal of unconscious material: parents’ preferences, childhood experiences, even movies you have seen but about which you do not think when you make choices or decisions. Thus, the depth psychologist would say that many decisions, indeed some of the most important ones that have to do with choosing a mate or a career, are determined by unconscious factors. But still, material in the personal unconscious has been environmentally determined.

The collective unconscious is different. It’s like eye colour. If someone were to ask you, “How did you get your eye colour?” you would have to say that there was no choice involved – conscious or unconscious. You inherited it. Material in the collective unconscious is similarly bequeathed: it never came from our current environment. It is the part of the mind that is determined by heredity. So we inherit, as part of our humanity, a collective unconscious; the mind is pre-figured by evolution just as the body is. The individual is linked to the past of the whole species and the long stretch of evolution of the organism. Jung thus placed the psyche within the evolutionary process.

What’s in the collective unconscious? Psychological archetypes. This idea of psychological archetypes is among Jung’s most important contributions to Western thought. It is an ancient idea, somewhat like Plato’s idea of Forms, the “patterns” in the divine mind that determine the form material objects will take; but for Jung the archetype is in all of us. The word “archetype” comes from the Greek arche, meaning “first,” and type, meaning “imprint” or “pattern.” Psychological archetypes are thus first imprints, or patterns that form the basic blueprint for the major dynamic components of the human personality. For Jung, archetypes pre-exist in the collective unconscious of humanity. They repeat themselves eternally in the psyches of human beings, and they determine how we both perceive and behave. These patterns are inborn within us. They are part of our inheritance as human beings. They reside as energy within the collective unconscious and are part of the psychological life of all peoples everywhere at all times. They are inside us and they are outside us. We can meet them by going inward to our dreams or fantasies. We can meet them by going outward to our myths, legends, literature and religions. The archetype can be a pattern, such as a kind of story. Or it can be a figure, such as a kind of character.

In her book Awakening the Heroes Within, Carol Pearson identifies twelve archetypes that are fairly easy to understand: the Innocent, the Orphan, the Warrior, the Caregiver, the Seeker, the Destroyer, the Lover, the Creator, the Ruler, the Magician, the Sage, and the Fool. If we look at art, literature, mythology and the media, we can easily identify some of these patterns. One familiar pattern in contemporary Western culture is the Warrior. We find the warrior myth encoded in all the great heroes who ever took on the dragon, stood up to the tyrant, fought the sorcerer, or did battle with the monster, and in so doing rescued themselves and others. The true Warrior is not merely aggressive. The aggressive man (or woman) fights to feel superior to others, to keep them down; the Warrior fights to protect and ennoble others. The Warrior protects the perimeters of the castle or the family or the psyche. The Warrior’s myth is active in each of us any time we stand up against unfair authority, be it a boss, teacher or parent. The highest-level Warrior has at some time confronted his or her own inner dragons. We see the Warrior archetype in the form of pagan deities, for example the Greek god of war, Mars. David, who fights Goliath, and Michael, who casts Satan out of Heaven, are familiar Biblical warriors. Hercules, Xena (warrior princess) and Conan the Barbarian are more contemporary media forms the Warrior takes. And it is in this wide historical variety that we can find an important point about the archetype: it really is unconscious. The archetype is like the invisible man in the famous story. In the story, a man invents a potion that, when ingested, renders him invisible. He becomes visible only when he puts on clothes. The archetype is like this. It remains invisible until it puts on the clothing of its particular culture: in the Middle Ages this was King Arthur; in modern America, it may be Luke Skywalker.
But if the archetype were not a universal pattern imprinted on our collective psyche, we would not be able to recognize it over and over. The love goddess is another familiar archetypal pattern. Aphrodite to the Greeks, Venus to the Romans, she now appears in the form of familiar models in magazines like “Elle” and “Vanity Fair.” And whereas in ancient Greece her place of worship was the temple, today it is the movie theatre and the cosmetics counter at Nordstrom’s. The archetype remains; the garments it dons are those of its particular time and place.

This brings us to our discussion of the Shadow as archetype. The clearest and most articulate discussion of this subject is contained in Johnson’s book Owning Your Own Shadow. The Shadow is not a difficult concept. It is merely the “dark side” of the psyche, everything that doesn’t fit into the Persona. The word “persona” comes from the theatre. In the Roman theatre, characters would put on a mask that represented who the character was in the drama. The word “persona” literally means “mask.” Johnson says that the persona is how we would like to be seen by the world, a kind of psychological clothing that “mediates between our true selves and our environment” in much the same way that clothing gives an image. The Shadow is what doesn’t fit into this Persona. These “refused and unacceptable” characteristics don’t go away; they are stuffed or repressed and can, if unattended to, begin to take on a life of their own. One Jungian likens the process to that of filling a bag. We learn at a very young age that there are certain ways of thinking, being and relating that are not acceptable in our culture, and so we stuff them into the shadow bag. In our Western culture, particularly in the United States, thoughts about sex are among the most unacceptable, and so sex gets stuffed into the bag. The shadow side of sexuality is quite evident in our culture in the form of pornography, prostitution, and topless bars. Psychic energy that is not dealt with in a healthy way takes a dark or shadow form and begins to take on a life of its own. As children our bag is fairly small, but as we get older it becomes larger and more difficult to drag.

Therefore, it is not difficult to see that there is a shadow side to the archetypes discussed earlier. The shadow side of the Warrior is the tyrant, the villain, the Darth Vader, who uses his or her skills for power and ego enhancement. And whereas the Seeker archetype quests after truth and purity, the shadow Seeker is controlled by pride, ambition, and addictions. If the Lover follows his or her bliss, commits and bonds, the shadow Lover signifies a seducer, a sex addict, or, interestingly enough, a puritan.

But we can use the term “shadow” in a more general sense. It is not merely the dark side of a particular archetypal pattern or form. Wherever Persona is, Shadow is also. Wherever good is, evil is. We first know the shadow as the personal unconscious, in all that we abhor, deny and repress: power, greed, cruel and murderous thoughts, unacceptable impulses, morally and ethically wrong actions. All the demonic things by which human beings betray their inhumanity to other beings are shadow. Shadow is unconscious. This is a very important idea. Since it is unconscious, we know it only indirectly, through projection, just as we know the other archetypes of Warrior, Seeker and Lover. We encounter the shadow in the other people, things, and places where we project it. The scapegoat is a perfect example of shadow projection. The Nazis’ projection of shadow onto the Jews gives us some insight into how powerful and horrific the archetype is. Jung says that when you are in the grips of the archetype, you don’t have it, it has you.

This idea of projection raises an interesting point. It means that the shadow stuff isn’t “out there” at all; it is really “in here,” that is, inside us. We only know it is inside us because we see it outside. Shadow projections have a fateful attraction to us. It seems that we have discovered where the bad stuff really is: in him, in her, in that place, there! There it is! We have found the beast, the demon, the bad guy. But does evil really exist, or is what we see as evil merely a projection of our own shadow side? Jung would say that there really is such a thing as evil, but that most of what we see as evil, particularly collectively, is shadow projection. The difficulty is separating the two. And we can only do that when we discover where the projection ends. Hence the title of Johnson’s book, Owning Your Own Shadow.

Amid all the talk about the "Collective Unconscious" and other sexy issues, most readers are likely to miss the fact that C.G. Jung was a good Kantian. His famous theory of Synchronicity, "an acausal connecting principle," is based on Kant's distinction between phenomena and things-in-themselves and on Kant's theory that causality will not operate among things-in-themselves the way it does in phenomena. Thus, Kant could allow for free will (unconditioned causes) among things-in-themselves, as Jung allows for synchronicity ("meaningful coincidences"). Next to Kant, Jung is closest to Schopenhauer, praising him as the first philosopher he had read "who had the courage to see that all was not for the best in the fundaments of the universe" [Memories, Dreams, Reflections, p. 69]. Jung was probably unaware of the Friesian background of Otto's term "numinosity" when he began to use it for his Archetypes, but it is unlikely that he would object to the way in which Otto's theory, through Fries, fits into Kantian epistemology and metaphysics.

Jung's place in the Kant-Friesian tradition is on a side that would have been distasteful to Kant, Fries, and Nelson, whose systems were basically rationalistic. Thus Kant saw religion as properly a rational expression of morality, and Fries and Nelson, although allowing an aesthetic content to religion different from morality, nevertheless did not expect religion to embody much more than good morality and good art. Schopenhauer, Otto, and Jung all represent an awareness that there is more to religion and to human psychological life than this. The terrifying, uncanny, and fascinating elements of religion and ordinary life are beneath the notice of Kant, Fries, and Nelson, while they are indisputable and irreducible elements of life, for which there must be an account, with Schopenhauer, Otto, and Jung. As Jung once again said of Schopenhauer: "He was the first to speak of the suffering of the world, which visibly and glaringly surrounds us, and of confusion, passion, evil - all those things that the others hardly seemed to notice and always tried to resolve into all-embracing harmony and comprehensibility." It is an awareness of this aspect of the world that renders the religious ideas of "salvation" meaningful; yet "salvation" as such is always missing from moralistic or aesthetic renderings of religion. Only Jung could have written his Answer to Job.

Jung's great Answer to Job, indeed, represents an approach to religion that is all but unique. Placing God in the Unconscious might strike most people as reducing him to a mere psychological object, but that is to overlook Jung's Kantianism. The unconscious, and especially the Collective Unconscious, belongs to Kantian things-in-themselves, or to the transcendent Will of Schopenhauer. Jung was often at pains not to complicate his theory of the Archetypes by committing himself to a metaphysical theory - he wanted the theory to work whether he was talking about the brain or about the Transcendent - but that was merely a concession to the materialistic bias of contemporary science. He had no materialistic commitment himself and, when it came down to it, was not going to accept such naive reductionism. Instead, he was willing to rethink how the Transcendent might operate. Thus, he says about Schopenhauer: I felt sure that by "Will" he really meant God, the Creator, and that he was saying that God was blind. Since I knew from experience that God was not offended by any blasphemy, that on the contrary He could even encourage it because He wished to evoke not only man's bright and positive side but also his darkness and ungodliness, Schopenhauer's view did not distress me.

The Problem of Evil, which for so many people simply dehumanizes religion, and which Schopenhauer used to reject the value of the world, became a challenge for Jung in the psychoanalysis of God. The God of the Bible is indeed a personality, and seemingly not always the same one. God as a morally evolving personality is the extraordinary conception of Answer to Job. What Otto saw as the evolution of human moral consciousness, Jung turns right around on the basis of the principle that the human unconscious, expressed spontaneously in religious practice and literature, transcends mere human subjectivity. But the transcendent reality in the unconscious is different in kind from consciousness. As Jung said in Memories, Dreams, Reflections again: If the Creator were conscious of Himself, He would not need conscious creatures; nor is it probable that the extremely indirect methods of creation, which squander millions of years upon the development of countless species and creatures, are the outcome of purposeful intention. Natural history tells us of a haphazard and casual transformation of species over hundreds of millions of years of devouring and being devoured. The biological and political history of man is an elaborate repetition of the same thing. But the history of the mind offers a different picture. Here the miracle of reflecting consciousness intervenes - the second cosmogony [ed. note: what Teilhard de Chardin called the origin of the "noosphere," the layer of "mind"]. The importance of consciousness is so great that one cannot help suspecting the element of meaning to be concealed somewhere within all the monstrous, apparently senseless biological turmoil, and that the road to its manifestation was ultimately found on the level of warm-blooded vertebrates possessed of a differentiated brain - found as if by chance, unintended and unforeseen, and yet somehow sensed, felt and groped for out of some dark urge.

In other words, a "meaningful coincidence." Jung also says, As far as we can discern, the sole purpose of human existence is to kindle a light in the darkness of mere being. It may even be assumed that just as the unconscious affects us, so the increase in our consciousness affects the unconscious.

However, Jung has missed something there. If consciousness is "the light in the darkness of mere being," consciousness alone cannot be the "sole purpose of human existence," since consciousness as such could appear as just a place of "mere being" and so would easily become an empty, absurd, and meaningless Existentialist existence. Instead, consciousness allows for the meaningful instantiation of existence, both through Jung's process of Individuation, by which the Archetypes are given unique expression in a specific human life, and through the historic process that Jung examines in Answer to Job, by which interaction with the unconscious alters in turn the Archetypes that come to be instantiated. While Otto could understand Job's reaction to God, as the incomprehensible Numen, Jung thinks of God's reaction to Job, as an innocent and righteous man jerked around by God's unconsciousness. Jung's idea that the Incarnation then is the means by which God redeems Himself from His morally false position in Job is an extraordinary reversal (I hesitate to say "deconstruction") of the consciously expressed dogma that the Incarnation is to redeem humanity.

It is not too difficult to see this turn in other religions. The compassion of the Buddhas in Mahâyâna Buddhism, especially when the Buddha Shakyamuni comes to be seen as the expression of a cosmic and eternal Dharma Body, is a hand of salvation stretched out from the Transcendent, without, however, the complication that the Buddha is ever thought responsible for the nature of the world and its evils as their Creator. That complication, however, does occur with Hindu views of the divine Incarnations of Vishnu. Closer to a Jungian synthesis, on the other hand, is the Bahá'í theory that divine contact is through "Manifestations," which are neither wholly human nor wholly divine: merely human in relation to God, but entirely divine in relation to other humans. Such a theory must appear Christianizing in comparison to Islam, but it avoids the uniqueness of Christ as the only Incarnation in Christianity itself. This is conformable to the Jungian proposition that the unconscious is both a side of the human mind and a door into the Transcendent. When that door opens, the expression of the Transcendent is then conditioned by the person through whom it is expressed, possessing that person, but it is also genuinely Transcendent, reflecting the ongoing interaction that the person historically embodies. The possible "mere being" even of consciousness then becomes the place of meaning and value.

Whether "psychoanalysis” as practised by Freud or Jung is to be taken seriously and no less than questions asked; however both men will survive as philosophers long after their claims to science or medicine may be discounted. Jung's Kantianism enables him to avoid the materialism and reductionism of Freud ("all of the civilization is a substitute for incest") and, with a great breadth of learning, employs principles from Kant, Schopenhauer, and Otto that are easily conformable to the Kant-Friesian tradition. The Answer to Job, indeed, represents a considerable advance beyond Otto, into the real paradoxes that are the only way we can conceive transcendent reality.

In the state of Cosmic Consciousness, an individual has developed a keen awareness of his or her own mental states and activities, and of those of others. This individual is aware of a very distinct "I" personality that empowers him or her with a powerful expression of the "I am," one that is not swayed or moved by the external impressions of the trifling mental states of others. This individual stands on a "rock solid" foundation that is not easily understood by the common mind. Cosmic Consciousness is void of the "superficial" ego.

The existence of the conscious "I" and the "Subconscious Mind" on the Mental Plane is a manifestation of the seventh Hermetic principle, the Principle of Gender. Every human, male and female, is composed of the Masculine and Feminine aspects of Mind on the Mental Plane. Each male has his female element, and each female has her male element of Mental Gender, from which the creation of all thoughts proceeds. The "I" is the masculine aspect of Mind, and the Subconscious Mind the feminine. The Principle of Gender manifests itself as male and female in all species of Life and Being, making the sexual reproduction and multiplication of the species possible on the Great Physical Plane. The phenomena of this principle can be found in all three great groups of life manifestations: the Spiritual, Mental, and Physical planes of Life and Being.

On the Physical Plane, its role is recognized as sexual reproduction, while on the higher planes it takes on higher, subtler functions of Mental and Spiritual Gender. Its role is always in the direction of reproduction, generation, and regeneration. The Masculine and Feminine principles are always present and active in all phases of phenomena and on every plane of Life. An understanding of the manifesting power of this Principle will give us a greater understanding of ourselves and an awareness of the enormous latent power awaiting to be tapped.

The Spiritually developed individual, the person who becomes aware of and recognizes the conscious "I," or "I am," within, will be able to exert his or her will upon the subconscious mind with definite causation and purpose. The recognition and awareness of the "I" will enable a person to expand his or her mind into regions of consciousness that are unthinkable to the societally conditioned thinking process of the world community.

True Spiritual, or Mental, development enables the sharpening of the five bodily senses, enhancing the richness of Life as our minds are allowed to expand into advanced Spiritual knowledge: knowledge that will enable the proper use of the five wonderful bodily senses as they report to us the external world, from which we derive information to store in the memory banks of the brain to create a knowledge base of experience. The greater the Conscious awareness, the more acute the bodily senses become. At the same time, the lesser the Conscious awareness (the nonmaterial sixth sense), the less acute the five bodily senses become, and much of our external world would not even be acknowledged. This difference of mental states is most likely the cause of debate between religious and scientific circles.

The "I" Consciousness in each human is the true "Higher Self." The "Higher Self" of each human exists as a constant moving whirlpool of Cosmic Consciousness, or an eddy in the Infinite Spirit of "The all," which manifest’s LIFE in all of us and all living entities of the lower and higher planes. The "I" within all of us for being apart of the Mind but not separated exists in all of us and is the instrument of the conscious "I." It is Eternal and indestructible and mortality and Immortality is not an issue in existence. There is no force in existence capable of destroying the "I." This "I" or "Higher Self" is the SOUL of the Soul and is holographically connected to The all, giving the powerful "I" the Image of its Creator. All of us are created in the image of GOD without any exceptions or exclusions and none can escape its Omnipresent Infinite Living Mind. The all, of being the Ruler of all fate, or destiny, in all peoples, nations, governments, religious institutions, suns, worlds, galaxies, planes, dimensions, and Universes. All are subject to its Wills and Efforts, and is the Law that keeps all things in relationship to their Source. There is no "existence" outside of The all.

When the particular "I" is consciously recognized within ourselves, the "Will" of "I" is powerfully exerted upon the Subconscious Mind, giving the Subconscious Mind purpose and a sense of direction in Life. The Mind is the instrument by which the conscious "I" pries open the many deep, and hidden secrets of Nature.

To cause advancement, each individual has to initiate the effort of learning the deep secrets of his or her nature, setting aside all the trifling self-condemnation, low self-esteem, and hurts in daily living that are caused by the ignorant brainwashing of societal conditioning and by self-inflicted wounds. All the brainwashing and imagined hurts that we experience in our lives are lessons: obstacles to overcome in learning to recognize the powerful "I."

Only the person who created the negative state of Mind can eliminate it, by making a fundamental change in the way they think and in what is held in their thoughts, and by allowing themselves the Spiritual education that is needed for advancement. There is no red carpet treatment or royal road to accomplishing this. It takes will, desire, diligent effort, and perseverance in cultivating this knowledge. The resulting rewards of this attainment will far exceed the greatest worldly rewards known to humanity.

Most people fail to recognize this reality and they will unconsciously and painfully race through Life from cradle to grave and not even experience a momentary glimpse of this great Truth.

The "I," when recognized in a conscious and deliberate manner, will enable a person to accomplish things in Life that is limited only to his or her own imagination. The accomplishments of educators, scientists, engineers, and leaders, who make up the smaller percentage of the world population, have to a degree recognized this "I" within themselves, mostly in an unconscious manner, nevertheless, many have accomplished successful professional careers. They have accomplished a mental focus on a subject (or object), that escaped the ability of most people, giving them a sense of direction and a meaningful purpose in society. Every human is capable of accomplishing this, if they will only learn to focus and concentrate on one subject at a time.

When the will of "I" is utilized and exerted in an unrecognized and unconscious manner, it becomes misused and abused, bringing misery to the individual and others around him or her. Often, is this reality seen in the work place between people and where persons are in a position of authority, such as supervisors, managers, directors, etc., who bring misery to themselves and to their workers because of the powerful will of the unrecognized "I" or "I am." This aspect will cause a lack of harmony in an individual corporate, or company structure and at times bring chaos to the organization when enough of these types of individuals are employed in one place. Teamwork becomes a very labouring effort as competition between employees becomes its theme causing discontent and thus reducing the efficiency of a corporate environment. There is strength in number, either positive or negative. The realm of Spirit affects all levels of our society.

When the human Mind learns to become focussed on a single object or subject at a time, without wandering, excluding all other objects or subjects waiting in line, the Mind is capable of gathering previously unknown energy and information about a given subject or object. The entire world of that person seems to revolve in such a manner as to bring them information from the unknown regions of the Mind. This is true meditation: to gather information about the unknown while being in a focussed, meditative state of Mind. Each true meditation should bring a person information that will cause his or her Mind to expand with Knowledge, especially when the focal point of concentration is that of Spirit. A person who learns to master this mental art will find that the proper books will manifest in their Life and bring them the missing pieces of Life's puzzle: books that will draw the individual's attention to a given subject, and when the new knowledge is applied, the Mind is allowed to expand further upon the subject, gathering additional information and increasing the knowledge base, causing further advancement for others as well.

The mental art of concentration, employing the exertion of the will and the creation of desire upon a given subject or object, is very rare, because the lazy human mind is content with wandering through Life. The untrained average human Mind wanders constantly and rapidly from one subject or object to another, unable to focus on a single subject because of the constant carousel of external impressions from the surrounding material world. The untrained mind jumps from one subject or object to another like a wild monkey, never able to pause for a moment, to concentrate, and to focalize long enough to allow the Mind to gather information about a given subject or object. This is what thinking is: allowing the Mind to gather information about the unknown. When this is disallowed, a person will wander aimlessly through Life, maintaining an ignorant state of Mind.

Wandering aimlessly through Life is a dangerous mental state to maintain, because of the danger that other minds, with stronger wills and efforts, may manipulate the person who has not taken responsibility for the discipline and control of his or her own mind. Such a person is prone to a wandering mind, having no control over Life's destiny because of the lack of focus and direction in Life. It can be compared with a rudderless ship that is constantly tossed by the rise and fall of the waves of the powerful ocean.

When the Mind becomes trained and learns to concentrate and focalize on a single object or subject at a time, that state of Mind will bring the individual Universal Knowledge and Wisdom. This is how genius is created: by applying the mental art of concentration and focalizing on any worthwhile subject. Famous theories and hypotheses, such as Einstein's theory of relativity, man's ability to fly through the air, space travel, etc., came into being by applying the mental art of concentration. It is an unbending mental aspect of the human mind as it continues to expand and gather ever more information about all known and unknown subjects and objects, constantly causing change and advancement in Spirituality and technology. Unbiased Spiritual Wisdom enables the proper use of technology and is the catalyst for its increasingly rapid advancement. It may be difficult to conceive that Spirituality and technology go hand in hand, but they do; the lack of Spiritual Wisdom will dampen the infinite possibilities because of a limited, diminutive belief system.

Technology ends where the mortal barrier begins; then it becomes a necessity to look into the realm of Spirit in order to continue human evolution. Without the continuous advancement of evolution, this civilization will dissolve and perish off the face of the earth, like the many previous civilizations before us. The mortal barrier begins when science and technology reach the limits of the atomic and sub-atomic particles, and a quantum leap into the realm of the Waveform (Spirit) becomes a necessity in order to continue upward progress.

When a person learns to find a quiet moment in their life in which to become mentally focussed and centred on their profession, job, Spirituality, or whatever the endeavour, they will find the answers and renewed energy to solve problems and create new knowledge and ideas.

When a person (no matter who) learns to focus and concentrate on Spirit, their Mind will gather from their Cosmic Consciousness the deepest secrets of the Universe: how it is composed, by what means, and to what end. But the enigma of the deepest inner secret Nature of The all, or God, will always remain unknowable to us by reason of its Infinite stature, to which no human qualities can, or should, ever be ascribed.

There is more on the subject of the powerful "I" consciousness, the "I Am," the "Higher Self," which is each one of us.

In what could turn out to be one of the most important discoveries in cognitive studies of our decade, it has been found that there are five million magnetite crystals per gram in the human brain. Interestingly, the meninges (the membranes that envelop the brain) have twenty times that number. These 'biomagnetite' crystals demonstrate two interesting features. The first is that their shapes do not occur in nature, suggesting that they were formed in the tissue rather than being absorbed from outside. The other is that these crystals appear to be oriented so as to maximize their magnetic moment, which tends to give groups of these crystals the capacity to act as a system. The brain has also been found to emit very low intensity magnetic fields, a phenomenon that forms the basis of a whole diagnostic field, magnetoencephalography.

Unfortunately for the present discussion, there is at present no way to 'read' any signals that might be carried by the brain's magnetic emissions. We expect that subtle enough means of detecting such signals will eventually appear, as there is compelling evidence that they do exist and constitute a means whereby communication happens between various parts of the brain. This system, we speculate, is what selects which neural areas to recruit, so that States (of consciousness) can elicit the appropriate Phenomenological, behavioural, and affective responses.

While there have been many studies that have examined the effects of magnetic fields on human consciousness, none have yielded findings more germane to understanding the role of neuromagnetic signalling than the work of the Laurentian University Behavioural Neuroscience group. They have pursued a course of experiments that rely on stimulating the brain, especially the temporal lobes, with complex low intensity magnetic signals. It turns out that different signals produce different phenomena.

One example of such phenomena is vestibular sensation, in which one's normal sense of balance is replaced by illusions of motion, similar to the feelings of levitation reported in spiritual literature, as well as the sensation of vertigo. Transient 'visions', whose content includes motifs that also appear in near-death experiences and alien abduction scenarios, have also appeared. Positive affectual paresthesias (electric-like buzzes in the body) have occurred. Another experience that has been elicited neuromagnetically is bursts of emotion, most commonly of fear and joy. Although the content of these experiences can be quite striking, the way they present themselves is much more ordinary. It approximates the 'twilight state' between waking and sleep called hypnogogia. This can produce brief, fleeting visions, or feelings that the bed is moving, rocking, floating, or sinking. Electric-buzz-like somatic sensations and hearing an inner voice call one's name can also occur in hypnogogia. The range of experiences it can produce is quite broad. If all signals produced the same phenomena, it would be difficult to conclude that these magnetic signals approximate the postulated endogenous neuromagnetic signals that create alterations in State. In fact, they produce a wide variety of phenomena. One such signal makes some women apprehensive, but another doesn't. One signal creates such strong vestibular sensations that one can't stand up. Another doesn't.

The temporal lobes are the parts of the brain that mediate states of consciousness. EEG readouts from the temporal lobes are markedly different when a person is asleep, having a hallucinogenic seizure, or on LSD. Seizure disorders confined to the temporal lobes (complex partial seizures) have been characterized as impairments of consciousness. There was also a study in which monkeys were given LSD after having various parts of their brains removed. The monkeys continued to 'trip' no matter what part or parts of their brains were missing, until both temporal lobes were taken out. In these cases, the substance did not seem to affect the monkeys at all. The conclusion seems unavoidable: in addition to all their other functions (aspects of memory, language, music, etc.), the temporal lobes mediate states of consciousness.

If exposing the temporal lobes to magnetic signals can induce alterations in States, then it seems reasonable to suppose that States find part of their neural basis in our postulated neuromagnetic signals, arising out of the temporal lobes.

Hallucinations are known to be the Phenomenological correlates of altered States. The alteration in state of consciousness comes first, following input, and the phenomena, whether hallucinatory or not, follow in response. We can offer two reasons for drawing this conclusion.

The first is one of the results obtained by a study of hallucinations caused by electrical stimulation deep in the brain. In this study, the content of the hallucinations was found to be related to the circumstances in which they occurred, so that the same stimulations could produce different hallucinations. The conclusion was that the stimulation induced altered states, and the states facilitated the hallucinations.

The second has to do with the relative speeds of the operant neural processes.

Neurochemical response times are limited by the time required for transmission across the synaptic gap, 0.5 to 2 msec.

By comparison, the propagation of action potentials is much faster. For example, an action potential can travel a full centimetre (many orders of magnitude farther than the width of a synaptic gap) in about 1.3 msec. The brain's electrical responses, therefore, happen orders of magnitude more quickly than its chemical ones.

Magnetic signals are propagated with greater speeds than those of action potentials moving through neurons. Contemporary physics requires that magnetic signals be propagated at a significant fraction of the velocity of light, so that the entire brain could be exposed to a neuromagnetic signal in vanishingly small amounts of time.
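The three timescales just quoted can be compared with a little arithmetic. The sketch below is illustrative only: the ~10 cm whole-brain path length and propagation at light speed are assumptions of ours, while the synaptic and action-potential figures come from the text.

```python
# Rough timescale comparison for the three signalling modes discussed above.
# Assumptions: a ~10 cm path across the brain, magnetic propagation at the
# speed of light; the synaptic delay and action-potential speed are from
# the text (0.5-2 ms per synapse, ~1.3 ms per centimetre).
synaptic_delay_s = (0.5e-3, 2e-3)   # chemical: per synaptic crossing
ap_s_per_cm = 1.3e-3                # electrical: action potential, per cm
c_m_per_s = 3.0e8                   # electromagnetic: speed of light

brain_span_m = 0.10                 # assumed whole-brain path, ~10 cm
em_time_s = brain_span_m / c_m_per_s
ap_time_s = (brain_span_m * 100) * ap_s_per_cm  # 10 cm at 1.3 ms/cm

print(f"electromagnetic, 10 cm: {em_time_s * 1e9:.2f} ns")
print(f"action potential, 10 cm: {ap_time_s * 1e3:.0f} ms")
```

On these assumed numbers the magnetic signal crosses the brain in well under a nanosecond, some seven orders of magnitude faster than an action potential covering the same distance, which is what the text means by "vanishingly small amounts of time."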

It seems possible that neuromagnetic signals arise from structures that mediate our various sensory and cognitive modalities. These signals then recruit those functions (primarily in the limbic system) that adjust the changes in state. These temporal lobe signals, we speculate, then initiate signals to structures that mediate modalities that are enhanced or suppressed as the state changes.

The problem of defining the phrase ‘state of consciousness' has plagued the field of cognitive studies for some time. Without going into the history of studies in the area, we would like to outline a hypothesis concerning states of consciousness in which the management of states gives rise to the phenomenon of consciousness.

There are theories that suggest that cognitive modalities (such as memory, affect, ideation and attention) may be seen as analogs to sensory modalities.

We hypothesize that the entire set of modalities, cognitive and sensory, may be heuristically compared with a sound mixing board. In this metaphor, all the various modalities are represented as vertical rheostats, with enhanced functioning increasing towards the top and suppressed functioning increasing toward the bottom. Further, the act of becoming conscious of phenomena in any given modality involves the adjustment of that modality's ‘rheostat'.
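The mixing-board metaphor can be made concrete as a small data structure. This is only a toy sketch: the modality names, the 0-to-1 scale, and the `make_state` helper are all invented here for illustration, not taken from the source.

```python
# Toy model of the 'mixing board': each modality is a rheostat in [0.0, 1.0].
# The names and the 0-1 scale are illustrative assumptions, not the authors'.
MODALITIES = ["vision", "audition", "smell", "memory",
              "affect", "ideation", "attention", "introspection"]

def make_state(**levels):
    """A 'state of consciousness' as one full setting of the board.

    Unspecified rheostats sit at a neutral 0.5; given values are clamped
    into the rheostat's physical range.
    """
    state = {m: 0.5 for m in MODALITIES}
    for modality, level in levels.items():
        if modality not in state:
            raise KeyError(f"unknown modality: {modality}")
        state[modality] = min(1.0, max(0.0, level))
    return state

# e.g. a 'predator sighted' state of the kind sketched later in the text:
# vision and attention pushed up, smell and introspection pushed down.
alarm = make_state(vision=0.9, attention=0.95, smell=0.1, introspection=0.05)
```

On this picture an alteration in state is just a move from one full assignment of rheostat levels to another, which is the reading the following paragraphs rely on.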

Sensory input from any modality can alter one's state. The sight of a sexy person, the smell of fire, the unexpected sensation of movement against one's skin (there's a bug on me!), a sudden bitter taste experienced while eating ice cream, or the sound of one's child screaming in pain; all of these phenomena can induce alterations in State. Although the phrase ‘altered states' has come to be associated with dramatic, otherworldly experiences, alterations in state, as we will be using the phrase, refer primarily to those alterations that take us from one normal state to another.

Alterations in state can create changes within the various sensory and cognitive modalities. An increase in arousal following the sight of a predator will typically suppress the sense of smell (very few are able to stop and ‘smell the roses' while a jaguar is chasing them), suppress introspection (nobody wants to know ‘who I really am?' while an anaconda prepares to wrap itself around them), suppress sexual arousal, and alter vision so that the centre of the visual field is better attended than one's peripheral vision, allowing one to see the predator's movements better. The sight of a predator will also introduce a host of other changes, all of which reflect the State.

In the Hindu epic the Mahabharata, there is a dialogue between the legendary warrior Arjuna and his archery teacher. Arjuna was told by his teacher to train his bow on a straw bird used as a target. Arjuna was asked to describe the bird. He answered, ‘I can't'. ‘Why not?', asked his teacher. ‘I can only see its eye', he answered. ‘Release your arrow', commanded the teacher. Arjuna did, and hit the target in the eye. ‘I'll make you the finest archer in the world', said his teacher.

In this story, attention to peripheral vision had ceased so completely that only the very centre of his visual field received any. Our model of states would be constrained to interpret Arjuna's (mythical) feat as a behaviour specific to a state. The unique combination of sensory enhancement, heightened attention, and sufficient suppression of emotion, ideation, and introspection that supports such an act suggests specific settings for our metaphorical rheostats.

Changes in state make changes in sensory and cognitive modalities, and they in turn, trigger changes in state. We can reasonably conclude that there is a feedback mechanism whereby each modality is connected to the others.

States also create tendencies to behave in specific ways in specific circumstances, maximizing the adaptivity of behaviour in those circumstances; behaviour that tends to meet our needs and respond to threats to our ability to meet those needs.

Each circumstance adjusts each modality's setting, tending to maximize that modality's contribution to adaptive behaviour in that circumstance. The mechanism may function by using both learned and inherited default settings for each circumstance and then repeating those settings in similar circumstances later on. Sadly, this often makes states maladaptive. Habitual alterations in State in response to threats from an abusive parent, for example, can make for self-defeating responses to stress in other circumstances, where these same responses are no longer advantageous.

Because different States are going to be dominated by specific combinations of modalities, it makes sense that a possible strategy for aligning the rheostats (making alterations in state) is to move them in tandem, so that after a person associates the sound of a scream with the concept of a threat, that sound, with its unique auditory signature, will cause all the affected modalities (most likely most of them, in most cases) to take the positions they had at the time the association was made.
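The 'move them in tandem' strategy can be sketched as a lookup from a learned trigger to a stored board setting. Again, this is a hypothetical illustration: the association table, its contents, and the `entrain` function are assumptions of ours, not the authors' mechanism.

```python
# Sketch of entrainment: a learned association stores the positions the
# rheostats held when the association was made; re-hearing the trigger
# snaps all affected modalities back to those positions in tandem.
# (The table's contents are invented for illustration.)
associations = {
    ("audition", "scream"): {"affect": 0.95, "attention": 0.9,
                             "smell": 0.1, "introspection": 0.05},
}

def entrain(state, modality, percept):
    """Apply any stored board setting triggered by a percept in one modality."""
    stored = associations.get((modality, percept))
    if stored is not None:
        state.update(stored)        # all affected rheostats move together
    return state

state = {"affect": 0.4, "attention": 0.5, "smell": 0.5, "introspection": 0.6}
state = entrain(state, "audition", "scream")
```

A percept with no stored association leaves the board untouched, which matches the text's point that only kindled, previously associated triggers realign the whole set of modalities at once.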

When we say changing states, we are referring to much more than the dramatic states created by LSD, isolation tanks, REM sleep, etc. We are also including normal states of consciousness, which we can imagine as kindled ‘default settings' of our various modalities. When any one of these modalities returns to one of its default settings, it will, we conjecture, tend to entrain all the other modalities to the settings they habitually take in that state.

To accomplish this, we must suppose that each modality is connected to every other one. A sight, a smell, a sound, or a tactile feeling can all inspire fear. Fear can motivate ideation. Ideation can inspire arousal. Changes in affect can initiate alterations in introspection. Introspection alters affect. State-specific settings of individual modalities could initiate settings for other modalities.

Our main hypothesis here is that all these intermodal connections, operating as a single system, have a single Phenomenological correlate: the phenomenon of subjective awareness.

The structures associated with that modality then broadcast neuromagnetic signals to the temporal lobes, which in turn produce signals that recruit various structures throughout the brain: specifically, those structures whose associated modalities' values must be changed in order to accomplish the appropriate alteration in state. In the second section, we raised the possibility that states are settings for the variable aspects of cognitive and sensory modalities. We also offered the suggestion that consciousness is the Phenomenological correlate of the feedback between the management of states, on the one hand, and the various cognitive and sensory modalities, on the other. If all of these conclusions were to stand up to testing, we could conclude that the content of the brain's hypothesized endogenous magnetic signals might consist of a set of values for adjusting each sensory and cognitive rheostat. We might also conclude that neuromagnetic signalling is the context in which consciousness occurs.

The specific mechanism whereby subjectivity is generated is out of the reach of this work. Nevertheless, the fact that multiple modalities are experienced simultaneously, together with our model's implication that they are ‘reset' all at once with each alteration in state, suggests that our postulated neuromagnetic signals may come in pairs, with the two signals running slightly out of phase with one another. In this way neuromagnetic signals, like the two laser beams used to produce a hologram, might be able to store information in a similar way, as has already been explored by Karl Pribram. The speed at which neuromagnetic signals propagate, together with their capacity to recruit and alter multiple modalities, suggests that the underlying mechanism has been selected to make instant choices about which specific portions of the brain to recruit in order to facilitate the behaviours acted out of the State, and to do so quickly.

In this way, the onset time for the initiation of States is kept to a minimum, and with it, the time needed to make the initial cognitive response to stimuli. When it comes to responding to threats, or sighting prey, the evolutionary advantages are obvious.

Higher-order theories of consciousness try to explain the distinctive properties of consciousness in terms of some relation obtaining between the conscious state in question and a higher-order representation of some sort (either a higher-order experience of that state, or a higher-order thought or belief about it). The most challenging properties to explain are those involved in phenomenal consciousness - the sort of state that has a subjective dimension, which has ‘feel’, or which it is like something to undergo.

One of the advances made in recent years has been in distinguishing between different questions concerning consciousness. Not everyone agrees on quite which distinctions need to be drawn. But all agree that we should distinguish creature consciousness from mental-state consciousness. It is one thing to say of an individual or organism that it is conscious (either in general or of something in particular). It is quite another thing to say of one of the mental states of a creature that it is conscious.

It is also agreed that within creature-consciousness itself we should distinguish between intransitive and transitive variants. To say of an organism that it is conscious simpliciter (intransitive) is to say just that it is awake, as opposed to asleep or comatose. There do not appear to be any deep philosophical difficulties lurking here (or at least, they are not difficulties specific to the topic of consciousness, as opposed to mentality in general). But to say of an organism that it is conscious of such-and-such (transitive) is normally to say at least that it is perceiving such-and-such, or aware of such-and-such. So to say of the mouse that it is conscious of the cat outside its hole, in explaining why it does not come out, is to say that it perceives the cat's presence. To provide an account of transitive creature-consciousness would thus be to attempt a theory of perception.

There is a choice to be made concerning transitive creature-consciousness, failure to notice which may be a potential source of confusion. For we have to decide whether the perceptual state in virtue of which an organism may be said to be transitively-conscious of something must itself be a conscious one (state-conscious). If we say ‘Yes' then we will need to know more about the mouse than merely that it perceives the cat if we are to be assured that it is conscious of the cat - we will need to establish that its percept of the cat is itself conscious. If we say ‘No', on the other hand, then the mouse's perception of the cat will be sufficient for the mouse to count as conscious of the cat, but we may have to say that although it is conscious of the cat, the mental state in virtue of which it is so conscious is not itself a conscious one! It may be best to by-pass any danger of confusion here by avoiding the language of transitive-creature-consciousness altogether. Nothing of importance would be lost to us by doing this. We can say simply that organism O observes or perceives x. We can then assert explicitly, if we wish, that its percept is or is not conscious.

Turning now to the notion of mental-state consciousness, the major distinction here is between phenomenal consciousness, on the one hand - which is a property of states that it is like something to be in, which have a distinctive ‘feel’ (Nagel, 1974) - and various functionally-definable forms of access consciousness, on the other. Most theorists believe that there are mental states - such as occurrent thoughts or judgments - which are access-conscious (in whatever is the correct functionally-definable sense), but which are not phenomenally conscious. In contrast, there is considerable dispute as to whether mental states can be phenomenally-conscious without also being conscious in the functionally-definable sense - and even more dispute about whether phenomenal consciousness can be reductively explained in functional and/or representational terms.

It seems plain that there is nothing deeply problematic about functionally-definable notions of mental-state consciousness, from a naturalistic perspective. For mental functions and mental representations are the staple fare of naturalistic accounts of the mind. But this leaves plenty of room for dispute about the form that the correct functional account should take. Some claim that for a state to be conscious in the relevant sense is for it to be poised to have an impact on the organism's decision-making processes, perhaps also with the additional requirement that those processes should be distinctively rational ones. Others think that the relevant requirement for access-consciousness is that the state should be suitably related to higher-order representations - experiences and/or beliefs - of that very state.

What is often thought to be naturalistically problematic, in contrast, is phenomenal consciousness. And what is really and deeply controversial is whether phenomenal consciousness can be explained in terms of some or other functionally-definable notion. Cognitive (or representational) theories maintain that it can. Higher-order cognitive theories maintain that phenomenal consciousness can be reductively explained in terms of representations (either experiences or beliefs) which are higher-order. Such theories concern us here.

Higher-order theories, like cognitive/representational theories in general, assume that the right level at which to seek an explanation of phenomenal consciousness is a cognitive one, providing an explanation in terms of some combination of causal role and intentional content. All such theories claim that phenomenal consciousness consists in a certain kind of intentional or representational content (analog or ‘fine-grained’ in comparison with any concepts we may possess) figuring in a certain distinctive position in the causal architecture of the mind. They must therefore maintain that these latter sorts of mental property do not already implicate or presuppose phenomenal consciousness. In fact, all cognitive accounts are united in rejecting the thesis that the very properties of mind or mentality already presuppose phenomenal consciousness, as proposed by Searle (1992, 1997) for example.

The major divide among representational theories of phenomenal consciousness in general is between accounts that are provided in purely first-order terms and those that implicate higher-order representations of one sort or another (see below). Higher-order theorists will allow that first-order accounts - of the sort defended by Dretske (1995) and Tye (1995), for example - can already make some progress with the problem of consciousness. According to first-order views, phenomenal consciousness consists in analog or fine-grained contents that are available to the first-order processes that guide thought and action. So a phenomenally-conscious percept of red, for example, consists in a state with the analog content red which is tokened in such a way as to feed into thoughts about red, or into actions that are in one way or another guided by redness. Now, the point to note in favour of such an account is that it can explain the natural temptation to think that phenomenal consciousness is in some sense ineffable, or indescribable. This will be because such states have fine-grained contents that can slip through the mesh of any conceptual net. We can always distinguish many more shades of red than we have concepts for, or could describe in language (other than indexically - e.g., ‘That shade’).

The main motivation behind higher-order theories of consciousness, in contrast, derives from the belief that all (or at least most) mental-state types admit of both conscious and non-conscious varieties. Almost everyone now accepts, for example (post-Freud), that beliefs and desires can be activated non-consciously. (Think, here, of the way in which problems can apparently become resolved during sleep, or while one's attention is directed to other tasks. Notice, too, that appeal to non-conscious intentional states is now routine in cognitive science.) And if we then ask what makes the difference between a conscious and a non-conscious mental state, one natural answer is that conscious states are states of which we are aware. And if this awareness is thought to be a form of creature-consciousness, then the view becomes that conscious states are states of which the subject is aware, or states of which the subject is creature-conscious. That is to say, they are states that are the objects of some sort of higher-order representation - whether a higher-order perception or experience, or a higher-order belief or thought.

One crucial question, then, is whether perceptual states as well as beliefs admit of both conscious and non-conscious varieties. Can there be, for example, such a thing as a non-conscious visual perceptual state? Higher-order theorists are united in thinking that there can. Armstrong (1968) uses the example of absent-minded driving to make the point. Most of us at some time have had the rather unnerving experience of ‘coming to’ after having been driving on ‘automatic pilot’ while our attention was directed elsewhere - perhaps having been day-dreaming or engaged in intense conversation with a passenger. We were apparently not consciously aware of any of the route we had recently taken, nor of any of the obstacles we had avoided on the way. Yet we must surely have been seeing, or we would have crashed the car. Others have used the example of blind-sight. This is a condition in which subjects have had a portion of their primary visual cortex destroyed, and apparently become blind in a region of their visual field as a result. But it has now been known for some time that if subjects are asked to guess at the properties of their ‘blind’ field (e.g., whether it contains a horizontal or vertical grating, or whether it contains an ‘X’ or an ‘O’), they prove remarkably accurate. Subjects can also reach out and grasp objects in their ‘blind’ field with something like 80% or more of normal accuracy, and can catch a ball thrown from their ‘blind’ side, all without conscious awareness.

More recently, a powerful case for the existence of non-conscious visual experience has been generated by the two-systems theory of vision proposed and defended by Milner and Goodale (1995). They review a wide variety of kinds of neurological and neuro-psychological evidence for the substantial independence of two distinct visual systems, instantiated in the temporal and parietal lobes respectively. They conclude that the parietal lobes provide a set of specialized semi-independent modules for the on-line visual control of action, whereas the temporal lobes are primarily concerned with subsequent off-line functioning, such as visual learning and object recognition. And only the experiences generated by the temporal-lobe system are phenomenally conscious, on their account.

(Note that this is not the familiar distinction between what and where visual systems, but is rather a successor to it. For the temporal-lobe system is supposed to have access both to property information and to spatial information. Instead, it is a distinction between a combined what-where system located in the temporal lobes and a how-to or action-guiding system located in the parietal lobes.)

To get the flavour of Milner and Goodale's hypothesis, consider just one strand from the wealth of evidence they provide. This is a neurological syndrome called visual form agnosia, which results from damage localized to both temporal lobes, leaving primary visual cortex and the parietal lobes intact. (Visual form agnosia is normally caused by carbon monoxide poisoning, for reasons that are little understood.) Such patients cannot recognize objects or shapes, and may be capable of little conscious visual experience; still, their sensorimotor abilities remain largely intact.

One particular patient has now been examined in considerable detail. While D.F. is severely agnosic, she is not completely lacking in conscious visual experience. Her capacities to perceive colours and textures are almost completely preserved. (Why just these sub-modules in her temporal cortex should have been spared is not known.) As a result, she can sometimes guess the identity of a presented object - recognizing a banana, say, from its yellow colour and the distinctive texture of its surface. But she is unable to perceive the shape of the banana (whether straight or curved, say), nor its orientation (upright or horizontal). Yet many of her sensorimotor abilities are close to normal - she can reach out and grasp the banana, orienting her hand and wrist appropriately for its position and orientation, and using a normal and appropriate finger grip. Under experimental conditions it turns out that although D.F. is at chance in identifying the orientation of a broad line or slot, she is almost normal when posting a letter through a similarly-shaped slot oriented at random angles. In the same way, although she is at chance when trying to discriminate between rectangular blocks of very different sizes, her reaching and grasping behaviours when asked to pick up such a block are virtually indistinguishable from those of normal controls. It is very hard to make sense of this data without supposing that the sensorimotor perceptual system is functionally and anatomically distinct from the object-recognition/conscious system.

There is a powerful case, then, for thinking that there are non-conscious as well as conscious visual percepts. While the perceptions that ground your thoughts when you plan in relation to the perceived environment (‘I'll pick up that one’) may be conscious, and while you will continue to enjoy conscious perceptions of what you are doing while you act, the perceptual states that actually guide the details of your movements when you reach out and grab the object will not be conscious ones, if Milner and Goodale (1995) are correct.

But what implication does this have for phenomenal consciousness? Must these non-conscious percepts also be lacking in phenomenal properties? Most people think so. While it may be possible to get oneself to believe that the perceptions of the absent-minded car driver can remain phenomenally conscious (perhaps lying outside of the focus of attention, or being instantly forgotten), it is very hard to believe that either blind-sight percepts or D.F.'s sensorimotor perceptual states might be phenomenally conscious ones. For these perceptions are ones to which the subjects of those states are blind, and of which they cannot be aware. And the question, then, is what makes the relevant difference? What is it about a conscious perception that renders it phenomenal, which a blind-sight perceptual state would correspondingly lack? Higher-order theorists are united in thinking that the relevant difference consists in the presence of something higher-order in the first case that is absent in the second. The core intuition is that a phenomenally conscious state will be a state of which the subject is aware.

What options does a first-order theorist have to resist this conclusion? One is to deny the data: it can be said that the non-conscious states in question lack the kind of fineness of grain and richness of content necessary to count as genuinely perceptual states. On this view, the contrast discussed above isn't really a difference between conscious and non-conscious perceptions, but rather between conscious perceptions, on the one hand, and non-conscious belief-like states, on the other. Another option is to accept the distinction between conscious and non-conscious perceptions, and then to explain that distinction in first-order terms. It might be said, for example, that conscious perceptions are those that are available to belief and thought, whereas non-conscious ones are those that are available to guide movement. A final option is to bite the bullet, and insist that blind-sight and sensorimotor perceptual states are indeed phenomenally conscious while not being access-conscious. On this account, blind-sight percepts are phenomenally conscious states to which the subjects of those states are blind. Higher-order theorists will argue, of course, that none of these alternatives is acceptable.

In general, then, higher-order theories of phenomenal consciousness claim the following: A phenomenally conscious mental state is a mental state (of a certain sort - see below) which either is, or is disposed to be, the object of a higher-order representation of a certain sort. Higher-order theorists will allow, of course, that mental states can be targets of higher-order representation without being phenomenally conscious. For example, a belief can give rise to a higher-order belief without thereby being phenomenally conscious. What is distinctive of phenomenal consciousness is that the states in question should be perceptual or quasi-perceptual ones (e.g., visual images as well as visual percepts). Moreover, most cognitive/representational theorists will maintain that these states must possess a certain kind of analog (fine-grained) or non-conceptual intentional content. What makes perceptual states, mental images, bodily sensations, and emotions phenomenally conscious, on this approach, is that they are conscious states with analog or non-conceptual contents. So putting these points together, we get the view that phenomenally conscious states are those states that possess fine-grained intentional contents of which the subject is aware, being the target or potential target of some sort of higher-order representation.

There are then two main dimensions along which higher-order theorists disagree among themselves. One relates to whether the higher-order states in question are belief-like or perception-like. Those taking the former option are higher-order thought theorists, and those taking the latter are higher-order experience or ‘inner-sense’ theorists. The other disagreement is internal to higher-order thought approaches, and concerns whether the relevant relation between the first-order state and the higher-order thought is one of availability or not. That is, the question is whether a state is conscious by virtue of being disposed to give rise to a higher-order thought, or rather by virtue of being the actual target of such a thought. These are the options that will now concern us.

According to this view, humans not only have first-order non-conceptual and/or analog perceptions of states of their environments and bodies, they also have second-order non-conceptual and/or analog perceptions of their first-order states of perception. Humans (and perhaps other animals) not only have sense-organs that scan the environment/body to produce fine-grained representations that can then serve to ground thoughts and action-planning, but they also have inner senses, charged with scanning the outputs of the first-order senses (i.e., perceptual experiences) to produce equally fine-grained, but higher-order, representations of those outputs (i.e., to produce higher-order experiences). A version of this view was first proposed by the British Empiricist philosopher John Locke (1690). In our own time it has been defended especially by Armstrong.

(A terminological point: this view is sometimes called a ‘higher-order experience (HOE) theory’ of phenomenal consciousness; but the term ‘inner-sense theory’ is more accurate. For as we will see in section 5, there are versions of the higher-order thought (HOT) approach that also implicate higher-order perceptions, but without needing to appeal to any organs of inner sense.)

(Another terminological point: ‘inner-sense theory’ should more strictly be called ‘higher-order-sense theory’, since we of course have senses that are physically ‘inner’, such as pain-perception and internal touch-perception, which are not intended to fall under its scope. For these are first-order senses on a par with vision and hearing, differing only in that their purpose is to detect properties of the body rather than of the external world. According to the sort of higher-order theory under discussion in this section, these senses, too, will need to have their outputs scanned to produce higher-order analog contents in order for those outputs to become phenomenally conscious. In what follows, however, the term ‘inner sense’ will be used to mean, more strictly, ‘higher-order sense’, since this terminology is now pretty firmly established.)

A phenomenally conscious mental state is a state with analog/non-conceptual intentional content, which is in turn the target of a higher-order analog/non-conceptual intentional state, via the operations of a faculty of ‘inner sense’.

On this account, the difference between a phenomenally conscious percept of red and the sort of non-conscious percepts of red that guide the guesses of a blind-sighter and the activity of the sensorimotor system is as follows. The former is scanned by our inner senses to produce a higher-order analog state with the content experience of red or seems red, whereas the latter states are not - they remain merely first-order states with the analog content red. In so remaining, they lack any dimension of seeming or subjectivity. According to inner-sense theory, it is the higher-order experiences produced by the operations of our inner senses that render some mental states with analog contents, but not others, available to their subjects. And these same higher-order contents constitute the subjective dimension or ‘feel’ of the former set of states, thus rendering them phenomenally conscious.

One of the main advantages of inner-sense theory is that it can explain how it is possible for us to acquire purely recognitional concepts of experience. For if we possess higher-order perceptual contents, then it should be possible for us to learn to recognize the occurrence of our own perceptual states immediately - or ‘straight off’ - grounded in those higher-order analog contents. And this should be possible without those recognitional concepts thereby having any conceptual connections with our beliefs about the nature or content of the states recognized, nor with any of our surrounding mental concepts. This is then how inner-sense theory will claim to explain the familiar philosophical thought-experiments concerning one's own experiences, which are supposed to cause such problems for physicalist/naturalistic accounts of the mind.

For example, I can think, ‘This type of experience [as of red] might have occurred in me, or might normally occur in others, in the absence of any of its actual causes and effects.’ So on any view of intentional content that sees content as tied to normal causes (i.e., to information carried) and/or to normal effects (i.e., to teleological or inferential role), this type of experience might occur without representing red. In the same sort of way, I will be able to think, ‘This type of experience [pain] might have occurred in me, or might occur in others, in the absence of any of the usual causes and effects of pains. There could be someone in whom these experiences occur but who isn't bothered by them, and in whom those experiences are never caused by tissue damage or other forms of bodily insult. And conversely, there could be someone who behaves and acts just as I do when in pain, and in response to the same physical causes, but who is never subject to this type of experience.’ If we possess purely recognitional concepts of experience, grounded in higher-order percepts of those experiences, then the thinkability of such thoughts is both readily explicable, and apparently unthreatening to a naturalistic approach to the mind.

Inner-sense theory does face a number of difficulties, however. If inner-sense theory were true, then why is there no phenomenology distinctive of inner sense, in the way that there is a phenomenology associated with each outer sense? Since each of the outer senses gives rise to a distinctive set of phenomenological properties, you might expect that if there were such a thing as inner sense, then there would also be a phenomenology distinctive of its operation. But there doesn't appear to be any.

This point turns on the so-called ‘transparency’ of our perceptual experience (Harman, 1990). Concentrate as hard as you like on your ‘outer’ (first-order) experiences - you will not find any further phenomenological properties arising out of the attention you pay to them, beyond those already belonging to the contents of the experiences themselves. Paying close attention to your experience of the colour of the red rose, for example, just produces attention to the redness - a property of the rose. Put like this, however, the objection just seems to beg the question in favour of first-order theories of phenomenal consciousness. It assumes that first-order - ‘outer’ - perceptions already have a phenomenology independently of their targeting by inner sense. But this is just what an inner-sense theorist will deny. And then in order to explain the absence of any kind of higher-order phenomenology, an inner-sense theorist only needs to maintain that our higher-order experiences are never themselves targeted by an inner-sense organ that might produce third-order analog representations of them in turn.

Another objection to inner-sense theory is as follows: if there really were an organ of inner sense, then it ought to be possible for it to malfunction, just as our first-order senses sometimes do. And in that case, it ought to be possible for someone to have a first-order percept with the analog content red causing a higher-order percept with the analog content seems-orange. Someone in this situation would be disposed to judge, ‘It is red’, immediately and non-inferentially (i.e., not influenced by beliefs about the object's normal colour or their own physical state). But at the same time they would be disposed to judge, ‘It seems orange’. Not only does this sort of thing never apparently occur, but the idea that it might do so conflicts with a powerful intuition. This is that our awareness of our own experiences is immediate, in such a way that to believe that you are undergoing an experience of a certain sort is to be undergoing an experience of that sort. But if inner-sense theory is correct, then it ought to be possible for someone to believe that they are in a state of seeming-orange when they are actually in a state of seeming-red.

A different sort of objection to inner-sense theory is developed by Carruthers (2000). It starts from the fact that the internal monitors postulated by such theories would need to have considerable computational complexity in order to generate the requisite higher-order experiences. In order to perceive an experience, the organism would need to have mechanisms to generate a set of internal representations with an analog or non-conceptual content representing the content of that experience, in all its richness and fine-grained detail. And notice that any inner scanner would have to be a physical device which depends upon the detection of those physical events in the brain that are the outputs of the various sensory systems (just as the visual system is a physical device that depends upon detection of physical properties of surfaces via the reflection of light). For it is hard to see how any inner scanner could detect the presence of an experience as experience. Rather, it would have to detect the physical realizations of experiences in the brain, and construct the requisite higher-order representation of the experiences that those physical events realize, on the basis of that physical-information input. This makes it seem inevitable that the scanning device that supposedly generates higher-order experiences of our first-order visual experience would have to be almost as sophisticated and complex as the visual system itself.

Now the problem that arises here is this. Given this complexity in the operations of our organs of inner sense, there had better be some plausible story to tell about the evolutionary pressures that led to their construction. For natural selection is the only theory that can explain the existence of organized functional complexity in nature. But there would seem to be no such stories on the market. The most plausible suggestion is that inner sense might have evolved to subserve our capacity to think about the mental states of conspecifics, thus enabling us to predict their actions and manipulate their responses. (This is the so-called ‘Machiavellian hypothesis’ to explain the evolution of intelligence in the great-ape lineage.) But this suggestion presupposes that the organism must already have some capacity for higher-order thought, since it is such thoughts that inner sense is supposed to subserve. In which case some higher-order thought theories can claim all of the advantages of inner-sense theory as an explanation of phenomenal consciousness, but without the need to postulate any ‘inner scanners’. At any rate, the ‘computational complexity objection’ to inner-sense theories remains as a challenge to be answered.

We could derive a scientific understanding of ideas with the aid of precise deduction, just as Descartes claimed that we could map the contours of physical reality within a three-dimensional coordinate system as an organized, integrated whole made up of diverse but interrelated and interdependent parts. Following the publication of Isaac Newton's Principia Mathematica in 1687, reductionism and mathematical modeling became the most powerful tools of modern science. The dream that we could know and master the entire physical world through the extension and refinement of mathematical theory became the central feature and guiding principle of scientific knowledge.

The radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanism, without any concern about its spiritual dimensions or ontological foundations. Meanwhile, attempts to rationalize, reconcile or eliminate Descartes's division between mind and matter became one of the most central features of Western intellectual life.

The nineteenth-century Romantics in Germany, England and the United States revived Jean-Jacques Rousseau's (1712-1778) attempt to posit a ground for human consciousness by reifying nature in a different form. Johann Wolfgang von Goethe (1749-1832) and Friedrich Wilhelm von Schelling (1775-1854) proposed a natural philosophy premised on ontological monism (the idea that all phenomena, including mind and matter, are manifestations of an inseparable spiritual Oneness) and argued for the reconciliation of God, man, and nature - of mind and matter - with an appeal to sentiment, mystical awareness, and quasi-scientific attempts of the kind Goethe pursued. In Goethe's effort to wed mind and matter, nature became a mindful agency that loves illusion, shrouds man in mist, presses him to her heart, and punishes those who fail to see the light. The principal philosopher of German Romanticism, Schelling, articulated a version of cosmic unity, arguing that scientific facts were at best partial truths and that the mindful creative spirit that unites mind and matter is progressively moving toward self-realization and undivided wholeness.

The British version of Romanticism, articulated by figures like William Wordsworth and Samuel Taylor Coleridge (1772-1834), placed more emphasis on the primacy of the imagination and the importance of rebellion and heroic vision as the grounds for freedom. As Wordsworth put it, communion with the incommunicable powers of the immortal sea empowers the mind to release itself from all the material constraints of the laws of nature. The founders of American transcendentalism, Ralph Waldo Emerson and Henry David Thoreau, articulated a version of Romanticism commensurate with the ideals of American democracy.

The Americans envisioned a unified spiritual reality that manifested itself as a personal ethos sanctioning radical individualism and breeding aversion to the emergent materialism of the Jacksonian era. They were also more inclined than their European counterparts, as the examples of Thoreau and Whitman attest, to embrace scientific descriptions of nature. However, the Americans also dissolved the distinction between mind and matter with an appeal to ontological monism, alleging that mind could free itself from all the constraints of matter in states of mystical awareness.

Since scientists during the nineteenth century were engrossed with uncovering the workings of external reality and knew virtually nothing about the physical substrates of human consciousness, the business of examining the dynamic functioning and structural foundations of the mind became the province of social scientists and humanists. Adolphe Quételet proposed a social physics that could serve as the basis for a new discipline called sociology, and his contemporary Auguste Comte concluded that a true scientific understanding of social reality was inevitable. Mind, in the view of these figures, was a separate and distinct mechanism subject to the lawful workings of a mechanical social reality.

More formal European philosophers, such as Immanuel Kant (1724-1804), sought to reconcile representations of external reality in mind with the motions of matter based on the dictates of pure reason. This impulse was also apparent in the utilitarian ethics of Jeremy Bentham and John Stuart Mill, in the historical materialism of Karl Marx and Friedrich Engels, and in the pragmatism of Charles Sanders Peirce, William James and John Dewey. These thinkers were painfully aware, however, of the inability of reason to posit a self-consistent basis for bridging the gap between mind and matter, and each was obliged to conclude that the realm of the mental exists only in the subjective reality of the individual.

The figure most responsible for infusing our understanding of Cartesian dualism with emotional content was the death-of-God theologian Friedrich Nietzsche (1844-1900). After declaring that God and divine will do not exist, Nietzsche reified the existence of consciousness in the domain of subjectivity as the ground for individual will, and summarily dismissed all previous philosophical attempts to articulate the will to truth. The problem, claimed Nietzsche, is that earlier versions of the will to truth disguised the fact that all alleged truths were arbitrarily created in the subjective reality of the individual and are expressions or manifestations of individual will.

In Nietzsche's view, the separation between mind and matter is more absolute and total than had previously been imagined. Based on the assumption that there is no real or necessary correspondence between linguistic constructions of reality in human subjectivity and external reality, he declared that we are all locked in a prison house of language. The prison as he conceived it, however, was also a space where the philosopher can examine the innermost desires of his nature and articulate a new message of individual existence founded on will.

Those who fail to enact their existence in this space, said Nietzsche, are enticed into sacrificing their individuality on the nonexistent altars of religious beliefs and democratic or socialist ideals, and they become, therefore, members of the anonymous and docile crowd. Nietzsche also invalidated science in the examination of human subjectivity. Science, he said, not only exalts natural phenomena and favors reductionistic examinations of phenomena at the expense of mind; it also seeks to erase the separateness and uniqueness of mind with mechanistic descriptions that disallow any basis for the free exercise of individual will.

What is not widely known, however, is that Nietzsche and other seminal figures in the history of philosophical postmodernism were very much aware of an epistemological crisis in scientific thought that arose much earlier than that occasioned by wave-particle dualism in quantum physics. The crisis resulted from attempts during the last three decades of the nineteenth century to develop a logically self-consistent definition of number and arithmetic that would serve to reinforce the classical view of correspondence between mathematical theory and physical reality.

Nietzsche appealed to this crisis in an effort to reinforce his assumption that, in the absence of ontology, all knowledge (including scientific knowledge) was grounded only in human consciousness. As the crisis continued, a philosopher trained in higher mathematics and physics, Edmund Husserl, attempted to preserve the classical view of correspondence between mathematical theory and physical reality by deriving the foundations of logic and number from consciousness in ways that would preserve self-consistency and rigor. This effort to ground mathematical physics in human consciousness, or in human subjective reality, was no trivial matter. It represented a direct link between these early challenges to the efficacy of classical epistemology and the tradition in philosophical thought that culminated in philosophical postmodernism.


Nietzsche's emotionally charged defense of intellectual freedom, and his radical empowerment of mind as the maker and transformer of the collective fictions that shape human reality in a soulless mechanistic universe, proved terribly influential on twentieth-century thought. Furthermore, Nietzsche sought to reinforce his view of the subjective character of scientific knowledge by appealing to an epistemological crisis over the foundations of logic and arithmetic that arose during the last three decades of the nineteenth century. Through a curious course of events, the attempt by Edmund Husserl (1859-1938), a German mathematician and a principal founder of phenomenology, to resolve this crisis resulted in a view of the character of consciousness that closely resembled that of Nietzsche.

Descartes, the foundational architect of modern philosophy, was quick to realize that there was nothing in this view of nature that could explain or provide a foundation for the mental, or for all that we know from direct experience as distinctly human.

Given that Descartes distrusted the information from the senses to the point of doubting the perceived results of repeatable scientific experiments, how did he conclude that our knowledge of the mathematical ideas residing only in mind, or in human subjectivity, was accurate, much less the absolute truth? He did so by making a leap of faith: God constructed the world, said Descartes, in accordance with the mathematical ideas that our minds are capable of uncovering in their pristine essence. The truths of classical physics, as Descartes viewed them, were quite literally 'revealed' truths, and it was this seventeenth-century metaphysical presupposition that became in the history of science what we term the 'hidden ontology of classical epistemology'.

While classical epistemology would serve the progress of science very well, it also presented us with a terrible dilemma about the relationship between mind and world. If there is no real or necessary correspondence between non-mathematical ideas in subjective reality and external physical reality, how do we know that the world in which we live, and love, and succumb to an inevitable death actually exists? Descartes's resolution of the dilemma took the form of an exercise. He asked us to direct our attention inward and to divest our consciousness of all awareness of external physical reality. If we do so, he concluded, the real existence of human subjective reality could be confirmed.

As it turned out, this resolution was considerably more problematic and oppressive than Descartes could have imagined. 'I think, therefore I am' may be a marginally persuasive way of confirming the real existence of the thinking self. But the understanding of physical reality that obliged Descartes and others to doubt the existence of the self clearly implied that the separation between the subjective world, the world of life, and the real world of physical objectivity was 'absolute'.

Unfortunately, the error here proved pervasive, and this dualism has come to be described as 'the disease of the Western mind'. The discussion that follows serves as background for understanding the relationships between parts and wholes in physics, as well as the similar relationships that emerge in the so-called 'new biology' and in recent studies of evolution.

The representation of an actualized entity is supposed a self-realization that blends into harmonious processes of self-creation.

Nonetheless, it seems a strong possibility that Plotinus and Whitehead converge upon the same issue of creation: the sensible world may be understood by looking at actual entities as aspects of nature's contemplation. These contemplations of nature are obviously an immensely intricate affair, involving a myriad of possibilities, and one can therefore look upon actualized entities as obtainable basic elements within a vast and expansive array of processes.

We could derive a scientific understanding of these ideas with the aid of precise deduction, just as Descartes claimed that we could lay out the contours of physical reality within a three-dimensional co-ordinate system. Following the publication of Isaac Newton's Principia Mathematica in 1687, reductionism and mathematical modeling became the most powerful tools of modern science. The dream that we could know and master the entire physical world through the extension and refinement of mathematical theory became the central feature and guiding principle of scientific knowledge.

Philosophers like John Locke, Thomas Hobbes, and David Hume tried to articulate some basis for linking the mathematically describable motions of matter with linguistic representations of external reality in the subjective space of mind. Descartes's compatriot Jean-Jacques Rousseau reified nature as the ground of human consciousness in a state of innocence and proclaimed that 'Liberty, Equality, Fraternity' are the guiding principles of this consciousness. Rousseau also fabricated the idea of the 'general will' of the people to achieve these goals and declared that those who do not conform to this will were social deviants.

The Enlightenment idea of 'deism', which imagined the universe as a clockwork and God as the clockmaker, provided grounds for believing in a divine agency at the moment of creation. It also implied, however, that all the creative forces of the universe were exhausted at origins, that the physical substrates of mind were subject to the same natural laws as matter, and that the only means of mediating the gap between mind and matter was pure reason. Traditional Judeo-Christian theism, which had previously been based on both reason and revelation, responded to the challenge of deism by debasing rationality as a test of faith and embracing the idea that we can know the truths of spiritual reality only through divine revelation. This engendered a conflict between reason and revelation that persists to this day, and it laid the foundation for the fierce competition between the mega-narratives of science and religion as frame tales for mediating the relation between mind and matter and the manner in which the special character of each should ultimately be defined.

The nineteenth-century Romantics in Germany, England and the United States revived Rousseau's attempt to posit a ground for human consciousness by reifying nature in a different form. Goethe and Friedrich Schelling proposed a natural philosophy premised on ontological monism (the idea that all manifestations, including mind and matter, are grounded in an inseparable spiritual Oneness) and argued for the reconciliation of God, man, and nature with an appeal to sentiment and mystical awareness. In Goethe's quasi-scientific attempt to unify mind and matter, nature became a mindful agency that 'loves illusion', shrouds man in mist, presses him to her heart, and punishes those who fail to see the light. Schelling, in his version of cosmic unity, argued that scientific facts were at best partial truths and that the mindful creative spirit that unites mind and matter is progressively moving toward self-realization and 'undivided wholeness'.

The British version of Romanticism, articulated by figures like William Wordsworth and Samuel Taylor Coleridge, placed more emphasis on the primacy of the imagination and the importance of rebellion and heroic vision as the grounds for freedom. As Wordsworth put it, communion with the 'incommunicable powers' of the 'immortal sea' empowers the mind to release itself from all the material constraints of the laws of nature. The founders of American transcendentalism, Ralph Waldo Emerson and Henry David Thoreau, articulated a version of Romanticism that was commensurate with the ideals of American democracy.

The Americans envisioned a unified spiritual reality that manifested itself as a personal ethos that sanctioned radical individualism and bred aversion to the emergent materialism of the Jacksonian era. They were also more inclined than their European counterparts, as the examples of Thoreau and Whitman attest, to embrace scientific descriptions of nature. However, the Americans also dissolved the distinction between mind and matter with an appeal to ontological monism and alleged that mind could free itself from all the constraints of matter through states of mystical awareness.


The fatal flaw of pure reason is, of course, the absence of emotion, and purely rational explanations of the division between subjective reality and external reality had limited appeal outside the community of intellectuals. The figure most responsible for infusing our understanding of Cartesian dualism with emotional content was the death-of-God theologian Friedrich Nietzsche. Nietzsche reified the existence of consciousness in the domain of subjectivity as the ground for individual will and summarily dismissed all previous philosophical attempts to articulate the will to truth. The problem, claimed Nietzsche, is that earlier versions of the will to truth disguised the fact that all alleged truths were arbitrarily created in the subjective reality of the individual and are expressions or manifestations of individual will.

In Nietzsche's view, the separation between mind and matter is more absolute and total than had previously been imagined, resting on the assumption that there is no real or necessary correspondence between linguistic constructions of reality in human subjectivity and external reality.

Earlier, Nietzsche, in an effort to subvert the epistemological authority of scientific knowledge, had posited a division between mind and world far more formidable than that originally envisioned by Descartes. Descartes himself was quick to realize that there was nothing in this view of nature that could explain or provide a foundation for the mental, or for all that we know from direct experience as distinctly human. Given that he distrusted the information from the senses to the point of doubting the perceived results of repeatable scientific experiments, how did he conclude that our knowledge of the mathematical ideas residing only in mind, or in human subjectivity, was accurate, much less the absolute truth? He did so by taking a leap of faith: God constructed the world, said Descartes, in accordance with the mathematical ideas that our minds are capable of uncovering in their pristine essence. The truths of classical physics, as Descartes viewed them, were quite literally revealed truths, and it was this seventeenth-century metaphysical presupposition that became in the history of science what is termed the hidden ontology of classical epistemology. If, however, there is no real or necessary correspondence between non-mathematical ideas in subjective reality and external physical reality, how do we know that the world in which we live, breathe, and have our being actually exists? Descartes's resolution of this dilemma took the form of an exercise.
But, nevertheless, as it turned out, this resolution was considerably more problematic and oppressive than Descartes could have imagined. 'I think, therefore I am' may be marginally persuasive as a way of confirming the real existence of the thinking self. But the understanding of physical reality that obliged Descartes and others to doubt the existence of this self clearly implied that the separation between the subjective world, the world of life, and the real world of physical reality was absolute.

There is a multiplicity of different positions to which the term 'epistemological relativism' has been applied; however, the basic idea common to all forms denies that there is a single, universal epistemic context. Many traditional epistemologists have striven to uncover the basic processes, methods or rules that allow us to hold true beliefs: recall, for example, Descartes's attempt to find the rules for the direction of the mind, Hume's investigation into the science of mind, or Kant's description of his epistemological Copernican revolution. Each philosopher attempted to articulate universal conditions for the acquisition of true belief.

The coherence theory of truth holds that the truth of a proposition consists in its being a member of some suitably defined body of other propositions: a body that is consistent, coherent and possibly endowed with other virtues, provided these are not defined in terms of truth. One strength of the theory is that we cannot step outside our own best system of beliefs to see how well it is doing in terms of correspondence with the world. To many thinkers, the weak point of pure coherence theories is that they fail to include a proper sense of the way in which actual systems of belief are sustained by persons with perceptual experience, impinged upon by their environment. For a pure coherence theorist, experience is only relevant as the source of perceptual representations or beliefs, which take their place as part of the coherent or incoherent set. This seems not to do justice to our sense that experience plays a special role in controlling our systems of belief, but coherentists have contested the claim in various ways.

The pragmatic theory of truth is the view, particularly associated with the American psychologist and philosopher William James (1842-1910), that the truth of a statement can be defined in terms of the utility of accepting it. Put so baldly, the view is open to objection, since there are things that are false that it may be useful to accept, and conversely there are things that are true that it may be damaging to accept. However, there are deep connections between the idea that a representative system is accurate and the likely success of the projects and purposes formed by its possessor. The evolution of a system of representation, whether perceptual or linguistic, seems bound to be connected with evolutionary adaptation, or with utility in the widest sense; and Wittgenstein's doctrine that meaning is use may likewise be seen as an expression of the pragmatic emphasis on technique and practice as the matrix within which meaning is possible.

Nevertheless, it was after becoming tutor to the family of the Abbé de Mably that Jean-Jacques Rousseau (1712-78) became acquainted with the philosophers of the French Enlightenment. On the Enlightenment idea of deism, we are assured that there is an existent God, but additional revelation and dogma are excluded; supplication and prayer in particular are fruitless, and God may only be thought of as an 'absentee landlord'. The belief remains a vanishing point, as witnessed in Diderot's remark that a deist is someone who has not lived long enough to become an atheist. The image of the universe as a clock and God as the clockmaker provided grounds for believing in a divine agency at the moment of creation. It also implied, however, that all the creative forces of the universe were exhausted at origins, that the physical substrates of mind were subject to the same natural laws as matter, and that the only means of mediating the gap between mind and matter was pure reason. Traditional Judeo-Christian theism, which had previously been based on both reason and revelation, responded to the challenge of deism by debasing rationality as a test of faith and embracing the idea that the truths of spiritual reality can be known only through divine revelation. This engendered a conflict between reason and revelation that persists to this day. It also laid the foundation for the fierce competition between the mega-narratives of science and religion as frame tales for mediating the relation between mind and matter and the manner in which the special character of each should ultimately be defined.

Obviously, there is at present no universally held view of the actual character of physical reality in biology or physics, and no universally recognized definition of the epistemology of science. It would be both foolish and arrogant to claim that we have articulated this view and defined this epistemology.

The best-known disciple of Husserl was Martin Heidegger, and the work of both figures greatly influenced that of the French atheistic existentialist Jean-Paul Sartre. The work of Husserl, Heidegger, and Sartre became foundational to that of the principal architects of philosophical postmodernism and deconstruction: Jacques Lacan, Roland Barthes, Michel Foucault and Jacques Derrida. This direct linkage between the nineteenth-century crisis over the epistemological foundations of mathematical physics and the origins of philosophical postmodernism served to perpetuate the Cartesian two-world dilemma in an even more oppressive form. It also allows us better to understand the origins of the postmodern cultural ambience and the ways in which we might resolve that conflict.


The mechanistic paradigm of the late nineteenth century was the one Einstein came to know when he studied physics. Most physicists believed that it represented an eternal truth, but Einstein was open to fresh ideas. Inspired by Mach's critical mind, he demolished the Newtonian ideas of space and time and replaced them with new, relativistic notions.

Albert Einstein developed two theories of paramount importance: the special theory of relativity (1905) and the general theory of relativity (1915). The special theory gives a unified account of the laws of mechanics and of electromagnetism, including optics. Before 1905 the purely relational nature of uniform motion had in part been recognized in mechanics, although Newton had considered time to be absolute and had postulated absolute space.

If the universe is a seamlessly interactive system that evolves to higher levels of complexity, and if the lawful regularities of this universe are emergent properties of this system, we can assume that the cosmos is a single significant whole that evinces a progressive principle of order and is greater than the sum of its parts. Given that this whole exists in some sense within all parts (quanta), one can then argue that it operates in self-reflective fashion and is the ground for all emergent complexity. Since human consciousness evinces self-reflective awareness in the human brain, and since this brain, like all physical phenomena, can be viewed as an emergent property of the whole, it is reasonable to conclude, in philosophical terms at least, that the universe is conscious.

But since the actual character of this seamless whole cannot be represented or reduced to its parts, it lies, quite literally, beyond all human representations or descriptions. If one chooses to believe that the universe is a self-reflective and self-organizing whole, this lends no support whatsoever to any conception of design, meaning, purpose, intent, or plan associated with any mytho-religious or cultural heritage. However, if one does not accept this view of the universe, there is nothing in the scientific description of nature that can be used to refute it. On the other hand, it is no longer possible to argue that a profound sense of unity with the whole, which has long been understood as the foundation of religious experience, can be dismissed, undermined or invalidated with appeals to scientific knowledge.

Issues surrounding certainty are especially connected with those concerning scepticism. Although Greek scepticism centered on the value of enquiry and questioning, scepticism is now the denial that knowledge or even rational belief is possible, either about some specific subject-matter, e.g., ethics, or in any area whatsoever. Classical scepticism springs from the observation that our best methods in some area seem to fall short of giving us contact with the truth, e.g., there is a gulf between appearances and reality, and it frequently cites the conflicting judgements that our methods deliver, with the result that questions of truth become undecidable. In classical thought the various examples of this conflict were systemized in the tropes of Aenesidemus, so that the scepticism of Pyrrho and the new Academy was a system of argument opposing dogmatism, and particularly the philosophical system-building of the Stoics.

As it has come down to us, particularly in the writings of Sextus Empiricus, its method was typically to cite reasons for finding an issue undecidable (sceptics devoted particular energy to undermining the Stoic conception of some truths as delivered by direct apprehension, or katalepsis). As a result the sceptics counselled epochē, or the suspension of belief, and went on to celebrate a way of life whose object was ataraxia, the tranquillity resulting from that suspension.

Mitigated scepticism, by contrast, accepts everyday or commonsense belief, not as the deliverance of reason, but as due more to custom and habit, while remaining doubtful of the power of reason to give us much more. Mitigated scepticism is thus closer to the attitude fostered by the tradition running from Pyrrho through to Sextus Empiricus. Although the phrase "Cartesian scepticism" is sometimes used, Descartes himself was not a sceptic; in the method of doubt he uses a sceptical scenario in order to begin the process of finding a firm foundation for knowledge. Descartes trusts in the category of clear and distinct ideas, not far removed from the phantasiá kataleptikê of the Stoics.

Consider, nonetheless, the principle that every effect is a consequence of an antecedent cause or causes. For causality to be true it is not necessary for an effect to be predictable, as the antecedent causes may be too numerous, too complicated, or too interrelated for analysis. Nevertheless, in order to avoid scepticism, philosophers have generally held that knowledge does not require certainty. Except for alleged cases of things that are evident for one just by being true, it has often been thought that anything known must satisfy certain criteria as well as being true: there will be, by deduction or induction, criteria specifying when a belief is warranted. Apart from such alleged cases of self-evident truths, a general principle specifying the sort of consideration that confers such a standard would make accepting a belief warranted to some degree.

There is, besides, the absolutely global view that we have no knowledge whatsoever. It is doubtful, however, that any philosopher has seriously held absolute scepticism. Even the Pyrrhonist sceptics, who held that we should refrain from assenting to anything non-evident, had no such hesitancy about assenting to the evident, and they allowed that a belief may be held only where it is warranted by evidence.

René Descartes (1596-1650), in his sceptical guise, never doubted the contents of his own ideas. What he challenged was whether they corresponded to anything beyond ideas.

All the same, both the Pyrrhonian and the Cartesian forms of virtually global scepticism have been held and defended. Assuming that knowledge is some form of true, sufficiently warranted belief, it is the warrant condition, rather than the truth or belief conditions, that provides the grist for the sceptic's mill. The Pyrrhonist will suggest that no non-evident, empirical belief is sufficiently warranted, whereas the Cartesian sceptic will agree that no empirical belief about anything other than one's own mind and its contents is sufficiently warranted, because there are always legitimate grounds for doubting it. Thus the essential difference between the two views concerns the stringency of the requirements for a belief's being sufficiently warranted to count as knowledge.

A Cartesian requires certainty, but a Pyrrhonist merely requires that a belief be more warranted than its negation.

Cartesian scepticism has been unduly influential. Descartes's argument for scepticism holds that we have no knowledge of any empirical proposition about anything beyond the contents of our own minds. The reason, roughly, is that there is a legitimate doubt about all such propositions, because there is no way to justifiably deny that our senses are being stimulated by some cause radically different from the objects we normally think affect them. Hence, if the Pyrrhonist is the agnostic, the Cartesian sceptic is the atheist.

Because the Pyrrhonist requires much less of a belief in order for it to count as knowledge than does the Cartesian, arguments for Pyrrhonism are much more difficult to construct. A Pyrrhonist must show that there is no better set of reasons for believing any proposition than for believing its negation, whereas the Cartesian need only show that knowledge requires certainty.

The view of human consciousness advanced by the deconstructionists is an extension of the radical separation between mind and world legitimated by classical physics and first formulated by Descartes. After Friedrich Nietzsche pronounced the death of God and declared the demise of ontology, the assumption that the knowing mind exists in the prison house of subjective reality became a fundamental preoccupation in Western intellectual life. Shortly thereafter, Husserl tried and failed to preserve classical epistemology by grounding logic in human subjectivity, and this failure served to legitimate the assumption that there was no real or necessary correspondence between any construction of reality, including the scientific, and external reality. This assumption then became a central feature of the work of the French atheistic existentialists and of the view of human consciousness advanced by the deconstructionists and promoted by large numbers of humanists and social scientists.

The first challenge to the radical separation between mind and world promoted and sanctioned by the deconstructionists is fairly straightforward. If physical reality is on the most fundamental level a seamless whole, it follows that all manifestations of this reality, including neuronal processes in the human brain, can never be separate from this reality. And if the human brain, which constructs an emergent reality based on complex language systems, is implicitly part of the whole of biological life and derives its existence from embedded relations to this whole, this reality is obviously grounded in this whole and cannot by definition be viewed as separate or discrete. All of this leads to the conclusion, without any appeal to ontology, that Cartesian dualism is no longer commensurate with our view of physical reality in both physics and biology. There are, however, other more prosaic reasons why the view of human subjectivity sanctioned by the postmodern meta-theorists should no longer be viewed as valid.

From Descartes to Nietzsche to Husserl to the deconstructionists, the division between mind and world has been construed in terms of binary oppositions premised on the law of the excluded middle. All of the examples used by Saussure to legitimate his conception of the opposition between signifier and signified are premised on this logic, and it also informs all of the extensions and refinements of this opposition by the deconstructionists. Since the opposition between signifier and signified is foundational to the work of all these theorists, what follows is anything but trivial for the practitioners of philosophical postmodernism: the binary oppositions in the methodologies of the deconstructionists premised on the law of the excluded middle should properly be viewed as complementary constructs.

Nevertheless, among the many contributions to the theory of knowledge it is possible to identify a set of common doctrines, and to discern two distinct styles of pragmatism. Both agree that the Cartesian approach is fundamentally flawed, even though they respond to it very differently.

Reformist pragmatism repudiates the requirement of absolute certainty, sustains the connexion of knowledge with activity, grants the legitimacy of traditional questions about the truth-conditions of our cognitive practices, and retains a conception of objective truth sufficient to give those questions purchase.

Revolutionary pragmatism, by contrast, relinquishes the objectivity prized in earlier days, and acknowledges no legitimate epistemological questions over and above those that arise naturally within our current cognitive practices.

It seems clear that certainty is a property that can be ascribed either to a person or to a belief. We can say that a person 'S' is certain of a proposition 'p', or, shifting the alignment, that 'S' has the right to be certain just in case 'p' is sufficiently verified.

In defining certainty, it is crucial to note that the term has both an absolute and a relative sense. Roughly, we take a proposition to be certain when we have no doubt about its truth. We may do this in error or unreasonably, but objectively a proposition is certain when such absence of doubt is justifiable. The sceptical tradition in philosophy denies that objective certainty is often possible, or ever possible, either for any proposition at all or for any proposition from some suspect family (ethics, theory, memory, empirical judgement, etc.). A major sceptical weapon is the possibility of upsetting events that can cast doubt back onto what was hitherto taken to be certain. Others include reminders of the divergence of human opinion, and of the fallible sources of our confidence. Foundationalist approaches to knowledge look for a basis of certainty upon which the structure of our systems of belief is built. Others reject the metaphor, looking for mutual support and coherence without foundations.

In moral theory, however, the view that there are inviolable moral standards or absolute human desires has been under pressure since the 17th and 18th centuries, when the science of man began to probe into human motivation and emotion. For writers such as the French moralistes, and for philosophers such as Francis Hutcheson (1694-1746), David Hume (1711-76), Adam Smith (1723-90), and Immanuel Kant (1724-1804), the prime task was to delineate the variety of human reactions and motivations. Such an inquiry would locate our propensity for moral thinking among other faculties, such as perception and reason, and other tendencies, such as empathy, sympathy, or self-interest. The task continues, especially in the light of a post-Darwinian understanding of the evolutionary principles governing us.

In some moral systems, notably that of Immanuel Kant (1724-1804), the German founder of critical philosophy, real moral worth comes only with acting rightly because it is right. If you do what you should but from some other motive, such as fear or prudence, no moral merit accrues to you. Yet this appears to discount other admirable motivations, such as acting from sheer benevolence or sympathy. The question is how to balance these opposing ideas, and also how to understand acting from a sense of obligation without duty or rightness beginning to seem a kind of fetish.

Admirable qualities are, in any case, diverse, and the right is not all on one side: qualities such as adherence to duty or obedience to lawful authority together constitute the ideal of moral propriety and merit approval. Among the higher aspirations is the desire for something that transcends one's present capacity for attainment, pursued by striving for what can actually be achieved.
Many of our dispositions may, moreover, be attributed to selective pressures operating over evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive tendencies, our emotional repertoire, and our moral reactions, including the disposition to detect and punish those who cheat on agreements or who free-ride on the work of others; our cognitive structures may be treated similarly. Evolutionary psychology goes hand-in-hand with neurophysiological evidence about the underlying circuitry in the brain that subserves the psychological mechanisms it claims to identify. The approach was foreshadowed by Darwin himself and by William James, as well as by the sociobiologist E.O. Wilson.

Such an explanation may be of an admittedly speculative nature, tailored to give the results that need explaining but currently lacking any independent rationale. The charge is pressed, more or less aggressively, especially against explanations offered in sociobiology and evolutionary psychology; it derives from "just-so stories" such as the explanation of how the leopard got its spots.

In spite of the notorious difficulty of reading Kantian ethics, the distinction between hypothetical and categorical imperatives is clear enough. A hypothetical imperative embeds a command conditionally upon an antecedent desire or inclination: if you want to look wise, stay quiet. The injunction to stay quiet applies only to those with the antecedent desire; if one has no desire to look wise, it does not bind. A categorical imperative cannot be so avoided: it is a requirement that binds anybody, regardless of their inclination. It could be represented as, for example, "Tell the truth (regardless of whether you want to or not)." The distinction is not always signalled by the presence or absence of the conditional or hypothetical form: "If you crave drink, don't become a bartender" may be regarded as an absolute injunction applying to anyone, although only activated in the case of those with the stated desire.

In the Grundlegung zur Metaphysik der Sitten (1785), Kant discussed five forms of the categorical imperative: (1) the formula of universal law: act only on that maxim through which you can at the same time will that it should become a universal law; (2) the formula of the law of nature: act as if the maxim of your action were to become through your will a universal law of nature; (3) the formula of the end in itself: act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end; (4) the formula of autonomy: consider the will of every rational being as a will which makes universal law; (5) the formula of the Kingdom of Ends, which provides a model for the systematic union of different rational beings under common laws.

Even so, a proposition that is not a conditional is called categorical, though the modern mind is wary of this distinction, since what appears categorical may vary with notation. Apparently categorical propositions may turn out to be disguised conditionals: "X is intelligent" (categorical?) may amount to "if X is given a range of tasks, she performs them better than many people" (conditional?). The problem, nonetheless, is not merely one of classification, since deep metaphysical questions arise when facts that seem to be categorical and therefore solid come to seem, by contrast, conditional, or purely hypothetical or potential.

Setting aside "field" as a limited area of knowledge or endeavour, the concept is central to physical theory. A field is defined by the distribution of a physical quantity, such as temperature, mass density, or potential energy, at different points in space. In the particularly important example of force fields, such as gravitational, electrical, and magnetic fields, the field value at a point is the force which a test particle would experience if it were located at that point. The philosophical problem is whether a force field is to be thought of as purely potential, so that the presence of a field merely describes the propensity of masses to move relative to each other, or whether it should be thought of in terms of physically real modifications of a medium, whose properties result in such powers. That is: are force fields pure potential, fully characterized by dispositional statements or conditionals, or are they categorical or actual? The former option seems to require admitting ungrounded dispositions, or regions of space that differ only in what happens if an object is placed there. The law-like shape of these dispositions, apparent for example in the curved lines of force of the magnetic field, may then seem quite inexplicable. To atomists, such as Newton, it would represent a return to Aristotelian entelechies, or quasi-psychological affinities between things, which are responsible for their motions. The latter option requires understanding how forces of attraction and repulsion can be grounded in the properties of the medium.
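The idea that a field value at a point is the force a test particle would experience there can be made concrete with a minimal numerical sketch (the function name and sample values below are illustrative, not drawn from the text): Newton's inverse-square law assigns to every point in space the gravitational force per unit mass exerted by a point source.

```python
import math

G = 6.674e-11  # Newtonian gravitational constant, N·m²/kg²

def gravitational_field(source_mass, source_pos, point):
    """Field value at `point`: the force per unit mass a test particle
    would experience there, under the Newtonian inverse-square law."""
    dx = [p - s for p, s in zip(point, source_pos)]
    r = math.sqrt(sum(d * d for d in dx))
    if r == 0:
        raise ValueError("field undefined at the source itself")
    magnitude = G * source_mass / r**2           # |g| = GM / r²
    return [-magnitude * d / r for d in dx]      # vector pointing toward the source

# Field of an Earth-mass point source, sampled one Earth-radius away:
g = gravitational_field(5.972e24, (0.0, 0.0, 0.0), (6.371e6, 0.0, 0.0))
```

Sampled one Earth-radius from an Earth-mass source, the magnitude comes out near the familiar 9.8 m/s², directed toward the source. Whether such a number reports a bare disposition of test particles or a state of a physically real medium is exactly the philosophical question at issue.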

The basic idea of a field is arguably present in Leibniz, who was certainly hostile to Newtonian atomism, although his equal hostility to action at a distance muddies the water. The notion is usually credited to the Jesuit mathematician and scientist Joseph Boscovich (1711-87) and to Immanuel Kant, both of whom influenced the scientist Faraday, with whose work the physical notion became established. In his paper "On the Physical Character of the Lines of Magnetic Force" (1852), Faraday suggested several criteria for assessing the physical reality of lines of force, such as whether they are affected by an intervening material medium, and whether the motion they produce depends on the nature of what is placed at the receiving end. As far as electromagnetic fields go, Faraday himself inclined to the view that the mathematical similarity between heat flow, currents, and electromagnetic lines of force was evidence for the physical reality of the intervening medium.

Once again, utility enters the picture: the view especially associated with the American psychologist and philosopher William James (1842-1910) is that the truth of a statement can be defined in terms of the utility of accepting it. The obvious objection is that there are things that are false which it may be useful to accept, and conversely things that are true which it may be damaging to accept. Nevertheless, there are deep connections between the idea that a representational system is accurate and the likely success of the projects of its possessor. The evolution of a system of representation, whether perceptual or linguistic, seems bound to connect success with adaptation, or with utility in the modest sense. The Wittgensteinian doctrine that meaning is use bears upon the nature of belief and its relations with human attitude and emotion, and upon the idea that belief connects with truth on the one hand and with action on the other. One way of cementing the connexion is found in the idea that natural selection, in adapting us as cognitive creatures, ensures that beliefs have effects: they work. Elements of pragmatism can be found in Kant's doctrine, and pragmatism has continued to play an influential role in the theory of meaning and truth.

James (1842-1910), with characteristic generosity, exaggerated his debt to Charles S. Peirce (1839-1914). He charged that the method of doubt encouraged people to pretend to doubt what they did not doubt in their hearts, and criticized its individualist insistence that the ultimate test of certainty is to be found in the individual's personal consciousness.

From his earliest writings, James understood cognitive processes in teleological terms. Thought, he held, assists us in the satisfaction of our interests. His "Will to Believe" doctrine, the view that we are sometimes justified in believing beyond the evidence, relies upon the notion that a belief's benefits are relevant to its justification. His pragmatic method of analysing philosophical problems, which requires that we find the meaning of terms by examining their application to objects in experimental situations, similarly reflects the teleological approach in its attention to consequences.

Such an approach to meaning can seem dismissively metaphysical. But unlike the verificationist, who takes cognitive meaning to be a matter only of consequences in sensory experience, James took pragmatic meaning to include emotional and motor responses as well. Moreover, James did not hold that even his broad set of consequences was exhaustive of a term's meaning: his was a standard of value, not a way of dismissing terms as meaningless. "Theism," for example, he took to have antecedent, definitional meaning, in addition to its important pragmatic meaning.

James's theory of truth reflects his teleological conception of cognition: a true belief is one which is compatible with our existing system of beliefs and which leads us to satisfactory interaction with the world.

Even so, to believe a proposition is to hold it to be true, and the philosophical problem is to say what state of a person constitutes belief. Is it, for example, a simple disposition to behaviour? Or a more complex state that resists identification with any such disposition? If verbalized skills or verbal behaviour are essential to belief, what is to be said about prelinguistic infants or non-linguistic animals? An evolutionary approach asks how the cognitive success of possessing the capacity to believe things relates to success in practice. Further topics include discovering whether belief differs from other varieties of assent, such as acceptance; whether belief is an all-or-nothing matter, or to what extent degrees of belief are possible; understanding the ways in which belief is controlled by rational and irrational factors; and discovering its links with other properties, such as the possession of conceptual or linguistic skills.

Nevertheless, Peirce's famous pragmatist principle is a rule of logic employed in clarifying our concepts and ideas. Consider the claim that the liquid in a flask is an acid. If we believe this, we expect that if we were to perform the relevant test, the liquid would turn litmus red: we expect an action of ours to have certain experimental results. The pragmatic principle holds that listing the conditional expectations of this kind that we associate with applications of a concept provides a complete and orderly clarification of the concept. This is relevant to the logic of abduction: for the clarificationist, the pragmatic principle provides all the information about the content of a hypothesis that is relevant to deciding whether it is worth testing. As the founding figure of American pragmatism, Peirce perhaps gave the principle its best expression in his essay "How to Make Our Ideas Clear" (1878), in which he proposes the famous dictum: "The opinion which is fated to be ultimately agreed to by all who investigate is what we mean by the truth, and the object represented in this opinion is the real." Peirce also made pioneering investigations into the logic of relations and of the truth-functions, and independently discovered the quantifier slightly later than Frege. His work on probability and induction includes versions of the frequency theory of probability and the first suggestion of a vindication of the process of induction. Surprisingly, Peirce's scientific outlook and opposition to rationalism co-existed with admiration for Duns Scotus (1266-1308), the Franciscan philosopher and theologian who locates freedom in our ability to turn from desire toward justice. Scotus has been admired by thinkers as different as Peirce and Heidegger; he was dubbed the doctor subtilis, and the word "dunce" (from "Dunsman") reflects the low esteem into which scholasticism later fell among humanists and reformers.
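Schematically (a reconstruction in modern notation, not Peirce's own), each such expectation has the form of a nested conditional, and the pragmatic clarification of a concept is the orderly list of all of them:

```latex
\underbrace{\mathrm{Acid}(x)}_{\text{concept applied}}
\;\rightarrow\;
\Bigl(\underbrace{\mathrm{Test}(x)}_{\text{action performed}}
\;\rightarrow\;
\underbrace{\mathrm{Red}(x)}_{\text{expected result}}\Bigr)
```

On this rendering, the content of the concept Acid is exhausted by the set of such conditionals, one for each experimental circumstance we can associate with its application.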

Most important, C.S. Peirce, the founder of American pragmatism, was concerned with the nature of language and how it relates to thought. From what account of reality did he develop his theory of semiotics as a method of philosophy? How exactly does language relate to thought? Can there be complex, conceptual thought without language? These issues operate on our thinking, and attempts to draw out their implications bear on questions about meaning, ontology, truth, and knowledge; nonetheless, different thinkers have quite different takes on what those implications are.

These issues, grounded in the linguistic turn and its development out of earlier twentieth-century positions, have led to the bewildering heterogeneity of philosophy in the early twenty-first century. The very nature of philosophy is itself radically disputed: analytic, continental, postmodern, critical-theoretic, feminist, and non-Western are all prefixes that give a different meaning when joined to "philosophy." The variety of thriving schools, the number of professional philosophers, the proliferation of publications, and the developments of technology all manifest a situation radically different from that of one hundred years ago. Sharing some common sources with C.I. Lewis, the German philosopher Rudolf Carnap (1891-1970) articulated a doctrine of linguistic frameworks that was radically relativistic in its implications. Carnap was influenced by the Kantian idea of the constitution of knowledge: that our knowledge is in some sense the end result of a cognitive process. He also shared Lewis's pragmatism and valued the practical application of knowledge. However, as an empiricist, he was heavily influenced by the development of modern science, regarding scientific knowledge as the paradigm of knowledge and motivated by a desire to be rid of pseudo-knowledge such as traditional metaphysics and theology. These influences remained constant as his work moved through various distinct stages and after he moved to live in America. In 1950, he published a paper entitled "Empiricism, Semantics and Ontology" in which he articulated his views about linguistic frameworks.

For Peirce, what is real is what stands to be agreed upon by all who investigate. In other words, if I believe that it is really the case that p, then I expect that anyone who were to inquire into the matter, carefully enough and far enough, would arrive at the belief that p. It is not part of the theory that the experimental consequences of our actions should be specified by a warranted empiricist vocabulary: Peirce insisted that perceptual theories are abounding in latency. Nor is it his view that the collected conditionals that clarify a concept are all analytic. In later writings, moreover, he argues that the pragmatic principle could only be made plausible to someone who accepted metaphysical realism: it requires that "would-bes" are objective and, of course, real.

If realism itself can be given a fairly quick clarification, it is more difficult to chart the various forms of opposition to it, for they are legion. Some opponents deny that the entities posited by the relevant discourse exist; others deny that they exist independently of us. The standard example is idealism, which holds that reality is somehow mind-dependent or mind-coordinated: the real objects comprising the external world are not independent of cognizing minds, but exist only as in some way correlative to mental operations. The doctrine of idealism centres on the conception that reality as we understand it is meaningful and reflects the workings of mind; and it construes this as meaning that the inquiring mind itself makes a formative contribution, not merely to our understanding of the nature of the real, but even to the resulting character we attribute to it.

The term "real" is most straightforwardly used when qualifying another term: a real 'x' may be contrasted with a fake 'x', a failed 'x', a near 'x', and so on. To call something real, without qualification, is to suppose it to be part of the actual world. To reify something is to suppose that we are committed to its existence by some doctrine or theory. The central error in thinking of reality as the totality of existence is to think of the unreal as a separate domain of things, deprived, perhaps unfairly, of the benefits of existence.

The idea of the nonexistence of all things arises, as a product of logical confusion, from treating the term "nothing" as itself a referring expression instead of a quantifier; the important point is that this treatment invites us to think of Nothing as a kind of name for something that does not exist. Formally, a quantifier binds a variable, turning an open sentence with n distinct free variables into one with n - 1 (an individual variable counts as one variable, although it may recur several times in a formula). Stated informally, a quantifier is an expression that reports the quantity of times that a predicate is satisfied in some class of things, i.e., in a domain. The confusion leads the unsuspecting to think that a sentence such as "Nothing is all around us" talks of a special kind of thing that is all around us, when in fact it merely denies that the predicate "is all around us" has application. The feelings that led some philosophers and theologians, notably Heidegger, to talk of the experience of Nothing are not properly the experience of anything, but rather the failure of a hope or expectation that there would be something of some kind at some point. This may arise in quite everyday cases, as when one finds that the article of furniture one expected to see as usual in the corner has disappeared. The difference between existentialist and analytic philosophy, on this point, is that the former is afraid of Nothing, whereas the latter think that there is nothing to be afraid of.
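The quantifier treatment sketched above can be illustrated computationally. The following is my own toy rendering, not from the source: over a finite domain, a quantifier is an operation on a predicate, and "nothing is F" is simply the denial that F has application, not the name of a mysterious thing.

```python
# Illustrative sketch (an assumption of this note, not the author's):
# quantifiers over a finite domain, with "nothing is F" rendered as
# the denial that the predicate F has any application.

def exists(pred, domain):
    """Existential quantifier: the predicate is satisfied at least once."""
    return any(pred(x) for x in domain)

def forall(pred, domain):
    """Universal quantifier: the predicate is satisfied by every member."""
    return all(pred(x) for x in domain)

def nothing(pred, domain):
    """'Nothing is F': denies that F has application; names no thing."""
    return not exists(pred, domain)

domain = [1, 2, 3, 4]
is_even = lambda x: x % 2 == 0
is_negative = lambda x: x < 0

print(exists(is_even, domain))      # True: the predicate has instances
print(nothing(is_negative, domain)) # True: a denial, not a report about a thing
```

The point of the sketch is that `nothing` takes a predicate, not an individual, as its argument: exactly the grammatical fact the confusion described above overlooks.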

A rather different set of concerns arises when actions are specified in terms of doing nothing: saying nothing may be an admission of guilt, and doing nothing in some circumstances may be tantamount to murder. Still other substantial problems arise over conceptualizing empty space and time.

The standard opposition is between those who affirm and those who deny the real existence of some kind of thing or some kind of fact. Almost any area of discourse may be the focus of this dispute: the external world, the past and future, other minds, mathematical objects, possibilities, universals, and moral or aesthetic properties are examples. One influential suggestion, associated with the British philosopher of logic and language Michael Dummett (1925-2011) and borrowed from the intuitionistic critique of classical mathematics, is that the unrestricted use of the principle of bivalence is the trademark of realism. However, this has to overcome counterexamples both ways: although Aquinas was a moral realist, he held that moral reality was not sufficiently structured to make every moral claim true or false; while Kant believed that he could use the law of bivalence quite happily in mathematics, precisely because mathematics was only our own construction. Realism can itself be subdivided: Kant, for example, combines empirical realism (within the phenomenal world the realist says the right things: surrounding objects really exist independently of us and our mental states) with transcendental idealism (the phenomenal world as a whole reflects the structures imposed on it by the activity of our minds as they render it intelligible to us).
In modern philosophy the most sustained opposition to realism has come from philosophers such as Goodman, who were impressed by the extent to which we perceive the world through conceptual and linguistic lenses of our own making.

The modern treatment of existence in the theory of quantification is sometimes put by saying that existence is not a predicate. The idea is that the existential quantifier functions as an operator on a predicate, indicating that the property it expresses has instances. Existence is thereby treated as a second-order property, or a property of properties. In this it is like number: when we say that there are three things of a kind, we do not describe the things (as we would if we said there are red things of the kind), but instead attribute a property to the kind itself. The parallel with number is exploited by the German mathematician and philosopher of mathematics Gottlob Frege in the dictum that affirmation of existence is merely denial of the number nought. A problem is nevertheless created by sentences like "This exists", where some particular thing is indicated: such a sentence seems to express a contingent truth (for this thing might not have existed), yet no other predicate is involved. "This exists" is therefore unlike "Tame tigers exist", where a property is said to have an instance, for the word "this" does not pick out a property but only an individual.
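Frege's dictum can be given a small computational gloss. The sketch below is my own illustration under the assumptions of the paragraph: to affirm that tame tigers exist is to say something about the predicate "is a tame tiger", namely that the number of its instances is not nought.

```python
# Toy rendering (my own, not the author's): existence as a
# second-order property, i.e. a property of a predicate, asserting
# that the number of the predicate's instances is not zero.

def number_of(pred, domain):
    """The 'number' of a predicate: how many members satisfy it."""
    return sum(1 for x in domain if pred(x))

def exists(pred, domain):
    # Frege: affirmation of existence is merely denial of the number nought.
    return number_of(pred, domain) != 0

animals = ["tame tiger", "wild tiger", "house cat"]
is_tame_tiger = lambda a: a == "tame tiger"
is_unicorn = lambda a: a == "unicorn"

print(exists(is_tame_tiger, animals))  # True: the property has an instance
print(exists(is_unicorn, animals))     # False: the number of instances is nought
```

Note that `exists` is applied to predicates, never to individuals, which is why the paragraph's problem sentence "This exists" has no natural rendering here.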

Describing events that merely happen does not of itself permit us to talk of rationality and intention, which are the categories we apply when we conceive of events as actions. We think of ourselves not only passively, as creatures within which things happen, but actively, as creatures that make things happen. Understanding this distinction gives rise to many of the major problems in the theory of action: the nature of agency, the causation of bodily events by mental events, and the understanding of the will and free will. Other problems include drawing the distinction between an action and its consequence, and describing the structure involved when we do one thing by doing another. Even the placing and dating of actions raise puzzles: someone shoots someone on one day and in one place, and the victim dies on another day and in another place. Where and when did the murderous act take place?

In causation, moreover, it is not clear that only events can be causally related. Kant cites the example of a cannonball at rest on a cushion, causing the cushion to be the shape that it is, to suggest that states of affairs, objects, or facts may also be causally related. The central problem is to understand the element of necessitation or determination of the future that we attribute to causation, the "must" which, as the Scottish philosopher, historian, and essayist David Hume argued, we never actually observe. Causation is thus central to metaphysics, the part of philosophy that investigates the fundamental structure of the world and the fundamental kinds of things that exist; terms like "object", "fact", "property", "relation", and "category" are technical terms used to make sense of these most basic features of reality.

How then are we to conceive of causal connection? The relation is not itself perceptible, for all that perception gives us (Hume argues) is knowledge of the patterns into which events actually fall, not any acquaintance with the connections determining the pattern. It is, however, clear that our conception of everyday objects is largely determined by their causal powers, and all our action is based on the belief that these causal powers are stable and reliable. Although scientific investigation can give us wider and deeper dependable patterns, it seems incapable of bringing us any nearer to the "must" of causal necessitation. Quite apart from the general problem of forming any conception of what causation is, there are particular puzzles: How are we to understand the causal interaction between mind and body? How can the present, which exists, owe its existence to a past that no longer exists? How is the stability of the causal order to be understood? Is backward causation possible? Is causation a concept needed in science, or is it dispensable?

The problem of free will, nonetheless, is to reconcile our everyday consciousness of ourselves as agents with the best view of what science tells us that we are. Determinism is one part of the problem. It may be defined as the doctrine that every event has a cause. More precisely, for any event C there will be some antecedent state of nature N and a law of nature L such that, given L, N will be followed by C. But if this is true of every event, it is true of events such as my doing something or choosing to do something. So my choosing or doing something is fixed by some antecedent state N and the laws. Since determinism is universal, these states and laws are in turn fixed, tracing back to events for which I am clearly not responsible (events before my birth, for example). So no events can be voluntary or free, where that means that they come about purely because of my willing them and that I could have done otherwise. If determinism is true, there will be antecedent states and laws already determining such events: how then can I truly be said to be their author, or be responsible for them?
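The definition just given has a simple formal shape, which the following toy sketch (my own illustration, with an invented "law" for concreteness) makes explicit: if a law of nature is a function from an antecedent state N to the event that follows, then fixing N and the law fixes the outcome, leaving no room for "could have done otherwise".

```python
# Toy rendering of determinism (an illustration, not the author's model):
# for any event C there is an antecedent state N and a law L such that,
# given L, N is followed by C.

def law(state):
    """A deterministic 'law of nature': a pure function of the state.
    (The particular dynamics here are an arbitrary assumption.)"""
    return {"position": state["position"] + state["velocity"],
            "velocity": state["velocity"]}

N = {"position": 0, "velocity": 2}  # an antecedent state of nature

# Because the law is a function, the same antecedent state always
# yields the same successor event C: determinism as defined above.
C1 = law(N)
C2 = law(N)
print(C1 == C2)        # True: N and L leave no alternative outcome
print(C1["position"])  # 2
```

The compatibilist and libertarian reactions discussed below can be read as disagreements about what, if anything, this functional picture leaves out.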

Reactions to this problem are commonly classified as: (1) Hard determinism, which accepts the conflict and denies that you have real freedom or responsibility. (2) Soft determinism, or compatibilism, a family of reactions asserting that everything you need from a notion of freedom is quite compatible with determinism. In particular, even if your actions are caused, it can often be true of you that you could have done otherwise if you had chosen, and this may be enough to render you liable to be held responsible (the fact that previous events will have caused you to fix upon one alternative as the one to be chosen is, on this view, irrelevant). (3) Libertarianism, the view that while compatibilism is only an evasion, there is a more substantive, real notion of freedom that can yet be preserved in the face of determinism (or of indeterminism). In Kant, while the empirical or phenomenal self is determined and not free, the noumenal or rational self is capable of rational, free action. However, since the noumenal self exists outside the categories of space and time, this freedom seems to be of doubtful value. Other libertarian avenues include suggesting that the problem is badly framed, for instance because the definition of determinism breaks down; or postulating a special category of caused acts or volitions; or suggesting that there are two independent but consistent ways of looking at an agent, the scientific and the humanistic, and that it is only through confusing them that the problem seems urgent. None of these avenues has gained general popularity, but it is an error to confuse determinism with fatalism.

The dilemma of determinism begins by supposing that if an action is the end of a causal chain, or of some hierarchical set of such chains, stretching back in time to events for which the agent has no conceivable responsibility, then the agent is not responsible for the action.

The dilemma then adds that if an action is not the end of such a chain, then either it or one of its causes occurs at random, in that no antecedent event brought it about, and in that case nobody is responsible for its occurrence. So, whether or not determinism is true, responsibility is shown to be illusory.

Still, to have a will is to be able to desire an outcome and to purpose to bring it about. Strength of will, or firmness of purpose, is supposed to be good, and weakness of will, or akrasia, bad.

A mental act of willing or trying, whose presence is sometimes supposed to make the difference between action and mere happening, is one candidate; but the dominant approach treats such questions with the methods of science, and hence is called naturalism. On this view there is a consistent continuity between philosophy and our other theories about the world, and scepticism itself is tackled using scientific means. The most influential American philosopher of the latter half of the twentieth century, Willard Quine (1908-2000), holds that this is not question-begging, because the sceptical challenge itself arises from within scientific knowledge. For example, it is precisely because the sceptic has knowledge of visual distortion from optics that he can raise the problem of the possibility of deception. The sceptical question is not mistaken, according to Quine; it is rather that the sceptical rejection of knowledge is an overreaction. We can explain how perception operates, and we can explain the phenomenon of deception too. One response to this view is that Quine has changed the topic of epistemology: by citing scientific (psychological) evidence against the sceptic, Quine is engaged in a descriptive account of the acquisition of knowledge, while ignoring the normative question of whether such acquisition is justified or truth-conducive. Quine replies by showing that normative issues can and do arise within this naturalized context. His conception, that there is no genuine philosophy independent of scientific knowledge, and the different ways of resisting the sceptic's setting the agenda for epistemology, have been significant for the practice of contemporary epistemology.

Contemporary epistemology shares much of the same agenda. Does knowledge rest on basic, non-inferentially justified beliefs, as foundationalists claim, or is justification holistic and systematic, as coherentists claim? There is, further, the internalist-externalist debate. Internalism holds that in order to know, one has to know that one knows: the reasons in virtue of which a belief is justified must be accessible in principle to the subject holding that belief. There is also the question, pressed by the pragmatists, whether what we believe may be determined not by evidence alone, but by the utility of the resulting state of mind; belief in free will, or belief in God, are the classic examples. Such states of mind may have beneficial effects on the believer, but the doctrine caused outrage from the beginning.

Awareness of these developments imparts a significant lesson: there are other possible ways of talking about the world, and philosophy has the resources to defend the view that all our beliefs are in principle revisable; none stands absolutely. There are always alternative possible theories compatible with the same basic evidence, and knowledge, strictly conceived, is too difficult to achieve in most normal contexts. A further divide separates those who think that knowledge can be naturalized from those who do not: the former hold that the evaluative notions used in epistemology can be explained in other terms, while the latter insist on a special normative realm of language that is theoretically different from the kinds of concepts used in factual scientific discourse.

Foundationalist theories of justification argue that there are basic beliefs that are justified non-inferentially, both in ethics and in epistemology. An action or belief is justified if it stands up to some kind of critical reflection or scrutiny; a person is then exempt from criticism on account of it. A popular line of thought in epistemology is that only a belief can justify another belief; the implication that neither experience nor the world plays a role in justifying beliefs leads quickly to coherentism.

When a belief is justified, that justification usually rests on another belief, or set of beliefs. But there cannot be an infinite regress of beliefs; the inferential chain cannot circle back on itself without vicious circularity; and it cannot stop in an unjustified belief. So not all beliefs can be inferentially justified. The foundationalist therefore argues that there are special basic beliefs that are self-justifying in some sense or other, for example primitive perceptual beliefs that do not require further beliefs in order to be justified. Higher-level beliefs are then inferentially justified by means of the basic beliefs. Thus foundationalism is characterized by two claims: (1) there exist basic beliefs that are justified non-inferentially, and (2) higher-level beliefs are inferentially justified by relating them to basic beliefs.
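The regress argument has a graph-like structure that can be sketched in code. The following is my own illustration (the belief names and support relations are invented): beliefs are nodes, justification links are edges, and the foundationalist requirement is that every justification chain terminates in a basic belief rather than looping or stopping unjustified.

```python
# Illustrative sketch (not the author's): the regress of justification
# as a directed graph. A belief passes the foundationalist test only if
# every chain of support from it ends in a basic belief; a circle or an
# unjustified stopping point fails.

def chain_terminates(belief, justified_by, basic, seen=None):
    """True if every justification chain from `belief` ends in a basic belief."""
    seen = seen or set()
    if belief in basic:
        return True
    if belief in seen or belief not in justified_by:
        return False  # vicious circle, or an unjustified stopping point
    seen = seen | {belief}
    return all(chain_terminates(b, justified_by, basic, seen)
               for b in justified_by[belief])

# Hypothetical example: a perceptual chain grounded in a basic belief.
basic = {"I seem to see a red patch"}
justified_by = {
    "There is a tomato here": ["The patch looks tomato-shaped"],
    "The patch looks tomato-shaped": ["I seem to see a red patch"],
}
print(chain_terminates("There is a tomato here", justified_by, basic))  # True

# A coherentist-style circle fails the foundationalist test.
circular = {"p": ["q"], "q": ["p"]}
print(chain_terminates("p", circular, set()))  # False
```

The coherentist, of course, denies that the circular case is vicious; the sketch only renders the foundationalist's standard, not a verdict on the dispute.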

The categorical notion at work in Kantian ethics must be distinguished from the metaphysical problem of categories: the problem of finding a fundamental classification of the kinds of entities recognized in a way of thinking. That kind of category-thinking accords better with an atomistic philosophy than with modern physical thinking, which finds no categorical basis underlying notions like that of a charge, a field, or a probability wave, notions that fundamentally characterize things and which are apparently themselves dispositional. In ethics, a hypothetical imperative embeds a command within the scope of an antecedent desire or project: "If you want to look wise, stay quiet." The injunction to stay quiet applies only to those with the antecedent desire or inclination; if one has no desire to look wise, the requirement lapses. A categorical imperative cannot be so avoided: it is a requirement that binds anybody, regardless of their inclinations. It could be expressed as, for example, "Tell the truth (regardless of whether you want to or not)." The distinction is not simply the presence or absence of the conditional or hypothetical form: "If you crave drink, don't become a bartender" may be regarded as an absolute injunction applying to anyone, although only activated in the case of those with the stated desire.

In the Grundlegung zur Metaphysik der Sitten (1785), Kant discussed a number of formulations of the categorical imperative: (1) the formula of universal law: act only on that maxim through which you can at the same time will that it should become a universal law; (2) the formula of the law of nature: act as if the maxim of your action were to become through your will a universal law of nature; (3) the formula of the end-in-itself: act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end; (4) the formula of autonomy: consider the will of every rational being as a will which makes universal law. Reason, for Kant, commends beliefs, actions, and processes as appropriate; in the case of beliefs this means likely to be true, at least from within the subjective view. Cognitive processes are rational insofar as they provide likely means to an end, though whether the ends themselves can be rational is a further and less settled question.

A central task in the study of Kant's ethics is to understand these expressions of the inescapable, binding requirements of the categorical imperative, and to understand whether they are equivalent at some deep level. Kant's own applications of the notions are not always convincing. One cause of confusion is relating Kant's ethics to theories such as expressivism: an imperative, it seems, cannot merely be the expression of a sentiment, but must derive from something unconditional or necessary, such as the voice of reason. The imperative is the standard mood of sentences used to issue requests and commands; since the need to issue commands is as basic as the need to communicate information, animal signalling systems may often be interpreted either way. Understanding the relationship between commands and other action-guiding uses of language, such as ethical discourse, is therefore important; the ethical theory of prescriptivism in fact equates the two functions. A further question is whether there is an imperative logic. "Hump that bale" seems to follow from "Tote that barge and hump that bale", as "It's raining" follows from "It's windy and it's raining"; but it is harder to say how other forms fare. Does "Shut the door or shut the window" follow from "Shut the window", for example? The usual way to develop an imperative logic is to work in terms of the possibility of satisfying one command without satisfying another, thereby turning it into a variation of ordinary deductive logic.
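The satisfaction-based approach just mentioned can be sketched directly. In this toy rendering (my own construction, with invented toy "worlds"), a command is modelled by the set of states of affairs that satisfy it, and command A entails command B exactly when A cannot be satisfied without satisfying B.

```python
# Illustrative sketch (an assumption of this note): imperative logic
# via satisfaction conditions. A entails B iff every state of affairs
# satisfying A also satisfies B.

from itertools import product

# Toy worlds: (door_shut, window_shut) pairs.
WORLDS = list(product([False, True], repeat=2))

def entails(a, b):
    """A entails B iff it is impossible to satisfy A without satisfying B."""
    return all(b(w) for w in WORLDS if a(w))

shut_door = lambda w: w[0]
shut_window = lambda w: w[1]
shut_both = lambda w: w[0] and w[1]     # "Shut the door and shut the window"
shut_either = lambda w: w[0] or w[1]    # "Shut the door or shut the window"

print(entails(shut_both, shut_window))   # True: the barge-and-bale pattern
print(entails(shut_window, shut_either)) # True on this account
```

Note how the second result exposes the puzzle in the text: on the satisfaction account the disjunctive command does follow from "Shut the window", even though issuing it seems to offer a choice the original command did not.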

Although in everyday usage the morality of people and their ethics amount to the same thing, there is a usage that restricts morality to systems such as that of Kant, based on notions like duty, obligation, and principles of conduct, reserving ethics for the more Aristotelian approach to practical reasoning, based on the notion of a virtue, and generally avoiding the separation of moral considerations from other practical considerations. The scholarly issues are complicated, with some writers seeing Kant as more Aristotelian, and Aristotle as more involved with a separate sphere of responsibility and duty, than the simple contrast suggests.

Cartesian doubt is the method of investigating how much knowledge has its basis in reason or experience, as used by Descartes in the first two Meditations. It attempts to put knowledge upon a secure foundation by first inviting us to suspend judgement on any proposition whose truth can be doubted, even as a bare possibility. The standards of acceptance are gradually raised as we are asked to doubt the deliverances of memory, the senses, and even reason, all of which are in principle capable of letting us down. The process ends in the celebrated Cogito ergo sum: "I think, therefore I am." By locating the point of certainty in my awareness of my own self, Descartes gave a first-person twist to the theory of knowledge that dominated the following centuries, in spite of various counter-attacks on behalf of social and public starting-points. The metaphysics associated with this priority is Cartesian dualism, the separation of mind and matter into two different but interacting substances. Descartes maintains that it takes divine dispensation to certify any relationship between the two realms thus divided, and to prove the reliability of the senses he invokes a clear and distinct perception of highly dubious proofs of the existence of a benevolent deity. This has not met general acceptance: as Hume puts it, "to have recourse to the veracity of the supreme Being, in order to prove the veracity of our senses, is surely making a very unexpected circuit."

Relatedly, Descartes's notorious denial that non-human animals are conscious is a stark illustration of his position. In his conception of matter Descartes also gives preference to rational cogitation over anything delivered by the senses. Since we can conceive of the matter of a ball of wax surviving changes to its sensible qualities, matter is not an empirical concept but ultimately a purely geometrical one, with extension and motion as its only physical nature.

Although the structure of Descartes's epistemology, theory of mind, and theory of matter has been rejected many times, their relentless exposure of the hardest issues, their exemplary clarity, and even their initial plausibility all contrive to make him the central point of reference for modern philosophy.

The term instinct (Lat. instinctus, impulse or urge) implies innately determined behaviour, inflexible in the face of changing circumstance and outside the control of deliberation and reason. The view that animals accomplish even complex tasks not by reason was common to Aristotle and the Stoics, and the inflexibility of instinctive behaviour was used in defence of this position as early as Avicenna. A continuity between animal and human reason was proposed by Hume, and followed by sensationalists such as the naturalist Erasmus Darwin (1731-1802). The theory of evolution prompted various views of the emergence of stereotypical behaviour, and the idea that innate determinants of behaviour are adapted to specific environments is a guiding principle of ethology. In this sense it may be instinctive in human beings to be social; and, given what we now know about the evolution of human language abilities, it seems clear that our real or actualized self is not imprisoned within our minds.

The self is implicitly part of the larger whole of biological life. The human observer owes its existence to its embedded relations with this whole, and constructs its reality on the basis of evolved mechanisms that exist in all human brains. This suggests that any sense of the otherness of self and world is an illusion, one that disguises the actual relations between the part and the whole of which it is a characterization. The self, in the temporality of its being, is part of a whole that is a biological reality. A proper definition of this whole must, of course, include the evolution of the larger indivisible whole: the cosmos, and the unbroken evolution of all life from the first self-replicating molecules that were the ancestors of DNA. It should also include the complex interactions among all the parts of biological reality from which self-regulation emerges, interactions responsible for those properties of the whole that sustain the existence of the parts.

Developments in mathematics, founded on increasingly complex coordinate systems, conditioned our descriptions of physical reality and our metaphysical concerns. In the history of mathematics, the exchanges between the mega-narratives and frame tales of religion and science were critical factors in the minds of those who contributed. The first scientific revolution of the seventeenth century gave scientists the opportunity to understand better how the classical paradigm in physics resulted in the stark Cartesian division between mind and world, a division that became one of the most characteristic features of Western thought. The point here is not another strident, ill-mannered diatribe against our misunderstandings, but an attempt to draw out the implications of self-realization and undivided wholeness for the principles of physical reality and the epistemological foundations of physical theory.

The subjectivity of our minds affects our perceptions of the world that natural science holds to be objective. We can regard both aspects, mind and matter, as individualized forms that belong to the same underlying reality.

Our everyday experience confirms the apparent fact that the world is dual-valued, divided into subjects and objects. We, as conscious, experiencing beings with personality, are the subjects, whereas everything for which we can come up with a name or designation seems to be an object, that which is opposed to us as subjects. Physical objects are only part of the object-world; there are also mental objects, objects of our emotions, abstract objects, religious objects, and so on. Language objectifies our experience. Experiences per se are pure sensation and do not themselves make a distinction between object and subject; only verbalized thought reifies the sensations, by conceptualizing them and pigeonholing them into the given entities of language.

Some thinkers maintain that subject and object are only different aspects of experience: I can experience myself as subject in the act of self-reflection. The fallacy of this argument is obvious: Being a subject implies having an object. We cannot experience something consciously without the mediation of understanding and mind. Our experience is already conceptualized at the time it comes into our consciousness, and in this sense our experience is negative, insofar as it destroys the original pure experience. In a dialectical process of synthesis, the original pure experience becomes an object for us. The common state of our mind is only capable of apperceiving objects; objects are reified negative experience. The same is true for the objective aspect of this theory: in objectifying myself I do not dispense with the subject, for the subject is causally and apodeictically linked to the object. As soon as I make an object of anything, I have to realize that it is the subject which objectifies something; only the subject can do that. Without the subject there are no objects, and without objects there is no subject. This interdependence, however, is not to be understood in terms of dualism, as if object and subject were really independent substances. Since the object is only created by the activity of the subject, and the subject is not a physical entity but a mental one, we have to conclude that the subject-object dualism is purely mentalistic.

Cartesian dualism posits the subject and the object as separate, independent and real substances, both of which have their ground and origin in the highest substance, God. Cartesian dualism, however, contradicts itself: in the very act of positing the I, that is, the subject, as the only certainty, Descartes undermined materialism and thus the concept of res extensa. The physical thing is only probable in its existence, whereas the mental thing is absolutely and necessarily certain. The subject is superior to the object: the object is only derived, the subject original. This makes the object not only inferior in its substantive quality and essence, but relegates it to a level of dependence on the subject. The subject recognizes the object as res extensa, which means that the object can have neither essence nor existence without acknowledgment by the subject. The subject posits the world in the first place, and the subject is posited by God. Quite apart from the problem of interaction between two different substances, then, Cartesian dualism is not adequate for explaining and understanding the subject-object relation.

Denying Cartesian dualism and resorting to monistic theories such as extreme idealism, materialism or positivism does not resolve the problem either. What the positivists did was merely to verbalize the subject-object relation in linguistic forms: it was no longer a metaphysical problem, only a linguistic one, since our language has formed this subject-object dualism. Such thinking is superficial, for in the very act of their analysis these thinkers inevitably think within the mind-set of subject and object. By relativizing object and subject in terms of language and analytical philosophy, they sidestep the elusive and problematic pairing of subject and object, which has been a fundamental question of philosophy since its beginnings. Shunning these metaphysical questions is no solution. Excluding something by reducing it to a merely material and verifiable level is not only pseudo-philosophy but a depreciation and decadence of the great philosophical ideas of mankind.

Therefore, we have to come to grips with the idea of subject and object in a new manner. We experience this dualism as a fact in our everyday lives; every experience is subject to this dualistic pattern. The question, however, is whether this underlying pattern of subject-object dualism is real or only mental. Science assumes it to be real. This assumption does not prove the reality of our experience, but only that with this method science is most successful in explaining empirical facts. Mysticism, on the other hand, believes that there is an original unity of subject and object, and to attain this unity is the goal of religion and mysticism: man has fallen from this unity by disgrace and by sinful behaviour, and his task now is to return and strive toward this highest fulfilment. Yet, on the conclusion reached above, are we not forced to admit that the mystic way of thinking is also only a pattern of the mind, and that mystics, like scientists, simply have their own frame of reference and methodology for explaining supra-sensible facts most successfully?

If we assume mind to be the originator of the subject-object dualism, then we can neither confer more reality on the physical than on the mental aspect, nor deny the one in terms of the other. The crude language of the earliest users of symbols must have consisted largely of gestures and nonsymbolic vocalizations. Their spoken language probably became relatively independent, a closed cooperative system, only gradually. After hominids evolved the use of symbolic communication, symbolic forms progressively took over functions served by non-vocal forms. This is reflected in modern languages. The study of logical form, by contrast, requires using schematic letters and variables (symbols) to stand where terms of a particular category might occur in sentences. The structure of syntax in natural languages often reveals its origins in pointing gestures, in the manipulation and exchange of objects, and in more primitive constructions of spatial and temporal relationships. We still use nonverbal vocalizations and gestures to complement meaning in spoken exchange.

The general idea is very powerful: the relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world. The idea of a perceivable, objective spatial world thus confronts the subject directly: his perceptions change as he changes position within a world that remains more or less stable, and the idea that there is an objective world and the idea that the subject is somewhere within it are given together in what we can perceive.

Any doctrine holding that reality is fundamentally mental in nature is a form of idealism, though the boundaries of such a doctrine are not firmly fixed: the traditional Christian view that God is a sustaining cause, possessing greater reality than his creation, might itself be classified as a form of idealism. The German philosopher, mathematician and polymath Gottfried Leibniz held that the simple substances out of which all else is made are themselves perceiving things, whose perceptions express the nature of external reality. Leibniz thereby reverts to an Aristotelian conception of nature as essentially striving to actualize its potential, though it is not easy to make room in his system for what he thought of as phenomena or for free will. Alongside Descartes and Spinoza, Leibniz was one of the greatest rationalists of the seventeenth century. His principle of the indiscernibility of identicals states that if A is identical with B, then every property that A has B has, and vice versa. This is sometimes known as Leibniz's law.
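Leibniz's law, as just stated, can be written as a pair of second-order schemata (my formalization, not the author's):

```latex
% Indiscernibility of identicals (Leibniz's law proper):
\forall x\,\forall y\,\bigl(x = y \rightarrow \forall P\,(P(x) \leftrightarrow P(y))\bigr)

% Identity of indiscernibles (the converse principle):
\forall x\,\forall y\,\bigl(\forall P\,(P(x) \leftrightarrow P(y)) \rightarrow x = y\bigr)
```

The first direction is uncontroversial; it is the converse, the identity of indiscernibles, that carries Leibniz's distinctive metaphysical commitment.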

A distinctive feature of twentieth-century philosophy has been a series of sustained attacks on the dualisms inherited from earlier periods. The split between mind and body that dominated earlier discussion was attacked in a variety of different ways by twentieth-century thinkers: Heidegger, Merleau-Ponty, Wittgenstein and Ryle all rejected the Cartesian model, though in quite distinct ways. Other cherished dualisms have likewise been attacked, for example the analytic-synthetic distinction, the dichotomy between theory and practice, and the fact-value distinction. However, unlike the rejection of Cartesian dualism, these debates are still alive, with substantial support on either side. It was only toward the close of the century that a more ecumenical spirit began to arise on both sides. Nevertheless, despite the philosophical Cold War, certain curiously similar tendencies emerged on all sides during the mid-twentieth century, which aided the rise of cognitive relativism as a significant phenomenon.

While science offered accounts of the laws of nature and the constituents of matter, and revealed the hidden mechanisms behind appearances, a split appeared in the kind of knowledge available to enquirers. On the one hand, there were the objective, reliable, well-grounded results of empirical enquiry into nature; on the other, the subjective, variable and controversial results of enquiries into morals, society, religion, and so on. There was the realm of the world, which existed massively independent of us, and the human realm, which was complicated, varied and dependent on us. The philosophical conception that developed from this picture was of a split between a view of reality independent of human beings and one dependent on them.

What is more, this notion of objectivity required more than mere inter-subjectivity. The problem with the absolute conception of reality is that it leaves itself open to massive sceptical challenge: if a de-humanized picture of reality is the goal of enquiry, how could we, inevitably bound to human subjectivity, ever reach it? We seem driven to the melancholy conclusion that we will never really have knowledge of reality; anyone wanting to reject that sceptical conclusion must reject the conception of objectivity underlying it. Nonetheless, it was thought that philosophy could help the pursuit of the absolute conception of reality by supplying epistemological foundations for it. After many failed attempts at this, however, other philosophers took up the more modest task of clarifying the meaning and methods of the primary investigators, the scientists. Philosophy could then come into its own in sorting out the more subjective aspects of the human realm: ethics, aesthetics, politics. What is distinctive of the investigation of the absolute conception is its disinterestedness, its cool objectivity, its demonstrable success in achieving results. It is pure theory, the acquisition of a true account of reality. While its results may be put to use in technology, the goal of enquiry is truth itself, with no utilitarian end in view. The human striving for knowledge gets its fullest realization in the scientific effort to flesh out this absolute conception of reality.

The pre-Kantian position believes there is still a point to doing ontology and still an account to be given of the basic structures by which the world is revealed to us. Kant's anti-realism seems to derive from rejecting necessity in reality. The American philosopher Hilary Putnam (1926-) endorses the view that necessity is relative to a description, so that there is necessity only relative to language, not to reality. Yet even if we accept this (and there are in fact good reasons not to), it still doesn't yield ontological relativism: it says only that the world is contingent, and nothing yet about the relative nature of that contingent world.

Idealism has taken several forms. These include subjective idealism, better called immaterialism, the position of the Irish idealist George Berkeley, for whom to exist is to be perceived; transcendental idealism; and absolute idealism. Idealism is opposed to the naturalistic belief that mind is not separate from the rest of the universe but is to be understood, if at all, as a product of natural processes.

The pre-Kantian position - that the world had a definite, fixed, absolute nature that was not constituted by thought - has traditionally been called realism. When challenged by new anti-realist philosophies, it became an important issue to fix exactly what was meant by terms such as realism, anti-realism and idealism. For the metaphysical realist there is a calibrated joint between words and objects in reality: the metaphysical realist has to show that there is a single relation - the correct one - between concepts and mind-independent objects in reality. The American philosopher Hilary Putnam (1926-) holds that only a magical theory of reference, with perhaps noetic rays connecting concepts and objects, could yield the unique connexion required. Instead, reference makes sense in the context of our use of signs for certain purposes. Before Kant there had been systems we would call idealist - for example, the different kinds of neo-Platonic philosophy, or Berkeley's. In these systems there is a denial of material reality in favour of mind. However, the kind of mind in question, usually the divine mind, guaranteed the absolute objectivity of reality. Kant's idealism differs from these earlier idealisms in blocking the possibility of any such guarantee. The mind in question for Kant is the human mind, and it is not capable of thinking what is unthinkable by us, or by any rational being. So Kant's version of idealism results in a form of metaphysical agnosticism. Post-Kantian views do not simply reject Kant; rather, they change the terms of the dialogue about the relation of mind to reality by undermining the assumption that mind and reality are two separate entities requiring linkage. The philosophy of mind seeks to answer such questions as: is mind distinct from matter?
Can we define what it is to be conscious, and can we give principled reasons for deciding whether other creatures are conscious, or whether machines might be made so that they are conscious? What are thinking, feeling, experiencing, remembering? Is it useful to divide the functions of the mind up, separating memory from intelligence, or rationality from sentiment, or do mental functions form an integrated whole? The dominant philosophies of mind in the current Western tradition include varieties of physicalism and functionalism. In the philosophy of mind, functionalism is the modern successor to behaviourism. Its early advocates were the American philosophers Hilary Putnam and Wilfrid Sellars, and its guiding principle is that we can define mental states by a triplet of relations: what typically causes them, the effects they have on other mental states, and the effects they have on behaviour. Functionalism is often compared with a description of a computer, since according to it mental descriptions correspond to descriptions of a machine in terms of software, which remain silent about the underlying hardware or realization of the program the machine is running. The principal advantage of functionalism is its fit with the way we come to know of mental states, both our own and others', namely via their effects on behaviour and on other mental states. As with behaviourism, critics charge that structurally complicated and complex items that do not bear mental states might nevertheless imitate the functions that are cited; according to this criticism, functionalism is too generous and would count too many things as having minds.
It is also queried whether functionalism sees mental similarities only where there is causal similarity, whereas our actual practices of interpretation enable us to ascribe thoughts and desires to persons whose causal structure may be rather different from our own. It may then seem as though beliefs and desires can be variably realized in causal architecture, just as much as they can be in different neurophysiological states.

Homuncular functionalism holds that an intelligent system, or mind, may fruitfully be thought of as the result of a number of sub-systems performing more simple tasks in co-ordination with each other. The sub-systems may be envisioned as homunculi, small and relatively unintelligent agents. The archetype is a digital computer, where a battery of switches capable of only one response (on or off) can make up a machine that can play chess, write dictionaries, etc.
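As a toy illustration of this idea (my own sketch, not from the text), each "homunculus" below is a switch-like unit capable of only one trivial response, yet composing such units yields a competence, one-bit addition, that none of the parts possesses:

```python
# A toy model of homuncular functionalism: each "homunculus" is a
# switch capable of only one trivial response (here, NAND), yet a
# committee of them exhibits a competence none of the parts has.
# Names and the half-adder example are illustrative assumptions.

def nand(a: bool, b: bool) -> bool:
    """The single primitive 'switch': all it can do is NAND."""
    return not (a and b)

# Every further operation is built solely out of NAND homunculi.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

def half_adder(a: bool, b: bool):
    """One-bit addition: returns (sum, carry)."""
    return xor_(a, b), and_(a, b)

print(half_adder(True, True))  # (False, True): 1 + 1 = binary 10
```

The point of the sketch is that nothing in the system "adds": the capacity emerges only from the co-ordination of parts that individually do something much dumber.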

Physicalism is the view that the real world is nothing more than the physical world. The doctrine may, but need not, include the view that everything that can truly be said can be said in the language of physics. Physicalism is opposed to ontologies that include abstract objects, such as possibilities, universals, or numbers, and to mental events and states, insofar as any of these are thought of as independent of physical things, events, and states. While the doctrine is widely adopted, the precise way of stating it is contested. Nor is it entirely clear how capacious a physical ontology can allow itself to be, for while physics does not talk in terms of many everyday objects and events, such as chairs, tables, money or colours, it ought to be consistent with a physicalist ideology to allow that such things exist.

Some philosophers believe that the vagueness of what counts as physical, and of what fits into a physical ontology, makes the doctrine vacuous. Others believe that it forms a substantive metaphysical position. A common way of framing the doctrine is in terms of supervenience: whilst it is allowed that there are legitimate descriptions of things that do not talk of them in physical terms, it is claimed that any such truths about them supervene upon the basic physical facts. However, supervenience has its own problems.

Mind and reality both emerge as issues to be spoken of in this new agnostic key. There is no question of attempting to relate them to some antecedent way things are, or to some as yet untold story of what it is to be a human being.

The most common modern manifestation of idealism is the view called linguistic idealism, on which we create the world we inhabit by employing mind-dependent linguistic and social categories. The difficulty is to give a literal form to this view that does not conflict with the obvious fact that we do not create worlds, but find ourselves in one.

Much epistemology, and especially the theory of ethics, tends to revolve around a leading polarity. The view that some commitments are subjective goes back at least to the Sophists, and the way in which opinion varies with subjective constitution, situation, perspective, and so on is a constant theme in Greek scepticism. There is a tension between the subjective source of judgement in an area and its objective appearance: the way such judgements make apparently independent claims capable of being apprehended correctly or incorrectly. This tension is the driving force behind error theories and eliminativism. Attempts to reconcile the two aspects include moderate anthropocentrism and certain kinds of projectivism.

There is a standard opposition between those who affirm and those who deny the real existence of some kind of thing, fact or state of affairs. Almost any area of discourse may be the focus of this dispute: the external world, the past and future, other minds, mathematical objects, possibilities, universals, and moral or aesthetic properties are examples. A realist about a subject-matter 'S' may hold: (1) that the kinds of things described by 'S' exist; (2) that their existence is independent of us, not an artefact of our minds, our language or our conceptual scheme; (3) that the statements we make in 'S' are not reducible to statements about some different subject-matter; (4) that the statements we make in 'S' have truth conditions, being straightforward descriptions of aspects of the world, made true or false by facts in the world; (5) that we are able to attain truths about 'S', and that it is appropriate fully to believe the things we claim in 'S'. Different oppositions focus on one or another of these claims. Eliminativists think the 'S' discourse should be rejected; sceptics either deny (1) or deny our right to affirm it; idealists and conceptualists disallow (2); reductionists deny (3); instrumentalists and projectivists deny (4); constructive empiricists deny (5). Other combinations are possible, and in many areas there is little consensus on the exact way a realism/anti-realism dispute should be constructed.
One reaction is that realism attempts to look over its own shoulder: that is, it believes that as well as making or refraining from making statements in 'S', we can fruitfully mount a philosophical gloss on what we are doing as we make such statements. Philosophers of a verificationist tendency have been suspicious of the possibility of this kind of metaphysical theorizing; if they are right, the debate vanishes, and that it does so is the claim of minimalism. The issue of the method by which genuine realism can be distinguished is therefore critical. On one view, even our best theory at the moment is to be taken literally: there is no relativity of truth from theory to theory, and we take the current evolving doctrine about the world as literally true. After all, anyone who actually holds a theory is a realist about what that theory posits; that is the point of a theory, to say what really exists.

There have been a great number of different sceptical positions in the history of philosophy. Some ancient sceptics viewed the suspension of judgement at the heart of scepticism as an ethical position, a reasonable way of regarding things: it led to a lack of dogmatism and dissolved the kinds of debate that led to religious, political and social oppression. Other philosophers have invoked hypothetical sceptics in their work to explore the nature of knowledge. Still others have advanced genuinely sceptical positions. Some are global sceptics, holding that we have no knowledge whatsoever; others are doubtful about specific things: whether there is an external world, whether there are other minds, whether we can have any moral knowledge, whether knowledge based on pure reasoning is viable. In response to such scepticism, one can accept the challenge posed by the sceptical hypothesis and seek to answer it on its own terms, or else reject the legitimacy of the challenge. Some philosophers have therefore looked for beliefs immune from doubt to serve as the foundations of our knowledge of the external world, while others have tried to show that the demands made by the sceptic are in some sense mistaken and need not be taken seriously.

The American philosopher C.I. Lewis (1883-1946) was influenced both by Kant's division of knowledge into that which is given and that which processes the given, and by pragmatism's emphasis on the relation of thought to action. Fusing these sources into a distinctive position, Lewis rejected the sharp dichotomies of both theory-practice and fact-value. He conceived of philosophy as the investigation of the categories by which we think about reality. He denied that experience comes to us already categorized: the way we think about reality is socially and historically shaped. Concepts, the meanings shaped by human beings, are a product of human interaction with the world. Theory is infected by practice and facts are shaped by values. Concepts structure our experience and reflect our interests, attitudes and needs. The distinctive role of philosophy is to investigate the criteria of classification and principles of interpretation we use in our multifarious interactions with the world. Specific issues come up for individual sciences, and reflection on these is the philosophy of that science; but there are also issues common to all sciences and to non-scientific activities, and reflection on these is the specific task of philosophy.

The framework idea in Lewis is that of the system of categories by which we mediate reality to ourselves: 'The problem of metaphysics is the problem of the categories'; 'experience doesn't categorize itself'; 'the categories are ways of dealing with what is given to the mind.' Such a framework can change across societies and historical periods: 'our categories are almost as much a social product as is language, and in something like the same sense.' Lewis didn't specifically thematize the question of whether there could be alternative sets of such categories, but he did acknowledge the possibility.

Sharing some common sources with Lewis, the German philosopher Rudolf Carnap (1891-1970) articulated a doctrine of linguistic frameworks that was radically relativistic in its implications. Carnap had a deflationist view of philosophy: he believed that philosophy had no role in telling us truths about reality, but rather played its part in clarifying meanings for scientists. Some philosophers believed that this clarificatory project itself led to further philosophical investigations and to special philosophical truths about meaning, truth, necessity and so on; Carnap rejected this view. Carnap's actual position is less libertarian than it at first appears, since he was concerned to allow different systems of logic that might have different properties useful to scientists working on diverse problems. He doesn't envisage any deductive constraints on the construction of logical systems, but he does envisage practical constraints: we need to build systems that people find useful, and one that allowed wholesale contradiction would be spectacularly useless. There are other, more technical problems with this conventionalism.

Carnap interpreted philosophy as logical analysis. He was primarily concerned with the analysis of the language of science, because he judged the empirical statements of science to be the only factually meaningful ones. His early efforts in The Logical Structure of the World (1928; trans. 1967) aimed to reduce all knowledge claims to the language of sense data; he later developed a preference for a language describing behaviour (physicalistic language), as in his work on the syntax of scientific language in The Logical Syntax of Language (1934; trans. 1937). His various treatments of the verifiability, testability, or confirmability of empirical statements are testimonies to his belief that the problems of philosophy are reducible to the problems of language.

Carnap's principle of tolerance, or the conventionality of language forms, emphasized freedom and variety in language construction. He was particularly interested in the construction of formal, logical systems. He also did significant work in the area of probability, distinguishing between statistical and logical probability in his Logical Foundations of Probability.

All the same, much of traditional epistemology has been occupied with the first of these approaches. Various types of belief were proposed as candidates for sceptic-proof knowledge; for example, beliefs immediately derived from perception were proposed by many as immune to doubt. What these proposals had in common was the idea that empirical knowledge begins with data of the senses that are safe from sceptical challenge, and that a further superstructure of knowledge is to be built on this firm basis. Sense-data were held immune from doubt because they were so primitive: unstructured and below the level of conceptualization. Once given structure and conceptualized, they were no longer safe from sceptical challenge. A differing approach lay in seeking properties internal to beliefs that guaranteed their truth; any belief possessing such properties could be seen to be immune to doubt. Yet, when pressed, the details of how to explain clarity and distinctness themselves, how beliefs with such properties can be used to justify other beliefs lacking them, and why clarity and distinctness should be taken at all as marks of certainty, did not prove compelling. These empiricist and rationalist strategies are examples of approaches that failed to achieve their objective.

The Austrian philosopher Ludwig Wittgenstein (1889-1951), in his later approach to philosophy, undertook a careful examination of the way we actually use language, closely observing differences of context and meaning. In the later parts of the Philosophical Investigations (1953), he dealt at length with topics in philosophical psychology, showing how talk of beliefs, desires, mental states and so on operates in a way quite different from talk of physical objects. In so doing he strove to show that philosophical puzzles arise from treating as similar linguistic practices that are, in fact, quite different. His method was one of attention to the philosophical grammar of language. In On Certainty (1969) this method was applied to epistemological topics, specifically the problem of scepticism.

The most fundamental point Wittgenstein makes against the sceptic is that doubt about absolutely everything is incoherent. Even to articulate a sceptical challenge, one must know the meaning of what is said: if you are not certain of any fact, you cannot be certain of the meaning of your words either. Doubt only makes sense in the context of things already known. However, the British philosopher George Edward Moore (1873-1958) was incorrect in thinking that a statement such as "I know I have two hands" can serve as an argument against the sceptic. The concepts of doubt and knowledge are related to each other: where one is eradicated, it makes no sense to claim the other. But why couldn't one reasonably doubt the existence of one's limbs? There are possible scenarios, such as cases of amputation and phantom limbs, where it makes sense to doubt. Wittgenstein's point, however, is that doubt requires a context of other things taken for granted. It makes sense to doubt given background knowledge about amputation and phantom limbs; it does not make sense to doubt for no good reason: doesn't one need grounds for doubt?

For those of us who find value in Wittgenstein's thought but who reject his quietism about philosophy, his rejection of philosophical scepticism is a useful prologue to more systematic work. Wittgenstein's approach in On Certainty treats standards of correctness as varying from context to context. Just as Wittgenstein resisted the view that there is a single transcendental language game that governs all others, so some systematic philosophers after Wittgenstein have argued for a multiplicity of standards of correctness, and not a single overall dominant one.

The American philosopher Willard Van Orman Quine (1908-2000) differs from Wittgenstein in a number of ways. Traditional philosophy believed that it had a special task in providing foundations for other disciplines, specifically the natural sciences. Quine, by contrast, sees no sharp distinction between philosophical and scientific work: both operate within a seamless web of theoretical beliefs, some enquirers working close to observation, others at a more theoretical level, enquiring into language, knowledge and our general categories of reality. For Quine, there are no special methods available to philosophy that are not available to scientists. He rejects introspective knowledge, and also conceptual analysis as the special preserve of philosophers, for there are no special philosophical methods.

By citing scientific (psychological) evidence against the sceptic, Quine is engaging in a descriptive account of the acquisition of knowledge while ignoring the normative question of whether such beliefs are justified or truth-conducive. The objection, therefore, is that he has changed the subject. Quineans reply by showing that normative issues can and do arise in this naturalized context: tracing the connections between observation sentences and theoretical sentences, showing how the former support the latter, is a way of answering the normative question.

Both Wittgenstein and Quine have shown ways of responding to scepticism that do not take the sceptic's challenge at face value. Wittgenstein undermines the possibility of universal doubt, showing that doubt presupposes some kind of belief, while Quine holds that the sceptic's use of scientific information to raise the sceptical challenge licenses the use of scientific information in response. However, both approaches require significant changes in the practice of philosophy. Wittgenstein's approach has led to a conception of philosophy as therapy; Quine's conception holds that there is no genuine philosophy independent of scientific knowledge.

Post-positivistic philosophers who rejected traditional realist metaphysics needed some argument other than verificationism to reject it. They found such arguments in the philosophy of language, particularly in accounts of reference. The main idea is that the structures and identity conditions we attribute to reality derive from the language we use, rather than from a reality structured independently of thought: such structures and identity conditions are not determined by reality itself but by decisions we make; they are revelatory of the world-as-related-to-by-us. The identity of the world is therefore relative, not absolute.

Common-sense realism holds that most of the entities we commonsensically take to exist really do exist. Scientific realism holds that most of the entities postulated by science likewise exist, and that the existence in question is independent of any constitutive role we might have. The hypothesis of realism explains why our experience is the way it is: we experience the world thus-and-so because the world really is that way. It is the simplest and most efficient way of accounting for our experience of reality. From an early age we come to believe that such objects as stones, trees, and cats exist. Further, we believe that these objects exist even when we are not perceiving them and that they do not depend for their existence on our opinions or on anything mental.

The parallel between biological evolution and conceptual or epistemic evolution can be taken either literally or analogically. The literal version of evolutionary epistemology views biological evolution as the main cause of the growth of knowledge. On this view, called the "evolution of cognitive mechanisms program" by Bradie (1986) and the "Darwinian approach to epistemology" by Ruse (1986), the growth of knowledge occurs through blind variation and selective retention because biological natural selection itself is the cause of epistemic variation and selection. The most plausible version of the literal view does not hold that all human beliefs are innate, but rather that the mental mechanisms guiding the acquisition of non-innate beliefs are themselves innate and the result of biological natural selection. Ruse (1986) defends a version of literal evolutionary epistemology that he links to sociobiology (Rescher, 1990).

On the analogical version of evolutionary epistemology, called the "evolution of theories program" by Bradie (1986) and the "Spencerian approach" (after the nineteenth-century philosopher Herbert Spencer) by Ruse (1986), the development of human knowledge is governed by a process analogous to biological natural selection, rather than by an instance of that mechanism itself. This version of evolutionary epistemology, introduced and elaborated by Donald Campbell (1974) as well as Karl Popper, sees the (partial) fit between theories and the world as explained by a mental process of trial and error known as epistemic natural selection.
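Campbell's "blind variation and selective retention" is, in effect, a simple selection algorithm, and the analogy can be made concrete with a toy sketch. Everything below (the hidden target, the fitness measure, the mutation step) is a hypothetical illustration invented for this example, not anything drawn from the evolutionary-epistemology literature:

```python
import random

# Toy illustration of "blind variation and selective retention":
# candidate hypotheses (here, guesses at a hidden quantity) are varied
# blindly, and only the variant that best fits the evidence is retained.

random.seed(0)
hidden_fact = 42                        # the "world" the hypotheses must fit

def fitness(hypothesis):
    """Selection criterion: closeness of the hypothesis to the fact."""
    return -abs(hypothesis - hidden_fact)

current = random.uniform(0, 100)        # initial blind guess
for _ in range(1000):
    variant = current + random.uniform(-5, 5)   # blind variation
    if fitness(variant) > fitness(current):     # selective retention
        current = variant

print(round(current))                   # retained hypothesis converges on the fact
```

The point of the analogy is that the loop never "looks ahead": each variant is generated without any foresight, and only the selective test against the evidence does the epistemic work.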

Both versions of evolutionary epistemology are usually taken to be types of naturalized epistemology, because both take some empirical facts as a starting point for their epistemological project. The literal version begins by accepting evolutionary theory and a materialist approach to the mind and, from these, constructs an account of knowledge and its development. In contrast, the metaphorical version does not require the truth of biological evolution: it simply draws on biological evolution as the source of a model of natural selection. For this version of evolutionary epistemology to be true, the model of natural selection need only apply to the growth of knowledge, not to the origin and development of species. Crudely put, evolutionary epistemology of the analogical sort could still be true even if Creationism were the correct theory of the origin of species.

Although they do not begin by assuming evolutionary theory, most analogical evolutionary epistemologists are naturalized epistemologists as well; their empirical assumptions simply come from psychology and cognitive science rather than from evolutionary theory. Sometimes, however, evolutionary epistemology is characterized in a seemingly non-naturalistic fashion. Campbell (1974) says that if one is expanding knowledge beyond what one knows, one has no choice but to explore without the benefit of wisdom, i.e., blindly. This, Campbell admits, makes evolutionary epistemology close to being a tautology (and so not naturalistic). Evolutionary epistemology does assert the analytic claim that when expanding one's knowledge beyond what one knows, one must proceed from something that is already known; but, more interestingly, it also makes the synthetic claim that when expanding one's knowledge beyond what one knows, one must proceed by blind variation and selective retention. This claim is synthetic because it can be empirically falsified. The central claim of evolutionary epistemology is thus synthetic, not analytic; if it were analytic, rival epistemologies would be self-contradictory, which they are not. Campbell is right that evolutionary epistemology has the analytic feature he mentions, but he is wrong to think that this is a distinguishing feature, since any plausible epistemology has the same analytic feature (Skagestad, 1978).

Two further issues dominate the literature: realism (what metaphysical commitment does an evolutionary epistemologist have to make?) and progress (according to evolutionary epistemology, does knowledge develop toward a goal?). With respect to realism, many evolutionary epistemologists endorse what is called hypothetical realism, a view that combines a version of epistemological scepticism with tentative acceptance of metaphysical realism. With respect to progress, the problem is that biological evolution is not goal-directed, but the growth of human knowledge seems to be. Campbell (1974) worries about the potential disanalogy here but is willing to bite the bullet and admit that epistemic evolution progresses toward a goal (truth) while biological evolution does not. Others have argued that evolutionary epistemologists must give up the truth-tropic sense of progress, because a natural selection model is in essence non-teleological; as an alternative, following Kuhn (1970), they embrace a non-teleological notion of progress within evolutionary epistemology.

Among the most frequent and serious criticisms levelled against evolutionary epistemology is that the analogical version of the view is false because epistemic variation is not blind (Skagestad, 1978). Ruse (1986) and Stein and Lipton (1990) have argued, however, that this objection fails because, while epistemic variation is not random, its constraints come from heuristics that are, for the most part, themselves the product of blind variation and selective retention. Further, Stein and Lipton conclude that heuristics are analogous to biological pre-adaptations, evolutionary precursors such as a half-wing, a precursor to a wing, which have some function other than the function of their descendant structures. The existence of heuristics guiding epistemic variation is, on this view, not a source of disanalogy but the source of a more articulated account of the analogy.

Many evolutionary epistemologists try to combine the literal and the analogical versions (Bradie, 1986; Stein and Lipton, 1990), saying that those beliefs and cognitive mechanisms which are innate result from natural selection of the biological sort, while those that are not innate result from natural selection of the epistemic sort. This is reasonable as long as the two parts of this hybrid view are kept distinct. An analogical version of evolutionary epistemology with biological variation as its only source of blindness would be a vacuous theory: this would be the case if all our beliefs were innate, or if our non-innate beliefs were not the result of blind variation. Appealing to biological blindness alone is not a legitimate way to produce a hybrid version of evolutionary epistemology, since doing so trivializes the theory. For similar reasons, such an appeal will not save an analogical version of evolutionary epistemology from arguments to the effect that epistemic variation is not blind (Stein and Lipton, 1990).

Although it is a new approach to the theory of knowledge, evolutionary epistemology has attracted much attention, primarily because it represents a serious attempt to flesh out a naturalized epistemology by drawing on several disciplines. If science is relevant to understanding the nature and development of knowledge, then evolutionary theory is among the disciplines worth a look. Insofar as evolutionary epistemology looks there, it is an interesting and potentially fruitful epistemological programme.

What makes a belief justified, and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. In recent decades a number of epistemologists have pursued this plausible idea with a variety of specific proposals. Some causal theories of knowledge have it that a true belief that p is knowledge just in case it has the right causal connexion to the fact that p. Such a criterion can be applied only to cases where the fact that p is of a sort that can enter into causal relations; this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization. Proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject's environment.

For example, Armstrong (1973) proposed that a belief of the form "This perceived object is F" is (non-inferential) knowledge if and only if the belief is a completely reliable sign that the perceived object is F; that is, the fact that the object is F contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject x and perceived object y, if x has those properties and believes that y is F, then y is F. (Dretske (1981) offers a rather similar account, in terms of the belief's being caused by a signal received by the perceiver that carries the information that the object is F.)

Goldman (1986) has proposed an importantly different causal criterion, namely, that a true belief is knowledge if it is produced by a type of process that is both globally and locally reliable. A process is globally reliable if its propensity to cause true beliefs is sufficiently high. Local reliability has to do with whether the process would have produced a similar but false belief in certain counterfactual situations alternative to the actual situation. This way of marking off true beliefs that are knowledge does not require the fact believed to be causally related to the belief, and so it could in principle apply to knowledge of any kind of truth.

Goldman requires the global reliability of the belief-producing process for the justification of a belief; he requires it also for knowledge, because justification is required for knowledge. What he requires for knowledge, but does not require for justification, is local reliability. His idea is that a justified true belief is knowledge if the type of process that produced it would not have produced it in any relevant counterfactual situation in which it is false. The theory of relevant alternatives can be viewed as an attempt to provide a more satisfactory response to this tension in our thinking about knowledge: it attempts to characterize knowledge in a way that preserves both our belief that knowledge is an absolute concept and our belief that we have knowledge.

According to the theory, we need to qualify rather than deny the absolute character of knowledge. We should view knowledge as absolute, but relative to certain standards (Dretske, 1981; Cohen, 1988). That is to say, in order to know a proposition our evidence need not eliminate all the alternatives to that proposition; rather, we can know it when our evidence eliminates all the relevant alternatives, where the set of relevant alternatives (a proper subset of the set of all alternatives) is determined by some standard. Moreover, according to the relevant alternatives view, the standards determine that the alternatives raised by the sceptic are not relevant. If this is correct, then the fact that our evidence cannot eliminate the sceptic's alternatives does not lead to a sceptical result. Since knowledge requires only the elimination of the relevant alternatives, the relevant alternatives view preserves both strands in our thinking about knowledge: knowledge is an absolute concept, but because the absoluteness is relative to a standard, we can know many things.

The interesting thesis that counts as a causal theory of justification (in the meaning of "causal theory" intended here) is this: a belief is justified just in case it was produced by a type of process that is globally reliable, that is, whose propensity to produce true beliefs, definable (to a good approximation) as the proportion of the beliefs it produces (or would produce) that are true, is sufficiently great.
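The reliability measure just described is simply a truth-ratio. As a minimal sketch (the belief records and the threshold are invented for this example, not Goldman's own figures):

```python
# Toy illustration of global reliability as a truth-ratio:
# a process type counts as "globally reliable" when the proportion of
# true beliefs it produces exceeds some stipulated threshold.
# The belief records and the 0.9 threshold are hypothetical.

beliefs_produced = [True, True, True, False, True,
                    True, True, True, True, True]   # True = belief was true

def reliability(outcomes):
    """Proportion of the produced beliefs that are true."""
    return sum(outcomes) / len(outcomes)

THRESHOLD = 0.9          # "sufficiently great", by stipulation

r = reliability(beliefs_produced)
print(r)                 # 0.9
print(r >= THRESHOLD)    # True: the process type counts as globally reliable
```

The unresolved questions in the paragraphs that follow concern which process and which process type the ratio should be computed over, and in which world.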

This proposal will be adequately specified only when we are told (i) how much of the causal history of a belief counts as part of the process that produced it, (ii) which of the many types to which the process belongs is the relevant type for purposes of assessing its reliability, and (iii) relative to which world or worlds the reliability of the process type is to be assessed: the actual world, the closest worlds containing the case being considered, or something else? Let us look at the answers suggested by Goldman, the leading proponent of a reliabilist account of justification.

(1) Goldman (1979, 1986) takes the relevant belief-producing process to include only the proximate causes internal to the believer. So, for instance, when I believe that the telephone is ringing, the process that produced the belief, for purposes of assessing reliability, includes just the causal chain of neural events from the stimulus in my ears inward, together with the other brain states on which the production of the belief depended; it does not include any events in the telephone, or the sound waves travelling between it and my ears, or any earlier decisions I made that were responsible for my being within hearing distance of the telephone at that time. It does seem intuitively plausible that the process on which a belief's justification depends should be restricted to ones internal and proximate to the belief. Why? Goldman does not tell us. One answer some philosophers might give is that a belief's being justified at a given time can depend only on facts directly accessible to the believer's awareness at that time (for, if a believer ought to hold only beliefs that are justified, she must be able to tell at any given time what beliefs would then be justified for her). However, this cannot be Goldman's answer, because he wishes to include in the relevant process neural events that are not directly accessible to consciousness.

(2) Once the reliabilist has told us how to delimit the process producing a belief, he needs to tell us which of the many types to which it belongs is the relevant type. Consider, for example, the process that produces your belief that you see a book before you. One very broad type to which that process belongs would be specified by "coming to a belief as to something one perceives as a result of activation of the nerve endings in some of one's sense-organs". A narrower type to which that same process belongs would be specified by "coming to a belief as to what one sees as a result of activation of the nerve endings in one's retinas". A still narrower type would be given by inserting in the last specification a description of the particular pattern of activation of the retina's particular cells. Which of these or other types to which the token process belongs is the relevant type for determining whether the type of process that produced your belief is reliable?

If we select a type that is too broad, we will count as having the same degree of justification various beliefs that intuitively seem to have different degrees of justification. Thus the broadest type we specified for your belief that you see a book before you applies also to perceptual beliefs in which the object is far away and seen only briefly, and such beliefs are intuitively less justified. On the other hand, if we are allowed to select a type that is as narrow as we please, then we can make it out that an obviously unjustified but true belief was produced by a reliable type of process. For example, suppose I see a blurred shape through the fog far off in a field and unjustifiedly, but correctly, believe that it is a sheep: if we include enough details about my retinal image in specifying the type of the visual process that produced that belief, we can specify a type that is likely to have only that one instance and is therefore 100 percent reliable. Goldman conjectures (1986) that the relevant process type is "the narrowest type that is causally operative". Presumably, a feature of the process producing a belief was causally operative in producing it just in case some alternative feature instead would not have led to that belief. We need to say "some" here rather than "any", because, for example, when I see a tree, the particular shape of my retinal image is causally operative in producing my belief that I see a tree, even though there are alternative shapes, for example oak shapes or maple shapes, that would have produced the same belief.

(3) Should the justification of a belief in a hypothetical, non-actual example turn on the reliability of the belief-producing process in the possible world of the example? That leads to the implausible result that in a world run by a Cartesian demon, a powerful being who causes the other inhabitants of the world to have rich and coherent sets of perceptual and memory impressions that are all illusory, the perceptual and memory beliefs of the other inhabitants are all unjustified, for they are produced by processes that are, in that world, quite unreliable. If we say instead that it is the reliability of the processes in the actual world that matters, we get the equally undesirable result that if the actual world is a demon world, then our perceptual and memory beliefs are all unjustified.

Goldman's solution (1986) is that the reliability of process types is to be gauged by their performance in "normal" worlds, that is, worlds consistent with "our general beliefs about the world . . . about the sorts of objects, events and changes that occur in it". This gives the intuitively right results for the problem cases just considered, but it makes justification implausibly relative to the believer's general picture of the world. If there are people whose general beliefs about the world are very different from mine, then there may, on this account, be beliefs that I can correctly regard as justified (ones produced by processes that are reliable in what I take to be a normal world) but that they can correctly regard as not justified.

However these questions about the specifics are dealt with, there are reasons for questioning the basic idea that the criterion for a belief's being justified is its being produced by a reliable process. Doubt about the sufficiency of the reliabilist criterion is prompted by a sort of example that Goldman himself uses for another purpose. Suppose that being in brain-state B always causes one to believe that one is in brain-state B. Here the reliability of the belief-producing process is perfect, but we can readily imagine circumstances in which a person goes into brain-state B and therefore has the belief in question, though this belief is by no means justified (Goldman, 1979). Doubt about the necessity of the condition arises from the possibility that one might know that one has strong justification for a certain belief and yet that knowledge is not what actually prompts one to believe. For example, I might be well aware that, having read the weather bureau's forecast that it will be much hotter tomorrow, I have ample reason to be confident that it will be hotter tomorrow; yet I irrationally refuse to believe it until Wally tells me that he feels in his joints that it will be hotter tomorrow. Here what prompts me to believe does not justify my belief, but my belief is nevertheless justified by my knowledge of the weather bureau's prediction and of its evidential force, and I could appeal to that knowledge to rebut any charge that I ought not to hold the belief. Indeed, given my justification, and given that there is nothing untoward about the weather bureau's prediction, my belief, if true, can be counted knowledge. This sort of example raises doubt whether any causal condition, be it a reliable process or something else, is necessary for either justification or knowledge.

Philosophers and scientists alike have often held that the simplicity or parsimony of a theory is one reason, all else being equal, to view it as true. This goes beyond the unproblematic idea that simpler theories are easier to work with and have greater aesthetic appeal.

One theory is more parsimonious than another when it postulates fewer entities, processes, changes or explanatory principles; the simplicity of a theory depends on essentially the same considerations, though parsimony and simplicity are not obviously the same thing. It is plausible to demand clarification of what makes one theory simpler or more parsimonious than another before the justification of these methodological maxims can be addressed.

If we set this description problem to one side, the major normative problem is as follows: What reason is there to think that simplicity is a sign of truth? Why should we accept a simpler theory instead of its more complex rivals? Newton and Leibniz thought that the answer was to be found in a substantive fact about nature. In Principia, Newton laid down as his first Rule of Reasoning in Philosophy that nature does nothing in vain . . . for Nature is pleased with simplicity and affects not the pomp of superfluous causes. Leibniz hypothesized that the actual world obeys simple laws because Gods taste for simplicity influenced his decision about which world to actualize.

The tragedy of the Western mind, described by Koyré, is a direct consequence of the stark Cartesian division between mind and world. We discover the certain principles of physical reality, said Descartes, "not by the prejudices of the senses, but by the light of reason, and which thus possess so great evidence that we cannot doubt of their truth". Since the real, or that which actually exists external to ourselves, was in his view only that which could be represented in the quantitative terms of mathematics, Descartes concluded that all qualitative aspects of reality could be traced to the deceitfulness of the senses.

The most fundamental aspect of the Western intellectual tradition is the assumption that there is a fundamental division between the material and the immaterial world or between the realm of matter and the realm of pure mind or spirit. The metaphysical frame-work based on this assumption is known as ontological dualism. As the word dual implies, the framework is predicated on an ontology, or a conception of the nature of God or Being, that assumes reality has two distinct and separable dimensions. The concept of Being as continuous, immutable, and having a prior or separate existence from the world of change dates from the ancient Greek philosopher Parmenides. The same qualities were associated with the God of the Judeo-Christian tradition, and they were considerably amplified by the role played in theology by Platonic and Neoplatonic philosophy.

Nicolas Copernicus, Galileo, Johannes Kepler, and Isaac Newton were all inheritors of a cultural tradition in which ontological dualism was a primary article of faith. Hence the idealization of mathematics as a source of communion with God, which dates from Pythagoras, provided a metaphysical foundation for the emerging natural sciences. This explains why the creators of classical physics believed that doing physics was a form of communion with the geometrical and mathematical forms resident in the perfect mind of God. This view would survive in a modified form in what is now known as Einsteinian epistemology, and it accounts in no small part for the reluctance of many physicists to accept the epistemology associated with the Copenhagen Interpretation.

At the beginning of the nineteenth century, Pierre-Simon Laplace, along with a number of other French mathematicians, advanced the view that the science of mechanics constituted a complete view of nature. Since this science, by its own epistemology, had revealed itself to be the fundamental science, the hypothesis of God was, they concluded, entirely unnecessary.

Laplace is recognized for eliminating not only the theological component of classical physics but the entire metaphysical component as well. The epistemology of science requires, he said, that we proceed by inductive generalizations from observed facts to hypotheses that are "tested by observed conformity of the phenomena". What was unique about Laplace's view of hypotheses was his insistence that we cannot attribute reality to them. Although concepts like force, mass, motion, cause, and laws are obviously present in classical physics, they exist in Laplace's view only as quantities. Physics is concerned, he argued, with quantities that we associate as a matter of convenience with concepts; the truths about nature are only the quantities.

As this view of hypotheses and of the truths of nature as quantities was extended in the nineteenth century to the mathematical description of phenomena like heat, light, electricity, and magnetism, Laplace's assumptions about the actual character of scientific truth seemed to be confirmed. This progress suggested that if we could remove all thoughts about the "nature of" or the "source of" phenomena, the pursuit of strictly quantitative concepts would bring us to a complete description of all aspects of physical reality. Subsequently, figures like Comte, Kirchhoff, Hertz, and Poincaré developed a program for the study of nature that was quite different from that of the original creators of classical physics.

The seventeenth-century view of physics as a philosophy of nature or as natural philosophy was displaced by the view of physics as an autonomous science that was the science of nature. This view, which was premised on the doctrine of positivism, promised to subsume all of nature with a mathematical analysis of entities in motion and claimed that the true understanding of nature was revealed only in the mathematical description. Since the doctrine of positivism assumes that the knowledge we call physics resides only in the mathematical formalism of physical theory, it disallows the prospect that the vision of physical reality revealed in physical theory can have any other meaning. In the history of science, the irony is that positivism, which was intended to banish metaphysical concerns from the domain of science, served to perpetuate a seventeenth-century metaphysical assumption about the relationship between physical reality and physical theory.

Epistemology since Hume and Kant has drawn back from this theological underpinning. Indeed, the very idea that nature is simple (or uniform) has come in for a critique. The view has taken hold that a preference for simple and parsimonious hypotheses is purely methodological: It is constitutive of the attitude we call scientific and makes no substantive assumption about the way the world is.

A variety of otherwise diverse twentieth-century philosophers of science have attempted, in different ways, to flesh out this position. Two examples must suffice here (see Hesse (1969) for summaries of other proposals). Popper (1959) holds that scientists should prefer highly falsifiable (improbable) theories, and he tries to show that simpler theories are more falsifiable. Quine (1966), in contrast, sees a virtue in theories that are highly probable, and he argues for a general connexion between simplicity and high probability.

Both these proposals are global: They attempt to explain why simplicity should be part of the scientific method in a way that spans all scientific subject matters. No assumption about the details of any particular scientific problem serves as a premiss in Popper's or Quine's arguments.

Newton and Leibniz thought that the justification of parsimony and simplicity flows from the hand of God; Popper and Quine try to justify these methodological maxims without assuming anything substantive about the way the world is. In spite of these differences in approach, they have something in common: They assume that all uses of parsimony and simplicity in the separate sciences can be encompassed in a single justifying argument. Recent developments in confirmation theory suggest that this assumption should be scrutinized. Good (1983) and Rosenkrantz (1977) have emphasized the role of auxiliary assumptions in mediating the connexion between hypotheses and observations. Whether a hypothesis is well supported by some observations, or whether one hypothesis is better supported than another by those observations, crucially depends on empirical background assumptions about the inference problem at hand. The same view applies to the idea of prior probability (or prior plausibility). If one hypothesis is judged more plausible than another, even though they are equally supported by current observations, this must be due to an empirical background assumption.

Principles of parsimony and simplicity mediate the epistemic connexion between hypotheses and observations. Perhaps these principles are able to do this because they are surrogates for an empirical background theory. It is not that there is one background theory presupposed by every appeal to parsimony; this gets the quantifier order backwards. Rather, the suggestion is that each parsimony argument is justified only to the degree that it reflects an empirical background theory about the subject matter. Once this theory is brought out into the open, the principle of parsimony becomes entirely dispensable (Sober, 1988).

This local approach to the principles of parsimony and simplicity resurrects the idea that they make sense only if the world is one way rather than another. It rejects the idea that these maxims are purely methodological. How defensible this point of view is will depend on detailed case studies of scientific hypothesis evaluation and on further developments in the theory of scientific inference.

An inference is a (perhaps very complex) act of thought by virtue of which (1) one passes from a set of one or more propositions or statements to a proposition or statement, and (2) it appears that the latter is true if the former is or are. This psychological characterization recurs throughout the literature with only inessential variations. Desiring a better characterization of inference is natural. Yet attempts to provide one by constructing a fuller psychological explanation fail to comprehend the grounds on which inference is objectively valid, a point elaborately made by Gottlob Frege. Attempts to understand the nature of inference through the device of representing inferences by formal-logical calculations or derivations (1) leave us puzzled about the relation of formal-logical derivations to the informal inferences they are supposed to represent or reconstruct, and (2) leave us worried about the sense of such formal derivations. Are these derivations inferences? Are not informal inferences needed in order to apply the rules governing the construction of formal derivations (inferring that this operation is an application of that formal rule)? These are concerns cultivated by, for example, Wittgenstein.

Coming up with an adequate characterization of inference, and even working out what would count as an adequate characterization here, is by no means a resolved philosophical problem.

Traditionally, a proposition that is not a conditional is called categorical, as with simple affirmative and negative propositions. Modern opinion is wary of the distinction, since what appears categorical may vary with the choice of a primitive vocabulary and notation. Apparently categorical propositions may also turn out to be disguised conditionals: "X is intelligent" (categorical?) may be equivalent to "if X is given a range of tasks, she does them better than many people" (conditional?). The problem is not merely one of classification, since deep metaphysical questions arise when facts that seem to be categorical, and therefore solid, come to seem by contrast conditional, or purely hypothetical or potential.

The distinction between necessary and sufficient conditions is this: If p is a necessary condition of q, then q cannot be true unless p is true; if p is a sufficient condition of q, then the truth of p guarantees the truth of q. Thus steering well is a necessary condition of driving in a satisfactory manner, but it is not sufficient, for one can steer well but drive badly for other reasons. Confusion may result if the distinction is not heeded. For example, the statement that A causes B may be interpreted to mean that A is itself a sufficient condition for B, or that it is only a necessary condition for B, or perhaps a necessary part of a total sufficient condition. Lists of conditions to be met for satisfying some administrative or legal requirement frequently attempt to give individually necessary and jointly sufficient sets of conditions.

A conditional is any proposition of the form "if p then q". The condition hypothesized, p, is called the antecedent of the conditional, and q the consequent. Various kinds of conditional have been distinguished. The weakest is that of material implication, which says merely that either not-p, or q. Stronger conditionals include elements of modality, corresponding to the thought that if p is true then q must be true. Ordinary language is very flexible in its use of the conditional form, and there is controversy over whether conditionals are better treated semantically, yielding different kinds of conditionals with different meanings, or pragmatically, in which case there would be one basic meaning, with surface differences arising from other implicatures.
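The material conditional's "either not-p, or q" reading can be checked mechanically; a minimal sketch in Python (the function name is ours, not the text's):

```python
# The material conditional "if p then q" is defined as: not-p, or q.
def conditional(p: bool, q: bool) -> bool:
    return (not p) or q

# Build the full truth table; the only row on which the conditional
# comes out false is the one where p is true and q is false.
table = {(p, q): conditional(p, q) for p in (True, False) for q in (True, False)}
false_rows = [row for row, value in table.items() if not value]
print(false_rows)  # [(True, False)]
```

This makes vivid why the material conditional is the weakest reading: it is refuted only by a true antecedent with a false consequent, never by any modal connection failing to hold.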

It follows from the definition of strict implication that a necessary proposition is strictly implied by any proposition, and that an impossible proposition strictly implies any proposition. If strict implication corresponds to q follows from p, then this means that a necessary proposition follows from anything at all, and anything at all follows from an impossible proposition. This is a problem if we wish to distinguish between valid and invalid arguments with necessary conclusions or impossible premises.
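In the usual modal notation, writing $\Box$ for necessity and defining strict implication by $p \prec q =_{\mathrm{df}} \Box(p \rightarrow q)$, the two consequences just mentioned can be displayed as:

```latex
\Box q \;\vdash\; \Box(p \rightarrow q)
  \quad \text{(a necessary proposition is strictly implied by any proposition)}
\Box \neg p \;\vdash\; \Box(p \rightarrow q)
  \quad \text{(an impossible proposition strictly implies any proposition)}
```

Both follow because a conditional with a necessarily true consequent, or a necessarily false antecedent, is itself true in every possible situation.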

To set up the Humean problem of induction, suppose that there is some property A concerning an observational or experimental situation, and that out of a large number of observed instances of A, some fraction m/n (possibly equal to 1) have also been instances of some logically independent property B. Suppose further that the background circumstances not specified in these descriptions have been varied to a substantial degree, and that there is no collateral information available concerning the frequency of B's among A's or concerning causal or nomological connections between instances of A and instances of B.

In this situation, an enumerative or instantial inductive inference would move from the premiss that m/n of observed A's are B's to the conclusion that approximately m/n of all A's are B's. (The usual probability qualification will be assumed to apply to the inference, rather than being part of the conclusion.) Here the class of A's should be taken to include not only unobserved A's and future A's, but also possible or hypothetical A's (an alternative conclusion would concern the probability or likelihood of the next observed A being a B).
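The schema of enumerative induction can be displayed compactly (the notation is ours):

```latex
\frac{m/n \text{ of observed } A\text{'s are } B\text{'s}}
     {\text{Therefore (probably), approximately } m/n \text{ of all } A\text{'s are } B\text{'s}}
```

The special case m/n = 1 gives the familiar inference from "all observed A's are B's" to "all A's are B's".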

The traditional or Humean problem of induction, often referred to simply as the problem of induction, is the problem of whether and why inferences that fit this schema should be considered rationally acceptable or justified from an epistemic or cognitive standpoint, i.e., whether and why reasoning in this way is likely to lead to true claims about the world. Is there any sort of argument or rationale that can be offered for thinking that conclusions reached in this way are likely to be true if the corresponding premisses are true, or even that their chances of truth are significantly enhanced?

Hume's discussion of this issue deals explicitly only with cases where all observed A's are B's, but his argument applies just as well to the more general case. His conclusion is entirely negative and sceptical: Inductive inferences are not rationally justified, but are instead the result of an essentially a-rational process, custom or habit. Hume (1711-76) challenges the proponent of induction to supply a cogent line of reasoning that leads from an inductive premiss to the corresponding conclusion, and offers an extremely influential argument in the form of a dilemma (sometimes referred to as Hume's fork) to show that no such reasoning can be supplied.

Such reasoning would, he argues, have to be either deductively demonstrative reasoning concerning relations of ideas or experimental, i.e., empirical, reasoning concerning matters of fact or existence. It cannot be the former, because all demonstrative reasoning relies on the avoidance of contradiction, and it is no contradiction to suppose that the course of nature may change, that an order observed in the past will not continue into the future. But it cannot be the latter, since any empirical argument would appeal to the success of such reasoning in past experience, and the justifiability of generalizing from experience is precisely what is at issue, so that any such appeal would be question-begging. Hence, Hume concludes that there can be no such reasoning (1748).

An alternative version of the problem may be obtained by formulating it with reference to the so-called Principle of Induction, which says roughly that the future will resemble the past or, somewhat better, that unobserved cases will resemble observed cases. An inductive argument may be viewed as enthymematic, with this principle serving as a suppressed premiss, in which case the issue is obviously how such a premiss can be justified. Hume's argument is then that no such justification is possible: The principle cannot be justified a priori, because it is not contradictory to suppose it false; and it cannot be justified by appeal to its having been true in past experience without obviously begging the question.

The predominant recent responses to the problem of induction, at least in the analytic tradition, in effect accept the main conclusion of Hume's argument, namely, that inductive inferences cannot be justified in the sense of showing that the conclusion of such an inference is likely to be true if the premiss is true, and thus attempt to find another sort of justification for induction. Such responses fall into two main categories: (i) pragmatic justifications or vindications of induction, mainly developed by Hans Reichenbach (1891-1953), and (ii) ordinary language justifications of induction, whose most important proponent is Peter Frederick Strawson (1919-). In contrast, some philosophers still attempt to resist Hume's dilemma by arguing either (iii) that, contrary to appearances, induction can be inductively justified without vicious circularity, or (iv) that an a priori justification of induction is possible after all. In more detail:

(1) Reichenbach's pragmatic justification presents induction as a method for arriving at posits regarding, e.g., the proportion of A's that are B's. Such a posit is not a claim asserted to be true, but is instead an intellectual wager analogous to a bet made by a gambler. Understood in this way, the inductive method says that one should posit that the observed proportion is, within some measure of approximation, the true proportion, and then continually correct that initial posit as new information comes in.

The gambler's bet is normally an appraised posit, i.e., he knows the chances or odds that the outcome on which he bets will actually occur. In contrast, the inductive bet is a blind posit: We do not know the chances that it will succeed, or even that success is possible. What we are gambling on when we make such a bet is the value of a certain proportion in the independent world, which Reichenbach construes as the limit of the observed proportion as the number of cases increases to infinity. Nevertheless, we have no way of knowing that there is even such a limit, and no way of knowing that the proportion of A's that are B's converges in the end on some stable value rather than varying at random. If we cannot know that this limit exists, then we obviously cannot know that we have any definite chance of finding it.

What we can know, according to Reichenbach, is that if there is a truth of this sort to be found, the inductive method will eventually find it. That this is so is an analytic consequence of Reichenbach's account of what it is for such a limit to exist. The only way that the inductive method of making an initial posit and then refining it in light of new observations can fail eventually to arrive at the true proportion is if the series of observed proportions never converges on any stable value, which means that there is no truth to be found concerning the proportion of A's that are B's. Thus, induction is justified, not by showing that it will succeed, or indeed that it has any definite likelihood of success, but only by showing that it will succeed if success is possible. Reichenbach's claim is that no more than this can be established for any method, and hence that induction gives us our best chance for success, our best gamble in a situation where there is no alternative to gambling.
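Reichenbach's rule of positing the observed relative frequency and correcting the posit as cases accumulate can be illustrated with a toy simulation; a minimal sketch under invented assumptions (the "true limit" of 0.7, the sample size, and the random seed are ours, chosen only so that a limit exists for the method to find):

```python
import random

random.seed(0)

TRUE_LIMIT = 0.7  # assumption: the world happens to have this stable proportion

observed_b = 0
posits = []
for n in range(1, 2001):
    # Each trial: is this newly observed A also a B?
    observed_b += random.random() < TRUE_LIMIT
    # The "straight rule" posit: the observed proportion so far.
    posits.append(observed_b / n)

# If the sequence of observed proportions converges at all, the posits
# converge with it; here the final posit lands near 0.7.
print(posits[9], posits[-1])
```

The point of the simulation is only Reichenbach's conditional claim: in a world where the limit exists, this rule tracks it; in a chaotic world with no limit, the posits would never settle, and no method could do better.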

This pragmatic response to the problem of induction faces several serious problems. First, there are indefinitely many other methods for arriving at posits for which the same sort of defence can be given: Methods that yield the same result as the inductive method in the long run but differ arbitrarily in the short run. Despite the efforts of others, it is unclear that there is any satisfactory way to exclude such alternatives, in order to avoid the result that any arbitrarily chosen short-term posit is just as reasonable as the inductive posit. Second, even if there is a truth of the requisite sort to be found, the inductive method is only guaranteed to find it, or even to come within any specifiable distance of it, in the indefinite long run. Any actual application of inductive results, however, always takes place in the short run, making the relevance of the pragmatic justification to actual practice uncertain. Third, and most important, it needs to be emphasized that Reichenbach's response to the problem simply accepts the claim of the Humean sceptic that an inductive premiss never provides the slightest reason for thinking that the corresponding inductive conclusion is true. Reichenbach himself is quite candid on this point, but this does not alleviate the intuitive implausibility of saying that we have no more reason for thinking that our scientific and commonsense conclusions arrived at by induction are true than, to use Reichenbach's own analogy (1949), a blind man wandering in the mountains who feels an apparent trail with his stick has for thinking that following it will lead him to safety.

An approach to induction resembling Reichenbach's, in claiming that particular inductive conclusions are posits or conjectures rather than the conclusions of cogent inferences, is offered by Popper. However, Popper's view is even more overtly sceptical: It amounts to saying that all that can ever be said in favour of the truth of an inductive claim is that the claim has been tested and not yet been shown to be false.

(2) The ordinary language response to the problem of induction has been advocated by many philosophers, but Strawson's version is the best known. Strawson claims that the question whether induction is justified or reasonable makes sense only if it tacitly involves the demand that inductive reasoning meet the standards appropriate to deductive reasoning, i.e., that inductive conclusions be shown to follow deductively from the inductive premisses. Such a demand cannot, of course, be met, but only because it is illegitimate: Inductive and deductive reasoning are simply fundamentally different kinds of reasoning, each possessing its own autonomous standards, and there is no reason to demand or expect that one of these kinds meet the standards of the other. If induction is assessed by inductive standards, the only ones that are appropriate, then it is obviously justified.

The problem here is to understand what this allegedly obvious justification of induction amounts to. In his main discussion of the point (1952), Strawson claims that it is an analytic truth that believing a conclusion for which there is strong evidence is reasonable, and an analytic truth that inductive evidence of the sort captured by the schema presented earlier constitutes strong evidence for the corresponding inductive conclusion, thus apparently yielding the analytic conclusion that believing a conclusion for which there is inductive evidence is reasonable. Nevertheless, he also admits, indeed insists, that the claim that inductive conclusions will be true in the future is contingent, empirical, and may turn out to be false (1952). Thus, the notions of reasonable belief and of strong evidence must apparently be understood in ways that have nothing to do with likelihood of truth, presumably by appeal to the standards of reasonableness and strength of evidence that are accepted by the community and embodied in ordinary usage.

Understood in this way, Strawson's response to the problem of induction does not speak to the central issue raised by Humean scepticism: The issue of whether the conclusions of inductive arguments are likely to be true. It amounts to saying merely that if we reason in this way, we can correctly call ourselves reasonable and our evidence strong, according to our accepted community standards. On the underlying issue of whether following these standards is a good way to find the truth, however, the ordinary language response appears to have nothing to say.

(3) The main attempts to show that induction can be justified inductively have concentrated on showing that such a defence can avoid circularity. Skyrms (1975) formulates perhaps the clearest version of this general strategy. The basic idea is to distinguish different levels of inductive argument: A first level in which induction is applied to things other than arguments; a second level in which it is applied to arguments at the first level, arguing that they have been observed to succeed so far and hence are likely to succeed in general; a third level in which it is applied in the same way to arguments at the second level; and so on. Circularity is allegedly avoided by treating each of these levels as autonomous and justifying the arguments at each level by appeal to an argument at the next level.

One problem with this sort of move is that even if circularity is avoided, the movement to higher and higher levels will clearly eventually fail simply for lack of evidence: A level will be reached at which there have not been enough successful inductive arguments to provide a basis for inductive justification at the next higher level, and if this is so, then the whole series of justifications collapses. A more fundamental difficulty is that the epistemological significance of the distinction between levels is obscure. If the issue is whether reasoning in accord with the original schema offered above ever provides a good reason for thinking that the conclusion is likely to be true, then it still seems question-begging, even if not flatly circular, to answer this question by appeal to another argument of the same form.

(4) The idea that induction can be justified on a purely a priori basis is in one way the most natural response of all: It alone treats an inductive argument as an independently cogent piece of reasoning whose conclusion can be seen rationally to follow, although perhaps only with probability, from its premiss. Such an approach has, however, only rarely been advocated (Russell, 1913, and BonJour, 1986), and is widely thought to be clearly and demonstrably hopeless.

Many of the reasons for this pessimistic view depend on general epistemological theses about the nature or possibility of a priori cognition. Thus if, as Quine alleges, there is no a priori justification of any kind, then obviously an a priori justification for induction is ruled out. Or if, as more moderate empiricists have claimed, a priori knowledge must be analytic, then again an a priori justification for induction seems to be precluded, since the claim that if an inductive premiss is true then the conclusion is likely to be true does not fit the standard conceptions of analyticity. A consideration of these matters is beyond the scope of the present discussion.

There are, however, two more specific and quite influential reasons for thinking that an a priori approach is impossible that can be briefly considered. First, there is the assumption, originating in Hume but since adopted by very many others, that an a priori defence of induction would have to involve turning induction into deduction, i.e., showing, per impossibile, that the inductive conclusion follows deductively from the premiss, so that it is a formal contradiction to accept the latter and deny the former. However, it is unclear why an a priori approach need be committed to anything this strong. It would be enough if it could be argued that it is a priori unlikely that such a premiss be true and the corresponding conclusion false.

Second, Reichenbach defends his view that pragmatic justification is the best that is possible by pointing out that a completely chaotic world, in which there is simply no true conclusion to be found as to the proportion of A's that are B's, is neither impossible nor unlikely from a purely a priori standpoint, the suggestion being that therefore there can be no a priori reason for thinking that such a conclusion is true. Nevertheless, there is a reply: That a chaotic world is a priori neither impossible nor unlikely in the absence of further evidence does not show that such a world is not a priori unlikely relative to further evidence. A world containing such-and-such a regularity might be a priori somewhat likely in relation to the occurrence of a long-run pattern of evidence in which a certain stable proportion of observed A's are B's, an occurrence, it might be claimed, that would be highly unlikely in a chaotic world (BonJour, 1986).

Goodman's new riddle of induction asks us to suppose that before some specific time t (perhaps the year 2000) we observe a large number of emeralds (property A) and find them all to be green (property B). We proceed to reason inductively and conclude that all emeralds are green. Goodman points out, however, that we could have drawn a quite different conclusion from the same evidence. If we define the term "grue" to mean "green if examined before t and blue if examined after t", then all of our observed emeralds will also be grue. A parallel inductive argument will yield the conclusion that all emeralds are grue, and hence that all those examined after the year 2000 will be blue. Presumably the first of these conclusions is genuinely supported by our observations and the second is not. Nevertheless, the problem is to say why this is so and to impose some further restriction upon inductive reasoning that will permit the first argument and exclude the second.

The obvious alternative suggestion is that grue and similar predicates do not correspond to genuine, purely qualitative properties in the way that green and blue do, and that this is why inductive arguments involving them are unacceptable. Goodman, however, claims to be unable to make clear sense of this suggestion, pointing out that the relations of formal definability are perfectly symmetrical: Grue may be defined in terms of green and blue, but green can equally well be defined in terms of grue and bleen (where "bleen" means blue if examined before t and green if examined after t).
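Goodman's definability symmetry can be made concrete; a minimal sketch in Python, where the predicate functions and the `examined_before_t` flag are our stand-ins for the time parameter:

```python
# Goodman's point: 'grue' and 'bleen' are interdefinable with 'green'
# and 'blue', and the definability runs equally well in both directions.

def grue(colour: str, examined_before_t: bool) -> bool:
    # grue: green if examined before t, blue if examined after t
    return colour == "green" if examined_before_t else colour == "blue"

def bleen(colour: str, examined_before_t: bool) -> bool:
    # bleen: blue if examined before t, green if examined after t
    return colour == "blue" if examined_before_t else colour == "green"

def green_from_grue(colour: str, examined_before_t: bool) -> bool:
    # green: grue if examined before t, bleen if examined after t
    return grue(colour, examined_before_t) if examined_before_t \
        else bleen(colour, examined_before_t)

# The reconstruction agrees with plain 'green' in every case.
for colour in ("green", "blue"):
    for before in (True, False):
        assert green_from_grue(colour, before) == (colour == "green")
```

The sketch shows only the formal symmetry Goodman insists on; it does not, of course, settle whether green is nevertheless more "natural" than grue on some other ground.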

The grue paradox demonstrates the importance of categorization. Something is grue if it is examined before future time t and green, or not so examined and blue. Even though all emeralds in our evidence class are grue, we ought not infer that all emeralds are grue. For grue is unprojectible, and cannot transmit credibility from known to unknown cases; only projectable predicates are right for induction. Goodman considers entrenchment the key to projectibility: Having a long history of successful projection, green is entrenched; lacking such a history, grue is not. A hypothesis is projectable, Goodman suggests, only if its predicates (or suitably related ones) are much better entrenched than its rivals'. Past successes do not assure future ones, so induction remains a risky business. The rationale for favouring entrenched predicates is pragmatic: Of the possible projections from our evidence class, the one that fits with past practice enables us to utilize our cognitive resources best. Its prospects of being true are no worse than its competitors', and its cognitive utility is greater.

For a better understanding of induction, we should note that the term is most widely used for any process of reasoning that takes us from empirical premises to empirical conclusions supported by the premises, but not deductively entailed by them. Inductive arguments are therefore kinds of ampliative argument, in which something beyond the content of the premises is inferred as probable or supported by them. Induction is, however, commonly distinguished from arguments to theoretical explanations, which share this ampliative character, by being confined to inferences in which the conclusion involves the same properties or relations as the premises. The central example is induction by simple enumeration, where from premises telling that Fa, Fb, Fc . . ., where a, b, c are all of some kind G, it is inferred that G's from outside the sample, such as future G's, will be F, or perhaps that all G's are F. In this way, having been deceived by one person and then another, children may infer that everyone is a deceiver. Different but similar inferences move from the past possession of a property by some object to the same object's future possession of the same property, or from the constancy of some law-like pattern in events and states of affairs to its future constancy: All objects we know of attract each other with a force inversely proportional to the square of the distance between them, so perhaps they all do so, and will always do so.

The rational basis of any such inference was challenged by Hume, who believed that induction presupposed belief in the uniformity of nature, but that this belief has no defence in reason, and merely reflects a habit or custom of the mind. Hume was not sceptical about the propriety of the process of induction itself, but only about the role of reason in either explaining it or justifying it. Trying to answer Hume, and to show that there is something rationally compelling about the inference, is referred to as the problem of induction. It is widely recognized that any rational defence of induction will have to partition well-behaved properties, for which the inference is plausible (often called projectable properties), from badly behaved ones, for which it is not. It is also recognized that actual inductive habits are more complex than those of simple enumeration, and that both common sense and science pay attention to such factors as variations within the sample giving us the evidence, the application of ancillary beliefs about the order of nature, and so on.

Nevertheless, the fundamental problem remains that experience shows us only events occurring within a very restricted part of a vast spatial and temporal order, about which we then come to believe things.

Closely connected is confirmation theory, the study of the measure to which evidence supports a theory. A fully formalized confirmation theory would dictate the degree of confidence that a rational investigator might have in a theory, given some body of evidence. The grandfather of confirmation theory is Gottfried Leibniz (1646-1716), who believed that a logically transparent language of science would be able to resolve all disputes. In the twentieth century a fully formal confirmation theory was a main goal of the logical positivists, since without it the central concept of verification by empirical evidence itself remains distressingly unscientific. The principal developments were due to Rudolf Carnap (1891-1970), culminating in his Logical Foundations of Probability (1950). Carnap's idea was that the required measure is the proportion of logically possible states of affairs in which the theory and the evidence both hold, compared to the number in which the evidence itself holds: The probability of a proposition, relative to some evidence, is the proportion of the range of possibilities under which the proposition is true, compared to the total range of possibilities left by the evidence. The difficulty with the theory lies in identifying sets of possibilities so that they admit of measurement, for it demands that we can put a measure on the range of possibilities consistent with theory and evidence, compared with the range consistent with the evidence alone.
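On a toy scale, Carnap's proportion-of-possibilities measure amounts to counting state descriptions; a minimal sketch assuming three atomic sentences, each truth assignment weighted equally, with the example propositions chosen by us purely for illustration:

```python
from itertools import product

# Three atomic sentences a, b, c: each of the 8 truth assignments is one
# "logically possible state of affairs", weighted equally.
states = list(product((True, False), repeat=3))

def measure(proposition):
    # Proportion of all states in which the proposition holds.
    return sum(1 for s in states if proposition(*s)) / len(states)

def confirmation(theory, evidence):
    # Proportion of states where theory and evidence both hold,
    # relative to the proportion where the evidence holds.
    both = sum(1 for s in states if theory(*s) and evidence(*s))
    ev = sum(1 for s in states if evidence(*s))
    return both / ev

evidence = lambda a, b, c: a          # evidence: a is true
theory = lambda a, b, c: a and b      # theory: a and b are both true

print(confirmation(theory, evidence))  # 0.5: b holds in half the a-states
```

The difficulty the text goes on to describe is exactly what this toy hides: with a finite stock of atomic sentences the counting is trivial, but scientific hypotheses range over infinitely many possibilities, where no such equal weighting is available.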

Among the obstacles the enterprise meets is the fact that while evidence covers only a finite range of data, the hypotheses of science may cover an infinite range. In addition, confirmation proves to vary with the language in which the science is couched, and the Carnapian programme has difficulty in separating genuinely confirming variety of evidence from less compelling repetition of the same experiment. Confirmation also proved to be susceptible to acute paradoxes. Finally, scientific judgement seems to depend on such intangible factors as the problems facing rival theories, and most workers have come to stress instead the historically situated sense of what counts as plausible scientific knowledge at a given time.

A paradox arises when a set of apparently incontrovertible premises leads to unacceptable or contradictory conclusions. To solve a paradox will involve showing either that there is a hidden flaw in the premises, or that the reasoning is erroneous, or that the apparently unacceptable conclusion can, in fact, be tolerated. Paradoxes are therefore important in philosophy, for until one is solved it shows that there is something about our reasoning and our concepts that we do not understand. Somewhat loosely, a paradox is a compelling argument from unacceptable premises to an unacceptable conclusion; more strictly speaking, a paradox is specified to be a sentence that is true if and only if it is false. A standard example would be: "The displayed sentence is false."

It is easy to see that this sentence is false if true, and true if false. A paradox, in either of the senses distinguished, presents an important philosophical challenge. Epistemologists are especially concerned with various paradoxes having to do with knowledge and belief. For example, the Knower paradox is an argument that begins with apparently impeccable premisses about the concepts of knowledge and inference and derives an explicit contradiction. The origin of the reasoning is the surprise examination paradox: A teacher announces that there will be a surprise examination next week. A clever student argues that this is impossible. The test cannot be on Friday, the last day of the week, because then it would not be a surprise: We would know the day of the test on Thursday evening. This means we can also rule out Thursday, for after we learn that no test has been given by Wednesday, we would know the test is on Thursday or Friday, and we would already know that it is not on Friday by the previous reasoning. The remaining days can be eliminated in the same manner.
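The student's backward elimination can be transcribed directly; a minimal sketch in Python of the elimination step (the week is as in the text, the variable names are ours):

```python
# The clever student's reasoning: starting from the last day, repeatedly
# strike the latest remaining day, on the ground that were the exam held
# then, it would no longer be a surprise by the previous evening.
days = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]

possible = list(days)
eliminated = []
while possible:
    eliminated.append(possible.pop())  # rule out the last remaining day

print(eliminated)
# ['Friday', 'Thursday', 'Wednesday', 'Tuesday', 'Monday']
```

The sketch records only the shape of the argument, not its soundness; as the text notes below, every writer on the subject since 1950 agrees that the elimination argument fails, and the controversy is over diagnosing where.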

This puzzle has over a dozen variants. The first was probably invented by the Swedish mathematician Lennart Ekbom in 1943. Although the first few commentators regarded the backward elimination argument as cogent, every writer on the subject since 1950 agrees that the argument is unsound. The controversy has been over the proper diagnosis of the flaw.

Initial analyses of the student's argument tried to lay the blame on a simple equivocation. Their failure led to more sophisticated diagnoses. The general format has been an assimilation to better-known paradoxes. One tradition casts the surprise examination paradox as a self-referential problem, fundamentally akin to the Liar, the paradox of the Knower, or Gödel's incompleteness theorem. Along these lines, Kaplan and Montague (1960) distilled the following self-referential paradox, the Knower. Consider the sentence: (S) The negation of this sentence is known (to be true).

Suppose that (S) is true. Then its negation is known and hence true. However, if its negation is true, then (S) must be false. Therefore (S) is false, or, what comes to the same thing, the negation of (S) is true.

This paradox and its accompanying reasoning are strongly reminiscent of the Liar Paradox, which (in one version) begins by considering the sentence 'This sentence is false' and derives a contradiction. Versions of both arguments using axiomatic formulations of arithmetic and Gödel numbers to achieve the effect of self-reference yield important meta-theorems about what can be expressed in such systems. Roughly, these are to the effect that no predicate definable in formalized arithmetic can have the properties we demand of truth (Tarski's Theorem) or of knowledge (Montague, 1963).

These meta-theorems still leave us with a problem: suppose we add to these formalized languages predicates intended to express the concepts of knowledge (or truth) and inference, as one might do if a logic of these concepts is desired. Then the sentences expressing the leading principles of the Knower paradox will be true.

Explicitly, the assumptions about knowledge and inference are:

(1) If the sentence A is known, then A (is true).

(2) Principle (1) is known.

(3) If B is correctly inferred from A, and A is known, then B is known.

To give an absolutely explicit derivation of the paradox by applying these principles to (S), we must add (contingent) assumptions to the effect that certain inferences have been performed. Still, as we go through the argument of the Knower, these inferences are performed. Even if we can somehow restrict such principles and construct a consistent formal logic of knowledge and inference, the paradoxical argument as expressed in natural language still demands some explanation.
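Set out schematically, with K read as 'is known' and (S) taken as the fixed point S if and only if K(not-S), the derivation sketched above can be displayed as follows. This is our own paraphrase of the Kaplan-Montague reasoning, suppressing the contingent assumptions about inferences having been performed:

```latex
% Sketch of the Knower derivation; K abbreviates "is known",
% and (S) is the self-referential sentence with S <-> K(neg S).
\begin{align*}
&\text{(a)}\quad K(\neg S) \rightarrow \neg S
    && \text{instance of principle (1)}\\
&\text{(b)}\quad S \rightarrow K(\neg S)
    && \text{by the construction of (S)}\\
&\text{(c)}\quad S \rightarrow \neg S
    && \text{from (a) and (b)}\\
&\text{(d)}\quad \neg S
    && \text{from (c)}\\
&\text{(e)}\quad K(\neg S)
    && \text{(d) was inferred from known premises, by (2) and (3)}\\
&\text{(f)}\quad S
    && \text{from (e), by the construction of (S)}
\end{align*}
```

Lines (d) and (f) together yield the explicit contradiction.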

The usual proposals for dealing with the Liar often have their analogues for the Knower, e.g., that there is something wrong with self-reference, or that knowledge (or truth) is properly a predicate of propositions and not of sentences. The replies showing that some of these are inadequate are often parallel to those for the Liar paradox. In addition, one can try here what seems to be an adequate solution for the surprise examination paradox, namely the observation that new knowledge can drive out old knowledge, but this does not seem to work on the Knower (Anderson, 1983).

There are a number of paradoxes of the Liar family. The simplest example is the sentence 'This sentence is false', which must be false if it is true, and true if it is false. One suggestion is that the sentence fails to say anything; but sentences that fail to say anything are at least not true. In that case, consider the sentence 'This sentence is not true', which, if it fails to say anything, is not true - and hence seems to say something true after all (this kind of reasoning is sometimes called the strengthened Liar). Other versions of the Liar introduce pairs of sentences, as in a slogan on the front of a T-shirt saying 'The sentence on the back of this T-shirt is false', and one on the back saying 'The sentence on the front of this T-shirt is true'. It is clear that each sentence individually is well formed, and were it not for the other, might have said something true. So any attempt to dismiss the paradox by ruling that the sentences involved are meaningless will face problems.
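The classical predicament can be put in miniature: a Liar sentence demands a truth value equal to its own negation, and in two-valued logic no such value exists. The following sketch merely checks this exhaustively; it illustrates only the two-valued deadlock, not the gap or hierarchy responses considered in the surrounding discussion.

```python
# A minimal two-valued check of the Liar. The sentence "This sentence is
# false" demands a truth value v that equals the value it ascribes to
# itself, namely (not v). This is an illustration of the classical
# predicament only; it says nothing about gap or hierarchy solutions.

def stable_liar_values():
    """Return the classical truth values v for which the Liar is stable,
    i.e. v equals the truth value the sentence ascribes to itself."""
    return [v for v in (True, False) if v == (not v)]

print(stable_liar_values())  # [] - no classical truth value is stable
```

The empty result is just the familiar point that the Liar has no consistent classical truth value, which is what motivates the hierarchy and truth-value-gap approaches.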

Even so, the two approaches that have some hope of adequately dealing with this paradox are hierarchy solutions and truth-value-gap solutions. According to the first, knowledge is structured into levels. It is argued that there is no one unitary notion expressed by the verb 'knows', but rather a whole series of notions: knows-0, knows-1, and so on (perhaps into the transfinite). Stated in terms of predicates expressing such ramified concepts and properly restricted, (1)-(3) lead to no contradiction. The main objections to this procedure are that the meaning of these levels has not been adequately explained, and that the idea of such subscripts, even implicit, in a natural language is highly counterintuitive. The truth-value-gap solution takes sentences such as (S) to lack truth-value. They are neither true nor false, because they do not express propositions. This defeats a crucial step in the reasoning used in the derivation of the paradoxes. Kripke (1975) has developed this approach in connexion with the Liar, and Asher and Kamp (1986) have worked out some details of a parallel solution to the Knower. The principal objection is that strengthened or super versions of the paradoxes tend to reappear when the solution itself is stated.

Since the paradoxical deduction uses only the properties (1)-(3), and since the argument is formally valid, any notion that satisfies these conditions will lead to a paradox. Thus, Grim (1988) notes that 'is known' may be read as 'is known by an omniscient God', and concludes that there is no coherent single notion of omniscience. Thomason (1980) observes that with some different conditions, analogous reasoning about belief can lead to paradoxical consequences.

Overall, it looks as if we should conclude that knowledge and truth are intrinsically stratified concepts. It would seem that we must simply accept the fact that these (and similar) concepts cannot be assigned any one fixed level, finite or transfinite. Still, the meaning of this idea certainly needs further clarification.

A paradox, then, arises when a set of apparently incontrovertible premises gives unacceptable or contradictory conclusions; to solve it involves showing either that there is a hidden flaw in the premises, or that the reasoning is erroneous, or that the apparently unacceptable conclusion can, in fact, be tolerated. Paradoxes are therefore important in philosophy, for until one is solved it shows that there is something about our reasoning and our concepts that we do not understand. Famous families of paradoxes include the semantic paradoxes and Zeno's paradoxes. At the beginning of the 20th century, Russell's paradox and other set-theoretic paradoxes led to the complete overhaul of the foundations of set theory, while the Sorites paradox has led to investigations of the semantics of vagueness and of fuzzy logics.

To what extent, however, can analysis be informative? This is the question that gives rise to what philosophers have traditionally called the paradox of analysis. Thus, consider the following proposition:

(1) To be an instance of knowledge is to be an instance of justified true belief not essentially grounded in any falsehood. (1), if true, illustrates an important type of philosophical analysis. For convenience of exposition, I will assume (1) is a correct analysis. The paradox arises from the fact that if the concept of justified true belief not essentially grounded in any falsehood is the analysans of the concept of knowledge, it would seem that they are the same concept, and hence that: (2) To be an instance of knowledge is to be an instance of knowledge would have to be the same proposition as (1). But then how can (1) be informative when (2) is not? This is what is called the first paradox of analysis. Classical writings on analysis suggest a second paradox of analysis (Moore, 1942):

(3) An analysis of the concept of being a brother is that to be a brother is to be a male sibling.

If (3) is true, it would seem that the concept of being a brother would have to be the same concept as the concept of being a male sibling, and that:

(4) An analysis of the concept of being a brother is that to be a brother is to be a brother

would also have to be true and, in fact, would have to be the same proposition as (3). Yet (3) is true and (4) is false.

Both these paradoxes rest upon the assumptions that analysis is a relation between concepts, rather than one involving entities of other sorts such as linguistic expressions, and that in a true analysis, analysans and analysandum are the same concept. Both these assumptions are explicit in Moore, but some of Moore's remarks hint at a solution: that a statement of an analysis is a statement partly about the concept involved and partly about the verbal expressions used to express it. He says he thinks a solution of this sort is bound to be right, but fails to suggest one because he cannot see a way in which the analysis can be even partly about the expression (Moore, 1942).

One such way, offered as a solution to the second paradox, explicates (3) as: (5) An analysis is given by saying that the verbal expression 'χ is a brother' expresses the same concept as is expressed by the conjunction of the verbal expression 'χ is male' when used to express the concept of being male and the verbal expression 'χ is a sibling' when used to express the concept of being a sibling (Ackerman, 1990). An important point about (5) is as follows. Stripped of its philosophical jargon ('analysis', 'concept', 'χ is a . . .'), (5) seems to state the sort of information generally stated in a definition of the verbal expression 'brother' in terms of the verbal expressions 'male' and 'sibling', where this definition is designed to draw upon listeners' antecedent understanding of the verbal expressions 'male' and 'sibling', and thus to tell listeners what the verbal expression 'brother' really means, instead of merely providing the information that two verbal expressions are synonymous without specifying the meaning of either one. Thus, this solution to the second paradox seems to make the sort of analysis that gives rise to the paradox a matter of specifying the meaning of a verbal expression in terms of separate verbal expressions already understood, and saying how the meanings of these separate, already-understood verbal expressions are combined. This corresponds to Moore's intuitive requirement that an analysis should both specify the constituent concepts of the analysandum and tell how they are combined. But is this all there is to philosophical analysis?

We must note that, in addition to there being two paradoxes of analysis, there are two types of analysis relevant here. (There are also other types, such as reformatory analysis, where the analysans is intended to improve on and replace the analysandum. But since reformatory analysis involves no commitment to conceptual identity between analysans and analysandum, it does not generate a paradox of analysis and so will not concern us here.) One way to recognize the difference between the two types of analysis concerning us here is to focus on the difference between the two paradoxes. This can be done by means of the Frege-inspired sense-individuation condition: two expressions have the same sense if and only if they can be interchanged salva veritate whenever used in propositional-attitude contexts. If the expressions for the analysans and the analysandum in (1) met this condition, (1) and (2) would not raise the first paradox; but the second paradox arises regardless of whether the expressions for the analysans and the analysandum meet this condition. The second paradox is a matter of the failure of such expressions to be interchangeable salva veritate in sentences involving such contexts as 'an analysis is given thereof'. Thus, a solution (such as the one offered) that is aimed only at such contexts can solve the second paradox. This is not so for the first paradox, which will apply to all pairs of propositions expressed by sentences in which expressions for pairs of analysantia and analysanda raising the first paradox are interchangeable. One approach to the first paradox is to argue that, despite the apparent epistemic inequivalence of (1) and (2), the concept of justified true belief not essentially grounded in any falsehood is still identical with the concept of knowledge (Sosa, 1983).
Another approach is to argue that, in the sort of analysis raising the first paradox, the analysans and analysandum are concepts that are different but that bear a special epistemic relation to each other. One development of such an approach suggests that this analysans-analysandum relation has the following facets:

(i) The analysans and analysandum are necessarily coextensive, i.e., necessarily every instance of one is an instance of the other.

(ii) The analysans and analysandum are knowable a priori to be coextensive.

(iii) The analysandum is simpler than the analysans, a condition whose necessity is recognized in classical writings on analysis (e.g., Langford, 1942).

(iv) The analysans does not have the analysandum as a constituent.

Condition (iv) rules out circularity. But since many valuable quasi-analyses are partly circular, e.g., 'knowledge is justified true belief supported by known reasons not essentially grounded in any falsehood', it seems best to distinguish between full analysis, for which (iv) is a necessary condition, and partial analysis, for which it is not.

These conditions, while necessary, are clearly insufficient. The basic problem is that they apply to many pairs of concepts that do not seem closely enough related epistemologically to count as analysans and analysandum, such as the concept of being 6 and the concept of being the fourth root of 1296. Accordingly, a solution should draw upon what actually seems epistemologically distinctive about analyses of the sort under consideration, which is a certain way they can be justified. This is the philosophical example-and-counterexample method, which in general terms goes as follows. 'J' investigates the analysis of 'K's concept 'Q' (where 'K' can, but need not, be identical to 'J') by setting 'K' a series of armchair thought experiments, i.e., presenting 'K' with a series of simple described hypothetical test cases and asking 'K' questions of the form 'If such-and-such were the case, would this count as a case of Q?' 'J' then contrasts the descriptions of the cases to which 'K' answers affirmatively with the descriptions of the cases to which 'K' does not, and 'J' generalizes upon these descriptions to arrive at the concepts (if possible not including the analysandum) and their mode of combination that constitute the analysans of 'K's concept 'Q'. Since 'J' need not be identical with 'K', there is no requirement that 'K' himself be able to perform this generalization, to recognize its result as correct, or even to understand the analysans that is its result. This is reminiscent of Walton's observation that one can simply recognize a bird as a blue jay without realizing just what features of the bird (beak, wing configuration, etc.) form the basis of this recognition. (The philosophical significance of this way of recognizing is discussed in Walton, 1972.) 'K' answers the questions based solely on whether the described hypothetical cases strike him as cases of 'Q'. 'J' observes certain strictures in formulating the cases and questions.
He makes the cases as simple as possible, to minimize the possibility of confusion and to minimize the likelihood that 'K' will draw upon his philosophical theories (or quasi-philosophical, rudimentary notions if he is unsophisticated philosophically) in answering the questions. If different cases yield conflicting results, the conflict should, other things being equal, be resolved in favour of the simpler case. 'J' makes the series of described cases wide-ranging and varied, with the aim of having it be a complete series, where a series is complete if and only if no case that is omitted is such that, if included, it would change the analysis arrived at. 'J' does not, of course, use as a test-case description anything complicated and general enough to express the analysans. There is no requirement that the described hypothetical test cases be formulated only in terms of what can be observed. Moreover, using described hypothetical situations as test cases enables 'J' to frame the questions in such a way as to rule out extraneous background assumptions to a degree; thus, even if 'K' correctly believes that all and only 'P's are 'R's, the question of whether the concepts of 'P', 'R', or both enter the analysans of his concept 'Q' can be investigated by asking him such questions as 'Suppose (even if it seems preposterous to you) that you were to find out that there was a P that was not an R. Would you still consider it a case of Q?'

Taking all this into account, a necessary condition for this sort of analysans-analysandum relation is as follows: if 'S' is the analysans of 'Q', the proposition that necessarily all and only instances of 'S' are instances of 'Q' can be justified by generalizing from intuitions about the correct answers to questions of the sort indicated about a varied and wide-ranging series of simple described hypothetical situations. An antinomy occurs when we are able to argue for, or demonstrate, both a proposition and its contradictory. Roughly speaking, a contradictory of a proposition 'p' is one that can be expressed in the form 'not-p', or, if 'p' can be expressed in the form 'not-q', then a contradictory is one that can be expressed in the form 'q'. Thus, e.g., if 'p' is 2 + 1 = 4, then 2 + 1 ≠ 4 is the contradictory of 'p', for 2 + 1 ≠ 4 can be expressed in the form not (2 + 1 = 4). If 'p' is 2 + 1 ≠ 4, then 2 + 1 = 4 is a contradictory of 'p', since 2 + 1 ≠ 4 can be expressed in the form not (2 + 1 = 4). Mutually contradictory propositions, then, can be expressed in the forms 'r' and 'not-r'. The Principle of Contradiction says that mutually contradictory propositions cannot both be true and cannot both be false. Thus, by this principle, since if 'p' is true, 'not-p' is false, no proposition 'p' can be at once true and false (otherwise both 'p' and its contradictory would be true, and both would be false). In particular, for any predicate 'P' and object 'χ', it cannot be that 'P' is at once true of 'χ' and false of 'χ'. This is the classical formulation of the principle of contradiction. In an antinomy, however, we cannot at present fault either demonstration; we would hope to be able to solve the antinomy by managing, through careful thinking and analysis, eventually to fault one or both demonstrations.

A contradiction is the conjunction of a proposition and its negation; the law of non-contradiction provides that no such conjunction can be true: not (p & not-p). The standard proof of the inconsistency of a set of propositions or sentences is to show that a contradiction may be derived from them.
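Because the law quantifies over only two classical truth values, it can be checked by brute force. The following toy fragment is illustrative only; it exhibits the truth-table triviality of not (p & not-p), not a proof with any philosophical weight:

```python
# Toy truth-table check that the law of non-contradiction,
# not-(p & not-p), holds under every classical (two-valued) valuation.

def law_of_non_contradiction_holds():
    """Evaluate not-(p and not-p) at both classical truth values."""
    return all(not (p and (not p)) for p in (True, False))

print(law_of_non_contradiction_holds())  # True
```

With p true, the conjunct not-p fails; with p false, the conjunct p fails; so the conjunction is false, and its negation true, either way.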

In Hegelian and Marxist writing the term is used more widely: a contradiction may be a pair of features that together produce an unstable tension in a political or social system; a 'contradiction' of capitalism might be the arousal of expectations in the workers that the system cannot satisfy. For Hegel, the gap between this and genuine contradiction is not as wide as it is for other thinkers, given his equation between systems of thought and their historical embodiment.

A contractarian approach to problems of ethics asks what solution could be agreed upon by contracting parties, starting from certain idealized positions (for example, no ignorance, no inequalities of power enabling one party to force unjust solutions upon another, no malicious ambitions). The idea of thinking of civil society, with its distribution of rights and obligations, as if it were established by a social contract derives from the English philosopher Thomas Hobbes (1588-1679) and from Jean-Jacques Rousseau (1712-78). The utility of such a model was attacked by the Scottish philosopher, historian and essayist David Hume (1711-76), who asks why, given that no historical event of establishing a contract took place, it is useful to allocate rights and duties as if it had; he also points out that the actual distribution of these things in a society owes too much to contingent circumstances to be derivable from any such model. Similar positions in general ethical theory are sometimes called contractualism: the right thing to do is one that could be agreed upon in a hypothetical contract.

Somewhat loosely, again, a paradox arises when a set of apparently incontrovertible premises gives unacceptable or contradictory conclusions; to solve a paradox will involve either showing that there is a hidden flaw in the premises, or that the reasoning is erroneous, or that the apparently unacceptable conclusion can, in fact, be tolerated. Paradoxes are themselves important in philosophy, for until one is solved it shows that there is something that we do not understand. Such are the paradoxes that are compelling arguments from unexceptionable premises to an unacceptable conclusion; more strictly, a paradox is specified to be a sentence that is true if and only if it is false. An example of the latter would be: 'The displayed sentence is false.'

It is easy to see that this sentence is false if true, and true if false. A paradox, in either of the senses distinguished, presents an important philosophical challenge. Epistemologists are especially concerned with various paradoxes having to do with knowledge and belief.

Moreover, paradoxes are an easy source of antinomies. For example, Zeno gave some famous (logical, not mathematical) arguments that might be interpreted as demonstrating that motion is impossible. But our eyes, as it were, demonstrate motion (exhibit moving things) all the time. Where did Zeno go wrong? Where do our eyes go wrong? If we cannot readily answer at least one of these questions, then we are in antinomy. In the Critique of Pure Reason, Kant gave paired demonstrations of the same kind (though obviously not of the same kind as Zeno's), e.g., that the world has a beginning in time and space, and that the world has no beginning in time or space. He argues that both demonstrations are at fault because they proceed on the basis of pure reason unconditioned by sense experience.

At this point we turn to the theory of experience. Experience cannot be defined in an illuminating way; however, we know what experiences are through acquaintance with some of our own, e.g., a visual experience of an after-image, a feeling of physical nausea, or a tactile experience of an abrasive surface (which might be caused by an actual surface, rough or smooth, or might be part of a dream, or the product of a vivid sensory imagination). The essential feature of experience is that it feels a certain way - that there is something it is like to have it. We may refer to this feature of an experience as its character.

Another core feature of the sorts of experiences with which we are concerned is that they have representational content. (Unless otherwise indicated, 'experience' will be reserved for experiences with representational content.) The most obvious cases of experiences with content are sense experiences of the kind normally involved in perception. We may describe such experiences by mentioning their sensory modalities and their contents, e.g., a gustatory experience (modality) of chocolate ice cream (content), but we do so more commonly by means of perceptual verbs combined with noun phrases specifying their contents, as in 'Macbeth saw a dagger'. This is, however, ambiguous between the perceptual claim 'There was a (material) dagger in the world that Macbeth perceived visually' and 'Macbeth had a visual experience of a dagger' (the reading with which we are concerned, as in hallucination or vivid sensory imagination).

As in the case of other mental states and events with content, it is important to distinguish between the properties that an experience represents and the properties that it possesses. To talk of the representational properties of an experience is to say something about its content, not to attribute those properties to the experience itself. A visual experience of a square, for instance, is a mental event, and is therefore not itself square, even though it represents that property. It is, perhaps, fleeting, pleasant or unusual, even though it does not represent those properties. An experience may represent a property that it possesses, as when a rapidly changing (complex) experience represents something as changing rapidly. However, this is the exception and not the rule.

Which properties can be (directly) represented in sense experience is subject to debate. Traditionalists include only properties whose presence could not be doubted by a subject having appropriate experiences, e.g., colour and shape in the case of visual experience, and apparent shape, surface texture, hardness, etc., in the case of tactile experience. This view is natural for anyone who has an egocentric, Cartesian perspective in epistemology, and who wishes the pure data of experience to serve as logically certain foundations for knowledge. The immediate objects of perceptual awareness are then taken to be sense-data, items such as colour patches and shapes, usually supposed distinct from the surfaces of physical objects. Qualities of sense-data are supposed to be distinct from physical qualities because their perception is more relative to conditions, more certain, and more immediate, and because sense-data are private and cannot appear other than they are. They are objects that change in our perceptual field when conditions of perception change, whereas physical objects remain constant.

Others, who do not think this wish can be satisfied and who are more impressed with the role of experience in providing organisms with ecologically significant information about the world around them, claim that sense experiences represent properties, characteristics and kinds that are much richer and much more wide-ranging than the traditional sensory qualities. We do not see only colours and shapes, they tell us, but also earth, water, men, women and fire; we do not smell only odours, but also food and filth. There is no space here to examine the factors relevant to this choice. Yet this suggests that character and content are not really distinct: there is a close tie between them. For one thing, the relative complexity of the character of a sense experience places limitations upon its possible content; e.g., a tactile experience of something touching one's left ear is just too simple to carry the same amount of content as a typical everyday visual experience. Moreover, the content of a sense experience of a given character depends on the normal causes of appropriately similar experiences; e.g., the sort of gustatory experience that we have when eating chocolate would not represent chocolate unless it was normally caused by chocolate. Granting the contingent tie between the character of an experience and its normal causal origins, it again follows that its possible content is limited by its character.

Character and content are none the less irreducibly different, for the following reasons. (1) There are experiences that completely lack content, e.g., certain bodily pleasures. (2) Not every aspect of the character of an experience with content is relevant to that content, e.g., the unpleasantness of an aural experience of chalk squeaking on a board may have no representational significance. (3) Experiences in different modalities may overlap in content without a parallel overlap in character, e.g., visual and tactile experiences of circularity feel completely different. (4) The content of an experience with a given character may vary according to the background of the subject, e.g., a certain aural experience may come to represent a singing bird only after the subject has learned something about birds.

According to the act/object analysis of experience (which is a special case of the act/object analysis of consciousness), every experience involves an object of experience even if it has no material object. Two main lines of argument may be offered in support of this view, one phenomenological and the other semantic.

In outline, the phenomenological argument is as follows. Whenever we have an experience, even if nothing beyond the experience answers to it, we seem to be presented with something through the experience (which is itself diaphanous). The object of the experience is whatever is so presented to us, be it an individual thing, an event, or a state of affairs.

The semantic argument is that objects of experience are required in order to make sense of certain features of our talk about experience, including, in particular, the following. (1) Simple attributions of experience, e.g., 'Rod is experiencing something that appears square', seem to be relational. (2) We appear to refer to objects of experience and to attribute properties to them, e.g., 'The after-image that John experienced was certainly odd'. (3) We appear to quantify over objects of experience, e.g., 'Macbeth saw something that his wife did not see'.

The act/object analysis faces several problems concerning the status of objects of experience. Currently the most common view is that they are sense-data - private mental entities that actually possess the traditional sensory qualities represented by the experiences of which they are the objects. But the very idea of an essentially private entity is suspect. Moreover, since an experience may apparently represent something as having a determinable property, e.g., redness, without representing it as having any subordinate determinate property, e.g., any specific shade of red, a sense-datum may have to possess a determinable property without possessing any determinate property subordinate to it. Even more disturbing is that sense-data may have contradictory properties, since experiences can have contradictory contents. A case in point is the waterfall illusion: if you stare at a waterfall for a minute and then immediately fixate on a nearby rock, you are likely to have an experience of the rock's moving upward while it remains in the same place. The sense-datum theorist must either deny that there are such experiences or admit contradictory objects.

These problems can be avoided by treating objects of experience as properties. This, however, fails to do justice to the appearances, for experience seems to present us not with bare properties but with properties embodied in individuals. The view that objects of experience are Meinongian objects accommodates this point. It is also attractive in so far as (1) it allows experiences to represent properties other than traditional sensory qualities, and (2) it allows for the identification of objects of experience and objects of perception in the case of experiences that constitute perception.

According to the act/object analysis of experience, every experience with content involves an object of experience to which the subject is related by an act of awareness (the event of experiencing that object). This is meant to apply not only to perceptions, which have material objects (whatever is perceived), but also to experiences like hallucinations and dream experiences, which do not. Such experiences none the less appear to represent something, and their objects are supposed to be whatever it is that they represent. Act/object theorists may differ on the nature of objects of experience, which have been treated as properties, as Meinongian objects (which may not exist or have any form of being), and, more commonly, as private mental entities with sensory qualities. (The term 'sense-data' is now usually applied to the latter, but has also been used as a general term for objects of sense experiences, as in the work of G.E. Moore.) Act/object theorists may also differ on the relationship between objects of experience and objects of perception. For sense-datum theorists, objects of perception (of which we are indirectly aware) are always distinct from objects of experience (of which we are directly aware); Meinongians, however, may treat objects of perception as existing objects of experience. Still, most philosophers will feel that the Meinongian's acceptance of impossible objects is too high a price to pay for these benefits.

A general problem for the act/object analysis is that the question of whether two subjects are experiencing one and the same thing (as opposed to having exactly similar experiences) appears to have an answer only on the assumption that the experiences concerned are perceptions with material objects. But on the act/object analysis the question must have an answer even when this condition is not satisfied. (The answer is always negative on the sense-datum theory; it could be positive on other versions of the act/object analysis, depending on the facts of the case.)

In view of the above problems, the case for the act/object analysis should be reassessed. The phenomenological argument is not, on reflection, convincing, for it is easy enough to grant that any experience appears to present us with an object without accepting that it actually does. The semantic argument is more impressive, but is none the less answerable. The seemingly relational structure of attributions of experience is a challenge dealt with below in connexion with the adverbial theory. Apparent reference to and quantification over objects of experience can be handled by analysing them as reference to experiences themselves and quantification over experiences tacitly typed according to content. Thus, 'The after-image that John experienced was coloured' becomes 'John's after-image experience was an experience of colour', and 'Macbeth saw something that his wife did not see' becomes 'Macbeth had a visual experience that his wife did not have'.

Pure cognitivism attempts to avoid the problems facing the act/object analysis by reducing experiences to cognitive events or associated dispositions. For example, Julie's experience of a rough surface beneath her hand might be identified with the event of her acquiring the belief that there is a rough surface beneath her hand, or, if she does not acquire this belief, with a disposition to acquire it that has somehow been blocked.

This position has attractions. It does full justice to the cognitive contents of experience, and to the important role of experience as a source of belief acquisition. It would also help clear the way for a naturalistic theory of mind, since there seems to be some prospect of a physicalist/functionalist account of belief and other intentional states. But pure cognitivism is undermined by its failure to accommodate the fact, noted above, that experiences have a felt character that cannot be reduced to their content.

The adverbial theory is an attempt to undermine the act/object analysis by suggesting a semantic account of attributions of experience that does not require objects of experience. Unfortunately, the oddities of explicit adverbializations of such statements have driven off potential supporters of the theory. Furthermore, the theory remains largely undeveloped, and attempted refutations have traded on this. It may, however, be founded on sound intuitions, and there is reason to believe that an effective development of the theory (which can only be hinted at here) is possible.

The relevant intuitions are (1) that when we say that someone is experiencing an A, or has an experience of an A, we are using this content-expression to specify the type of thing that the experience is especially apt to fit, (2) that doing this is a matter of saying something about the experience itself (and perhaps about the normal causes of like experiences), and (3) that there is no good reason to suppose that this involves describing an object of which the experience is an experience. Thus the effective role of the content-expression in a statement of experience is to modify the verb it complements, not to introduce a special type of object.

Perhaps the most important criticism of the adverbial theory is the many-property problem, according to which the theory does not have the resources to distinguish between, e.g.,

(1) Frank has an experience of a brown triangle

and:

(2) Frank has an experience of brown and an experience of a triangle.

(2) is entailed by (1) but does not entail it. The act/object analysis can easily accommodate the difference between (1) and (2) by claiming that the truth of (1) requires a single object of experience that is both brown and triangular, while that of (2) allows for the possibility of two objects of experience, one brown and the other triangular. However, (1) is equivalent to:

(1*) Frank has an experience of something being both brown and triangular.

And (2) is equivalent to:

(2*) Frank has an experience of something being brown and an experience of something being triangular,

and the difference between these can be explained quite simply in terms of logical scope without invoking objects of experience. Adverbialists may use this to answer the many-property problem by arguing that the phrase 'a brown triangle' in (1) does the same work as the clause 'something being both brown and triangular' in (1*). This is perfectly compatible with the view that it also has the adverbial function of modifying the verb 'has an experience of', for it specifies the experience more narrowly just by giving a necessary condition for the satisfaction of the experience (the condition being that there be something both brown and triangular before Frank).
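The scope point can be made explicit in first-order notation (a sketch only; the predicates Brown and Triangular are illustrative stand-ins for the contents attributed to Frank's experience):

```latex
% (1*): one existential quantifier takes wide scope over the conjunction --
% a single thing is both brown and triangular
(1^*)\quad \exists x\,\bigl(\mathrm{Brown}(x) \land \mathrm{Triangular}(x)\bigr)

% (2*): two separate existential quantifiers, each with narrow scope --
% something is brown, and something (possibly different) is triangular
(2^*)\quad \exists x\,\mathrm{Brown}(x) \;\land\; \exists y\,\mathrm{Triangular}(y)
```

(1*) entails (2*), but (2*) does not entail (1*), since (2*) can be satisfied by two distinct objects; this mirrors the asymmetric entailment from (1) to (2) without positing any objects of experience.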

A final position that should be mentioned is the state theory, according to which a sense experience of an A is an occurrent, non-relational state of the kind that the subject would be in when perceiving an A. Suitably qualified, this claim is no doubt true, but its significance is subject to debate. Here it is enough to remark that the claim is compatible with both pure cognitivism and the adverbial theory, and that state theorists are probably best advised to adopt the adverbial theory as a means of developing their intuitions.

Sense-data, if taken literally, are that which is given by the senses. But in response to the question of what exactly is so given, sense-data theories posit private showings in the consciousness of the subject. In the case of vision this would be a kind of inner picture that only indirectly represents aspects of the external world. The view has been widely rejected as implying that we really only see extremely thin coloured pictures interposed between our mind's eye and reality. Modern approaches to perception tend to reject any conception of the eye as a camera or lens, simply responsible for producing private images, and stress the active life of the subject in and of the world as the determinant of experience.

Nevertheless, the argument from illusion is usually intended to establish that certain familiar facts about illusion disprove the theory of perception called naïve or direct realism. There are, however, many different versions of the argument that must be distinguished carefully. Some of these distinctions centre on the content of the premises (the nature of the appeal to illusion); others centre on the interpretation of the conclusion (the kind of direct realism under attack). Let us begin by distinguishing the importantly different versions of direct realism which one might take to be vulnerable to familiar facts about the possibility of perceptual illusion.

A crude statement of direct realism might go as follows: in perception, we sometimes directly perceive physical objects and their properties; we do not always perceive physical objects by perceiving something else, e.g., a sense-datum. There are, however, difficulties with this formulation of the view. For one thing, a great many philosophers who are not direct realists would admit that it is a mistake to describe people as actually perceiving something other than a physical object. In particular, such philosophers might admit, we should never say that we perceive sense-data. To talk that way would be to suppose that we should model our understanding of our relationship to sense-data on our understanding of the ordinary use of perceptual verbs as they describe our relation to the physical world, and that is the last thing paradigm sense-datum theorists should want. Many of the philosophers who object to direct realism would prefer to express what they are objecting to in terms of a technical (and philosophically controversial) concept such as acquaintance. Using such a notion, we could define direct realism this way: in veridical experience we are directly acquainted with parts, e.g., surfaces, or constituents of physical objects. A less cautious version of the view might drop the reference to veridical experience and claim simply that in all experience we are directly acquainted with parts or constituents of physical objects.
The expressions 'knowledge by acquaintance' and 'knowledge by description', and the distinction they mark between knowing things and knowing about things, are generally associated with Bertrand Russell (1872-1970). Russell held that scientific philosophy required analysing many objects of belief as logical constructions or logical fictions, and the programme of analysis that this inaugurated dominated his subsequent philosophy of logical atomism and influenced other philosophers. In Russell's The Analysis of Mind, the mind itself is treated in a fashion reminiscent of Hume, as no more than the collection of neutral perceptions or sense-data that make up the flux of conscious experience and that, looked at another way, also make up the external world (neutral monism); An Inquiry into Meaning and Truth (1940) represents a more empirical approach to the problem. Philosophers have perennially investigated this and related distinctions using varying terminology.

This is a distinction in our ways of knowing things, highlighted by Russell and forming a central element in his philosophy after the discovery of the theory of definite descriptions. A thing is known by acquaintance when there is direct experience of it; it is known by description if it can only be described as a thing with such-and-such properties. In everyday parlance, I might know my spouse and children by acquaintance, but know someone as 'the first person born at sea' only by description. However, for a variety of reasons Russell shrinks the area of things that can be known by acquaintance until eventually only current experience, perhaps my own self, and certain universals or meanings qualify; anything else is known only as the thing that has such-and-such qualities.

Because one can interpret the relation of acquaintance or awareness as one that is not epistemic, i.e., not a kind of propositional knowledge, it is important to distinguish the views stated above, read as ontological theses, from a view one might call epistemological direct realism: in perception we are, on at least some occasions, non-inferentially justified in believing a proposition asserting the existence of a physical object. Direct realism holds that these objects exist independently of any mind that might perceive them, and so it rules out all forms of idealism and phenomenalism, which hold that there are no such independently existing objects. Its directness rules out those views defended under the rubric of critical realism, or representational realism, on which there is some non-physical intermediary (usually called a sense-datum or a sense impression) that must first be perceived or experienced in order to perceive the object that exists independently of this perception. Often the distinction between direct realism and other theories of perception is explained more fully in terms of what is immediately, rather than mediately, perceived. What relevance does illusion have for these two forms of direct realism?

The fundamental premise of the argument from illusion seems to be the thesis that things can appear to be other than they are. Thus, for example, a straight stick immersed in water looks bent, a penny viewed from a certain perspective appears elliptical, and something yellow placed under red fluorescent light looks red. In all of these cases, one version of the argument goes, it is implausible to maintain that what we are directly acquainted with is the real nature of the object in question. Indeed, it is hard to see how we can be said to be aware of the real physical object at all. In the above illusions the things we were aware of actually were bent, elliptical and red, respectively. But, by hypothesis, the real physical objects lacked these properties. Thus, we were not aware of the real physical objects themselves.

So far, if the argument is relevant to any of the direct realisms distinguished above, it seems relevant only to the claim that in all sense experience we are directly acquainted with parts or constituents of physical objects. After all, even if in illusion we are not acquainted with physical objects, their surfaces, or their constituents, why should we conclude anything about the nature of our relations to the physical world in veridical experience?

We are supposed to discover the answer to this question by noticing the similarities between illusory experience and veridical experience and by reflecting on what makes illusion possible at all. Illusion can occur because the nature of the illusory experience is determined not just by the nature of the object perceived, but also by other conditions, both external and internal. But all of our sensations are subject to these causal influences, and it would be gratuitous and arbitrary to select, from the indefinitely many and subtly different perceptual experiences, some special ones as those that get us in touch with the real nature of the physical world. Red fluorescent light affects the way things look, but so does sunlight. Water reflects light, but so does air. We have no unmediated access to the external world.
